
Releases: openebs/zfs-localpv

v0.9.0-RC1

09 Jul 05:17
Pre-release
chore(doc): adding btrfs filesystem in the doc

Signed-off-by: Pawan <[email protected]>

v0.8.0

13 Jun 07:54

Change Summary

Key Improvements:

  • changing image pull policy to IfNotPresent so the image is not pulled again on every restart (#124, @pawanpraka1)
  • moving to legacy mount (#151, @pawanpraka1)
  • Fixes an issue where volumes meant to be filesystem datasets got created as zvols, and makes StorageClass parameter spelling case-insensitive (#144, @cruwe) (see the sketch after this list)
  • include pvc name in volume events (#150, @pawanpraka1)
  • Fixes an issue where a PVC was bound to an unusable PV created from incorrect values provided in the PVC/StorageClass (#121, @pawanpraka1)
  • adding v1 CRD for ZFS-LocalPV (#140, @pawanpraka1)
  • add contributing checklist (#138, @Icedroid)
  • fixing golint warnings (#133, @Icedroid)
  • removing unnecessary printer columns from ZFSVolume (#128, @pawanpraka1)
  • fixing stale ZFSVolume resource issue when deleting a PVC in Pending state (#145, @pawanpraka1)
  • Updated the doc for custom-topology support (#122, @w3aman)
  • adding operator yaml for centos7 and centos8 (#149, @pawanpraka1)
  • honouring readonly flag for ZFS-LocalPV (#137, @pawanpraka1)

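For the case-insensitivity fix above, here is a minimal sketch of normalizing the parameter before choosing between a filesystem dataset and a zvol. It is not the driver's actual code, and it assumes the StorageClass parameter key is fstype:

```
package main

import (
	"fmt"
	"strings"
)

// resolveFsType normalizes the fstype parameter so that "ZFS", "Zfs" and
// "zfs" all select a ZFS filesystem dataset instead of falling through to
// the zvol path; any other filesystem (ext4, xfs, ...) is created on a zvol.
func resolveFsType(params map[string]string) string {
	fstype := strings.ToLower(strings.TrimSpace(params["fstype"]))
	if fstype == "" || fstype == "zfs" {
		return "zfs"
	}
	return fstype
}

func main() {
	// "ZFS" in the StorageClass now resolves to a dataset, not a zvol
	fmt.Println(resolveFsType(map[string]string{"fstype": "ZFS"})) // prints "zfs"
}
```
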
Install using kubectl

kubectl apply -f https://raw.githubusercontent.com/openebs/zfs-localpv/v0.8.x/deploy/zfs-operator.yaml

Upgrade

https://github.com/openebs/zfs-localpv/tree/master/upgrade

v0.8.0-RC2

12 Jun 02:40
Pre-release
feat(modules): migrate to go modules and bump go version 1.14.4

- migrate to go module
- bump go version 1.14.4

Signed-off-by: prateekpandey14 <[email protected]>
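
For reference, a minimal go.mod sketch matching this change; it assumes the module path follows the repository path, and dependency requirements are omitted:

```
// go.mod (sketch): module path assumed from the repository; the go
// directive carries only the major.minor version.
module github.com/openebs/zfs-localpv

go 1.14
```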

v0.8.0-RC1

10 Jun 04:55
Pre-release
feat(modules): migrate to go modules and bump go version 1.14.4

- migrate to go module
- bump go version 1.14.4

Signed-off-by: prateekpandey14 <[email protected]>

v0.7.0

14 May 18:37

Change Summary

Key Improvements:

Install using kubectl

kubectl apply -f https://raw.githubusercontent.com/openebs/zfs-localpv/v0.7.x/deploy/zfs-operator.yaml

Upgrade

https://github.com/openebs/zfs-localpv/tree/master/upgrade

v0.7.0-RC2

13 May 03:11
Pre-release
chore(doc): adding raw block volume details in README

also added detailed upgrade steps.

Signed-off-by: Pawan <[email protected]>

v0.7.0-RC1

07 May 19:16
Pre-release
chore(doc): adding raw block volume details in README

also added detailed upgrade steps.

Signed-off-by: Pawan <[email protected]>

0.6.1

22 Apr 19:20

Change Summary

Key Improvements:

  • fixing data loss in case of pod deletion
  • avoid creation of volumeattachment object
  • adding validation for ZFSPV CR parameters
  • adding poolname info to the PV volumeattributes
  • handling unmounted volume
  • automate the CRDs generation with validations for APIs
  • scripts to help migrating to new CRDs
  • move CR from openebs.io to zfs.openebs.io
  • Upgrade the base ubuntu package
  • xfs duplicate uuid for cloned volumes
  • Makefile and version enhancement

Install using kubectl

kubectl apply -f https://raw.githubusercontent.com/openebs/zfs-localpv/v0.6.x/deploy/zfs-operator.yaml

Upgrade

https://github.com/openebs/zfs-localpv/tree/master/upgrade

0.4.1

22 Apr 19:05
fix(zfspv): fixing data loss in case of pod deletion

This looks like a bug in ZFS: when you change the mountpoint property to none,
ZFS normally unmounts the file system automatically. When the pod is deleted, the
driver gets an unmount request for the old pod and a mount request for the new pod.
The driver performs the unmount by setting mountpoint to none, assumes the unmount
has completed, and proceeds to delete the mountpath, but here ZFS has not unmounted
the dataset:

```
$ sudo zfs get all zfspv-pool/pvc-3fe69b0e-9f91-4c6e-8e5c-eb4218468765 | grep mount
zfspv-pool/pvc-3fe69b0e-9f91-4c6e-8e5c-eb4218468765  mounted               yes                                                                                                -
zfspv-pool/pvc-3fe69b0e-9f91-4c6e-8e5c-eb4218468765  mountpoint            none                                                                                               local
zfspv-pool/pvc-3fe69b0e-9f91-4c6e-8e5c-eb4218468765  canmount              on
```

Here the driver assumes that the dataset has been unmounted, proceeds to delete the
mountpath, and ends up deleting the data as part of cleaning up for the NodeUnpublish request.

The fix is to use zfs umount instead of zfs set mountpoint=none for unmounting the dataset.
The driver was also using os.RemoveAll, which is risky because it removes children as well;
since the mountpoint is not supposed to contain anything, os.Remove is sufficient,
and it will fail if anything is still present there.

Signed-off-by: Pawan <[email protected]>
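
For illustration, a rough Go sketch of the safer cleanup described in this fix; the function name, dataset, path, and error handling are hypothetical, not the driver's actual implementation:

```
package main

import (
	"fmt"
	"os"
	"os/exec"
)

// umountVolume unmounts the ZFS dataset explicitly and then removes the
// now-empty mount directory. os.Remove (unlike os.RemoveAll) refuses to
// delete a non-empty directory, so data survives if the unmount failed.
func umountVolume(dataset, mountPath string) error {
	if out, err := exec.Command("zfs", "umount", dataset).CombinedOutput(); err != nil {
		return fmt.Errorf("zfs umount %s failed: %v: %s", dataset, err, out)
	}
	if err := os.Remove(mountPath); err != nil {
		return fmt.Errorf("removing mountpath %s failed: %v", mountPath, err)
	}
	return nil
}

func main() {
	// hypothetical dataset and mountpath, for illustration only
	if err := umountVolume("zfspv-pool/pvc-example", "/var/lib/kubelet/example-mount"); err != nil {
		fmt.Println(err)
	}
}
```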

0.6.0

14 Apr 13:28
Pre-release

Changes since v0.5: