CSI driver LVM utilizes the local storage of Kubernetes nodes to provide persistent storage for pods.
It automatically creates hostPath-based persistent volumes on the nodes.
Underneath, it creates an LVM logical volume on the local disks. A comma-separated list of grok patterns specifying which disks to use must be provided.
This CSI driver is derived from csi-driver-host-path and csi-lvm.
For the special case of block volumes, filesystem expansion has to be performed by the app using the block device.
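To make that concrete, a raw block volume is requested with `volumeMode: Block` and handed to the container via `volumeDevices` rather than a filesystem mount. This is a minimal sketch (names, image, and size are placeholders; the repo ships ready-made variants under examples/):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: block-pvc               # placeholder name
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Block             # request a raw block device instead of a filesystem
  storageClassName: csi-driver-lvm-linear
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: block-consumer          # placeholder name
spec:
  containers:
    - name: app
      image: busybox
      command: ["sleep", "infinity"]
      volumeDevices:
        - name: data
          devicePath: /dev/xvda # the app sees the LV as this raw device
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: block-pvc
```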
The persistent volumes created by this CSI driver are strictly node-affine to the node on which the pod was scheduled. This is intentional and prevents pods from starting without the LV data, which resides only on the specific node in the Kubernetes cluster.
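Conceptually, each persistent volume the driver creates carries a node affinity pinning it to one node, roughly like the following sketch (the driver name and topology key are assumptions and may differ in the actual objects; `worker-1` is a placeholder node name):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pvc-0a1b2c3d                    # placeholder; real PVs are named by the provisioner
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  csi:
    driver: lvm.csi.metal-stack.io      # assumption: the actual driver name may differ
    volumeHandle: pvc-0a1b2c3d
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname   # assumption: hostname-style topology key
              operator: In
              values:
                - worker-1                  # the node where the LV physically lives
```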
Consequently, if a pod is evicted (potentially due to cluster autoscaling or updates to the worker node), the pod may become stuck. In certain scenarios, it's acceptable for the pod to start on another node, despite the potential for data loss. The csi-driver-lvm-controller can capture these events and automatically delete the PVC without requiring manual intervention by an operator.
To use this functionality, the following is needed:
- This only works on `StatefulSet`s with volumeClaimTemplates and volume references to the `csi-driver-lvm` storage class.
- In addition, the `Pod` or `PersistentVolumeClaim` managed by the `StatefulSet` needs the annotation `metal-stack.io/csi-driver-lvm.is-eviction-allowed: true` (see the sketch below).
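A minimal sketch of such a `StatefulSet` (all names and sizes are placeholders; the annotation is set on the pod template here so every managed pod carries it):

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: example                 # placeholder name
spec:
  serviceName: example
  replicas: 1
  selector:
    matchLabels:
      app: example
  template:
    metadata:
      labels:
        app: example
      annotations:
        # allow the controller to delete the PVC instead of leaving the pod stuck
        metal-stack.io/csi-driver-lvm.is-eviction-allowed: "true"
    spec:
      containers:
        - name: app
          image: busybox
          command: ["sleep", "infinity"]
          volumeMounts:
            - name: data
              mountPath: /data
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        storageClassName: csi-driver-lvm-linear
        resources:
          requests:
            storage: 1Gi
```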
Helm charts for installation are located in a separate repository called helm-charts. If you would like to contribute to the helm chart, please raise an issue or pull request there.
You have to set the `devicePattern` for your hardware to specify which disks should be used to create the volume group.
```bash
helm install --repo https://helm.metal-stack.io mytest csi-driver-lvm --set lvm.devicePattern='/dev/nvme[0-9]n[0-9]'
```
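If you prefer a values file over `--set`, the same setting can be expressed as YAML (a sketch; the key path is inferred from the `--set` flag above):

```yaml
# values.yaml -- sketch; key path mirrors the --set flag above
lvm:
  devicePattern: /dev/nvme[0-9]n[0-9]
```

and passed with `helm install ... -f values.yaml`.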
Now you can use one of the following storage classes:

- `csi-driver-lvm-linear`
- `csi-driver-lvm-mirror`
- `csi-driver-lvm-striped`
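For example, a PVC against the linear class could look like this (a minimal sketch with placeholder name and size; the mirror and striped classes are used the same way, only `storageClassName` changes):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: lvm-pvc                 # placeholder name
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: csi-driver-lvm-linear   # or csi-driver-lvm-mirror / csi-driver-lvm-striped
  resources:
    requests:
      storage: 1Gi
```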
To get the previous, now deprecated `csi-lvm-sc-linear`, ... storage classes, set the helm-chart value `compat03x=true`.
If you want to migrate your existing PVC to / from csi-driver-lvm, you can use korb.
Still to be implemented: `CreateSnapshot()`, `ListSnapshots()`, `DeleteSnapshot()`.
Deploy the example workloads:

```bash
# raw block volume example
kubectl apply -f examples/csi-pvc-raw.yaml
kubectl apply -f examples/csi-pod-raw.yaml

# filesystem volume example
kubectl apply -f examples/csi-pvc.yaml
kubectl apply -f examples/csi-app.yaml
```

Tear the examples down again:

```bash
kubectl delete -f examples/csi-pod-raw.yaml
kubectl delete -f examples/csi-pvc-raw.yaml

kubectl delete -f examples/csi-app.yaml
kubectl delete -f examples/csi-pvc.yaml
```
In order to run the integration tests locally, you need to create two loop devices on your host machine. Make sure these loop device paths are not already in use on your system (the defaults are /dev/loop100 and /dev/loop101).
You can create these loop devices like this:
```bash
for i in 100 101; do fallocate -l 1G loop${i}.img ; sudo losetup /dev/loop${i} loop${i}.img; done
sudo losetup -a

# https://github.com/util-linux/util-linux/issues/3197
# use this for recreation or cleanup
# for i in 100 101; do sudo losetup -d /dev/loop${i}; rm -f loop${i}.img; done
```
You can then run the tests against a kind cluster by running:
```bash
make test
```
To recreate or cleanup the kind cluster:
```bash
make test-cleanup
```