Update local-volume-provisioner (#9463)

- Update and re-work the documentation:
  - Update links
  - Fix formatting (especially for lists)
  - Remove documentation about `useAlphaApi`,
    a flag only for k8s versions < v1.10
  - Attempt to clarify the doc
- Update to version 1.5.0
- Remove PodSecurityPolicy (deprecated in k8s v1.21+)
- Update ClusterRole following upstream
  (cf https://github.com/kubernetes-sigs/sig-storage-local-static-provisioner/pull/292)
- Add nodeSelector to DaemonSet (following upstream)
Olivier Lemasle
2022-11-08 00:28:17 +01:00
committed by GitHub
parent a731e25778
commit 5d1fe64bc8
12 changed files with 85 additions and 175 deletions


# Local Static Storage Provisioner

The [local static storage provisioner](https://github.com/kubernetes-sigs/sig-storage-local-static-provisioner)
is NOT a dynamic storage provisioner as you would
expect from a cloud provider. Instead, it simply creates PersistentVolumes for
all mounts under the `host_dir` of the specified storage class.
These storage classes are specified in the `local_volume_provisioner_storage_classes` nested dictionary.
Example:
```yaml
local_volume_provisioner_storage_classes:
  local-storage:
    host_dir: /mnt/disks
    mount_dir: /mnt/disks
  fast-disks:
    host_dir: /mnt/fast-disks
    mount_dir: /mnt/fast-disks
    block_cleaner_command:
      - "/scripts/shred.sh"
      - "2"
    volume_mode: Filesystem
    fs_type: ext4
```

For each key in `local_volume_provisioner_storage_classes`, a "storage class" with
the same name is created in the entry `storageClassMap` of the ConfigMap `local-volume-provisioner`.
The subkeys of each storage class in `local_volume_provisioner_storage_classes`
are converted to camelCase and added as attributes to the storage class in the
ConfigMap.

The result of the above example is:
```yaml
data:
  storageClassMap: |
    local-storage:
      hostDir: /mnt/disks
      mountDir: /mnt/disks
    fast-disks:
      hostDir: /mnt/fast-disks
      mountDir: /mnt/fast-disks
      blockCleanerCommand:
        - "/scripts/shred.sh"
        - "2"
      volumeMode: Filesystem
      fsType: ext4
```
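
On a provisioned cluster, the generated ConfigMap can be inspected with something like the
following (the `kube-system` namespace is an assumption and may differ in your setup):

```bash
# show the rendered storageClassMap consumed by the provisioner
kubectl -n kube-system get configmap local-volume-provisioner -o yaml
```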

Additionally, a StorageClass object (`storageclasses.storage.k8s.io`) is also
created for each storage class:

```bash
$ kubectl get storageclasses.storage.k8s.io
NAME            PROVISIONER                    RECLAIMPOLICY
fast-disks      kubernetes.io/no-provisioner   Delete
local-storage   kubernetes.io/no-provisioner   Delete
```

The default StorageClass is `local-storage` on `/mnt/disks`;
the rest of this documentation will use that path as an example.
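
Once local mounts exist under the configured `host_dir` (see the examples below), the
provisioner discovers them and creates one PersistentVolume per mount, which can be
listed with:

```bash
# PersistentVolumes created by the static provisioner, one per discovered mount
kubectl get pv
```
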
## Examples to create local storage volumes

1. Using tmpfs

   ```bash
   for vol in vol1 vol2 vol3; do
     mkdir /mnt/disks/$vol
     mount -t tmpfs -o size=5G $vol /mnt/disks/$vol
   done
   ```

   The tmpfs method is not recommended for production because the mounts are not
   persistent and data will be deleted on reboot.
1. Mount physical disks

   ```bash
   mkdir /mnt/disks/ssd1
   mount /dev/vdb1 /mnt/disks/ssd1
   ```

   Physical disks are recommended for production environments because they offer
   complete isolation in terms of I/O and capacity.
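
   A filesystem must already exist on the partition before it can be mounted this way.
   A minimal sketch, assuming `/dev/vdb1` is an empty partition and ext4 is the desired
   filesystem:

   ```bash
   # create an ext4 filesystem on the empty partition before mounting it
   mkfs.ext4 /dev/vdb1
   ```
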
1. Mount unpartitioned physical devices

   ```bash
   for disk in /dev/sdc /dev/sdd /dev/sde; do
     ln -s $disk /mnt/disks
   done
   ```

   This saves the time of pre-creating filesystems. Note that your storage class must have
   `volume_mode` set to `"Filesystem"` and `fs_type` defined. If either is not set, the
   disk will be added as a raw block device.

1. PersistentVolumes with `volumeMode="Block"`

   Just like above, you can create PersistentVolumes with volumeMode `Block`
   by creating a symbolic link under the discovery directory to the block device on
   the node, if you set `volume_mode` to `"Block"`. This will create a volume
   presented to a Pod as a block device, without any filesystem on it.
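
   A minimal sketch of such a symbolic link, assuming `/dev/sdf` is the raw block device
   and `/mnt/disks` is the discovery directory:

   ```bash
   # expose the raw device to the provisioner; no filesystem is created on it
   ln -s /dev/sdf /mnt/disks/
   ```
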
1. File-backed sparsefile method

   ```bash
   truncate /mnt/disks/disk5 --size 2G
   mkfs.ext4 /mnt/disks/disk5
   mkdir /mnt/disks/vol5
   mount /mnt/disks/disk5 /mnt/disks/vol5
   ```

   If you have a development environment and only one disk, this is the best way
   to limit the quota of persistent volumes.
1. Simple directories

   In a development environment, using `mount --bind` also works, but there is no capacity
   management.
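
   A minimal sketch, assuming `/data/vol6` is an existing directory on the host and
   `/mnt/disks` is the discovery directory:

   ```bash
   # bind-mount a plain directory into the discovery directory (no capacity management)
   mkdir -p /mnt/disks/vol6
   mount --bind /data/vol6 /mnt/disks/vol6
   ```
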
## Usage notes
The volume provisioner cannot calculate volume sizes correctly, so you should
delete the daemonset pod on the relevant host after creating volumes. The pod
will be recreated and read the size correctly.
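
For example, something along these lines can be used; the namespace, label selector, and
node name are assumptions and may differ in your deployment:

```bash
# delete the provisioner pod on the node where volumes were just created;
# the DaemonSet recreates it and the new pod reads the volume sizes correctly
kubectl -n kube-system delete pod -l app=local-volume-provisioner \
  --field-selector spec.nodeName=node1
```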

Make sure that any mounts persist via `/etc/fstab` or with systemd mount units (for
Flatcar Container Linux or Fedora CoreOS). Pods with persistent volume claims will not be
able to start if the mounts become unavailable.
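
A minimal `/etc/fstab` sketch for the physical-disk example above; the UUID is a
placeholder and the mount options are only an example:

```bash
# persist the mount across reboots; replace the UUID placeholder with the real one
echo 'UUID=<uuid-of-/dev/vdb1> /mnt/disks/ssd1 ext4 defaults,nofail 0 2' >> /etc/fstab
```
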
## Further reading
Refer to the upstream docs here: <https://github.com/kubernetes-sigs/sig-storage-local-static-provisioner>