modify doc structure and update existing doc-links as preparation for new doc generation script

This commit is contained in:
Payback159
2024-05-15 19:32:51 +02:00
parent 0b464b5239
commit 4dbfd42f1d
82 changed files with 70 additions and 70 deletions

View File

@@ -0,0 +1,73 @@
# CephFS Volume Provisioner for Kubernetes 1.5+
[![Docker Repository on Quay](https://quay.io/repository/external_storage/cephfs-provisioner/status "Docker Repository on Quay")](https://quay.io/repository/external_storage/cephfs-provisioner)
Using Ceph volume client
## Development
Compile the provisioner
``` console
make
```
Make the container image and push to the registry
``` console
make push
```
## Test instructions
- Start a local Kubernetes cluster
See [Kubernetes](https://kubernetes.io/)
- Create a Ceph admin secret
``` bash
ceph auth get client.admin 2>&1 | grep "key = " | awk '{print $3}' | xargs echo -n > /tmp/secret
kubectl create ns cephfs
kubectl create secret generic ceph-secret-admin --from-file=/tmp/secret --namespace=cephfs
```
- Start CephFS provisioner
The following example uses `cephfs-provisioner-1` as the identity for the instance and assumes kubeconfig is at `/root/.kube`. The identity should remain the same if the provisioner restarts. If there are multiple provisioners, each should have a different identity.
``` bash
docker run -ti -v /root/.kube:/kube -v /var/run/kubernetes:/var/run/kubernetes --privileged --net=host cephfs-provisioner /usr/local/bin/cephfs-provisioner -master=http://127.0.0.1:8080 -kubeconfig=/kube/config -id=cephfs-provisioner-1
```
Alternatively, deploy it in Kubernetes; see [deployment](deploy/README.md).
- Create a CephFS Storage Class
Replace the Ceph monitor's IP in [example class](example/class.yaml) with your own and create the storage class:
``` bash
kubectl create -f example/class.yaml
```
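For reference, a minimal sketch of what such a class could look like; the `ceph.com/cephfs` provisioner name, the monitor address, and the secret names below are assumptions to adapt to your cluster:
```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: cephfs
provisioner: ceph.com/cephfs
parameters:
  monitors: 172.24.1.1:6789          # replace with your Ceph monitor address(es)
  adminId: admin                     # Ceph client ID used to provision volumes
  adminSecretName: ceph-secret-admin # the secret created in the step above
  adminSecretNamespace: cephfs
```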
- Create a claim
``` bash
kubectl create -f example/claim.yaml
```
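As a sketch, `example/claim.yaml` would reference the class above roughly like this (names and sizes are illustrative):
```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: claim1
spec:
  storageClassName: cephfs   # must match the storage class created above
  accessModes:
    - ReadWriteMany          # CephFS supports shared read-write access
  resources:
    requests:
      storage: 1Gi
```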
- Create a Pod using the claim
``` bash
kubectl create -f example/test-pod.yaml
```
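The test Pod then mounts the claim as a volume; a rough sketch (image and paths are illustrative):
```yaml
kind: Pod
apiVersion: v1
metadata:
  name: test-pod
spec:
  containers:
    - name: test
      image: busybox
      command: ["sleep", "3600"]
      volumeMounts:
        - name: data
          mountPath: /mnt/cephfs   # the CephFS volume appears here
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: claim1          # the claim created above
```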
## Known limitations
- Kernel CephFS doesn't work with SELinux; setting an SELinux label in a Pod's securityContext will not work.
- Kernel CephFS doesn't support quotas or capacity limits; the capacity requested by a PVC is neither enforced nor validated.
- Currently, each Ceph user created by the provisioner has the `allow r` MDS cap to permit mounting CephFS.
## Acknowledgement
Inspired by the CephFS Manila provisioner and conversations with John Spray.

View File

@@ -0,0 +1,131 @@
# Local Static Storage Provisioner
The [local static storage provisioner](https://github.com/kubernetes-sigs/sig-storage-local-static-provisioner)
is NOT a dynamic storage provisioner as you would
expect from a cloud provider. Instead, it simply creates PersistentVolumes for
all mounts under the `host_dir` of the specified storage class.
These storage classes are specified in the `local_volume_provisioner_storage_classes` nested dictionary.
Example:
```yaml
local_volume_provisioner_storage_classes:
local-storage:
host_dir: /mnt/disks
mount_dir: /mnt/disks
fast-disks:
host_dir: /mnt/fast-disks
mount_dir: /mnt/fast-disks
block_cleaner_command:
- "/scripts/shred.sh"
- "2"
volume_mode: Filesystem
fs_type: ext4
```
For each key in `local_volume_provisioner_storage_classes` a "storage class" with
the same name is created in the entry `storageClassMap` of the ConfigMap `local-volume-provisioner`.
The subkeys of each storage class in `local_volume_provisioner_storage_classes`
are converted to camelCase and added as attributes to the storage class in the
ConfigMap.
The result of the above example is:
```yaml
data:
storageClassMap: |
local-storage:
hostDir: /mnt/disks
mountDir: /mnt/disks
fast-disks:
hostDir: /mnt/fast-disks
mountDir: /mnt/fast-disks
blockCleanerCommand:
- "/scripts/shred.sh"
- "2"
volumeMode: Filesystem
fsType: ext4
```
Additionally, a StorageClass object (`storageclasses.storage.k8s.io`) is
created for each storage class:
```bash
$ kubectl get storageclasses.storage.k8s.io
NAME PROVISIONER RECLAIMPOLICY
fast-disks kubernetes.io/no-provisioner Delete
local-storage kubernetes.io/no-provisioner Delete
```
The default StorageClass is `local-storage` on `/mnt/disks`;
the rest of this documentation will use that path as an example.
## Examples to create local storage volumes
1. Using tmpfs
```bash
for vol in vol1 vol2 vol3; do
mkdir /mnt/disks/$vol
mount -t tmpfs -o size=5G $vol /mnt/disks/$vol
done
```
The tmpfs method is not recommended for production because the mounts are not
persistent and data will be deleted on reboot.
1. Mount physical disks
```bash
mkdir /mnt/disks/ssd1
mount /dev/vdb1 /mnt/disks/ssd1
```
Physical disks are recommended for production environments because they offer
complete isolation in terms of I/O and capacity.
1. Mount unpartitioned physical devices
```bash
for disk in /dev/sdc /dev/sdd /dev/sde; do
ln -s $disk /mnt/disks
done
```
This saves the time of pre-creating filesystems. Note that your storage class must have
`volume_mode` set to `"Filesystem"` and `fs_type` defined. If either is not set, the
disk will be added as a raw block device.
1. PersistentVolumes with `volumeMode="Block"`
Just like above, you can create PersistentVolumes with volumeMode `Block` by
setting `volume_mode` to `"Block"` and creating a symbolic link under the
discovery directory to the block device on the node. The volume is presented
to a Pod as a raw block device, without any filesystem on it; see the sketch below.
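For instance, a storage class entry for raw block devices could look like this sketch (the class name and directories are placeholders):
```yaml
local_volume_provisioner_storage_classes:
  block-disks:
    host_dir: /mnt/block-disks   # discovery directory containing the symlinks
    mount_dir: /mnt/block-disks
    volume_mode: Block           # expose the devices as raw block volumes
```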
1. File-backed sparse-file method
```bash
truncate /mnt/disks/disk5 --size 2G
mkfs.ext4 /mnt/disks/disk5
mkdir /mnt/disks/vol5
mount /mnt/disks/disk5 /mnt/disks/vol5
```
If you have a development environment with only one disk, this is the best way
to limit the capacity of each persistent volume.
1. Simple directories
In a development environment, `mount --bind` also works, but there is no capacity
management.
## Usage notes
Make sure any mounts persist across reboots, via `/etc/fstab` or systemd mount units (on
Flatcar Container Linux or Fedora CoreOS). Pods with persistent volume claims will not be
able to start if the mounts become unavailable.
## Further reading
Refer to the upstream docs here: <https://github.com/kubernetes-sigs/sig-storage-local-static-provisioner>

View File

@@ -0,0 +1,79 @@
# RBD Volume Provisioner for Kubernetes 1.5+
`rbd-provisioner` is an out-of-tree dynamic provisioner for Kubernetes 1.5+.
You can use it to quickly and easily deploy Ceph RBD storage that works almost
anywhere.
It works just like the in-tree dynamic provisioner. For more information on how
dynamic provisioning works, see [the docs](https://kubernetes.io/docs/concepts/storage/persistent-volumes/)
or [this blog post](http://blog.kubernetes.io/2016/10/dynamic-provisioning-and-storage-in-kubernetes.html).
## Development
Compile the provisioner
```console
make
```
Make the container image and push to the registry
```console
make push
```
## Test instructions
* Start a local Kubernetes cluster
See [Kubernetes](https://kubernetes.io/).
* Create a Ceph admin secret
```bash
ceph auth get client.admin 2>&1 | grep "key = " | awk '{print $3}' | xargs echo -n > /tmp/secret
kubectl create secret generic ceph-admin-secret --from-file=/tmp/secret --namespace=kube-system
```
* Create a Ceph pool and a user secret
```bash
ceph osd pool create kube 8 8
ceph auth add client.kube mon 'allow r' osd 'allow rwx pool=kube'
ceph auth get-key client.kube > /tmp/secret
kubectl create secret generic ceph-secret --from-file=/tmp/secret --namespace=kube-system
```
* Start RBD provisioner
The following example uses `rbd-provisioner-1` as the identity for the instance and assumes kubeconfig is at `/root/.kube`. The identity should remain the same if the provisioner restarts. If there are multiple provisioners, each should have a different identity.
```bash
docker run -ti -v /root/.kube:/kube -v /var/run/kubernetes:/var/run/kubernetes --privileged --net=host quay.io/external_storage/rbd-provisioner /usr/local/bin/rbd-provisioner -master=http://127.0.0.1:8080 -kubeconfig=/kube/config -id=rbd-provisioner-1
```
Alternatively, deploy it in Kubernetes; see [deployment](deploy/README.md).
* Create an RBD Storage Class
The following example uses `examples/class.yaml`. Replace the Ceph monitor's IP in [examples/class.yaml](examples/class.yaml) with your own and create the storage class:
```bash
kubectl create -f examples/class.yaml
```
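For reference, a sketch of such a class; the `ceph.com/rbd` provisioner name and the parameter values are assumptions matching the pool and secrets created in the steps above:
```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: rbd
provisioner: ceph.com/rbd
parameters:
  monitors: 172.24.1.1:6789          # replace with your Ceph monitor address(es)
  pool: kube                         # the pool created above
  adminId: admin
  adminSecretName: ceph-admin-secret
  adminSecretNamespace: kube-system
  userId: kube                       # the Ceph user created above
  userSecretName: ceph-secret
```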
* Create a claim
```bash
kubectl create -f examples/claim.yaml
```
* Create a Pod using the claim
```bash
kubectl create -f examples/test-pod.yaml
```
## Acknowledgements
* This provisioner is extracted from [Kubernetes core](https://github.com/kubernetes/kubernetes) with some modifications for this project.

View File

@@ -0,0 +1,49 @@
# Scheduler plugins for Kubernetes
[scheduler-plugins](https://github.com/kubernetes-sigs/scheduler-plugins) is a set of out-of-tree scheduler plugins based on the [scheduler framework](https://kubernetes.io/docs/concepts/scheduling-eviction/scheduling-framework/).
The bundled kube-scheduler binary includes the following plugins:
- [CapacityScheduling](https://github.com/kubernetes-sigs/scheduler-plugins/tree/master/pkg/capacityscheduling) [Beta]
- [Coscheduling](https://github.com/kubernetes-sigs/scheduler-plugins/tree/master/pkg/coscheduling) [Beta]
- [NodeResources](https://github.com/kubernetes-sigs/scheduler-plugins/tree/master/pkg/noderesources) [Beta]
- [NodeResourceTopology](https://github.com/kubernetes-sigs/scheduler-plugins/blob/master/pkg/noderesourcetopology/README.md) [Beta]
- [PreemptionToleration](https://github.com/kubernetes-sigs/scheduler-plugins/blob/master/pkg/preemptiontoleration/README.md) [Alpha]
- [Trimaran](https://github.com/kubernetes-sigs/scheduler-plugins/blob/master/pkg/trimaran/README.md) [Alpha]
- [NetworkAware](https://github.com/kubernetes-sigs/scheduler-plugins/blob/master/pkg/networkaware/README.md) [Sample]
- [CrossNodePreemption](https://github.com/kubernetes-sigs/scheduler-plugins/blob/master/pkg/crossnodepreemption/README.md) [Sample]
- [PodState](https://github.com/kubernetes-sigs/scheduler-plugins/blob/master/pkg/podstate/README.md) [Sample]
- [QualityOfService](https://github.com/kubernetes-sigs/scheduler-plugins/blob/master/pkg/qos/README.md) [Sample]
Currently, we use the [helm chart](https://github.com/kubernetes-sigs/scheduler-plugins/blob/master/manifests/install/charts/as-a-second-scheduler/README.md#installing-the-chart) to install the scheduler plugins, so a second scheduler is created and runs alongside the default one. **Note that running multiple schedulers will inevitably lead to resource conflicts when the cluster is short of resources.**
## Compatibility Matrix
The scheduler plugins have strict requirements on the Kubernetes version; see the upstream [Compatibility Matrix](https://github.com/kubernetes-sigs/scheduler-plugins/tree/master?tab=readme-ov-file#compatibility-matrix) before choosing a version.
| Scheduler Plugins | Compiled With K8s Version |
| ----------------- | ------------------------- |
| v0.28.9 | v1.28.9 |
| v0.27.8 | v1.27.8 |
## Turning it on
The `scheduler_plugins_enabled` option is used to enable the installation of scheduler plugins.
You can enable or disable individual plugins by setting the `scheduler_plugins_enabled_plugins` or `scheduler_plugins_disabled_plugins` option; the names must come from the list above, as in the sketch below.
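A sketch of the relevant variables, assuming `scheduler_plugins_enabled` is a boolean toggle (the plugin selection is illustrative):
```yaml
scheduler_plugins_enabled: true
scheduler_plugins_enabled_plugins:
  - Coscheduling    # enable only the plugins you need
scheduler_plugins_disabled_plugins:
  - PodState
```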
In addition, to use a custom plugin configuration, set the `scheduler_plugins_plugin_config` option.
For example, to customize the Coscheduling permit waiting timeout to 10 seconds:
```yaml
scheduler_plugins_plugin_config:
- name: Coscheduling
args:
permitWaitingTimeSeconds: 10 # default is 60
```
## Leveraging a plugin
Once a plugin is installed, you can apply its CRs to the cluster. For example, with `Coscheduling`, you can apply the `PodGroup` CR and test the deployment following the [example](https://github.com/kubernetes-sigs/scheduler-plugins/blob/master/doc/install.md#test-coscheduling).
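As an illustration, a minimal `PodGroup` CR for Coscheduling might look like the sketch below; member Pods reference it via the `scheduling.x-k8s.io/pod-group` label and must set the second scheduler's `schedulerName` (names and values here are illustrative):
```yaml
apiVersion: scheduling.x-k8s.io/v1alpha1
kind: PodGroup
metadata:
  name: pg1
spec:
  scheduleTimeoutSeconds: 10
  minMember: 3   # schedule the group only when 3 Pods can run together
```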