mirror of https://github.com/kubernetes-sigs/kubespray.git

modify doc structure and update existing doc-links as preparation for new doc generation script

docs/CSI/aws-ebs-csi.md (new file, 87 lines)

# AWS EBS CSI Driver

AWS EBS CSI driver allows you to provision EBS volumes for pods running on EC2 instances. The old in-tree AWS cloud provider is deprecated and will be removed in a future version of Kubernetes, so transitioning to the CSI driver is advised.

To enable the AWS EBS CSI driver, uncomment the `aws_ebs_csi_enabled` option in `group_vars/all/aws.yml` and set it to `true`.

To set the number of replicas for the AWS EBS CSI controller, you can change the `aws_ebs_csi_controller_replicas` option in `group_vars/all/aws.yml`.

Make sure to attach an IAM role to the EC2 instances hosting Kubernetes that allows them to perform the actions necessary to request and attach EBS volumes: [AWS CSI Policy](https://github.com/kubernetes-sigs/aws-ebs-csi-driver/blob/master/docs/example-iam-policy.json)

If you want to deploy the AWS EBS storage class used with the CSI Driver, you should set `persistent_volumes_enabled` in `group_vars/k8s_cluster/k8s_cluster.yml` to `true`.
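
For reference, these options together might look like this in your inventory (a minimal sketch; the file paths follow the layout mentioned above and the replica count is only an illustrative value):

```yml
# group_vars/all/aws.yml
aws_ebs_csi_enabled: true
aws_ebs_csi_controller_replicas: 2  # illustrative value

# group_vars/k8s_cluster/k8s_cluster.yml
persistent_volumes_enabled: true
```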

You can now run the kubespray playbook (cluster.yml) to deploy Kubernetes over AWS EC2 with the EBS CSI Driver enabled.
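
For example, assuming an inventory at `inventory/mycluster/hosts.yaml` (adjust the path and flags to your setup):

```ShellSession
ansible-playbook -i inventory/mycluster/hosts.yaml --become --become-user=root cluster.yml
```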

## Usage example

To check if AWS EBS CSI Driver is deployed properly, check that the ebs-csi pods are running:

```ShellSession
$ kubectl -n kube-system get pods | grep ebs
ebs-csi-controller-85d86bccc5-8gtq5   4/4     Running   4          40s
ebs-csi-node-n4b99                    3/3     Running   3          40s
```

Check the associated storage class (if you enabled persistent_volumes):

```ShellSession
$ kubectl get storageclass
NAME     PROVISIONER       AGE
ebs-sc   ebs.csi.aws.com   45s
```

You can run a PVC and an example Pod using this file `ebs-pod.yml`:

```yml
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ebs-claim
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: ebs-sc
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
    - name: app
      image: centos
      command: ["/bin/sh"]
      args: ["-c", "while true; do echo $(date -u) >> /data/out.txt; sleep 5; done"]
      volumeMounts:
        - name: persistent-storage
          mountPath: /data
  volumes:
    - name: persistent-storage
      persistentVolumeClaim:
        claimName: ebs-claim
```

Apply this conf to your cluster: ```kubectl apply -f ebs-pod.yml```

You should see the PVC provisioned and bound:

```ShellSession
$ kubectl get pvc
NAME        STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
ebs-claim   Bound    pvc-0034cb9e-1ddd-4b3f-bb9e-0b5edbf5194c   1Gi        RWO            ebs-sc         50s
```

And the volume mounted to the example Pod (wait until the Pod is Running):

```ShellSession
$ kubectl exec -it app -- df -h | grep data
/dev/nvme1n1   1014M   34M   981M   4%   /data
```

## More info

For further information about the AWS EBS CSI Driver, you can refer to this page: [AWS EBS Driver](https://github.com/kubernetes-sigs/aws-ebs-csi-driver/).

docs/CSI/azure-csi.md (new file, 128 lines)

# Azure Disk CSI Driver

The Azure Disk CSI driver allows you to provision volumes for pods with a Kubernetes deployment over Azure Cloud. The CSI driver replaces the volume provisioning done by the in-tree Azure cloud provider, which is deprecated.

This documentation is an updated version of the in-tree Azure cloud provider documentation (azure.md).

To deploy the Azure Disk CSI driver, uncomment the `azure_csi_enabled` option in `group_vars/all/azure.yml` and set it to `true`.

## Azure Disk CSI Storage Class

If you want to deploy the Azure Disk storage class to provision volumes dynamically, you should set `persistent_volumes_enabled` in `group_vars/k8s_cluster/k8s_cluster.yml` to `true`.
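
As a minimal sketch, the corresponding inventory settings could look like this:

```yml
# group_vars/all/azure.yml
azure_csi_enabled: true

# group_vars/k8s_cluster/k8s_cluster.yml
persistent_volumes_enabled: true
```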

## Parameters

Before creating the instances you must first set the `azure_csi_` variables in the `group_vars/all.yml` file.

All values can be retrieved using the Azure CLI tool, which can be downloaded here: <https://docs.microsoft.com/en-us/cli/azure/install-azure-cli?view=azure-cli-latest>

After installation you have to run `az login` to get access to your account.

### azure\_csi\_tenant\_id + azure\_csi\_subscription\_id

Run `az account show` to retrieve your subscription id and tenant id:
`azure_csi_tenant_id` -> tenantId field
`azure_csi_subscription_id` -> id field
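
For example, both values can be read in one go with a JMESPath `--query` (shown here as an optional shortcut; the IDs in the output are placeholders):

```ShellSession
$ az account show --query '{tenantId: tenantId, subscriptionId: id}'
{
  "tenantId": "00000000-0000-0000-0000-000000000000",
  "subscriptionId": "00000000-0000-0000-0000-000000000000"
}
```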

### azure\_csi\_location

The region your instances are located in. It can be something like `francecentral` or `norwayeast`. A full list of region names can be retrieved via `az account list-locations`.

### azure\_csi\_resource\_group

The name of the resource group your instances are in. A list of your resource groups can be retrieved via `az group list`.

Alternatively, you can run `az vm list | grep resourceGroup` and get the resource group corresponding to the VMs of your cluster.

The resource group name is not case-sensitive.

### azure\_csi\_vnet\_name

The name of the virtual network your instances are in. It can be retrieved via `az network vnet list`.

### azure\_csi\_vnet\_resource\_group

The name of the resource group your vnet is in. It can be retrieved via `az network vnet list | grep resourceGroup`, which shows the resource group corresponding to the vnet of your cluster.

### azure\_csi\_subnet\_name

The name of the subnet your instances are in. It can be retrieved via `az network vnet subnet list --resource-group RESOURCE_GROUP --vnet-name VNET_NAME`.

### azure\_csi\_security\_group\_name

The name of the network security group your instances are in. It can be retrieved via `az network nsg list`.

### azure\_csi\_aad\_client\_id + azure\_csi\_aad\_client\_secret

These will have to be generated first:

- Create an Azure AD Application with:

```ShellSession
az ad app create --display-name kubespray --identifier-uris http://kubespray --homepage http://kubespray.com --password CLIENT_SECRET
```

The display name, identifier-uri, homepage and password can be chosen freely.

Note the AppId in the output.

- Create a Service Principal for the application with:

```ShellSession
az ad sp create --id AppId
```

This is the AppId from the output of the previous command.

- Create the role assignment with:

```ShellSession
az role assignment create --role "Owner" --assignee http://kubespray --subscription SUBSCRIPTION_ID
```

`azure_csi_aad_client_id` must be set to the AppId, and `azure_csi_aad_client_secret` is your chosen secret.

### azure\_csi\_use\_instance\_metadata

Use instance metadata service where possible. Boolean value.
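
Putting the parameters together, the resulting variables might look like this (all IDs and names below are placeholders):

```yml
azure_csi_tenant_id: "00000000-0000-0000-0000-000000000000"
azure_csi_subscription_id: "00000000-0000-0000-0000-000000000000"
azure_csi_location: "francecentral"
azure_csi_resource_group: "my-resource-group"
azure_csi_vnet_name: "my-vnet"
azure_csi_vnet_resource_group: "my-vnet-resource-group"
azure_csi_subnet_name: "my-subnet"
azure_csi_security_group_name: "my-nsg"
azure_csi_aad_client_id: "00000000-0000-0000-0000-000000000000"  # AppId of the AD application
azure_csi_aad_client_secret: "CLIENT_SECRET"                      # the password chosen above
azure_csi_use_instance_metadata: true
```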

## Test the Azure Disk CSI driver

To test the dynamic provisioning using Azure CSI driver, make sure to have the storage class deployed (through persistent volumes), and apply the following manifest:

```yml
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-azuredisk
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: disk.csi.azure.com
---
kind: Pod
apiVersion: v1
metadata:
  name: nginx-azuredisk
spec:
  nodeSelector:
    kubernetes.io/os: linux
  containers:
    - image: nginx
      name: nginx-azuredisk
      command:
        - "/bin/sh"
        - "-c"
        - while true; do echo $(date) >> /mnt/azuredisk/outfile; sleep 1; done
      volumeMounts:
        - name: azuredisk
          mountPath: "/mnt/azuredisk"
  volumes:
    - name: azuredisk
      persistentVolumeClaim:
        claimName: pvc-azuredisk
```

docs/CSI/cinder-csi.md (new file, 102 lines)

# Cinder CSI Driver

The Cinder CSI driver allows you to provision volumes over an OpenStack deployment. The historic Kubernetes in-tree cloud provider is deprecated and will be removed in future versions.

To enable the Cinder CSI driver, uncomment the `cinder_csi_enabled` option in `group_vars/all/openstack.yml` and set it to `true`.

To set the number of replicas for the Cinder CSI controller, you can change the `cinder_csi_controller_replicas` option in `group_vars/all/openstack.yml`.

You need to source the OpenStack credentials you use to deploy your machines that will host Kubernetes: `source path/to/your/openstack-rc` or `. path/to/your/openstack-rc`.

Make sure the hostnames in your `inventory` file are identical to your instance names in OpenStack. Otherwise [cinder](https://docs.openstack.org/cinder/latest/) won't work as expected.

If you want to deploy the cinder provisioner used with the Cinder CSI Driver, you should set `persistent_volumes_enabled` in `group_vars/k8s_cluster/k8s_cluster.yml` to `true`.
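
For reference, the corresponding inventory settings could look like this (the replica count is only an illustrative value):

```yml
# group_vars/all/openstack.yml
cinder_csi_enabled: true
cinder_csi_controller_replicas: 1  # illustrative value

# group_vars/k8s_cluster/k8s_cluster.yml
persistent_volumes_enabled: true
```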

You can now run the kubespray playbook (cluster.yml) to deploy Kubernetes over OpenStack with Cinder CSI Driver enabled.

## Usage example

To check if the Cinder CSI Driver works properly, first check that the cinder-csi pods are running:

```ShellSession
$ kubectl -n kube-system get pods | grep cinder
csi-cinder-controllerplugin-7f8bf99785-cpb5v   5/5     Running   0          100m
csi-cinder-nodeplugin-rm5x2                    2/2     Running   0          100m
```

Check the associated storage class (if you enabled persistent_volumes):

```ShellSession
$ kubectl get storageclass
NAME         PROVISIONER                AGE
cinder-csi   cinder.csi.openstack.org   100m
```

You can run a PVC and an Nginx Pod using this file `nginx.yaml`:

```yml
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: csi-pvc-cinderplugin
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: cinder-csi

---
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
    - image: nginx
      imagePullPolicy: IfNotPresent
      name: nginx
      ports:
        - containerPort: 80
          protocol: TCP
      volumeMounts:
        - mountPath: /var/lib/www/html
          name: csi-data-cinderplugin
  volumes:
    - name: csi-data-cinderplugin
      persistentVolumeClaim:
        claimName: csi-pvc-cinderplugin
        readOnly: false
```

Apply this conf to your cluster: ```kubectl apply -f nginx.yaml```

You should see the PVC provisioned and bound:

```ShellSession
$ kubectl get pvc
NAME                   STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
csi-pvc-cinderplugin   Bound    pvc-f21ad0a1-5b7b-405e-a462-48da5cb76beb   1Gi        RWO            cinder-csi     8s
```

And the volume mounted to the Nginx Pod (wait until the Pod is Running):

```ShellSession
kubectl exec -it nginx -- df -h | grep /var/lib/www/html
/dev/vdb   976M   2.6M   958M   1%   /var/lib/www/html
```

## Compatibility with in-tree cloud provider

It is not necessary to enable OpenStack as a cloud provider for the Cinder CSI Driver to work.
However, you can run both the in-tree OpenStack cloud provider and the Cinder CSI Driver at the same time; the storage class provisioners associated with each of them are named differently.

## Cinder v2 support

For the moment, only Cinder v3 is supported by the CSI Driver.

## More info

For further information about the Cinder CSI Driver, you can refer to this page: [Cloud Provider OpenStack](https://github.com/kubernetes/cloud-provider-openstack/blob/master/docs/cinder-csi-plugin/using-cinder-csi-plugin.md).

docs/CSI/gcp-pd-csi.md (new file, 77 lines)

# GCP Persistent Disk CSI Driver

The GCP Persistent Disk CSI driver allows you to provision volumes for pods with a Kubernetes deployment over Google Cloud Platform. The CSI driver replaces the volume provisioning done by the in-tree GCE cloud provider, which is deprecated.

To deploy the GCP Persistent Disk CSI driver, uncomment the `gcp_pd_csi_enabled` option in `group_vars/all/gcp.yml` and set it to `true`.

## GCP Persistent Disk Storage Class

If you want to deploy the GCP Persistent Disk storage class to provision volumes dynamically, you should set `persistent_volumes_enabled` in `group_vars/k8s_cluster/k8s_cluster.yml` to `true`.

## GCP credentials

In order for the CSI driver to provision disks, you need to create a service account on GCP for it with the appropriate permissions.

Follow these steps to configure it:

```ShellSession
# This will open a web page for you to authenticate
gcloud auth login
export PROJECT=nameofmyproject
gcloud config set project $PROJECT

git clone https://github.com/kubernetes-sigs/gcp-compute-persistent-disk-csi-driver $GOPATH/src/sigs.k8s.io/gcp-compute-persistent-disk-csi-driver

export GCE_PD_SA_NAME=my-gce-pd-csi-sa
export GCE_PD_SA_DIR=/my/safe/credentials/directory

./deploy/setup-project.sh
```

The above will create a file named `cloud-sa.json` in the specified `GCE_PD_SA_DIR`. This file contains the service account with the appropriate credentials for the CSI driver to perform actions on GCP to request disks for pods.

You need to provide this file's path through the variable `gcp_pd_csi_sa_cred_file` in `inventory/mycluster/group_vars/all/gcp.yml`.
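
A minimal sketch of the related settings (the credentials path is a placeholder matching the `GCE_PD_SA_DIR` used above):

```yml
# inventory/mycluster/group_vars/all/gcp.yml
gcp_pd_csi_enabled: true
gcp_pd_csi_sa_cred_file: /my/safe/credentials/directory/cloud-sa.json
```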

You can now deploy Kubernetes with Kubespray over GCP.

## GCP PD CSI Driver test

To test the dynamic provisioning using GCP PD CSI driver, make sure to have the storage class deployed (through persistent volumes), and apply the following manifest:

```yml
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: podpvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: csi-gce-pd
  resources:
    requests:
      storage: 1Gi

---
apiVersion: v1
kind: Pod
metadata:
  name: web-server
spec:
  containers:
    - name: web-server
      image: nginx
      volumeMounts:
        - mountPath: /var/lib/www/html
          name: mypvc
  volumes:
    - name: mypvc
      persistentVolumeClaim:
        claimName: podpvc
        readOnly: false
```

## GCP PD documentation

You can find the official GCP Persistent Disk CSI driver installation documentation here: [GCP PD CSI Driver](https://github.com/kubernetes-sigs/gcp-compute-persistent-disk-csi-driver/blob/master/docs/kubernetes/user-guides/driver-install.md)

docs/CSI/vsphere-csi.md (new file, 102 lines)

# vSphere CSI Driver

The vSphere CSI driver allows you to provision volumes over a vSphere deployment. The historic Kubernetes in-tree cloud provider is deprecated and will be removed in future versions.

## Prerequisites

The vSphere user for the CSI driver requires a set of privileges to perform Cloud Native Storage operations. Follow the [official guide](https://vsphere-csi-driver.sigs.k8s.io/driver-deployment/prerequisites.html#roles_and_privileges) to configure those.

## Kubespray configuration

To enable the vSphere CSI driver, uncomment the `vsphere_csi_enabled` option in `group_vars/all/vsphere.yml` and set it to `true`.

To set the number of replicas for the vSphere CSI controller, you can change the `vsphere_csi_controller_replicas` option in `group_vars/all/vsphere.yml`.

You need to source the vSphere credentials you use to deploy your machines that will host Kubernetes.

| Variable                                         | Required | Type    | Choices         | Default                 | Comment                                                                                                                       |
|--------------------------------------------------|----------|---------|-----------------|-------------------------|-------------------------------------------------------------------------------------------------------------------------------|
| external_vsphere_vcenter_ip                      | TRUE     | string  |                 |                         | IP/URL of the vCenter                                                                                                         |
| external_vsphere_vcenter_port                    | TRUE     | string  |                 | "443"                   | Port of the vCenter API                                                                                                       |
| external_vsphere_insecure                        | TRUE     | string  | "true", "false" | "true"                  | Set to "true" if the host above uses a self-signed cert                                                                       |
| external_vsphere_user                            | TRUE     | string  |                 |                         | User name for vCenter with required privileges (can also be specified with the `VSPHERE_USER` environment variable)          |
| external_vsphere_password                        | TRUE     | string  |                 |                         | Password for vCenter (can also be specified with the `VSPHERE_PASSWORD` environment variable)                                 |
| external_vsphere_datacenter                      | TRUE     | string  |                 |                         | Datacenter name to use                                                                                                        |
| external_vsphere_kubernetes_cluster_id           | TRUE     | string  |                 | "kubernetes-cluster-id" | Kubernetes cluster ID to use                                                                                                  |
| external_vsphere_version                         | TRUE     | string  |                 | "6.7u3"                 | VMware vSphere version where all VMs are located                                                                              |
| external_vsphere_cloud_controller_image_tag      | TRUE     | string  |                 | "latest"                | Cloud controller image tag to use                                                                                             |
| vsphere_syncer_image_tag                         | TRUE     | string  |                 | "v2.2.1"                | Syncer image tag to use                                                                                                       |
| vsphere_csi_attacher_image_tag                   | TRUE     | string  |                 | "v3.1.0"                | CSI attacher image tag to use                                                                                                 |
| vsphere_csi_controller                           | TRUE     | string  |                 | "v2.2.1"                | CSI controller image tag to use                                                                                               |
| vsphere_csi_controller_replicas                  | TRUE     | integer |                 | 1                       | Number of pods Kubernetes should deploy for the CSI controller                                                                |
| vsphere_csi_liveness_probe_image_tag             | TRUE     | string  |                 | "v2.2.0"                | CSI liveness probe image tag to use                                                                                           |
| vsphere_csi_provisioner_image_tag                | TRUE     | string  |                 | "v2.1.0"                | CSI provisioner image tag to use                                                                                              |
| vsphere_csi_node_driver_registrar_image_tag      | TRUE     | string  |                 | "v1.1.0"                | CSI node driver registrar image tag to use                                                                                    |
| vsphere_csi_driver_image_tag                     | TRUE     | string  |                 | "v1.0.2"                | CSI driver image tag to use                                                                                                   |
| vsphere_csi_resizer_tag                          | TRUE     | string  |                 | "v1.1.0"                | CSI resizer image tag to use                                                                                                  |
| vsphere_csi_aggressive_node_drain                | FALSE    | boolean |                 | false                   | Enable aggressive node drain strategy                                                                                         |
| vsphere_csi_aggressive_node_unreachable_timeout  | FALSE    | int     |                 | 300                     | Timeout until a node is drained when it is in an unreachable state                                                            |
| vsphere_csi_aggressive_node_not_ready_timeout    | FALSE    | int     |                 | 300                     | Timeout until a node is drained when it is in a not-ready state                                                               |
| vsphere_csi_namespace                            | TRUE     | string  |                 | "kube-system"           | vSphere CSI namespace to use; kube-system for backward compatibility, should be changed to vmware-system-csi in the long run |
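
For illustration, a minimal set of these variables in the inventory might look like the following (all values are placeholders; the user and password can also be supplied via the `VSPHERE_USER` and `VSPHERE_PASSWORD` environment variables, as noted above):

```yml
# group_vars/all/vsphere.yml
external_vsphere_vcenter_ip: "vcenter.example.local"
external_vsphere_vcenter_port: "443"
external_vsphere_insecure: "true"
external_vsphere_user: "administrator@vsphere.local"
external_vsphere_password: "changeme"
external_vsphere_datacenter: "DATACENTER"
external_vsphere_kubernetes_cluster_id: "kubernetes-cluster-id"
vsphere_csi_enabled: true
vsphere_csi_controller_replicas: 1
```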

## Usage example

To test the dynamic provisioning using vSphere CSI driver, make sure to create a [storage policy](https://github.com/kubernetes/cloud-provider-vsphere/blob/master/docs/book/tutorials/kubernetes-on-vsphere-with-kubeadm.md#create-a-storage-policy) and [storage class](https://github.com/kubernetes/cloud-provider-vsphere/blob/master/docs/book/tutorials/kubernetes-on-vsphere-with-kubeadm.md#create-a-storageclass), then apply the following manifest:

```yml
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: csi-pvc-vsphere
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: mongodb-sc

---
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
    - image: nginx
      imagePullPolicy: IfNotPresent
      name: nginx
      ports:
        - containerPort: 80
          protocol: TCP
      volumeMounts:
        - mountPath: /usr/share/nginx/html
          name: csi-data-vsphere
  volumes:
    - name: csi-data-vsphere
      persistentVolumeClaim:
        claimName: csi-pvc-vsphere
        readOnly: false
```

Apply this conf to your cluster: ```kubectl apply -f nginx.yml```

You should see the PVC provisioned and bound:

```ShellSession
$ kubectl get pvc
NAME              STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
csi-pvc-vsphere   Bound    pvc-dc7b1d21-ee41-45e1-98d9-e877cc1533ac   1Gi        RWO            mongodb-sc     10s
```

And the volume mounted to the Nginx Pod (wait until the Pod is Running):

```ShellSession
kubectl exec -it nginx -- df -h | grep /usr/share/nginx/html
/dev/sdb   976M   2.6M   907M   1%   /usr/share/nginx/html
```

## More info

For further information about the vSphere CSI Driver, you can refer to the official [vSphere Cloud Provider documentation](https://cloud-provider-vsphere.sigs.k8s.io/container_storage_interface.html).