Docs improvements (#7660)

* Docs: update sidebar

* Docs: move registry documentation into docs/

* Docs: move rbd_provisioner documentation into docs/

* Docs: move cephfs_provisioner into docs/

* Docs: move local_volume_provisioner documentation into docs/

* Docs: move ambassador.md to docs/ingress_controller/

* Docs: move metallb.md to docs/ingress_controller/

* Docs: move ingress_nginx documentation into docs/

* Docs: move alb_ingress_controller documentation into docs/

* Docs: merge ambassador documentation into docs/ingress_controller/

* Docs: move cert_manager documentation into docs/

* Docs: move bootstrap-os documentation into docs/

* Docs: update file locations in sidebar
Author: Cristian Calin
Date: 2021-06-01 17:30:27 +03:00
Committed by: GitHub
parent 4674b03661
commit 6a2ea94b39
12 changed files with 46 additions and 91 deletions


@@ -14,30 +14,45 @@
* [Calico](docs/calico.md)
* [Flannel](docs/flannel.md)
* [Kube Router](docs/kube-router.md)
* [Kube OVN](docs/kube-ovn.md)
* [Weave](docs/weave.md)
* [Multus](docs/multus.md)
* [OVN4NFV](docs/ovn4nfv.md)
* Ingress
* [Ambassador](docs/ambassador.md)
* [ALB Ingress](docs/ingress_controller/alb_ingress_controller.md)
* [Ambassador](docs/ingress_controller/ambassador.md)
* [MetalLB](docs/ingress_controller/metallb.md)
* [Nginx Ingress](docs/ingress_controller/ingress_nginx.md)
* [Cloud providers](docs/cloud.md)
* [AWS](docs/aws.md)
* [Azure](docs/azure.md)
* [OpenStack](docs/openstack.md)
* [Equinix Metal](docs/equinix-metal.md)
* [vSphere](docs/vsphere.md)
* Operating Systems
* [Operating Systems](docs/bootstrap-os.md)
* [Debian](docs/debian.md)
* [Flatcar Container Linux](docs/flatcar.md)
* [Fedora CoreOS](docs/fcos.md)
* [OpenSUSE](docs/opensuse.md)
* [RedHat Enterprise Linux](docs/rhel.md)
* [CentOS/OracleLinux/AlmaLinux](docs/centos8.md)
* [Amazon Linux 2](docs/amazonlinux.md)
* CRI
* [Containerd](docs/containerd.md)
* [CRI-O](docs/cri-o.md)
* [Kata Containers](docs/kata-containers.md)
* Advanced
* [Proxy](docs/proxy.md)
* [Downloads](docs/downloads.md)
* [Netcheck](docs/netcheck.md)
* [Cert Manager](docs/cert_manager.md)
* [DNS Stack](docs/dns-stack.md)
* [Kubernetes reliability](docs/kubernetes-reliability.md)
* [Local Registry](docs/kubernetes-apps/registry.md)
* External Storage Provisioners
* [RBD Provisioner](docs/kubernetes-apps/rbd_provisioner.md)
* [CEPHFS Provisioner](docs/kubernetes-apps/cephfs_provisioner.md)
* [Local Volume Provisioner](docs/kubernetes-apps/local_volume_provisioner.md)
* Developers
* [Test cases](docs/test_cases.md)
* [Vagrant](docs/vagrant.md)

docs/bootstrap-os.md (new file)

@@ -0,0 +1,59 @@
# bootstrap-os
Bootstrap an Ansible host to be able to run Ansible modules.
This role will:
* configure the package manager (if applicable) to be able to fetch packages
* install Python
* install the necessary packages to use Ansible's package manager modules
* set the hostname of the host to `{{ inventory_hostname }}` when requested
## Requirements
A host running an operating system that is supported by Kubespray.
See [Supported Linux Distributions](https://github.com/kubernetes-sigs/kubespray#supported-linux-distributions) for a current list.
SSH access to the host.
## Role Variables
Variables are listed with their default values, if applicable.
### General variables
* `http_proxy`/`https_proxy`
The role will configure the package manager (if applicable) to download packages via a proxy.
* `override_system_hostname: true`
The role will set the hostname of the machine to the name it has according to Ansible's inventory (the variable `{{ inventory_hostname }}`).
### Per distribution variables
#### Flatcar Container Linux
* `coreos_locksmithd_disable: false`
Whether `locksmithd` (responsible for rolling restarts) should be disabled or be left alone.
#### CentOS/RHEL/AlmaLinux
* `centos_fastestmirror_enabled: false`
Whether the [fastestmirror](https://wiki.centos.org/PackageManagement/Yum/FastestMirror) yum plugin should be enabled.
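As a rough sketch, these variables could be set in your inventory's group_vars; the file location and proxy address below are only illustrative:
```yaml
# e.g. inventory/sample/group_vars/all/all.yml (illustrative location)
http_proxy: "http://proxy.example.com:3128"   # assumption: replace with your proxy, or omit entirely
https_proxy: "http://proxy.example.com:3128"
override_system_hostname: true
coreos_locksmithd_disable: false              # Flatcar Container Linux only
centos_fastestmirror_enabled: false           # CentOS/RHEL/AlmaLinux only
```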
## Dependencies
The `kubespray-defaults` role is expected to be run before this role.
## Example Playbook
Remember to disable fact gathering since Python might not be present on hosts.
```yaml
- hosts: all
  gather_facts: false  # not all hosts might be able to run modules yet
  roles:
    - kubespray-defaults
    - bootstrap-os
```
## License
Apache 2.0

docs/cert_manager.md (new file)

@@ -0,0 +1,179 @@
# Installation Guide
- [Installation Guide](#installation-guide)
- [Kubernetes TLS Root CA Certificate/Key Secret](#kubernetes-tls-root-ca-certificatekey-secret)
- [Securing Ingress Resources](#securing-ingress-resources)
- [Create New TLS Root CA Certificate and Key](#create-new-tls-root-ca-certificate-and-key)
- [Install Cloudflare PKI/TLS `cfssl` Toolkit](#install-cloudflare-pkitls-cfssl-toolkit)
- [Create Root Certificate Authority (CA) Configuration File](#create-root-certificate-authority-ca-configuration-file)
- [Create Certificate Signing Request (CSR) Configuration File](#create-certificate-signing-request-csr-configuration-file)
- [Create TLS Root CA Certificate and Key](#create-tls-root-ca-certificate-and-key)
Cert-Manager is a native Kubernetes certificate management controller. It can help with issuing certificates from a variety of sources, such as Let's Encrypt, HashiCorp Vault, Venafi, a simple signing key pair, or self-signed certificates. It will ensure certificates are valid and up to date, and attempt to renew certificates at a configured time before expiry.
The Kubespray out-of-the-box cert-manager deployment uses a TLS Root CA certificate and key stored as the Kubernetes `ca-key-pair` secret, consisting of `tls.crt` and `tls.key`, which are the base64-encoded values of the TLS Root CA certificate and key respectively.
Integration with other PKI/Certificate management solutions, such as HashiCorp Vault will require some further development changes to the current cert-manager deployment and may be introduced in the future.
## Kubernetes TLS Root CA Certificate/Key Secret
If you're planning to secure your ingress resources using TLS client certificates, you'll need to create and deploy the Kubernetes `ca-key-pair` secret consisting of the Root CA certificate and key to your K8s cluster.
If these are already available, simply update `templates/secret-cert-manager.yml.j2` with the base64 encoded values of your TLS Root CA certificate and key prior to enabling and deploying cert-manager.
e.g.
```shell
$ cat ca.pem | base64 -w 0
LS0tLS1CRUdJTiBDRVJU...
$ cat ca-key.pem | base64 -w 0
LS0tLS1CRUdJTiBSU0Eg...
```
For further information, read the official [Cert-Manager CA Configuration](https://cert-manager.io/docs/configuration/ca/) doc.
Once the base64 encoded values have been added to `templates/secret-cert-manager.yml.j2`, cert-manager can be enabled by editing your K8s cluster addons inventory e.g. `inventory/sample/group_vars/k8s_cluster/addons.yml` and setting `cert_manager_enabled` to true.
```yaml
# Cert manager deployment
cert_manager_enabled: true
```
If you don't have a TLS Root CA certificate and key available, you can create these by following the steps outlined in section [Create New TLS Root CA Certificate and Key](#create-new-tls-root-ca-certificate-and-key) using the Cloudflare PKI/TLS `cfssl` toolkit. A TLS Root CA certificate and key can also be created with OpenSSL, if `cfssl` is not available.
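For instance, a minimal OpenSSL sketch; the subject fields are illustrative and the validity period should be adjusted to your needs:
```shell
# Generate a CA private key and a self-signed root CA certificate valid for one year
openssl genrsa -out ca-key.pem 2048
openssl req -x509 -new -nodes -key ca-key.pem -sha256 -days 365 \
  -subj "/C=US/ST=Oregon/L=Portland/O=Kubernetes/OU=CA/CN=Kubernetes" -out ca.pem
```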
## Securing Ingress Resources
A common use-case for cert-manager is requesting TLS signed certificates to secure your ingress resources. This can be done by simply adding annotations to your Ingress resources and cert-manager will facilitate creating the Certificate resource for you. A small sub-component of cert-manager, ingress-shim, is responsible for this.
To enable the Nginx Ingress controller as part of your Kubespray deployment, simply edit your K8s cluster addons inventory e.g. `inventory/sample/group_vars/k8s_cluster/addons.yml` and set `ingress_nginx_enabled` to true.
```yaml
# Nginx ingress controller deployment
ingress_nginx_enabled: true
```
For example, if you're using the Nginx ingress controller, you can secure the Prometheus ingress by adding the annotation `cert-manager.io/cluster-issuer: ca-issuer` and the `spec.tls` section to the `Ingress` resource definition.
```yaml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: prometheus-k8s
namespace: monitoring
labels:
prometheus: k8s
annotations:
kubernetes.io/ingress.class: "nginx"
cert-manager.io/cluster-issuer: ca-issuer
spec:
tls:
- hosts:
- prometheus.example.com
secretName: prometheus-dashboard-certs
rules:
- host: prometheus.example.com
http:
paths:
- path: /
backend:
serviceName: prometheus-k8s
servicePort: web
```
Once deployed to your K8s cluster, cert-manager will automatically rotate the `prometheus.example.com` TLS certificate and key every 3 months, and store these as the Kubernetes `prometheus-dashboard-certs` secret.
For further information, read the official [Cert-Manager Ingress](https://cert-manager.io/docs/usage/ingress/) doc.
### Create New TLS Root CA Certificate and Key
#### Install Cloudflare PKI/TLS `cfssl` Toolkit
For example, on Ubuntu/Debian distributions the toolkit is part of the `golang-cfssl` package.
```shell
sudo apt-get install -y golang-cfssl
```
#### Create Root Certificate Authority (CA) Configuration File
The default TLS certificate expiry time period in this configuration is `8760h`, i.e. one year from the date the certificate is created.
```shell
$ cat > ca-config.json <<EOF
{
"signing": {
"default": {
"expiry": "8760h"
},
"profiles": {
"kubernetes": {
"usages": ["signing", "key encipherment", "server auth", "client auth"],
"expiry": "8760h"
}
}
}
}
EOF
```
#### Create Certificate Signing Request (CSR) Configuration File
The TLS certificate `names` details can be updated to your own specific requirements.
```shell
$ cat > ca-csr.json <<EOF
{
"CN": "Kubernetes",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "US",
"L": "Portland",
"O": "Kubernetes",
"OU": "CA",
"ST": "Oregon"
}
]
}
EOF
```
#### Create TLS Root CA Certificate and Key
```shell
$ cfssl gencert -initca ca-csr.json | cfssljson -bare ca
ca.pem
ca-key.pem
```
Check that the TLS Root CA certificate has the correct `Not Before` and `Not After` dates, and ensure it is indeed a valid Certificate Authority with the X509v3 extension `CA:TRUE`.
```shell
$ openssl x509 -text -noout -in ca.pem
Certificate:
Data:
Version: 3 (0x2)
Serial Number:
6a:d4:d8:48:7f:98:4f:54:68:9a:e1:73:02:fa:d0:41:79:25:08:49
Signature Algorithm: sha256WithRSAEncryption
Issuer: C = US, ST = Oregon, L = Portland, O = Kubernetes, OU = CA, CN = Kubernetes
Validity
Not Before: Jul 10 15:21:00 2020 GMT
Not After : Jul 9 15:21:00 2025 GMT
Subject: C = US, ST = Oregon, L = Portland, O = Kubernetes, OU = CA, CN = Kubernetes
Subject Public Key Info:
...
X509v3 extensions:
X509v3 Key Usage: critical
Certificate Sign, CRL Sign
X509v3 Basic Constraints: critical
CA:TRUE
X509v3 Subject Key Identifier:
D4:38:B5:E2:26:49:5E:0D:E3:DC:D9:70:73:3B:C4:19:6A:43:4A:F2
...
```


@@ -0,0 +1,43 @@
# AWS ALB Ingress Controller
**NOTE:** The current image version is `v1.1.6`. Please file any issues you find and note the version used.
The AWS ALB Ingress Controller satisfies Kubernetes [ingress resources](https://kubernetes.io/docs/user-guide/ingress) by provisioning [Application Load Balancers](https://docs.aws.amazon.com/elasticloadbalancing/latest/application/introduction.html).
This project was originated by [Ticketmaster](https://github.com/ticketmaster) and [CoreOS](https://github.com/coreos) as part of Ticketmaster's move to AWS and CoreOS Tectonic. Learn more about Ticketmaster's Kubernetes initiative from Justin Dean's video at [Tectonic Summit](https://www.youtube.com/watch?v=wqXVKneP0Hg).
This project was donated to Kubernetes SIG-AWS to allow AWS, CoreOS, Ticketmaster and other SIG-AWS contributors to officially maintain the project. SIG-AWS reached this consensus on June 1, 2018.
## Documentation
Check out our [Live Docs](https://kubernetes-sigs.github.io/aws-alb-ingress-controller/)!
## Getting started
To get started with the controller, see our [walkthrough](https://kubernetes-sigs.github.io/aws-alb-ingress-controller/guide/walkthrough/echoserver/).
## Setup
- See [controller setup](https://kubernetes-sigs.github.io/aws-alb-ingress-controller/guide/controller/setup/) for how to install the ALB ingress controller
- See [external-dns setup](https://kubernetes-sigs.github.io/aws-alb-ingress-controller/guide/external-dns/setup/) for how to set up external-dns to manage Route 53 records.
## Building
For details on building this project, see our [building guide](https://kubernetes-sigs.github.io/aws-alb-ingress-controller/BUILDING/).
## Community, discussion, contribution, and support
Learn how to engage with the Kubernetes community on the [community page](http://kubernetes.io/community/).
You can reach the maintainers of this project at:
- [Slack channel](https://kubernetes.slack.com/messages/sig-aws)
- [Mailing list](https://groups.google.com/forum/#!forum/kubernetes-sig-aws)
### Code of conduct
Participation in the Kubernetes community is governed by the [Kubernetes Code of Conduct](code-of-conduct.md).
## License
[![FOSSA Status](https://app.fossa.io/api/projects/git%2Bgithub.com%2Fcoreos%2Falb-ingress-controller.svg?type=large)](https://app.fossa.io/projects/git%2Bgithub.com%2Fcoreos%2Falb-ingress-controller?ref=badge_large)


@@ -16,6 +16,16 @@ URL rewriting, CORS, rate limiting, and automatic metrics collection.
installation/updates.
* `ingress_ambassador_secure_port` (default: 443): HTTPS port to listen at.
* `ingress_ambassador_insecure_port` (default: 80): HTTP port to listen at.
* `ingress_ambassador_multi_namespaces` (default `false`): By default, Ambassador will only
watch the `ingress_ambassador_namespace` namespace for `AmbassadorInstallation` CRD resources.
When set to `true`, this value will tell the Ambassador Operator to watch **all** namespaces
for CRDs. If you want to run multiple Ambassador ingress instances, set this to `true`.
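For reference, a sketch of how these settings might look in the addons inventory; the `ingress_ambassador_enabled` toggle and namespace value are assumptions about your addons file:
```yaml
# inventory/sample/group_vars/k8s_cluster/addons.yml (illustrative)
ingress_ambassador_enabled: true
ingress_ambassador_namespace: ambassador
ingress_ambassador_insecure_port: 80
ingress_ambassador_secure_port: 443
ingress_ambassador_multi_namespaces: false
```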
### Ingress annotations
The Ambassador API Gateway will automatically load balance `Ingress` resources
that include the annotation `kubernetes.io/ingress.class=ambassador`. All other
resources will be ignored.
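A minimal sketch of such an `Ingress`; the host, service name, and port are illustrative:
```yaml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: example-ambassador-ingress
  annotations:
    kubernetes.io/ingress.class: ambassador
spec:
  rules:
    - host: example.com
      http:
        paths:
          - path: /
            backend:
              serviceName: example-service   # assumption: an existing Service in the same namespace
              servicePort: 80
```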
### Ambassador Operator


@@ -0,0 +1,203 @@
# Installation Guide
## Contents
- [Prerequisite Generic Deployment Command](#prerequisite-generic-deployment-command)
- [Provider Specific Steps](#provider-specific-steps)
- [Docker for Mac](#docker-for-mac)
- [minikube](#minikube)
- [AWS](#aws)
- [GCE - GKE](#gce-gke)
- [Azure](#azure)
- [Bare-metal](#bare-metal)
- [Verify installation](#verify-installation)
- [Detect installed version](#detect-installed-version)
- [Using Helm](#using-helm)
## Prerequisite Generic Deployment Command
!!! attention
The default configuration watches Ingress objects from *all namespaces*.
To change this behavior use the flag `--watch-namespace` to limit the scope to a particular namespace.
!!! warning
If multiple Ingresses define different paths for the same host, the ingress controller will merge the definitions.
!!! attention
If you're using GKE you need to initialize your user as a cluster-admin with the following command:
```console
kubectl create clusterrolebinding cluster-admin-binding \
--clusterrole cluster-admin \
--user $(gcloud config get-value account)
```
The following **Mandatory Command** is required for all deployments except for AWS. See below for the AWS version.
```console
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.40.2/deploy/static/provider/cloud/deploy.yaml
```
### Provider Specific Steps
There are cloud-provider-specific YAML files.
#### Docker for Mac
Kubernetes is available in Docker for Mac (from [version 18.06.0-ce](https://docs.docker.com/docker-for-mac/release-notes/#stable-releases-of-2018))
First you need to [enable kubernetes](https://docs.docker.com/docker-for-mac/#kubernetes).
Then you have to create a service:
```console
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/provider/cloud-generic.yaml
```
#### minikube
For standard usage:
```console
minikube addons enable ingress
```
For development:
1. Disable the ingress addon:
```console
minikube addons disable ingress
```
1. Execute `make dev-env`
1. Confirm the `nginx-ingress-controller` deployment exists:
```console
$ kubectl get pods -n ingress-nginx
NAME READY STATUS RESTARTS AGE
default-http-backend-66b447d9cf-rrlf9 1/1 Running 0 12s
nginx-ingress-controller-fdcdcd6dd-vvpgs 1/1 Running 0 11s
```
#### AWS
In AWS we use an Elastic Load Balancer (ELB) to expose the NGINX Ingress controller behind a Service of `Type=LoadBalancer`.
Since Kubernetes v1.9.0 it is possible to use a classic load balancer (ELB) or a network load balancer (NLB).
Please check the [elastic load balancing AWS details page](https://aws.amazon.com/elasticloadbalancing/details/).
##### Elastic Load Balancer - ELB
This setup requires choosing the layer (L4 or L7) at which to configure the Load Balancer:
- [Layer 4](https://en.wikipedia.org/wiki/OSI_model#Layer_4:_Transport_Layer): Use a Network Load Balancer (NLB) with TCP as the listener protocol for ports 80 and 443.
- [Layer 7](https://en.wikipedia.org/wiki/OSI_model#Layer_7:_Application_Layer): Use an Elastic Load Balancer (ELB) with HTTP as the listener protocol for port 80 and terminate TLS in the ELB
For L4:
```console
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/provider/aws/deploy.yaml
```
For L7:
Change the value of `service.beta.kubernetes.io/aws-load-balancer-ssl-cert` in the file `provider/aws/deploy-tls-termination.yaml` replacing the dummy id with a valid one. The dummy value is `"arn:aws:acm:us-west-2:XXXXXXXX:certificate/XXXXXX-XXXXXXX-XXXXXXX-XXXXXXXX"`
Check that no change is necessary with regards to the ELB idle timeout. In some scenarios, users may want to modify the ELB idle timeout, so please check the [ELB Idle Timeouts section](#elb-idle-timeouts) for additional information. If a change is required, users will need to update the value of `service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout` in `provider/aws/deploy-tls-termination.yaml`
Then execute:
```console
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/provider/aws/deploy-tls-termination.yaml
```
This example creates an ELB with just two listeners, one on port 80 and another on port 443.
![Listeners](https://github.com/kubernetes/ingress-nginx/raw/master/docs/images/elb-l7-listener.png)
##### ELB Idle Timeouts
In some scenarios users will need to modify the value of the ELB idle timeout.
Users need to ensure the idle timeout is less than the [keepalive_timeout](http://nginx.org/en/docs/http/ngx_http_core_module.html#keepalive_timeout) that is configured for NGINX.
By default NGINX `keepalive_timeout` is set to `75s`.
The default ELB idle timeout will work for most scenarios, unless the NGINX [keepalive_timeout](http://nginx.org/en/docs/http/ngx_http_core_module.html#keepalive_timeout) has been modified,
in which case `service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout` will need to be modified to ensure it is less than the `keepalive_timeout` the user has configured.
_Please Note: An idle timeout of `3600s` is recommended when using WebSockets._
More information with regards to idle timeouts for your Load Balancer can be found in the [official AWS documentation](https://docs.aws.amazon.com/elasticloadbalancing/latest/classic/config-idle-timeout.html).
##### Network Load Balancer (NLB)
This type of load balancer is supported since v1.10.0 as an ALPHA feature.
```console
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/provider/aws/service-nlb.yaml
```
#### GCE-GKE
```console
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/provider/cloud-generic.yaml
```
**Important Note:** proxy protocol is not supported in GCE/GKE
#### Azure
```console
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/provider/cloud-generic.yaml
```
#### Bare-metal
Using [NodePort](https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport):
```console
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/provider/baremetal/service-nodeport.yaml
```
!!! tip
For extended notes regarding deployments on bare-metal, see [Bare-metal considerations](./baremetal.md).
### Verify installation
To check if the ingress controller pods have started, run the following command:
```console
kubectl get pods --all-namespaces -l app.kubernetes.io/name=ingress-nginx --watch
```
Once the ingress controller pods are running, you can cancel the above command by typing `Ctrl+C`.
Now, you are ready to create your first ingress.
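A minimal sketch of a first ingress; the host, service name, and port are illustrative:
```yaml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: hello-world
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
    - host: hello.example.com
      http:
        paths:
          - path: /
            backend:
              serviceName: hello-world   # assumption: an existing Service exposing port 80
              servicePort: 80
```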
### Detect installed version
To detect which version of the ingress controller is running, exec into the pod and run the `/nginx-ingress-controller --version` command.
```console
POD_NAMESPACE=ingress-nginx
POD_NAME=$(kubectl get pods -n $POD_NAMESPACE -l app.kubernetes.io/component=controller -o jsonpath='{.items[0].metadata.name}')
kubectl exec -it $POD_NAME -n $POD_NAMESPACE -- /nginx-ingress-controller --version
```
## Using Helm
NGINX Ingress controller can be installed via [Helm](https://helm.sh/) using the chart [ingress-nginx/ingress-nginx](https://kubernetes.github.io/ingress-nginx).
Official documentation is [here](https://kubernetes.github.io/ingress-nginx/deploy/#using-helm).
To install the chart with the release name `my-nginx`:
```console
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm install my-nginx ingress-nginx/ingress-nginx
```
Detect installed version:
```console
POD_NAME=$(kubectl get pods -l app.kubernetes.io/name=ingress-nginx -o jsonpath='{.items[0].metadata.name}')
kubectl exec -it $POD_NAME -- /nginx-ingress-controller --version
```


@@ -0,0 +1,73 @@
# CephFS Volume Provisioner for Kubernetes 1.5+
[![Docker Repository on Quay](https://quay.io/repository/external_storage/cephfs-provisioner/status "Docker Repository on Quay")](https://quay.io/repository/external_storage/cephfs-provisioner)
Using Ceph volume client
## Development
Compile the provisioner
``` console
make
```
Make the container image and push to the registry
``` console
make push
```
## Test instruction
- Start Kubernetes local cluster
See [Kubernetes](https://kubernetes.io/)
- Create a Ceph admin secret
``` bash
ceph auth get client.admin 2>&1 |grep "key = " |awk '{print $3}' |xargs echo -n > /tmp/secret
kubectl create ns cephfs
kubectl create secret generic ceph-secret-admin --from-file=/tmp/secret --namespace=cephfs
```
- Start CephFS provisioner
The following example uses `cephfs-provisioner-1` as the identity for the instance and assumes kubeconfig is at `/root/.kube`. The identity should remain the same if the provisioner restarts. If there are multiple provisioners, each should have a different identity.
``` bash
docker run -ti -v /root/.kube:/kube -v /var/run/kubernetes:/var/run/kubernetes --privileged --net=host cephfs-provisioner /usr/local/bin/cephfs-provisioner -master=http://127.0.0.1:8080 -kubeconfig=/kube/config -id=cephfs-provisioner-1
```
Alternatively, deploy it in Kubernetes; see [deployment](deploy/README.md).
- Create a CephFS Storage Class
Replace the Ceph monitor's IP in [example class](example/class.yaml) with your own and create the storage class (an illustrative StorageClass sketch follows these steps):
``` bash
kubectl create -f example/class.yaml
```
- Create a claim
``` bash
kubectl create -f example/claim.yaml
```
- Create a Pod using the claim
``` bash
kubectl create -f example/test-pod.yaml
```
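As referenced above, a rough sketch of what the [example class](example/class.yaml) for this provisioner typically contains; the monitor address, namespace, and secret names are illustrative and must match your environment:
``` yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: cephfs
provisioner: ceph.com/cephfs
parameters:
  monitors: 172.24.0.4:6789          # assumption: replace with your Ceph monitor address(es)
  adminId: admin
  adminSecretName: ceph-secret-admin
  adminSecretNamespace: cephfs
  claimRoot: /pvc-volumes
```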
## Known limitations
- Kernel CephFS doesn't work with SELinux; setting an SELinux label in a Pod's securityContext will not work.
- Kernel CephFS doesn't support quotas or capacity limits; the capacity requested by a PVC is not enforced or validated.
- Currently each Ceph user created by the provisioner has the `allow r` MDS cap to permit CephFS mounts.
## Acknowledgement
Inspired by CephFS Manila provisioner and conversation with John Spray


@@ -0,0 +1,122 @@
# Local Storage Provisioner
The [local storage provisioner](https://github.com/kubernetes-incubator/external-storage/tree/master/local-volume)
is NOT a dynamic storage provisioner as you would
expect from a cloud provider. Instead, it simply creates PersistentVolumes for
all mounts under the host_dir of the specified storage class.
These storage classes are specified in the `local_volume_provisioner_storage_classes` nested dictionary.
Example:
```yaml
local_volume_provisioner_storage_classes:
local-storage:
host_dir: /mnt/disks
mount_dir: /mnt/disks
fast-disks:
host_dir: /mnt/fast-disks
mount_dir: /mnt/fast-disks
block_cleaner_command:
- "/scripts/shred.sh"
- "2"
volume_mode: Filesystem
fs_type: ext4
```
For each key in `local_volume_provisioner_storage_classes` a storageClass with the
same name is created. The subkeys of each storage class are converted to camelCase and added
as attributes to the storageClass.
The result of the above example is:
```yaml
data:
storageClassMap: |
local-storage:
hostDir: /mnt/disks
mountDir: /mnt/disks
fast-disks:
hostDir: /mnt/fast-disks
mountDir: /mnt/fast-disks
blockCleanerCommand:
- "/scripts/shred.sh"
- "2"
volumeMode: Filesystem
fsType: ext4
```
The default StorageClass is `local-storage` on `/mnt/disks`;
the rest of this doc uses that path as an example.
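As a rough sketch, a workload can then request one of the discovered volumes with an ordinary claim; the name and size below are illustrative:
```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: example-local-claim
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: local-storage
  resources:
    requests:
      storage: 5Gi
```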
## Examples to create local storage volumes
1. tmpfs method:
``` bash
for vol in vol1 vol2 vol3; do
mkdir /mnt/disks/$vol
mount -t tmpfs -o size=5G $vol /mnt/disks/$vol
done
```
The tmpfs method is not recommended for production because the mount is not
persistent and data will be deleted on reboot.
1. Mount physical disks
``` bash
mkdir /mnt/disks/ssd1
mount /dev/vdb1 /mnt/disks/ssd1
```
Physical disks are recommended for production environments because they offer
complete isolation in terms of I/O and capacity.
1. Mount unpartitioned physical devices
``` bash
for disk in /dev/sdc /dev/sdd /dev/sde; do
ln -s $disk /mnt/disks
done
```
This saves the time of pre-creating filesystems. Note that your storage class must have
`volume_mode` set to "Filesystem" and `fs_type` defined. If either is not set, the
disk will be added as a raw block device.
1. File-backed sparsefile method
``` bash
truncate /mnt/disks/disk5 --size 2G
mkfs.ext4 /mnt/disks/disk5
mkdir /mnt/disks/vol5
mount /mnt/disks/disk5 /mnt/disks/vol5
```
If you have a development environment and only one disk, this is the best way
to limit the quota of persistent volumes.
1. Simple directories
In a development environment using `mount --bind` works also, but there is no capacity
management.
1. Block volumeMode PVs
Create a symbolic link under the discovery directory to the block device on the node. To use
raw block devices in pods, `volume_mode` should be set to "Block".
## Usage notes
The beta `PV.NodeAffinity` field is used by default. If running against an older K8s
version, the `useAlphaAPI` flag must be set in the configMap.
The volume provisioner cannot calculate volume sizes correctly, so you should
delete the daemonset pod on the relevant host after creating volumes. The pod
will be recreated and read the size correctly.
Make sure any mounts persist across reboots via `/etc/fstab` or with systemd mounts (for
Flatcar Container Linux). Pods with persistent volume claims will not be
able to start if the mounts become unavailable.
## Further reading
Refer to the upstream docs here: <https://github.com/kubernetes-incubator/external-storage/tree/master/local-volume>


@@ -0,0 +1,79 @@
# RBD Volume Provisioner for Kubernetes 1.5+
`rbd-provisioner` is an out-of-tree dynamic provisioner for Kubernetes 1.5+.
You can use it to quickly and easily deploy Ceph RBD storage that works almost
anywhere.
It works just like the in-tree dynamic provisioner. For more information on how
dynamic provisioning works, see [the docs](http://kubernetes.io/docs/user-guide/persistent-volumes/)
or [this blog post](http://blog.kubernetes.io/2016/10/dynamic-provisioning-and-storage-in-kubernetes.html).
## Development
Compile the provisioner
```console
make
```
Make the container image and push to the registry
```console
make push
```
## Test instruction
* Start Kubernetes local cluster
See [Kubernetes](https://kubernetes.io/).
* Create a Ceph admin secret
```bash
ceph auth get client.admin 2>&1 |grep "key = " |awk '{print $3}' |xargs echo -n > /tmp/secret
kubectl create secret generic ceph-admin-secret --from-file=/tmp/secret --namespace=kube-system
```
* Create a Ceph pool and a user secret
```bash
ceph osd pool create kube 8 8
ceph auth add client.kube mon 'allow r' osd 'allow rwx pool=kube'
ceph auth get-key client.kube > /tmp/secret
kubectl create secret generic ceph-secret --from-file=/tmp/secret --namespace=kube-system
```
* Start RBD provisioner
The following example uses `rbd-provisioner-1` as the identity for the instance and assumes kubeconfig is at `/root/.kube`. The identity should remain the same if the provisioner restarts. If there are multiple provisioners, each should have a different identity.
```bash
docker run -ti -v /root/.kube:/kube -v /var/run/kubernetes:/var/run/kubernetes --privileged --net=host quay.io/external_storage/rbd-provisioner /usr/local/bin/rbd-provisioner -master=http://127.0.0.1:8080 -kubeconfig=/kube/config -id=rbd-provisioner-1
```
Alternatively, deploy it in Kubernetes; see [deployment](deploy/README.md).
* Create an RBD Storage Class
Replace the Ceph monitor's IP in [examples/class.yaml](examples/class.yaml) with your own and create the storage class (an illustrative StorageClass sketch follows these steps):
```bash
kubectl create -f examples/class.yaml
```
* Create a claim
```bash
kubectl create -f examples/claim.yaml
```
* Create a Pod using the claim
```bash
kubectl create -f examples/test-pod.yaml
```
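As referenced above, a rough sketch of what the [examples/class.yaml](examples/class.yaml) storage class typically contains; the monitor address, pool, and secret names are illustrative and must match your environment:
```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: rbd
provisioner: ceph.com/rbd
parameters:
  monitors: 172.24.0.4:6789          # assumption: replace with your Ceph monitor address(es)
  pool: kube
  adminId: admin
  adminSecretName: ceph-admin-secret
  adminSecretNamespace: kube-system
  userId: kube
  userSecretName: ceph-secret
  userSecretNamespace: kube-system
  imageFormat: "2"
  imageFeatures: layering
```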
## Acknowledgements
* This provisioner is extracted from [Kubernetes core](https://github.com/kubernetes/kubernetes) with some modifications for this project.


@@ -0,0 +1,252 @@
# Private Docker Registry in Kubernetes
Kubernetes offers an optional private Docker registry addon, which you can turn
on when you bring up a cluster or install later. This gives you a place to
store truly private Docker images for your cluster.
## How it works
The private registry runs as a `Pod` in your cluster. It does not currently
support SSL or authentication, which triggers Docker's "insecure registry"
logic. To work around this, we run a proxy on each node in the cluster,
exposing a port onto the node (via a hostPort), which Docker accepts as
"secure", since it is accessed by `localhost`.
## Turning it on
Some cluster installs (e.g. GCE) support this as a cluster-birth flag. The
`ENABLE_CLUSTER_REGISTRY` variable in `cluster/gce/config-default.sh` governs
whether the registry is run or not. To set this flag, you can specify
`KUBE_ENABLE_CLUSTER_REGISTRY=true` when running `kube-up.sh`. If your cluster
does not include this flag, the following steps should work. Note that some of
this is cloud-provider specific, so you may have to customize it a bit.
### Make some storage
The primary job of the registry is to store data. To do that we have to decide
where to store it. For cloud environments that have networked storage, we can
use Kubernetes's `PersistentVolume` abstraction. The following template is
expanded by `salt` in the GCE cluster turnup, but can easily be adapted to
other situations:
<!-- BEGIN MUNGE: EXAMPLE registry-pv.yaml.in -->
``` yaml
kind: PersistentVolume
apiVersion: v1
metadata:
name: kube-system-kube-registry-pv
spec:
{% if pillar.get('cluster_registry_disk_type', '') == 'gce' %}
capacity:
storage: {{ pillar['cluster_registry_disk_size'] }}
accessModes:
- ReadWriteOnce
gcePersistentDisk:
pdName: "{{ pillar['cluster_registry_disk_name'] }}"
fsType: "ext4"
{% endif %}
```
<!-- END MUNGE: EXAMPLE registry-pv.yaml.in -->
If, for example, you wanted to use NFS you would just need to change the
`gcePersistentDisk` block to `nfs`. See
[here](https://kubernetes.io/docs/concepts/storage/volumes/) for more details on volumes.
Note that in any case, the storage (in this case the GCE PersistentDisk) must be
created independently - this is not something Kubernetes manages for you (yet).
### I don't want or don't have persistent storage
If you are running in a place that doesn't have networked storage, or if you
just want to kick the tires on this without committing to it, you can easily
adapt the `ReplicationController` specification below to use a simple
`emptyDir` volume instead of a `persistentVolumeClaim`.
## Claim the storage
Now that the Kubernetes cluster knows that some storage exists, you can put a
claim on that storage. As with the `PersistentVolume` above, you can start
with the `salt` template:
<!-- BEGIN MUNGE: EXAMPLE registry-pvc.yaml.in -->
``` yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: kube-registry-pvc
namespace: kube-system
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: {{ pillar['cluster_registry_disk_size'] }}
```
<!-- END MUNGE: EXAMPLE registry-pvc.yaml.in -->
This tells Kubernetes that you want to use storage, and the `PersistentVolume`
you created before will be bound to this claim (unless you have other
`PersistentVolumes` in which case those might get bound instead). This claim
gives you the right to use this storage until you release the claim.
## Run the registry
Now we can run a Docker registry:
<!-- BEGIN MUNGE: EXAMPLE registry-rc.yaml -->
``` yaml
apiVersion: v1
kind: ReplicationController
metadata:
name: kube-registry-v0
namespace: kube-system
labels:
k8s-app: registry
version: v0
spec:
replicas: 1
selector:
k8s-app: registry
version: v0
template:
metadata:
labels:
k8s-app: registry
version: v0
spec:
containers:
- name: registry
image: registry:2
resources:
limits:
cpu: 100m
memory: 100Mi
env:
- name: REGISTRY_HTTP_ADDR
value: :5000
- name: REGISTRY_STORAGE_FILESYSTEM_ROOTDIRECTORY
value: /var/lib/registry
volumeMounts:
- name: image-store
mountPath: /var/lib/registry
ports:
- containerPort: 5000
name: registry
protocol: TCP
volumes:
- name: image-store
persistentVolumeClaim:
claimName: kube-registry-pvc
```
<!-- END MUNGE: EXAMPLE registry-rc.yaml -->
## Expose the registry in the cluster
Now that we have a registry `Pod` running, we can expose it as a Service:
<!-- BEGIN MUNGE: EXAMPLE registry-svc.yaml -->
``` yaml
apiVersion: v1
kind: Service
metadata:
name: kube-registry
namespace: kube-system
labels:
k8s-app: registry
kubernetes.io/name: "KubeRegistry"
spec:
selector:
k8s-app: registry
ports:
- name: registry
port: 5000
protocol: TCP
```
<!-- END MUNGE: EXAMPLE registry-svc.yaml -->
## Expose the registry on each node
Now that we have a running `Service`, we need to expose it onto each Kubernetes
`Node` so that Docker will see it as `localhost`. We can load a `Pod` on every
node by creating the following DaemonSet.
<!-- BEGIN MUNGE: EXAMPLE ../../saltbase/salt/kube-registry-proxy/kube-registry-proxy.yaml -->
``` yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: kube-registry-proxy
namespace: kube-system
labels:
k8s-app: kube-registry-proxy
version: v0.4
spec:
  selector:            # apps/v1 DaemonSets require an explicit selector
    matchLabels:
      k8s-app: kube-registry-proxy
  template:
metadata:
labels:
k8s-app: kube-registry-proxy
kubernetes.io/name: "kube-registry-proxy"
version: v0.4
spec:
containers:
- name: kube-registry-proxy
image: gcr.io/google_containers/kube-registry-proxy:0.4
resources:
limits:
cpu: 100m
memory: 50Mi
env:
- name: REGISTRY_HOST
value: kube-registry.kube-system.svc.cluster.local
- name: REGISTRY_PORT
value: "5000"
ports:
- name: registry
containerPort: 80
hostPort: 5000
```
<!-- END MUNGE: EXAMPLE ../../saltbase/salt/kube-registry-proxy/kube-registry-proxy.yaml -->
When modifying the replication controller, service, and daemon set definitions, take
care to keep the identifiers (labels and selectors) of the rc/svc pair and the
daemon set *distinct* from each other. Failing to do so will register the localhost
proxy daemon set pods with the upstream service; they will then try to proxy
themselves, which will, for obvious reasons, not work.
This ensures that port 5000 on each node is directed to the registry `Service`.
You should be able to verify that it is running by hitting port 5000 with a web
browser and getting a 404 error:
``` console
$ curl localhost:5000
404 page not found
```
## Using the registry
To use an image hosted by this registry, simply say this in your `Pod`'s
`spec.containers[].image` field:
``` yaml
image: localhost:5000/user/container
```
Before you can use the registry, you have to be able to get images into it,
though. If you are building an image on your Kubernetes `Node`, you can spell
out `localhost:5000` when you build and push. More likely, though, you are
building locally and want to push to your cluster.
You can use `kubectl` to set up a port-forward from your local node to a
running Pod:
``` console
$ POD=$(kubectl get pods --namespace kube-system -l k8s-app=registry \
-o template --template '{{range .items}}{{.metadata.name}} {{.status.phase}}{{"\n"}}{{end}}' \
| grep Running | head -1 | cut -f1 -d' ')
$ kubectl port-forward --namespace kube-system $POD 5000:5000 &
```
Now you can build and push images on your local computer as
`localhost:5000/yourname/container` and those images will be available inside
your Kubernetes cluster with the same name.
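A minimal sketch of the build-and-push workflow, assuming the port-forward above is running; the image name is illustrative:
``` console
$ docker build -t localhost:5000/yourname/container .
$ docker push localhost:5000/yourname/container
```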