[jjo] add kube-router support (#3339)

* [jjo] add kube-router support

Fixes cloudnativelabs/kube-router#147.

* add kube-router as another network_plugin choice
* support most used kube-router flags via
  `kube_router_foo` vars as other plugins
* implement replacing kube-proxy (--run-service-proxy=true) via
  `kube_proxy_mode: none`, verified in a non-`kubeadm_enabled`
  install; should also work for recent kubeadm releases via the
  `skipKubeProxyInstall: true` config

* [jjo] address PR#3339 review from @woopstar

* add busybox image used by kube-router to downloads

* fix busybox download groups key

* rework kubeadm_enabled + kube_router_run_service_proxy

- verify it works with `kubeadm_enabled` and
  `kube_router_run_service_proxy` set to true or false

- introduce a `kube_proxy_remove` fact to decouple the logic
  from `kube_proxy_mode` (which affects kubeadm ConfigMap
  settings, so it's no good to abuse it by setting it to 'none')

* improve kube-router.md re: kubeadm_enabled and kube_router_run_service_proxy

* address @woopstar latest review

* add inventory/sample/group_vars/k8s-cluster/k8s-net-kube-router.yml

* fix kube_router_run_service_proxy conditional for kube-proxy removal

* fix kube_proxy_remove fact (with |bool), add some needed kube-proxy tags to new and existing changes

* update kube-router tolerations for 1.12 compatibility

* add PriorityClass to kube-router DaemonSet
JuanJo Ciarlante
2018-10-16 11:15:05 -03:00
committed by k8s-ci-robot
parent c33e08c3fa
commit a5edd0d709
28 changed files with 634 additions and 17 deletions

docs/kube-router.md (new file, 91 additions)

@@ -0,0 +1,91 @@
Kube-router
===========
Kube-router is an L3 CNI provider; as such, it sets up IPv4 routing between
nodes to provide reachability between Pods' networks.
See [kube-router documentation](https://www.kube-router.io/).
## Verifying kube-router install
Kube-router runs its pods as a `DaemonSet` in the `kube-system` namespace:
* Check the status of kube-router pods
```
# From the CLI
kubectl get pod --namespace=kube-system -l k8s-app=kube-router -owide
# output
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE
kube-router-4f679 1/1 Running 0 2d 192.168.186.4 mykube-k8s-node-nf-2 <none>
kube-router-5slf8 1/1 Running 0 2d 192.168.186.11 mykube-k8s-node-nf-3 <none>
kube-router-lb6k2 1/1 Running 0 20h 192.168.186.14 mykube-k8s-node-nf-6 <none>
kube-router-rzvrb 1/1 Running 0 20h 192.168.186.17 mykube-k8s-node-nf-4 <none>
kube-router-v6n56 1/1 Running 0 2d 192.168.186.6 mykube-k8s-node-nf-1 <none>
kube-router-wwhg8 1/1 Running 0 20h 192.168.186.16 mykube-k8s-node-nf-5 <none>
kube-router-x2xs7 1/1 Running 0 2d 192.168.186.10 mykube-k8s-master-1 <none>
```
* Peek at kube-router container logs:
```
# From the CLI
kubectl logs --namespace=kube-system -l k8s-app=kube-router | grep Peer.Up
# output
time="2018-09-17T16:47:14Z" level=info msg="Peer Up" Key=192.168.186.6 State=BGP_FSM_OPENCONFIRM Topic=Peer
time="2018-09-17T16:47:16Z" level=info msg="Peer Up" Key=192.168.186.11 State=BGP_FSM_OPENCONFIRM Topic=Peer
time="2018-09-17T16:47:46Z" level=info msg="Peer Up" Key=192.168.186.10 State=BGP_FSM_OPENCONFIRM Topic=Peer
time="2018-09-18T19:12:24Z" level=info msg="Peer Up" Key=192.168.186.14 State=BGP_FSM_OPENCONFIRM Topic=Peer
time="2018-09-18T19:12:28Z" level=info msg="Peer Up" Key=192.168.186.17 State=BGP_FSM_OPENCONFIRM Topic=Peer
time="2018-09-18T19:12:38Z" level=info msg="Peer Up" Key=192.168.186.16 State=BGP_FSM_OPENCONFIRM Topic=Peer
[...]
```
## Gathering kube-router state
Kube-router Pods come bundled with a "Pod Toolbox", which provides very
useful views of internal state for:
* IPVS: via `ipvsadm`
* BGP peering and routing info: via `gobgp`
You need to `kubectl exec -it ...` into a kube-router container to use these; see
<https://www.kube-router.io/docs/pod-toolbox/> for details.
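For example, a quick peek at both (a minimal sketch: the pod name is taken from
the listing above, and `ipvsadm -Ln` / `gobgp neighbor` are the standard
invocations of those tools, assuming they are on the container's PATH):
```
# From the CLI
kubectl --namespace=kube-system exec -it kube-router-4f679 -- ipvsadm -Ln
kubectl --namespace=kube-system exec -it kube-router-4f679 -- gobgp neighbor
```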
## Kube-router configuration
You can change the default configuration by overriding `kube_router_...` variables
(found in `roles/network_plugin/kube-router/defaults/main.yml`);
these are named after the corresponding `kube-router` command-line options, as per
<https://www.kube-router.io/docs/user-guide/#try-kube-router-with-cluster-installers>.
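For example, a minimal sketch of the relevant group_vars (the `network_plugin`
choice and `kube_router_run_service_proxy` come from this PR; any other
variable name here is illustrative and should be checked against the defaults
file above):
```
# inventory/sample/group_vars/k8s-cluster/k8s-cluster.yml
network_plugin: kube-router

# inventory/sample/group_vars/k8s-cluster/k8s-net-kube-router.yml
kube_router_run_service_proxy: false  # keep kube-proxy; don't let kube-router replace it
# illustrative override, maps to kube-router's --advertise-cluster-ip flag
kube_router_advertise_cluster_ip: true
```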
## Caveats
### kubeadm_enabled: true
If you want `kube-router` to replace `kube-proxy`
(`--run-service-proxy=true`) while using `kubeadm_enabled`,
the `kube-proxy` DaemonSet will be removed *after* kubeadm finishes
running, as it's not possible to skip the kube-proxy install via kubeadm
flags and/or config; see https://github.com/kubernetes/kubeadm/issues/776.
Given the above, if `--run-service-proxy=true` is needed, it is
better to avoid `kubeadm_enabled`, i.e. set:
```
kubeadm_enabled: false
kube_router_run_service_proxy: true
```
If for some reason you do want or need to set `kubeadm_enabled`, the later
removal of kube-proxy behaves better if kube-proxy runs in ipvs mode, i.e. set:
```
kubeadm_enabled: true
kube_router_run_service_proxy: true
kube_proxy_mode: ipvs
```
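Either way, once the playbook finishes you can check that kube-proxy was
actually removed (standard kubectl usage; `kube-proxy` is the usual name of
the DaemonSet that kubeadm deploys):
```
# From the CLI
kubectl get daemonset --namespace=kube-system kube-proxy
# expected output when kube-router replaced kube-proxy:
# Error from server (NotFound): daemonsets.apps "kube-proxy" not found
```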

docs/openstack.md

@@ -8,15 +8,15 @@ After that make sure to source in your OpenStack credentials like you would do w
 The next step is to make sure the hostnames in your `inventory` file are identical to your instance names in OpenStack.
 Otherwise [cinder](https://wiki.openstack.org/wiki/Cinder) won't work as expected.
-Unless you are using calico you can now run the playbook.
+Unless you are using calico or kube-router you can now run the playbook.
-**Additional step needed when using calico:**
+**Additional step needed when using calico or kube-router:**
-Calico does not encapsulate all packages with the hosts' ip addresses. Instead the packages will be routed with the PODs ip addresses directly.
+Being L3 CNIs, calico and kube-router do not encapsulate packets with the hosts' ip addresses. Instead, packets are routed with the Pods' ip addresses directly.
-OpenStack will filter and drop all packages from ips it does not know to prevent spoofing.
+OpenStack will filter and drop all packets from ips it does not know, to prevent spoofing.
-In order to make calico work on OpenStack you will need to tell OpenStack to allow calico's packages by allowing the network it uses.
+In order to make L3 CNIs work on OpenStack you will need to tell OpenStack to allow the pods' packets by allowing the network they use.
 First you will need the ids of your OpenStack instances that will run kubernetes:
@@ -36,10 +36,14 @@ Then you can use the instance ids to find the connected [neutron](https://wiki.o
 | 5662a4e0-e646-47f0-bf88-d80fbd2d99ef | e1f48aad-df96-4bce-bf61-62ae12bf3f95 |
 | e5ae2045-a1e1-4e99-9aac-4353889449a7 | 725cd548-6ea3-426b-baaa-e7306d3c8052 |
-Given the port ids on the left, you can set the two `allowed_address`(es) in OpenStack. Note that you have to allow both `kube_service_addresses` (default `10.233.0.0/18`) and `kube_pods_subnet` (default `10.233.64.0/18`.)
+Given the port ids on the left, you can set the two `allowed-address`(es) in OpenStack. Note that you have to allow both `kube_service_addresses` (default `10.233.0.0/18`) and `kube_pods_subnet` (default `10.233.64.0/18`).
 # allow kube_service_addresses and kube_pods_subnet network
 openstack port set 5662a4e0-e646-47f0-bf88-d80fbd2d99ef --allowed-address ip-address=10.233.0.0/18 --allowed-address ip-address=10.233.64.0/18
 openstack port set e5ae2045-a1e1-4e99-9aac-4353889449a7 --allowed-address ip-address=10.233.0.0/18 --allowed-address ip-address=10.233.64.0/18
+If all the VMs in the tenant correspond to the kubespray deployment, you can "sweep run" the above with:
+openstack port list --device-owner=compute:nova -c ID -f value | xargs -tI@ openstack port set @ --allowed-address ip-address=10.233.0.0/18 --allowed-address ip-address=10.233.64.0/18
 Now you can finally run the playbook.