Compare commits


701 Commits

Author SHA1 Message Date
Matthew Mosesohn
02cd5418c2 Weave limits (#2660)
* Raise limits for weave

* Adjust weave limits
2018-04-15 18:32:49 +03:00
Aivars Sterns
c7683f33cb Merge pull request #2654 from ganeshmaharaj/fix-vagrant-default-inventory
Vagrantfile: Fix default inventory path.
2018-04-13 19:10:42 +03:00
Matthew Mosesohn
49e3665d96 Remove prometheus operator from Kubespray (#2658)
Kubespray should not install any helm charts. This is a task
that a user should do on his/her own through ansible or another
tool. Installing one here opens the door to wrapping the
installation of any helm chart.
2018-04-13 18:53:39 +03:00
Matthew Mosesohn
e95ba800ea Define local volume provisioner dirs in defaults (#2656) 2018-04-13 17:23:10 +03:00
Aivars Sterns
5d9bb300d7 Merge pull request #2646 from Atoms/fix-sync-container
move when condition to main.yml
2018-04-13 09:10:21 +03:00
Ganesh Maharaj Mahalingam
afcd5997b9 Vagrantfile: Fix default inventory path.
The change to support multiple inventory paths led to the Vagrant environment
not getting a default group_vars in its inventory path. Use sample as the
default path if none is specified.

Fix issue #2541

Signed-off-by: Ganesh Maharaj Mahalingam <ganesh.mahalingam@intel.com>
2018-04-12 14:24:41 -07:00
Matthew Mosesohn
f73717ea35 Mount local volume provisioner dirs for containerized kubelet (#2648) 2018-04-12 22:55:13 +03:00
Aivars Sterns
1967963702 Merge pull request #2380 from hwoarang/add-opensuse-support
Add openSUSE support
2018-04-12 20:28:50 +03:00
Chad Swenson
76dd0cd777 Merge pull request #2609 from chadswen/front-proxy-client-ca
Use dedicated front-proxy-ca for front-proxy-client
2018-04-12 12:19:41 -05:00
Chad Swenson
d87b6fd9f3 Use dedicated front-proxy-ca for front-proxy-client 2018-04-12 11:03:22 -05:00
Chad Swenson
a6a47dbc96 Merge pull request #2617 from bradbeam/savaultcert
Adding missing service-account certificate for vault
2018-04-12 11:02:24 -05:00
Matthew Mosesohn
61791bbb3d Remove condition for docker pull when using download delegate 2018-04-12 19:01:13 +03:00
Aivars Sterns
298c6cb790 Merge pull request #2633 from grebois/patch-3
Enabling MutatingAdmissionWebhook for Istio Automatic sidecar injection
2018-04-12 11:53:58 +03:00
Rong Zhang
a561ee6207 Merge pull request #2647 from riverzhang/build_rpm
Fix missing install remove-node feature
2018-04-12 15:44:40 +08:00
Matthew Mosesohn
3fa7468d54 Copy ca-key.pem to etcd and kube-masters accordingly 2018-04-12 10:17:54 +03:00
Michal Rostecki
bc3abad602 tests: Add CI jobs for openSUSE 2018-04-11 20:55:20 +01:00
Markos Chandras
d75b5d6931 README.md: Add openSUSE Leap and Tumbleweed as supported distributions 2018-04-11 20:55:20 +01:00
Markos Chandras
02bf742e15 roles: rkt: Add support for SUSE distributions
The RPM file that's provided by upstream can be used for SUSE
distributions as well. Moreover we simplify the playbook to use
the 'package' module to install packages across different distros.

Link: https://github.com/rkt/rkt/pull/3904
2018-04-11 20:55:20 +01:00
Markos Chandras
d07f75b389 roles: kubernetes: secrets: Add SUSE support
Add path for certificate location for SUSE distributions. Also make sure
the 'update-ca-certificates' command is executed on SUSE hosts as well.
2018-04-11 20:55:02 +01:00
Markos Chandras
2d34781259 roles: etcd: Add support for SUSE distributions
Add path for certificate location for SUSE distributions. Also make sure
the 'update-ca-certificates' command is executed on SUSE hosts as well.
2018-04-11 20:53:43 +01:00
Markos Chandras
cdb63a8c49 roles: docker: Ensure service is started if docker is already installed
If the 'docker' package is already installed, then the handlers will not
run and the service will not be (re-)started. As such, let's make sure
that the service is started even if the packages are already installed.
2018-04-11 17:46:14 +01:00
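A minimal sketch of the idea in Ansible (task naming and placement assumed):

```
# Handlers only fire on package changes, so start the service explicitly
# to cover the case where the docker package was pre-installed.
- name: Ensure docker service is started and enabled
  service:
    name: docker
    state: started
    enabled: yes
```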
Markos Chandras
44a0626fc8 roles: docker: Add support for SUSE distributions
Add support for installing Docker on SUSE distributions. The Docker
repository at https://yum.dockerproject.org/repo/main/ does not support
recent openSUSE distributions so the only alternative is to use the
packages from the distro repositories. This however renders the
'docker_version' Ansible variable useless on SUSE.
2018-04-11 17:46:14 +01:00
Nirmoy Das
45eac53ec7 roles: kubernetes: preinstall: Install openssl-1.1.0 on Tumbleweed
The openssl package on Tumbleweed is actually a virtual package covering
openssl-1.0.0 and openssl-1.1.0 implementations. It defaults to 1.1.0 so
when trying to install it and openssl-1.0.0 is installed, zypper fails
with conflicts. As such, let's explicitly pull the package that we need
which also updates the virtual one.

Co-authored-by: Markos Chandras <mchandras@suse.de>
2018-04-11 17:46:14 +01:00
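A sketch of such a task, assuming the concrete package is named openssl-1_1_0 and guarding on the distribution fact:

```
# Pull the concrete 1.1.0 package so zypper updates the virtual 'openssl'
# package instead of conflicting with an installed openssl-1.0.0.
- name: Install openssl 1.1.0 explicitly on Tumbleweed
  package:
    name: openssl-1_1_0
    state: present
  when: ansible_distribution == "openSUSE Tumbleweed"
```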
Markos Chandras
e42203a13e roles: kubernetes: preinstall: Add SUSE support
Add support for installing package dependencies and refreshing metadata
on SUSE distributions

Co-authored-by: Nirmoy Das <ndas@suse.de>
2018-04-11 17:46:14 +01:00
Nirmoy Das
4ba25326ed roles: bootstrap-os: Use 'hostname' command on Tumbleweed
openSUSE Tumbleweed has the same problems as CoreOS when it
comes to using the hostname ansible module (#1588, #1600), so we need
to apply a similar workaround.

Co-authored-by: Markos Chandras <mchandras@suse.de>
Link: http://bugzilla.opensuse.org/show_bug.cgi?id=997614
2018-04-11 17:46:14 +01:00
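A sketch of the workaround, bypassing the hostname module the same way the CoreOS fix does (condition values assumed):

```
# The hostname module breaks on CoreOS and openSUSE Tumbleweed,
# so call the command-line tool directly.
- name: Assign inventory name to unconfigured hostnames
  command: "hostname {{ inventory_hostname }}"
  when: ansible_os_family in ["CoreOS", "Suse"]
```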
Markos Chandras
dca4777347 roles: bootstrap-os: Add support for SUSE distributions
Install some required packages when running on SUSE distributions.
2018-04-11 17:46:14 +01:00
Markos Chandras
e113d1ccab Vagrantfile: Use rsync to copy working directory to VM
Depending on the VM configuration, vagrant may either use 'rsync' or
vboxfs for populating the working directory to the VM. However, vboxfs
means that any files created by the VM will also be present on the host.
As such, let's be explicit and always use 'rsync' to copy the directory
to the VM so we can keep the host copy clean. Moreover, the default
rsync options include '--copy-links' and this breaks rsync if there are
missing symlinks in the working directory like the following one:

Error: symlink has no referent:
"/home/user/kubespray/contrib/network-storage/glusterfs/group_vars"

As such, we override the default options to drop --copy-links.
2018-04-11 17:44:55 +01:00
Nirmoy Das
112ccfa9db Vagrantfile: Add support for openSUSE Tumbleweed
Add support for provisioning kubespray on openSUSE Tumbleweed virtual
machines.
2018-04-11 16:58:03 +01:00
Markos Chandras
0ed1919a38 Vagrantfile: Add support for openSUSE Leap 42.3
Add support for provisioning kubespray on openSUSE Leap 42.3 virtual
machines.
2018-04-11 16:58:03 +01:00
rongzhang
ff003cfa3c Fix missing install remove-node feature 2018-04-11 17:30:01 +08:00
Atoms
6c954df636 move when condition to main.yml 2018-04-11 12:05:33 +03:00
Aivars Sterns
981e61fb51 Merge pull request #2625 from kaarolch/master
Add note about ansible_become to coreos section
2018-04-11 08:32:43 +03:00
Karol Chrapek
5db1c3eef7 Add note about privilege escalation method to the README 2018-04-10 22:22:08 +02:00
Marcelo Grebois
88765f62e6 Updating order
https://kubernetes.io/docs/admin/admission-controllers/#is-there-a-recommended-set-of-admission-controllers-to-use
2018-04-10 17:17:39 +02:00
Robin Skahjem-Eriksen
0f35e17e23 Fix new envvar for setting openstack_tenant_id (#2641)
Changed from OS_PROJECT_ID to OS_PROJECT_NAME.
2018-04-10 17:23:31 +03:00
Brad Beam
77b3f9bb97 Removing default for volume-plugins mountpoint (#2618)
All checks test whether this is defined, meaning there is no way to undefine it.
2018-04-10 17:19:25 +03:00
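The anti-pattern being removed, sketched with an assumed variable name (kubelet_flexvolumes_plugins_dir): when every consumer guards on `is defined` and a default exists, the guard always passes, so deleting the default is the only off switch.

```
# With a default set elsewhere, this 'when' can never be false.
- name: Create flexvolume plugins directory
  file:
    path: "{{ kubelet_flexvolumes_plugins_dir }}"
    state: directory
  when: kubelet_flexvolumes_plugins_dir is defined
```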
Matthew Mosesohn
09f93d9e0c Fix CI upgrade scenario by using dynamic inventory file (#2635)
Also updates the commit ID we use as a basis for upgrade tests.
2018-04-10 16:02:33 +03:00
Matthew Mosesohn
45f15bf753 Revert "Fix new envvar for setting openstack_tenant_id" (#2640) 2018-04-10 14:37:24 +03:00
Aivars Sterns
913cc5a9af Merge pull request #2639 from ironhouzi/openstack_tenant_id_fix
Fix new envvar for setting openstack_tenant_id
2018-04-10 14:35:28 +03:00
Aivars Sterns
a46acfcdd8 Merge pull request #2627 from mattymo/no_more_do_do
Remove jinja2 dependency of do
2018-04-10 14:32:29 +03:00
Robin Skahjem-Eriksen
0c0f6b755d Fix new envvar for setting openstack_tenant_id
Changed from OS_PROJECT_ID to OS_PROJECT_NAME.
2018-04-10 13:30:48 +02:00
Aivars Sterns
ecda4e3a8c Merge pull request #2632 from Atoms/permission_fix
fix kubectl download location and kubectl.sh helper owner/group remove
2018-04-09 15:33:04 +03:00
Marcelo Grebois
4c12b273ac Enabling MutatingAdmissionWebhook for Istio Automatic sidecar injection
https://istio.io/docs/setup/kubernetes/sidecar-injection.html#automatic-sidecar-injection
2018-04-09 12:49:05 +02:00
Atoms
b68854f79d fix kubectl download location and kubectl.sh helper owner/group remove 2018-04-09 13:19:26 +03:00
Matthew Mosesohn
f954bc0a5a Remove jinja2 dependency of do
While `do` looks cleaner, forcing this extra option in ansible.cfg
seems to be more invasive. It would be better to keep the traditional
approach of `set dummy = ` instead.
2018-04-09 12:27:53 +03:00
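The two styles side by side, as a sketch: `do` works only with the jinja2.ext.do extension enabled in ansible.cfg, while the assignment form needs stock Jinja2 only.

```
{# requires "jinja2_extensions = jinja2.ext.do" in ansible.cfg #}
{% do pods.append(item) %}

{# traditional form, no extra configuration needed #}
{% set dummy = pods.append(item) %}
```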
Rong Zhang
7b8359df4d Merge pull request #2613 from riverzhang/atomic-docker
Fix check docker error for atomic
2018-04-08 20:58:42 +08:00
rongzhang
66b61866cd Fix check docker error for atomic
Fix issues #2611
2018-04-08 17:53:16 +08:00
Rong Zhang
3736bfa04a Merge pull request #2612 from riverzhang/os
Fix issues #2522 Support Debian stretch
2018-04-08 16:29:05 +08:00
Brad Beam
dfc46f02d7 Adding missing service-account certificate for vault
Missed in #2554
2018-04-06 15:29:52 -05:00
rongzhang
9086665013 Fix issues #2522 Support Debian stretch
https://download.docker.com/linux/debian/dists/
2018-04-06 18:31:46 +08:00
RongZhang
0210e53bb7 Merge pull request #2610 from danielhoherd/master
Fix typos (no logic changes)
2018-04-06 16:53:40 +08:00
Daniel Hoherd
ca40d51bc6 Fix typos (no logic changes) 2018-04-05 15:54:58 -07:00
RongZhang
ca6a07f595 Add VMware vSphere to deployed 2018-04-05 22:36:50 +08:00
Andreas Krüger
b5bd959a97 Merge pull request #2598 from chenhonggc/persist_ipvs_module
Persist ip_vs modules
2018-04-05 09:53:47 +02:00
Chen Hong
973e7372b4 content: | 2018-04-04 23:05:27 +08:00
Chen Hong
b54e091886 Persist ip_vs modules 2018-04-04 18:18:51 +08:00
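A sketch of persisting the modules via modules-load.d (file path and module list assumed; the follow-up commit above corrects the `content: |` block syntax):

```
# Load the IPVS modules at boot so kube-proxy's ipvs mode survives reboots.
- name: Persist ip_vs modules
  copy:
    dest: /etc/modules-load.d/kube_proxy-ipvs.conf
    content: |
      ip_vs
      ip_vs_rr
      ip_vs_wrr
      ip_vs_sh
```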
Andreas Krüger
6c220e4e4b Merge pull request #2495 from holmsten/rotate-provisioner-token
Rotate local-volume-provisioner token
2018-04-04 10:21:12 +02:00
Andreas Krüger
2511e14289 Merge pull request #2346 from Miouge1/kube-scheduler-mode
Use legacy policy config to apply the scheduler policy
2018-04-04 10:20:51 +02:00
Andreas Krüger
0f5ea5474c Merge pull request #2593 from vterdunov/fix-check-vsphere_cloud_provider
Properly check vsphere_cloud_provider.rc
2018-04-03 20:35:59 +02:00
Andreas Krüger
6567b8e012 Merge pull request #2590 from hswong3i/istio-download
istio: container download related things should be defined in the download role
2018-04-03 13:57:43 +02:00
Andreas Krüger
aee3ec682e Merge pull request #2587 from tossmilestone/update-ingress-nginx-version
Bump ingress-nginx-controller to version 0.12.0
2018-04-03 13:56:41 +02:00
Wong Hoi Sing Edison
428a554ddb istio: container download related things should be defined in the download role 2018-04-03 14:29:50 +08:00
Xiaoxi He
32f4194cf8 Bump ingress-nginx-controller to version 0.12.0 2018-04-03 10:39:17 +08:00
Chad Swenson
6f3ff70b17 Merge pull request #2585 from georgejdli/fix-sa-token-signing
check if dedicated service account token signing key exists
2018-04-02 14:23:49 -05:00
georgejdli
76bb5f8d75 check if dedicated service account token signing key exists 2018-04-02 10:57:24 -05:00
vterdunov
4b98537f79 Properly check vsphere_cloud_provider.rc 2018-04-02 18:45:42 +03:00
Andreas Krüger
cac2196ad5 Merge pull request #2575 from hswong3i/local-volume-provisioner-download
local-volume-provisioner: container download related things should be defined in the download role
2018-04-02 10:32:43 +02:00
Andreas Krüger
ba24fe3226 Merge pull request #2570 from avoidik/transfer-cloud-configs
Move cloud config configurations to proper location
2018-04-02 10:31:38 +02:00
Matthew Mosesohn
3004791c64 Add pre-upgrade task for moving credentials file (#2394)
* Add pre-upgrade task for moving credentials file

This reverts commit 7ef9f4dfdd.

* add python interpreter workaround for localhost
2018-04-02 11:19:23 +03:00
Wong Hoi Sing Edison
b1a7889ff5 local-volume-provisioner: container download related things should be defined in the download role 2018-04-02 13:50:11 +08:00
Andreas Krüger
92fc2df214 Merge pull request #2574 from hswong3i/cephfs-provisioner-download
cephfs-provisioner: container download related things should be defined in the download role
2018-04-01 18:26:08 +02:00
Wong Hoi Sing Edison
4f714b07b8 cephfs-provisioner: container download related things should be defined in the download role 2018-04-01 20:35:44 +08:00
Andreas Krüger
eb4038a6b9 Merge pull request #2573 from hswong3i/registry-download
registry: container download related things should be defined in the download role
2018-04-01 11:39:04 +02:00
Wong Hoi Sing Edison
4c0e9ba890 registry: container download related things should be defined in the download role 2018-04-01 06:51:57 +08:00
Andreas Krüger
deac627dc7 Merge pull request #2571 from hswong3i/ingress-nginx-download
ingress-nginx: container download related things should be defined in the download role
2018-03-31 20:51:50 +02:00
Andreas Krüger
6ee3c053b7 Merge pull request #2547 from bobahspb/master
prometheus operator, metrics for k8s cluster
2018-03-31 20:51:24 +02:00
bobahspb
16961f69f2 Merge branch 'master' into master 2018-03-31 21:48:39 +03:00
Andreas Krüger
b9b028a735 Update etcd deployment to use correct cert and key (#2572)
* Update etcd deployment to use correct cert and key

* Update to use admin cert for etcdctl commands

* Update handler to use admin cert too
2018-03-31 14:06:09 -04:00
Wong Hoi Sing Edison
5fe144aa0f ingress-nginx: container download related things should be defined in the download role 2018-04-01 00:22:33 +08:00
Andreas Krüger
5b0da4279f Merge pull request #2543 from hswong3i/cert-manager-0.2.3
Integrate jetstack/cert-manager 0.2.3 to Kubespray
2018-03-31 18:15:25 +02:00
Andreas Krüger
1ac978b8fa Merge pull request #2567 from mirwan/node_labels_doc_plus_kube_ingress_handling
node_labels documentation and kube-ingress label definition as role_node_label
2018-03-31 18:05:52 +02:00
Andreas Krüger
c1a2e9a8c6 Merge pull request #2569 from avoidik/allow-ssh-key-openstack
Allow ansible_ssh_private_key_file for Openstack
2018-03-31 18:05:00 +02:00
Wong Hoi Sing Edison
195d6d791a Integrate jetstack/cert-manager 0.2.3 to Kubespray 2018-03-31 19:29:11 +08:00
avoidik
aa301c31d1 Move credential checks into proper folder 2018-03-31 13:29:00 +03:00
Andreas Krüger
d9418b1dc4 Merge pull request #2554 from georgejdli/fix-sa-token-signing
Fix kubespray's ServiceAccount token signing keys
2018-03-31 09:59:22 +02:00
Andreas Krüger
2c89a02db3 Only download container/file if host is in defined group (#2565)
* Only download container/file if host is in defined group

* Set correct when clause

* Fix last entries

* Update download groups
2018-03-30 22:40:01 -04:00
Chad Swenson
0ca08e03af Merge pull request #2566 from woopstar/etcd-fix-2
Fix etcd from import task to include task
2018-03-30 20:53:32 -04:00
avoidik
15efdf0c16 Move credential checks 2018-03-31 03:26:37 +03:00
avoidik
ab8760cc83 Move credentials pre-check 2018-03-31 03:24:57 +03:00
avoidik
b6da596ec1 Move default configuration parameters for cloud-config 2018-03-31 03:18:23 +03:00
avoidik
3c12c6beb3 Move cloud config configurations to proper location 2018-03-31 02:59:59 +03:00
avoidik
26caad4f12 Allow ansible_ssh_private_key_file for Openstack 2018-03-31 02:38:01 +03:00
Erwan Miran
8ece922ef0 node_labels documentation + kube-ingress label handling as role_node_label 2018-03-31 00:36:11 +02:00
Andreas Krüger
887a468d32 Merge pull request #2562 from avoidik/fix-indexes-pr-2251
Fix kubecert_node.results indexes
2018-03-31 00:16:11 +02:00
woopstar
859a7f32fb Fix import task. Has to be an include task to evaluate the etcd_cluster_setup variable at run time 2018-03-31 00:06:34 +02:00
Andreas Krüger
1f28764ca1 Merge pull request #2512 from woopstar/hyperkube-fix-1
Switch hyperkube from CoreOS to Google
2018-03-30 21:58:03 +02:00
Andreas Krüger
76cb37d6b5 Merge pull request #2544 from woopstar/cert-fix-2
Update openssl.conf to count better and work with Jinja 2.9
2018-03-30 21:57:17 +02:00
Andreas Krüger
7ddd4cd38c Merge pull request #2561 from rsmitty/no_proxy
only set no_proxy if other proxy vars are defined
2018-03-30 21:43:23 +02:00
Andreas Krüger
c1eb975545 Merge pull request #2557 from chenhonggc/vault_health_check_delay
Maybe vault health check needs delay
2018-03-30 21:39:15 +02:00
Chad Swenson
414b739641 Merge pull request #2564 from rsmitty/jinja-ext
include do extension for jinja
2018-03-30 15:30:15 -04:00
georgejdli
572ab650db copy dedicated service account token signing key for kubeadm migration 2018-03-30 13:03:32 -05:00
Spencer Smith
e296ccb4d0 include do extension for jinja 2018-03-30 12:31:38 -04:00
avoidik
72c2a8982b Fix kubecert_node.results indexes 2018-03-30 17:24:50 +03:00
Spencer Smith
13c57147eb only set no_proxy if other proxy vars are defined 2018-03-30 09:48:55 -04:00
Andreas Krüger
7e58b96328 Merge pull request #2525 from avoidik/openstack_subnet_fix
Return subnet_id as defined in kubespray.tf
2018-03-30 13:42:12 +02:00
Andreas Krüger
ac4a71452e Merge pull request #2062 from wanix/ansible-vagrant-update
replace ansible.sudo by ansible.become for vagrant
2018-03-30 13:41:45 +02:00
Matthew Mosesohn
03bcfa7ff5 Stop templating kube-system namespace and creating it (#2545)
Kubernetes makes this namespace automatically, so there is
no need for kubespray to manage it.
2018-03-30 14:29:13 +03:00
Andreas Kruger
af5f376163 Revert 2018-03-30 11:42:20 +02:00
woopstar
004b0a3fcf Fix merge conflict 2018-03-30 11:38:59 +02:00
Andreas Kruger
4bb7d2b566 Merge branch 'master' of https://github.com/kubernetes-incubator/kubespray into cert-fix-2 2018-03-30 11:34:05 +02:00
Vladimir Vasilkin
94a0562c93 adding prometheus_operator_enabled, k8s_metrics_enabled parameters to tests 2018-03-30 12:29:04 +03:00
Andreas Krüger
f619eb08b1 Merge pull request #2350 from whereismyjetpack/kubeadm-nodename
set nodeName to "{{ inventory_hostname }}" in kubeadm-config
2018-03-30 11:15:52 +02:00
Andreas Krüger
55195fe546 Merge pull request #2500 from gorazio/patch-1
Add prometheus annotations to spec in ingress
2018-03-30 11:02:31 +02:00
RongZhang
5711074c5a Merge pull request #2290 from mirwan/node_labels_from_inventory
Node labels definition in kubelet params from inventory
2018-03-30 03:42:52 -05:00
Chen Hong
4a705b3fba Maybe vault health check needs delay 2018-03-30 16:42:08 +08:00
RongZhang
31e386886f Merge pull request #2555 from chenhonggc/redundancy
remove redundant code
2018-03-29 22:03:37 -05:00
陈宏
4d85e3765e remove redundant code 2018-03-30 09:19:00 +08:00
Vladimir Vasilkin
f0a04b4d65 wait 5 * 4 secs until Tiller starts 2018-03-30 00:09:36 +03:00
Vladimir Vasilkin
760ca1c3a9 adding checking for prometheus_operator_enabled 2018-03-29 23:03:43 +03:00
Vladimir Vasilkin
23b3833806 running on the first master only. 2018-03-29 22:51:46 +03:00
Kuldip Madnani
daeeae1a91 Added retries in pre-upgrade.yml and retries while applying kube-dns.yml (#2553)
* Added retries in pre-upgrade.yml and retries while applying kube-dns.yml

* Removed trailing spaces
2018-03-29 11:37:32 -05:00
georgejdli
c8f857eae4 configure kubespray to sign service account tokens with a dedicated and stable key 2018-03-29 09:50:31 -05:00
Andreas Krüger
270d21f5c1 Merge pull request #2540 from mattymo/cloud_config_timing
Write cloud-config during kubelet configuration
2018-03-29 09:12:18 +02:00
Andreas Kruger
bf29198efd Fix merge conflict 2018-03-29 09:11:13 +02:00
Chad Swenson
db4e225342 Merge pull request #2548 from kmadnani/fix-openssl-conf
Added a fix in openssl.conf template to check for loadbalancer IP.
2018-03-28 16:37:24 -05:00
Kuldip Madnani
9ebbf1c3cd Added a fix in the openssl.conf template to check whether the IP of the loadbalancer is available. 2018-03-28 16:34:26 -05:00
Chad Swenson
ef7f5edbb3 Remove old docker packages and other docker upgrade fixes (#2536)
* Remove old docker packages

This removes docker packages that are obsolete if docker-ce packages are to be installed, which fixes some package conflict issues that can occur during upgrades.

* Add support for setting obsoletes=0 when installing docker with yum
2018-03-28 15:10:39 -05:00
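A sketch of both pieces described in the commit (package list and option plumbing assumed):

```
# Drop packages that conflict with docker-ce before installing it.
- name: Remove obsolete docker packages
  package:
    name: "{{ item }}"
    state: absent
  with_items:
    - docker
    - docker-common
    - docker-engine

# obsoletes=0 keeps yum from swapping packages out mid-install.
- name: Install docker-ce with obsoletes disabled
  command: yum install -y --setopt=obsoletes=0 docker-ce
```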
woopstar
0b5404b2b7 Fix 2018-03-28 20:28:04 +02:00
Vladimir Vasilkin
19e1b11d98 prometheus operator, metrics for k8s cluster
install using Helm:
- Prometheus Operator
- metrics for k8s cluster including: grafana dashboard, alertmanager, node exporters

base project:
https://github.com/coreos/prometheus-operator

the issue:
https://github.com/kubernetes-incubator/kubespray/issues/2042

Previous PR, raw ansible without Helm:
https://github.com/kubernetes-incubator/kubespray/pull/2499
2018-03-28 21:23:30 +03:00
woopstar
0df32b03ca Update openssl.conf to count better and work with Jinja 2.9 2018-03-28 17:48:56 +02:00
Matthew Mosesohn
72a4223884 Write cloud-config during kubelet configuration
This file should only be updated during kubelet upgrade so that
master components are not accidentally restarted first during
preinstall stage.
2018-03-28 16:26:36 +03:00
Andreas Krüger
03117d9572 Merge pull request #2488 from LuckySB/ingress-nginx-node-role
Dedicated node for ingress nginx controller
2018-03-28 14:07:40 +02:00
Andreas Krüger
c78f5393c3 Merge pull request #2538 from hswong3i/weave-2.2.1-fixup
Fixup #2523: Upgrade Weave to 2.2.1
2018-03-28 11:06:33 +02:00
Andreas Krüger
fda49564bf Merge pull request #2492 from chenhonggc/gather_all_instances
gather all facts
2018-03-28 09:52:39 +02:00
Andreas Krüger
ed48b6e4b7 Merge pull request #2537 from hswong3i/calico-2.6.8-docs
Fixup #2262: Update README.md for calico v2.6.8
2018-03-28 09:50:07 +02:00
Wong Hoi Sing Edison
848fc323db Fixup for #2523:
- Rename template for /etc/cni/net.d/00-weave.conflist to 00-weave.conflist.j2
- Apply resource requests/limits to both the weave and weave-npc containers
2018-03-28 11:16:42 +08:00
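What per-container requests/limits look like in the weave-net manifest, as a sketch (values illustrative, not Kubespray's defaults):

```
containers:
  - name: weave
    resources:
      requests:
        cpu: 10m
        memory: 200Mi
      limits:
        cpu: 300m
        memory: 400Mi
  - name: weave-npc
    resources:
      requests:
        cpu: 10m
        memory: 200Mi
      limits:
        cpu: 300m
        memory: 400Mi
```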
Wong Hoi Sing Edison
e6f57f27ee Fixup #2262: Update README.md for calico v2.6.8 2018-03-28 10:44:54 +08:00
Brad Beam
015ea62e92 Merge pull request #2262 from tmjd/calico-canal-v2-6-7
Update Calico and Canal
2018-03-27 21:07:28 -05:00
Andreas Krüger
2ca7087018 Merge pull request #2524 from avoidik/systemd_user_kubelet
Set exact user for Kubelet services
2018-03-27 16:41:10 +02:00
Andreas Krüger
d665f14682 Merge pull request #2526 from mzehrer/patch-1
Remove  kibana_base_url
2018-03-27 12:40:31 +02:00
avoidik
e375678674 Set exact user for Kubelet services 2018-03-27 11:13:52 +03:00
avoidik
076b5c153f Return subnet_id as defined in kubespray.tf 2018-03-27 11:13:36 +03:00
Andreas Krüger
d33a482c91 Merge pull request #2529 from dvazar/bugfix/azurerm_generate_inventory
Fixed inventory file creation
2018-03-26 21:11:28 +02:00
Andreas Krüger
d64839e7d2 Merge pull request #2532 from LuckySB/etcd_tuning
add etcd tuning options
2018-03-26 21:10:38 +02:00
Erwan SEITE
31705a502d change vagrant version 2018-03-26 18:02:14 +02:00
Erwan SEITE
5f5d0ffe14 replace sudo by become 2018-03-26 18:02:14 +02:00
Sergey Bondarev
4f7479d94d add etcd tuning options
https://coreos.com/etcd/docs/latest/tuning.html

etcd_snapshot_count
and
ionice priority
2018-03-26 17:25:51 +03:00
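A group_vars sketch of the knobs named in the commit; etcd_snapshot_count comes from the commit message, while the ionice variable name here is hypothetical:

```
# Take snapshots less often to reduce disk write pressure.
etcd_snapshot_count: 10000
# Hypothetical knob: run etcd under best-effort, highest-priority I/O.
etcd_ionice: "-c2 -n0"
```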
dvazar
9511178666 fixed: creation of an inventory template 2018-03-26 12:00:02 +07:00
Michael Zehrer
b8d1652baf Remove kibana_base_url
The default for kibana_base_url does not make sense and makes kibana unusable. The default path forces a 404 when you try to open kibana in the browser. Not setting kibana_base_url works just fine.
2018-03-25 16:08:07 +02:00
Andreas Krüger
f7dc73b830 Merge pull request #2521 from f84anton/patch-1
optional calico_ip_auto_method variable with IP_AUTODETECTION_METHOD
2018-03-24 18:37:03 +01:00
Andreas Krüger
8eac37fabd Merge pull request #2523 from hswong3i/weave-2.2.1
Upgrade Weave to 2.2.1
2018-03-24 18:36:28 +01:00
Dann Bohn
1d0415a6cf fixes typo in kube_override_hostname for kubeadm 2018-03-24 13:29:07 -04:00
Wong Hoi Sing Edison
3f5c60886b Upgrade Weave to 2.2.1
- Fix #2414, so namespace isolation should now work
- Update weave-net.yml.j2 as per latest https://cloud.weave.works/k8s/net
- Other minor fixup
2018-03-24 17:27:12 +08:00
Anton Fayzrahmanov
a75598b3f4 IP_AUTODETECTION_METHOD docs 2018-03-24 01:54:17 +03:00
Anton Fayzrahmanov
60a057cace Update calico-node.yml.j2 2018-03-24 01:46:26 +03:00
Anton Fayzrahmanov
dd9d0c0530 optional calico_ip_auto_method variable with IP_AUTODETECTION_METHOD
can be set to one of: first-found, can-reach, interface
2018-03-23 16:33:20 +03:00
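A sketch of the wiring (exact template placement assumed): the optional group_vars variable feeds an IP_AUTODETECTION_METHOD env entry in calico-node.yml.j2.

```
# group_vars (optional):
calico_ip_auto_method: "can-reach=8.8.8.8"
```

and in the template:

```
{% if calico_ip_auto_method is defined %}
- name: IP_AUTODETECTION_METHOD
  value: "{{ calico_ip_auto_method }}"
{% endif %}
```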
Dann Bohn
9fa995ac9d only sets nodeName in kubeadm-config when kube_override_hostname is set 2018-03-23 08:33:25 -04:00
Andreas Krüger
f07734596e Merge pull request #2333 from hswong3i/cephfs_provisioner_fixup
CephFS Provisioner Addon Fixup
2018-03-23 11:57:11 +01:00
Wong Hoi Sing Edison
caec3de364 Updating to use calico-node v2.6.8 2018-03-22 12:33:04 -05:00
Erik Stidham
60bfc56e8e Update Calico and Canal
- Updating to use calico-node v2.6.7
- A few updates to their manifests too
2018-03-22 12:30:23 -05:00
Wong Hoi Sing Edison
206e24448b CephFS Provisioner Addon Fixup 2018-03-22 23:03:13 +08:00
Spencer Smith
4175431dcd Merge pull request #2332 from hswong3i/registry_fixup
Registry Addon Fixup
2018-03-22 11:01:22 -04:00
Wong Hoi Sing Edison
bb1eb9fec8 Add labels for namespace 2018-03-22 21:33:32 +08:00
Keyvan Hedayati
b0d7115e9b hswong3i/kubespray#3: Use {{ cluster_name }} for valid FQDN in REGISTRY_HOST 2018-03-22 21:33:32 +08:00
Wong Hoi Sing Edison
f8ebd08e75 Registry Addon Fixup 2018-03-22 21:33:32 +08:00
Andreas Krüger
6ac7840195 Update to correct versions in README
Some of the versions do not match. Fix this.
2018-03-22 11:59:58 +01:00
Andreas Krüger
30e4b89837 Merge pull request #2504 from brtknr/patch-1
Update kube-apiserver.manifest.j2 and kubeadm-config.yaml.j2 to incorporate `endpoint-reconciler-type: lease`
2018-03-22 09:15:55 +01:00
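The flag in question, as it would sit on the kube-apiserver command line (placement assumed; lease reconciliation needs Kubernetes 1.9+):

```
# Lease-based reconciliation prunes apiserver endpoints of dead masters,
# so the kubernetes service keeps responding if one master goes down.
- --endpoint-reconciler-type=lease
```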
Andreas Krüger
405c711edb Remove v in tag 2018-03-22 09:07:28 +01:00
Chad Swenson
0e6b4e80f7 Merge pull request #2490 from woopstar/workaround-fix-1
Only apply roles from first master node to fix regression
2018-03-21 20:29:59 -05:00
Chad Swenson
9949782e96 Merge pull request #2489 from woopstar/token-fix-1
Only copy tokens if tokens_list contains any
2018-03-21 20:28:06 -05:00
Chad Swenson
bbb6e7b3da Merge pull request #2508 from melkosoft/cilium
Cilium v.1.0.0-rc8
2018-03-21 20:25:43 -05:00
Chad Swenson
bc68188209 Merge pull request #2498 from zmsp/master
Upgraded kubernetes from 1.9.3 to 1.9.5
2018-03-21 20:25:05 -05:00
woopstar
d3780e181e Switch hyperkube from CoreOS to Google 2018-03-21 23:27:16 +01:00
Andreas Krüger
2e202051e3 Merge pull request #2364 from whereismyjetpack/default-download
set local_release_dir in downloads to match others
2018-03-21 23:16:48 +01:00
Chad Swenson
448c1d5faa Merge pull request #2509 from chadswen/flannel-update
Update flannel version to v0.10.0
2018-03-21 12:15:09 -05:00
Andreas Krüger
ff2b8e5e60 Merge pull request #2503 from woopstar/kubelet-fix-1
Fix duplicate --proxy-client-cert-file and --proxy-client-key-file
2018-03-21 10:03:31 +01:00
Erwan Miran
8b71ef8ceb Labels from role (node-role.k8s.io/node) and labels from inventory are merged into node-labels parameter in kubelet 2018-03-21 09:19:05 +01:00
mirwan
ee8f678010 Addition of the .creds extension to the credentials files generated by password lookup in order for Ansible not to consider them as inventory files with inventory_ignore_extensions set accordingly (#2446) 2018-03-21 10:50:32 +03:00
melkosoft
6425c837d5 Added GCE Cilium Ubuntu test 2018-03-21 04:39:29 +00:00
Chad Swenson
a6b918c1a1 Merge pull request #2485 from LuckySB/flannel_iface_regexp
Add --iface-regex options to flannel
2018-03-20 21:18:01 -05:00
Chad Swenson
c025ab4eb4 Update flannel version to v0.10.0 2018-03-20 19:59:51 -05:00
melkosoft
ae30009fbc changed version to 1.0.0-rc8 2018-03-20 14:18:56 -07:00
melkosoft
158d775306 changed cilium to 1.0.0-rc7. Set CI to use coreos for cilium test 2018-03-20 12:43:26 -07:00
woopstar
9d540165c0 Set kube_api_aggregator_routing to default false as we use kube-proxy 2018-03-20 16:28:05 +01:00
Bharat Kunwar
0cb51e7530 Merge branch 'master' into patch-1 2018-03-20 13:36:25 +00:00
Bharat Kunwar
13e47e73c8 Update kubeadm-config.yaml.j2
As requested
2018-03-20 13:33:36 +00:00
Bharat Kunwar
6c4e5e0e3d Update kubeadm-config.yaml.j2 2018-03-20 13:30:57 +00:00
Bharat Kunwar
d2fd7b7462 Update kube-apiserver.manifest.j2 2018-03-20 12:19:53 +00:00
Bharat Kunwar
d9453f323b Update kube-apiserver.manifest.j2 2018-03-20 12:16:35 +00:00
Bharat Kunwar
b787b76c6c Update kube-apiserver.manifest.j2
Ensure that kube-apiserver will respond even if one of the nodes are down.
2018-03-20 12:06:34 +00:00
woopstar
a94a407a43 Fix duplicate --proxy-client-cert-file and --proxy-client-key-file 2018-03-20 12:08:36 +01:00
gorazio
96e46c4209 bump after CLA signing 2018-03-20 10:23:50 +03:00
gorazio
aa30fa8009 Add prometheus annotations to spec in ingress
Added annotations from metadata to spec.template.metadata. Without this, pods do not get any annotations and Prometheus does not see them
2018-03-20 08:47:36 +03:00
Zobair Shahadat
ebfee51aca Upgraded kubernetes from 1.9.3 to 1.9.5 2018-03-19 15:42:24 -04:00
Andreas Krüger
8b6a6a5a28 Merge pull request #2487 from MQasimSarfraz/upgrade-playbook-fix
Mark "calico-rr" as optional in fact gather
2018-03-19 20:25:24 +01:00
Andreas Holmsten
14ac7d797b Rotate local-volume-provisioner token
When tokens need to rotate, include local-volume-provisioner
2018-03-19 13:04:18 +01:00
Andreas Krüger
f253691a68 Merge pull request #2347 from hswong3i/multiple_artifacts_dir
Support multiple artifacts under individual inventory directory
2018-03-19 12:45:55 +01:00
Sergey Bondarev
038da7255f check if group kube-ingress is not empty
fix spelling mistake in ingress_nginx_host_network
set default value for ingress_nginx_host_network: false
2018-03-19 12:59:38 +03:00
Chen Hong
73cd24bf5a gather facts from all instances, maybe include calico-rr 2018-03-19 14:35:45 +08:00
Chen Hong
4ee9cb2be9 gather facts from all instances, maybe include calico-rr 2018-03-19 14:32:25 +08:00
woopstar
f1d2f84043 Only apply roles from first master node to fix regression 2018-03-18 16:15:01 +01:00
woopstar
b9a949820a Only copy tokens if tokens_list contains any 2018-03-18 08:42:38 +01:00
Andreas Krüger
50e5f0d28b Merge pull request #2468 from LuckySB/master
change expirations period for generated certificate from 10y to 100 years
2018-03-17 19:43:40 +01:00
Sergey Bondarev
1481f7d64b Dedicated node for ingress nginx controller
Adds the ability to create a dedicated node for the ingress nginx controller
and a host-network type for the nginx controller.

Also adds terminationGracePeriodSeconds: 60 from the example at
https://github.com/kubernetes/ingress-nginx/blob/master/docs/examples/static-ip/nginx-ingress-controller.yaml
2018-03-17 02:54:46 +03:00
Chad Swenson
7d33650019 Merge pull request #2462 from woopstar/coredns-patch
Add CoreDNS support
2018-03-16 18:33:36 -05:00
MQasimSarfraz
728598b230 Mark "calico-rr" as optional in fact gathering 2018-03-16 21:09:47 +00:00
woopstar
e40368ae2b Add CoreDNS support with various fixes
Added CoreDNS to downloads

Updated with labels. Should now work without RBAC too

Fix DNS settings on hosts

Rename CoreDNS service from kube-dns to coredns

Add rotate based on http://edgeofsanity.net/rant/2017/12/20/systemd-resolved-is-broken.html

Updated docs with CoreDNS info

Added labels and fixed minor settings from official yaml file: https://github.com/kubernetes/kubernetes/blob/release-1.9/cluster/addons/dns/coredns.yaml.sed

Added a secondary deployment and secondary service ip. This is to mitigate dns timeouts and create high resiliency for failures. See discussion at 'https://github.com/coreos/coreos-kubernetes/issues/641#issuecomment-281174806'

Set dns list correct. Thanks to @whereismyjetpack

Only download KubeDNS or CoreDNS if selected

Move dns cleanup to its own file and import tasks based on dns mode

Fix install of KubeDNS when dnsmasq_kubedns mode is selected

Add new dns option coredns_dual for dual stack deployment. Added variable to configure replicas deployed. Updated docs for dual stack deployment. Removed rotate option in resolv.conf.

Run DNS manifests for CoreDNS and KubeDNS

Set skydns servers on dual stack deployment

Use only one template for CoreDNS dual deployment

Set correct cluster ip for the dns server
2018-03-16 21:51:37 +01:00
Brad Beam
4ff17cb5a5 Merge pull request #2457 from MQasimSarfraz/vsphere-volumes-rbac
Fix vsphere cloud_provider RBAC permissions
2018-03-16 14:08:49 -05:00
Sergey Bondarev
b7e6dd0dd4 Add --iface-regex options to flannel
Flannel uses the interface set by the --iface option for inter-host communication,
defaulting to the interface for the default route on the machine.

The flannel config is set via a daemonset, so the flannel config on all nodes is the same,
but different nodes can have different interface names for the inter-host communication network.

The --iface-regex option allows flannel to find the interface that carries an address from the inter-host communication network.
2018-03-16 21:44:36 +03:00
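A sketch of the resulting flanneld argument (regex value illustrative):

```
# Match the inter-host network by regex instead of a fixed interface name,
# so heterogeneous nodes all find the right interface.
command:
  - /opt/bin/flanneld
  - "--iface-regex=10\\.10\\..*"
```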
Qasim Sarfraz
8ee2091955 Merge pull request #3 from kubernetes-incubator/master
Sync Upstream
2018-03-16 17:21:54 +01:00
Sergey Bondarev
3fac550090 Merge remote-tracking branch 'upstream/master' 2018-03-16 14:09:54 +03:00
Andreas Krüger
d29a1db134 Merge pull request #2461 from woopstar/patch-11
Add support to kubeadm too
2018-03-16 08:24:31 +01:00
Andreas Krüger
653d97dda4 Merge pull request #2472 from woopstar/patch-12
Make sure output from extra args is strings
2018-03-16 08:23:50 +01:00
Andreas Krüger
5364160d6a Merge pull request #2476 from woopstar/patch-13
Enable encrypting the secrets
2018-03-16 08:22:54 +01:00
Andreas Krüger
1a35948ff6 Enable encrypting the secrets
Enable the CI test to check the encryption of secrets
2018-03-15 20:33:57 +01:00
woopstar
40c0f3756b Encapsulate item instead of casting to string 2018-03-15 20:27:21 +01:00
Andreas Krüger
3d6fd49179 Added option for encrypting secrets to etcd v.2 (#2428)
* Added option for encrypting secrets to etcd

* Fix keylength to 32

* Forgot the default

* Rename secrets.yaml to secrets_encryption.yaml

* Fix static path for secrets file to use ansible variable

* Rename secrets.yaml.j2 to secrets_encryption.yaml.j2

* Base64 encode the token

* Fixed merge error

* Changed path to credentials dir

* Update path to secrets file which is now readable inside the apiserver container. Set better file permissions

* Add encryption option to k8s-cluster.yml
2018-03-15 22:20:05 +03:00
Oleg Vyukov
d843e3d562 Fix indent Custom ConfigMap ingress-nginx (#2447) 2018-03-15 22:18:18 +03:00
Aivars Sterns
d8d5474dcc Merge pull request #2467 from huzhengchuan/fix/kubeadm_enable
Fix error in kubelet.kubeadm.env.j2
2018-03-15 08:50:40 +02:00
Andreas Krüger
788e41a315 Make sure output from extra args is strings
Setting the following:

```
kube_kubeadm_controller_extra_args:
  address: 0.0.0.0
  terminated-pod-gc-threshold: "100"
```

Results in `terminated-pod-gc-threshold: 100` in the kubeadm config file. But it has to be a string to work.
2018-03-14 19:23:43 +01:00
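A template-side sketch of the fix: quoting each rendered value keeps YAML integers as strings in the kubeadm config (loop placement assumed):

```
controllerManagerExtraArgs:
{% for key, value in kube_kubeadm_controller_extra_args.items() %}
  {{ key }}: "{{ value }}"
{% endfor %}
```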
MQasimSarfraz
1bcc641dae Create vsphere clusterrole only if it doesn't exist 2018-03-14 11:29:35 +00:00
Sergey Bondarev
f8fed0f308 change expirations period for generated certificate from 10 years to 100 years 2018-03-14 13:33:36 +03:00
zhengchuan hu
d1e6632e6a Fix errors in kubelet.kubeadm.env.j2
1. 404 link URL
2. kubelet_authentication_token_webhook does not work
3. kube_reserved variable set twice
2018-03-14 17:25:21 +08:00
Aivars Sterns
710295bd2f Merge pull request #2434 from protomech/feature/azure-vnet-resource-group
add support for azure vnetResourceGroup
2018-03-13 17:42:09 +02:00
RongZhang
3e2d68cd32 Merge pull request #2455 from whereismyjetpack/kube-limits
uses new kube_memory_reserved/kube_cpu_reserved variables in kubelet
2018-03-13 06:28:07 -05:00
Dann Bohn
f3788525ff fixes yamllint for docker defaults, and weave network plugin 2018-03-13 06:15:48 -04:00
Andreas Krüger
39d247a238 Add support to kubeadm too
Explicitly defines the --kubelet-preferred-address-types parameter #2418

Fixes #2453
2018-03-13 10:31:15 +01:00
Aivars Sterns
b37144b0b2 Merge pull request #2459 from riverzhang/remove-node-docs
Add remove node to getting-started doc
2018-03-13 11:12:42 +02:00
rong.zhang
2e0b33f754 Add remove node to getting-started doc 2018-03-13 16:41:26 +08:00
Aivars Sterns
adc3f79c23 Merge pull request #2458 from jouve/collect_info
use archive instead of command
2018-03-13 09:45:48 +02:00
Aivars Sterns
7904b454ba Merge pull request #2460 from riverzhang/fix-weave
Fix yamllint roles error for #2188 commit
2018-03-13 09:40:54 +02:00
rong.zhang
d264da8f08 Fix yamllint roles error for #2188 commit 2018-03-13 14:28:49 +08:00
Cyril Jouve
6abe78ff46 use archive instead of command 2018-03-12 19:59:22 +01:00
MQasimSarfraz
9a4aa4288c Fix vsphere cloud_provider RBAC permissions 2018-03-12 18:07:08 +00:00
Dann Bohn
50e3ccfa2b uses new kube_memory_reserved/kube_cpu_reserved variables in kubelet 2018-03-12 12:46:14 -04:00
RongZhang
69a3c33ceb Merge pull request #2429 from riverzhang/patch-6
Fix Docker exits prematurely
2018-03-12 06:16:25 -05:00
RongZhang
649b1ae868 Merge pull request #2452 from riverzhang/dockerproject
Fix issues #2451 Support docker-ce and docker-engine
2018-03-12 06:15:44 -05:00
Aivars Sterns
973cc12ca9 Merge pull request #2188 from cornelius-keller/fix_weave
fix nodePort for weave
2018-03-12 10:55:41 +02:00
Aivars Sterns
436de45dd4 Merge pull request #2295 from manics/supplementary-bugfix
Fix indexing of supplementary DNS in openssl.conf
2018-03-12 10:54:56 +02:00
Aivars Sterns
5f186a2835 Merge pull request #2418 from kubernetes-incubator/1439br
Explicitly defines the --kubelet-preferred-address-types parameter
2018-03-12 10:53:48 +02:00
RongZhang
ecec94ee7e Fix Docker exits prematurely
details:https://github.com/moby/moby/pull/31490/files
2018-03-12 14:44:47 +08:00
rong.zhang
196995a1a7 Fix issue #2451: Support docker-ce and docker-engine
Support docker-ce and docker-engine, including RedHat/CentOS, Ubuntu and Debian
2018-03-12 13:31:31 +08:00
Spencer Smith
3a714fd4ac Merge pull request #2427 from hswong3i/local_volume_provisioner_default
FIXUP #2424: local_provisioner directory should be created only if enabled
2018-03-10 09:00:35 -05:00
Spencer Smith
2132ec0269 Merge pull request #2378 from dleske/reorg-inventory-for-opst
Update OpenStack contrib to use per-cluster inventory layout
2018-03-09 15:21:21 -05:00
Spencer Smith
c47fdc9aa0 Merge pull request #2445 from chadswen/kube-cert-directory-fix
Fix kubernetes cert permission sync
2018-03-09 15:10:35 -05:00
Spencer Smith
5c4cfb54ae Merge pull request #2444 from chadswen/system-node-crb-name
Prefix system:node CRB
2018-03-09 15:09:01 -05:00
chadswen
cd153a1fb3 Fix kubernetes cert permission sync
Add `state: directory` to `file` task so that `recurse: yes` will actually take effect and ensure
certs/keys have the right file mode and owner
2018-03-09 00:11:10 -06:00
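The gist of the fix, sketched (path and ownership values assumed): the `file` module silently ignores `recurse: yes` unless `state: directory` is set.

```
- name: Ensure certs and keys have the right owner and mode
  file:
    path: "{{ kube_cert_dir }}"
    state: directory      # without this, recurse below is ignored
    owner: kube
    group: root
    recurse: yes
```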
chadswen
b0ab92c921 Prefix system:node CRB
Change the name of `system:node` CRB to `kubespray:system:node` to avoid
conflicts with the auto-reconciled CRB also named `system:node`

Fixes #2121
2018-03-08 23:56:46 -06:00
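A sketch of the renamed binding: only the binding name changes; the roleRef still targets the built-in system:node ClusterRole.

```
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubespray:system:node   # was system:node, which k8s auto-reconciles
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:node
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: Group
    name: system:nodes
```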
RongZhang
5007a69eee Merge pull request #2437 from huzhengchuan/fix/callo-routereflector
Fix always download calico_rr image
2018-03-08 23:22:48 -06:00
Chad Swenson
8a46e050e3 Merge pull request #2433 from octarinesec/eyeofthefrog/systemd_command_fix
Fix systemd version detection
2018-03-08 22:28:12 -06:00
Chad Swenson
256fd12da5 Merge pull request #2440 from huzhengchuan/fix/proxy
clean http-proxy.conf
2018-03-08 20:36:26 -06:00
zhengchuan hu
8e36ad09b4 clean http-proxy.conf 2018-03-08 23:16:02 +08:00
zhengchuan hu
96a92503cb Fix always download calico_rr image 2018-03-08 17:04:16 +08:00
RongZhang
5253153dbb Merge pull request #2416 from riverzhang/delete-node
Remove nodes
2018-03-08 01:55:20 -06:00
rong.zhang
12c78e622b Remove nodes
Drain the node, excluding daemonset resources
Use cluster reset to delete deployment data
Then delete the node
2018-03-08 15:03:42 +08:00
RongZhang
216bf2e867 Merge pull request #2422 from riverzhang/patch-5
Enable OOM killing for etcd-events
2018-03-07 23:15:19 -06:00
Wong Hoi Sing Edison
a086686e9f Support multiple artifacts under individual inventory directory 2018-03-08 11:57:53 +08:00
Wong Hoi Sing Edison
6402004018 FIXUP #2424: local_provisioner directory should be created only if enabled 2018-03-08 11:57:46 +08:00
RongZhang
955f833120 Merge pull request #2430 from huzhengchuan/fix/kube-reserve
fix the names of some variables
2018-03-07 21:25:32 -06:00
Chad Swenson
f4476f25bd Merge pull request #2435 from chadswen/kube-module-enhancements
kube Module Enhancements
2018-03-07 17:17:01 -06:00
Chad Swenson
8960d5bcfa kube Module Enhancements
* Multiple files are now supported across operations.
  * Can be specified as a list or a comma separated string.
  * Single item per task params will still work without changes.
* Added `files`, `filenames`, and `file`, as aliases for the `filename` param.
* Improved output of error message to always include stderr
* `exists` now supports checking files

Follow up PRs encouraged across roles to start converting `with_items` loops on `kube` tasks into `files` param lists so we can improve performance.
2018-03-07 14:50:09 -06:00
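A usage sketch of the new multi-file form (fields other than `files` assumed from typical kube-task usage):

```
# One kube invocation applying several manifests, replacing a with_items loop.
- name: Apply dns manifests
  kube:
    kubectl: "{{ bin_dir }}/kubectl"
    files:
      - "{{ kube_config_dir }}/coredns-deployment.yml"
      - "{{ kube_config_dir }}/coredns-svc.yml"
    state: latest
```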
Chris Mildebrandt
605738757d Fix systemd version detection
Change "command" to "shell" in order for the pipe to work correctly
2018-03-07 11:32:47 -08:00
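The nature of the fix, sketched: Ansible's `command` module does not go through a shell, so a pipe is passed as a literal argument; `shell` interprets it.

```
- name: Get systemd version
  shell: systemctl --version | head -n 1 | cut -d ' ' -f 2
  register: systemd_version
```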
Aivars Sterns
569613f2a4 Merge pull request #2425 from hswong3i/ingress_nginx_configmap
Add Custom ConfigMap Support for ingress-nginx
2018-03-07 19:02:03 +02:00
Antoine Legrand
cc182ea2f3 Merge pull request #2432 from kubernetes-incubator/remove-do-ci
Remove DigitalOcean
2018-03-07 16:29:45 +01:00
Wong Hoi Sing Edison
3f96b2da7a Add Custom ConfigMap Support for ingress-nginx 2018-03-07 21:37:45 +08:00
Antoine Legrand
9e44f94176 Remove DigitalOcean 2018-03-07 13:37:07 +01:00
Aivars Sterns
f94a7c6d82 Merge pull request #2431 from huzhengchuan/fix/doc-large-deployment
add tip to large-deployments.doc
2018-03-07 14:14:30 +02:00
RongZhang
dbf40bbbb8 docker-ce instead of docker-engine repo (#2423)
* Use docker-ce 17.03.2
* Docker-engine may be discarded
2018-03-07 15:11:20 +03:00
RongZhang
954aae931e Fix issues #2246 (#2403)
Support Centos/Fedora atomic host
2018-03-07 14:39:03 +03:00
zhengchuan hu
0b1200bb49 add tip to large-deployments.doc
set ``etcd_events_cluster_setup: true`` to store events
in a separate dedicated etcd instance.
2018-03-07 19:00:00 +08:00
zhengchuan hu
646d473e8e fix the names of some variables 2018-03-07 18:30:34 +08:00
Aivars Sterns
6975cd1622 Merge pull request #2419 from hswong3i/ingress_nginx_labels
Add labels for ingress_nginx_namespace
2018-03-06 08:01:13 +02:00
Aivars Sterns
b7f9bf43c2 Merge pull request #2421 from ctlam/master
Adding ssh_private_key_file to ProxyCommand
2018-03-06 07:59:26 +02:00
RongZhang
388b627f72 Enable OOM killing for etcd-events
Enable OOM killing like docker run etcd
2018-03-05 20:46:39 -06:00
Dominic Lam
f9019ab116 Adding ssh_private_key_file to ProxyCommand
This is trying to match what roles/bastion-ssh-config does. When the setup goes through a bastion, we want the ssh private key to be used on the bastion instance.
2018-03-05 13:15:10 -08:00
Michael Beatty
07657aecf4 add support for azure vnetResourceGroup 2018-03-05 13:40:25 -06:00
Wong Hoi Sing Edison
e65904eee3 Add labels for ingress_nginx_namespace, also only setup serviceAccountName if rbac_enabled 2018-03-05 23:11:18 +08:00
Ayaz Ahmed Khan
89847d5684 Explicitly defines the --kubelet-preferred-address-types parameter
to the API server configuration.

This solves the problem where if you have non-resolvable node names,
and try to scale the server by adding new nodes, kubectl commands
start to fail for newly added nodes, giving a TCP timeout error when
trying to resolve the node hostname against a public DNS.
2018-03-05 15:25:14 +01:00
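The parameter as it would be pinned in the apiserver manifest (the ordering of address types is assumed):

```
# Prefer IPs over hostnames so kubectl logs/exec against new nodes
# do not depend on public DNS resolving node names.
- --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
```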
RongZhang
dada98143c Fix some requires in the kubespray rpm spec file (#2417)
Fix some requires in the kubespray rpm spec file
2018-03-05 13:53:51 +03:00
Aivars Sterns
713efff78e Merge pull request #2184 from kongslund/kubelet-webhook-support
Added support for kubelet webhook authentication/authorization
2018-03-05 12:23:47 +02:00
Jonas Kongslund
585303ad66 Start with three dashes for consistency 2018-03-03 10:05:05 +04:00
Jonas Kongslund
a800ed094b Added support for webhook authentication/authorization on the secure kubelet endpoint 2018-03-03 10:00:09 +04:00
Aivars Sterns
84e47f4aaa Merge pull request #2411 from MQasimSarfraz/patch-1
README.md update for default component versions
2018-03-02 18:59:36 +02:00
Qasim Sarfraz
46ff9ce765 README.md update for default component versions 2018-03-02 16:37:41 +00:00
Aivars Sterns
e31eb199c5 Merge pull request #2324 from hswong3i/nginx_ingress
Integrate kubernetes/ingress-nginx 0.10.2 to Kubespray
2018-03-02 17:35:40 +02:00
Wong Hoi Sing Edison
fd46442188 Integrate kubernetes/ingress-nginx 0.11.0 to Kubespray 2018-03-02 23:33:19 +08:00
Matthew Mosesohn
9837b7926f Use proper lookup of etcd host for calico (#2408)
Fixes #2397
2018-03-02 15:36:52 +03:00
Aivars Sterns
5aeaa248d4 Merge pull request #2407 from brant4test/PR
remove + groups['calico-rr']
2018-03-02 12:04:43 +02:00
brant
739f6c78ad remove [calico-rr] 2018-03-02 15:41:04 +08:00
Aivars Sterns
b75b6b513b Merge pull request #2406 from riverzhang/fedora
Delete unused fedora docker repo
2018-03-02 09:33:57 +02:00
rong.zhang
2a3b48edaf Delete unused fedora docker repo 2018-03-02 14:39:13 +08:00
brant
7c7b33a0f8 remove + groups['calico-rr'] 2018-03-02 14:31:54 +08:00
brant
40d72d1865 added [calico-rr] 2018-03-02 13:48:11 +08:00
brant
cdc2e7d4fe Test-for-release-(do-not-merge) 2018-03-02 11:30:55 +08:00
Spencer Smith
2628663590 Merge pull request #2376 from dleske/opst-deprecation-warning
Correct use of deprecated argument to TF/OpenStack module
2018-03-01 12:51:09 -05:00
Antoine Legrand
5cc77eb6fd Merge pull request #2294 from Nowaker/patch-1
Enable OOM killing
2018-03-01 14:56:26 +01:00
Aivars Sterns
a1aa9d79c0 Merge pull request #2341 from trilogy-group/hotfix/single_node_limited_run
gather facts for all nodes, even if running for a single one (--limit)
2018-03-01 15:50:45 +02:00
Aivars Sterns
8b21034b31 Merge pull request #2344 from hswong3i/local_volume_provisioner_fixup
Upgrade Local Volume Provisioner Addon to v2.0.0
2018-03-01 13:12:44 +02:00
RongZhang
67ffd8e923 Add etcd-events cluster for kube-apiserver (#2385)
Add etcd-events cluster for kube-apiserver
2018-03-01 11:39:14 +03:00
Chad Swenson
af7edf4dff Merge pull request #2369 from eviln1/fix-insecure-apiserver-port
fix apiserver manifest when disabling insecure_port
2018-02-28 17:48:08 -06:00
Spencer Smith
0fd3b9f7af Merge pull request #2391 from Miouge1/latest-helm
Install latest version of Helm
2018-02-28 15:04:41 -05:00
Matthew Mosesohn
7ef9f4dfdd Revert "Add pre-upgrade task for moving credentials file" (#2393) 2018-02-28 22:41:52 +03:00
Brad Beam
6ce507f39f Merge pull request #2345 from mattymo/credentials_upgrade_fix
Add pre-upgrade task for moving credentials file
2018-02-28 12:39:02 -06:00
Brad Beam
34cab91e86 Merge pull request #2366 from z1nkum/bump_dashboard_tag
Bump dashboard from 1.8.1 to 1.8.3 because of reload bug
2018-02-28 12:38:34 -06:00
Brad Beam
63de9bdba3 Merge pull request #2363 from whereismyjetpack/default-kube-proxy
default kube_proxy_mode in kubernetes-defaults
2018-02-28 12:37:46 -06:00
Brad Beam
afb6e7dfc3 Merge pull request #2362 from mattymo/calico_ignore_extra_pools_again
Use CNI to assign kube_pods_subnet for calico
2018-02-28 12:36:50 -06:00
Brad Beam
ad89d1c876 Update pre_upgrade.yml 2018-02-28 19:07:44 +03:00
Simon Li
6b80ac6500 Fix indexing of supplementary DNS in openssl.conf 2018-02-28 16:04:52 +00:00
Miouge1
2257dc9baa Install latest version of Helm 2018-02-28 16:29:38 +01:00
Andrew Greenwood
a40d9f3c72 Document a silent killer... (#2373)
Adding this to the default example inventory so it has less of a chance of biting others after weeks of random failures (etcd does not report that it has run out of RAM; it just stalls). 512MB was not enough for us to run one of our products.
2018-02-28 15:36:51 +03:00
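An inventory sketch of the variable being documented (variable name from Kubespray's etcd role; the value is illustrative):

```
# Give etcd headroom: at its memory limit it stalls rather than erroring.
etcd_memory_limit: "2048M"
```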
Dmitry Vlasov
977e7ae105 remove obsolete init image, bump dashboard version 1.8.1 -> 1.8.3 2018-02-28 12:52:59 +03:00
Matthew Mosesohn
bc0fc5df98 Use node cert for etcd tasks instead of delegating to first etcd (#2386)
For etcdctl commands, use the admin cert instead of the node cert, because
the node cert doesn't exist on etcd-only hosts.
2018-02-27 22:23:51 +03:00
Brad Beam
810c10a0e9 Merge pull request #2382 from chechiachang/replace-tab-with-space-and-remove-redundant-spaces
Remove redundant spaces
2018-02-27 10:39:57 -06:00
Matthew Mosesohn
bb469005b2 Add pre-upgrade task for moving credentials file 2018-02-27 17:35:15 +03:00
Brad Beam
89ade65ad6 Fixing etcd certs for calico rr (#2374) 2018-02-27 17:34:07 +03:00
RongZhang
128d3ef94c Fix run kubectl error (#2199)
* Fix run kubectl error

Fix run kubectl error when first master doesn't work

* if access_ip is defined, use first_kube_master;
otherwise different masters use different IPs

* Delete the first_kube_master setting and use kube_apiserver_access_address
2018-02-27 16:32:20 +03:00
RongZhang
b7e06085c7 Upgrade to Kubernetes v1.9.3 (#2323)
Upgrade to Kubernetes v1.9.3
2018-02-27 14:31:59 +03:00
David Chang
8875e25fe9 Replace tab with space. Remove redundant spaces 2018-02-27 14:34:58 +08:00
Chad Swenson
44f9739750 Merge pull request #2326 from merwan/patch-1
Fix link markdown markup
2018-02-26 18:40:11 -06:00
Chad Swenson
9e85a023c1 Merge pull request #2360 from mattymo/reset_fixes
retry unmount kubelet dirs
2018-02-26 18:30:38 -06:00
Drew Leske
b6698e686a Update README.md with minor fixes/clarifications 2018-02-23 16:54:13 -08:00
Drew Leske
66bd570584 Update README.md with minor fixes and cleanup 2018-02-23 15:05:29 -08:00
Drew Leske
e2c5a3895b Rename sample inventory directory to be less awkward 2018-02-23 14:39:31 -08:00
Drew Leske
fe719c1bc1 Update OpenStack contrib to use per-cluster inventory layout
Supports Kubespray workspace with multiple OpenStack-deployed k8s
clusters.

* Create sample inventory directory for template
* Moved broken `group_vars` symlink to sample directory
* Created sample cluster Terraform file
* Updated documentation
2018-02-23 12:08:45 -08:00
Drew Leske
89fe6505f9 Correct use of deprecated argument to TF/OpenStack module
https://www.terraform.io/docs/providers/openstack/r/networking_router_v2.html#external_gateway
2018-02-22 22:41:19 -08:00
Brad Beam
4b5f780ff0 Merge pull request #2357 from octarinesec/eyeofthefrog/set_TasksMax_infinity_for_ubuntu
Set TasksMax to infinity on any OS with systemd
2018-02-22 21:31:10 -06:00
Brad Beam
31659efe13 Fixing cert name in calico/canal for etcd check (#2358) 2018-02-22 17:37:07 +03:00
Nedim Haveric
2bd3776ddb fix apiserver manifest when disabling insecure_port 2018-02-22 14:00:32 +01:00
Brad Beam
c874f16c02 Fixing credential lookup for fe proxy and vault (#2361) 2018-02-22 15:09:26 +03:00
Maxim Krasilnikov
ba91304636 Fixed generate front proxy client certs with vault (#2359)
* Fixed generate front proxy client certs with vault

* fix vault cert management

* Distrebute etcd node certs to vault hosts
2018-02-22 15:08:50 +03:00
Andreas Krüger
42a0f46268 Add health check to kube proxy (#2356)
Adding health checking to kube proxy. Fixes #2308
2018-02-21 23:14:45 +03:00
Andreas Krüger
d84ff06f73 Set filemode to 0640 (#2315)
* Set filemode to 0640

weave-net.yml file is readable by all users on the host. It however contains the weave_password to encrypt all pod communication. It should only be readable by root.

* Set mode 0640 on users_file with basic auth
2018-02-21 23:13:46 +03:00
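A sketch of the tightened template task (source/destination paths assumed):

```
# The rendered manifest embeds weave_password, so keep it root-readable only.
- name: Create weave-net manifest
  template:
    src: weave-net.yml.j2
    dest: "{{ kube_config_dir }}/weave-net.yml"
    mode: 0640
```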
Matthew Mosesohn
87f33a4644 Use CNI to assign kube_pods_subnet for calico
Now calico can be deployed when other pools already exist,
without confusing IPAM and ending up with pods in the wrong pools.
2018-02-21 20:32:28 +03:00
Dann Bohn
2d69b05c77 set local_release_dir in downloads to match others 2018-02-21 11:35:34 -05:00
Dann Bohn
2eb57ee5cd default kube_proxy_mode in kubernetes-defaults 2018-02-21 11:33:25 -05:00
Chris Mildebrandt
85c69c2a4a Add check for atomic hosts in template 2018-02-21 08:26:18 -08:00
Matthew Mosesohn
c20f38b89c retry unmount kubelet dirs 2018-02-21 14:41:57 +03:00
Aivars Sterns
bfe196236f Merge pull request #2033 from ArchiFleKs/terraform-fix-cred
Update Terraform docs and authentication method
2018-02-21 12:16:24 +02:00
Wong Hoi Sing Edison
d4c61d2628 Fixup for gce_centos7-flannel-addons 2018-02-21 13:41:25 +08:00
Wong Hoi Sing Edison
deef47c923 Upgrade Local Volume Provisioner Addon to v2.0.0 2018-02-21 13:41:25 +08:00
Chris Mildebrandt
c19d8994b9 Set TasksMax to infinity on any OS with systemd 2018-02-20 11:55:13 -08:00
Chad Swenson
2de6da25a8 Merge pull request #2312 from woopstar/patch-7
Added iptables lock fix and adjusted oom-score
2018-02-19 22:47:07 -06:00
melkosoft
f13e76d022 Added cilium support (#2236)
* Added cilium support

* Fix typo in debian test config

* Remove empty lines

* Changed cilium version from <latest> to <v1.0.0-rc3>

* Add missing changes for cilium

* Add cilium to CI pipeline

* Fix wrong file name

* Check kernel version for cilium

* fixed ci error

* fixed cilium-ds.j2 template

* added waiting for cilium pods to run

* Fixed missing EOF

* Fixed trailing spaces

* Fixed trailing spaces

* Fixed trailing spaces

* Fixed too many blank lines

* Updated tolerations,annotations in cilium DS template

* Set cilium_version to iptables-1.9 to see if bug is fixed in CI

* Update cilium image tag to v1.0.0-rc4

* Update Cilium test case CI vars filenames

* Add optional prometheus flag, adjust initial readiness delay

* Update README.md with cilium info
2018-02-16 21:37:47 -06:00
Dann Bohn
95e2bde15b set nodeName to "{{ inventory_hostname }}" in kubeadm-config 2018-02-16 16:20:08 -05:00
Antoine Legrand
5c0a41a6e0 Merge pull request #2340 from eduardobaitello/patch-1
Fix typo in aws.md docs
2018-02-16 19:03:14 +01:00
David Miller
6424928ba3 Update typo in the tag necessary for the ELB role (#2330) 2018-02-16 17:20:03 +01:00
Miouge1
4c280e59d4 Use legacy policy config to apply the scheduler policy 2018-02-16 13:43:35 +01:00
Antoine Legrand
56b7400dac Merge pull request #2325 from kubernetes-incubator/ci_two_stages
rollback to a two stage CI
2018-02-15 23:27:45 +01:00
Antoine Legrand
d095a1bb96 rollback to a two stage CI 2018-02-15 23:17:10 +01:00
Antoine Legrand
76a89039ad Merge pull request #2285 from jasdeep-hundal/do_not_install_python_apt
Remove redundant python-apt install
2018-02-15 17:04:08 +01:00
Łukasz Piątkowski
f90e509bf6 gather facts for all nodes, even if running for a single one (--limit) 2018-02-14 15:45:50 +00:00
Eduardo Baitello
dd8902bfcd Fix typo in aws.md docs
"kubernetes" is spelled wrong in the cluster tag example
2018-02-14 12:52:36 -02:00
Merouane Atig
911af3f331 Fix link markdown markup 2018-02-13 16:02:50 +01:00
Antoine Legrand
e2f083f885 Merge pull request #2317 from kubernetes-incubator/add_digitalocean_ci
Additional CI platform (digital-ocean)
2018-02-13 14:59:08 +01:00
Antoine Legrand
e5a450349b Single step CI 2018-02-13 12:30:14 +01:00
Antoine Legrand
7a20d69809 fix inventory paths 2018-02-12 17:38:32 +01:00
Antoine Legrand
c187ae22e5 Force to /usr/bin/python in CI 2018-02-12 17:38:32 +01:00
Antoine Legrand
cb202a76df Fix tests 2018-02-12 17:38:32 +01:00
Antoine Legrand
e1d139db2e Use new CI image 2018-02-12 17:38:32 +01:00
Antoine Legrand
51e695066a Add dockerfile for the ci 2018-02-12 17:38:32 +01:00
Antoine Legrand
ce25fa4302 Enable multiple CI platform / Add DigitalOcean 2018-02-12 17:38:32 +01:00
Antoine Legrand
c403b61383 Update private key 2018-02-12 17:38:32 +01:00
Antoine Legrand
3ef7c25a16 Add digitalocean test case 2018-02-12 17:38:32 +01:00
Antoine Legrand
442d211ee3 Add DigitalOcean playbook to create VM 2018-02-12 17:38:32 +01:00
RongZhang
c0aad0a6d5 Fix install etcd by host service (#2297)
Fix bug issues #2289
2018-02-12 17:34:01 +01:00
Matthew Mosesohn
5903aea86f Update coreos-calico-aio scenario to test no group vars (#2314)
This updated scenario ensures deployment still passes without
having any group_vars available.
2018-02-12 12:38:06 +01:00
Aivars Sterns
f4a68eae01 Merge pull request #2313 from hswong3i/multiple_inventory_dir
Update default CONFIG_FILE for contrib/inventory_builder/inventory.py
2018-02-12 11:24:35 +00:00
Wong Hoi Sing Edison
4a36b091f4 Update default CONFIG_FILE for contrib/inventory_builder/inventory.py 2018-02-12 18:01:56 +08:00
Antoine Legrand
874ec8fc73 Merge pull request #2311 from brutus333/fix/wait4k8sapi
Increased timeout values for k8s API server restart
2018-02-12 10:25:16 +01:00
Andreas Krüger
41ca67bf54 Added iptables lock fix and adjusted oom-score
The xtables lock was missing. Added a new oom-score option to make sure it's not killed in an OOM situation before regular pods.
2018-02-12 10:21:38 +01:00
Virgil Chereches
d72232f15b Increased timeout values for k8s API server restart 2018-02-12 07:35:29 +00:00
Maxim Krasilnikov
03c61685fb Added apiserver extra args variable for kubeadm config (#2291) 2018-02-12 10:29:46 +03:00
Antoine Legrand
46284198f8 Merge pull request #2298 from clkao/patch-2
Fix version comparison
2018-02-11 17:22:39 +01:00
Antoine Legrand
9916100835 Merge pull request #2299 from riverzhang/patch-4
Fix default_resolver is undefined
2018-02-10 17:26:51 +01:00
RongZhang
bbb1da1a83 Fix default_resolver is undefined
Fixes issue #2265
2018-02-10 10:08:26 -06:00
Antoine Legrand
cf183288dd Merge pull request #2287 from hswong3i/cephfs_provisioner_default
Add optional StorageClass name with cephfs_provisioner_storage_class
2018-02-10 14:13:48 +01:00
Wong Hoi Sing Edison
07075add3d Add optional StorageClass name with cephfs_provisioner_storage_class 2018-02-10 20:31:34 +08:00
Chia-liang Kao
338238d086 Fix version comparison
`FAILED! => {"changed": false, "msg": "AnsibleFilterError: Version comparison: unorderable types: str() < int()"}`
2018-02-10 03:49:49 +08:00
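The usual shape of a fix for this error is to ensure both operands of the comparison are strings; a hedged sketch (variable name and version bound are assumptions, not the actual change):

```
# Sketch: cast to string before version_compare so str and int
# operands cannot be mixed.
- name: Run only on sufficiently new Kubernetes
  debug:
    msg: "version is new enough"
  when: kube_version | string | version_compare('v1.9.0', '>=')
```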
Brad Beam
c6c74616d8 Merge pull request #2293 from bradbeam/upgrade
Making node status and scheduling detection mo betta for upgrades
2018-02-09 13:15:55 -06:00
Brad Beam
03bb729fea Making status and detection mo betta 2018-02-09 12:30:46 -06:00
Antoine Legrand
60460c025c Merge pull request #2256 from mlushpenko/fix-kubeadm-safe-upgrade
Fix safe upgrade
2018-02-09 19:03:04 +01:00
Damian Nowak
f8a59446e8 Enable OOM killing
When etcd exceeds its memory limit, it becomes useless but keeps running.
We should let the OOM killer kill the etcd process in the container, so systemd can spot
the problem and restart etcd according to the "Restart" setting in the etcd.service unit file.
If the OOM problem keeps repeating, i.e. it happens on every single restart,
systemd will eventually back off and stop restarting it anyway.

--restart=on-failure:5 in this file has no effect because a memory allocation error
doesn't by itself cause the process to die.

Related: https://github.com/kubernetes-incubator/kubespray/blob/master/roles/etcd/templates/etcd-docker.service.j2

This partially reverts a change introduced in #1860.
2018-02-09 11:00:13 -06:00
mlushpenko
a37c642127 Remove obsolete token variables
Tokens are generated automatically during the init process and on demand during the node join process
2018-02-09 15:53:12 +01:00
mlushpenko
4e61fb9cd3 Refactored kubeadm join process and fixed uncordoning for master nodes 2018-02-09 15:51:47 +01:00
mlushpenko
b472c2df98 Fix safe upgrade
Even though kubeadm_token_ttl=0 means that the kubeadm token never expires, the token is not present in `kubeadm token list` after the cluster is provisioned (at least after it has been running for some time), and there is an open issue about this (https://github.com/kubernetes/kubeadm/issues/335), so we need to create a new temporary token during the cluster upgrade.
2018-02-09 15:51:47 +01:00
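A minimal sketch of the workaround described above, assuming the token is created on the first master before upgrading (not the exact Kubespray task):

```
# Sketch: create a short-lived join token for the upgrade, since the
# original TTL=0 token may be absent from `kubeadm token list`.
- name: Create temporary kubeadm token for the upgrade
  command: kubeadm token create --ttl 1h
  register: temp_token
  delegate_to: "{{ groups['kube-master'][0] }}"
  run_once: true
```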
Antoine Legrand
17f9242b58 Merge pull request #2292 from chapsuk/fix_miss_var
Added missing cephfs_provisioner_enabled to kubespray-defaults vars
2018-02-09 15:46:20 +01:00
mkrasilnikov
bc67deee78 Added missing cephfs_provisioner_enabled to kubespray-defaults vars 2018-02-09 17:03:38 +03:00
jasdeep-hundal
f57abae01e Remove redundant python-apt install
Ansible automatically installs the python-apt package when using
the 'apt' Ansible module, if python-apt is not present. This patch
removes the (unneeded) explicit installation in the Kubespray
'preinstall' role.
2018-02-08 18:59:37 -08:00
Antoine Legrand
275b1d6897 Merge pull request #2274 from mirwan/local_volume_provisioner_configmap_in_daemonset
Local volume provisioner fixes
2018-02-09 00:59:47 +01:00
Erwan Miran
e9a676951b storageClass name template as suggested by @eyeofthefrog 2018-02-09 00:11:07 +01:00
Antoine Legrand
b31d905704 Merge pull request #2230 from hswong3i/cephfs_provisioner
Add cephfs_provisioner Support for Kubespray
2018-02-08 16:52:15 +01:00
Aivars Sterns
c70c44b07b Merge pull request #2257 from rzenker/tb/baremetal-tweaks
baremetal tweaks
2018-02-08 15:48:55 +00:00
Aivars Sterns
20583e3d15 Merge pull request #2067 from manics/sysctl-net-brfilter
Always set net.bridge.bridge-nf-call-* sysctl
2018-02-08 15:43:46 +00:00
Aivars Sterns
9f4588cd0c Merge pull request #2266 from riverzhang/epel-release
Disable installing the epel-release rpm on CentOS/RedHat
2018-02-08 15:42:28 +00:00
Wong Hoi Sing Edison
b25e0f82b1 Add cephfs_provisioner Support for Kubespray 2018-02-08 22:27:54 +08:00
Maxim Krasilnikov
cae1c683aa Merge pull request #2271 from leseb/retry-get-token
kubernetes-apps: retry get default token name
2018-02-08 16:46:32 +03:00
Antoine Legrand
57e7a5a34a Merge pull request #2233 from hswong3i/multiple_inventory_dir
Support multiple inventory files under individual inventory directory
2018-02-08 11:57:04 +01:00
Antoine Legrand
230f1e1208 Merge pull request #2273 from infernix/inventory-symlink-fix
Fix symlinking vagrant inventory
2018-02-08 11:52:55 +01:00
Antoine Legrand
7bce70339f Merge pull request #2251 from woopstar/metrics-server-patch-2
Adding metrics-server support for K8s version 1.9
2018-02-08 11:16:44 +01:00
Erwan Miran
e1aaef7d4d Removal of superfluous slash 2018-02-08 09:06:17 +01:00
Wong Hoi Sing Edison
1a1d154e14 Support multiple inventory files under individual inventory directory 2018-02-08 08:08:15 +08:00
Brad Beam
384e5dd4c4 Merge pull request #2160 from kongslund/disable-read-only-port
Make the Kubelet read-only port configurable and disable it by default
2018-02-07 13:06:32 -06:00
Erwan Miran
abfb147292 MountDir in configmap and daemonset must be the same 2018-02-07 18:42:42 +01:00
Erwan Miran
44eb03f78a typo 2018-02-07 17:57:54 +01:00
Erwan Miran
857784747b local-provisioner:v1.0.1 still expects json configmap 2018-02-07 17:47:05 +01:00
Erwan Miran
7a2cb5e41c local-provisioner:v1.0.1 still uses VOLUME_CONFIG_NAME env to read ConfigMap 2018-02-07 17:01:19 +01:00
Gerben Meijer
e662ed4adc Fix symlinking vagrant inventory
The default path assumes that the vagrant dir is called 'inventory'.
With custom defined inventory dirs that are not called 'inventory' this
fails to create the correct symlink under .vagrant.d.
2018-02-07 16:50:05 +01:00
Antoine Legrand
712bdfc82f Merge pull request #2260 from mirwan/local_volume_provisioner_fixes
local_volume_provisioner_enabled replacement
2018-02-07 13:42:00 +01:00
Sébastien Han
34bd47de79 kubernetes-apps: retry get default token name
In some installations, it can take up to 3 seconds to get the value. Retrying
for 5 seconds will ensure the command won't return 1.

Signed-off-by: Sébastien Han <seb@redhat.com>
2018-02-07 12:09:51 +01:00
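A sketch of the retry pattern described, with an illustrative command and names (not the exact task):

```
# Sketch: keep retrying until the default token name comes back.
- name: Get the default token name
  shell: kubectl get secrets -o name | grep default-token
  register: default_token
  until: default_token.rc == 0 and default_token.stdout != ""
  retries: 5
  delay: 1
```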
Antoine Legrand
fe57c13b51 Merge pull request #2172 from leseb/etcd-auth
etcd: ability to enable/disable ETCD_PEER_CLIENT_CERT_AUTH
2018-02-07 11:25:56 +01:00
woopstar
f9df692056 Issue front proxy certs for vault 2018-02-07 11:03:10 +01:00
woopstar
f193b12059 Kubeadm auto creates this 2018-02-07 10:50:34 +01:00
woopstar
2cd254954c Remove defaults of allowed names. Updated kubeadm 2018-02-07 10:07:55 +01:00
woopstar
4dab92ce69 Rename from aggregator-proxy-client to front-proxy-client to match kubeadm design. Added kubeadm support too. Changed to use variables set and not hardcode paths. Still missing cert generation for Vault 2018-02-07 09:50:19 +01:00
Erwan Miran
d53f45d4e2 missing double quotes for ansible 2018-02-07 09:24:00 +01:00
Erwan Miran
ca08614641 yamllint fix 2018-02-07 09:12:28 +01:00
rong.zhang
47adf4bce6 Disable installing the epel-release rpm on CentOS/RedHat
1. Disable installing the epel-release rpm on CentOS/RedHat
2. Use yum to install epel-release instead
2018-02-07 14:58:50 +08:00
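A sketch of the approach described, with an assumed distribution check (not the exact Kubespray task):

```
# Sketch: install epel-release from the distro repositories with yum
# instead of fetching the rpm directly.
- name: Install EPEL repository package
  yum:
    name: epel-release
    state: present
  when: ansible_distribution in ["CentOS", "RedHat"]
```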
Erwan Miran
e69979d5a2 keep local_volumes_enabled as deprecated 2018-02-07 07:58:50 +01:00
Maxim Krasilnikov
2ae68df41b Merge pull request #2153 from leseb/retry-rotate
Several enhancements and bug fixes
2018-02-07 00:54:13 +03:00
Brad Beam
7928cd20fb Merge pull request #2037 from tiewei/contiv-etcd-split
Split contiv etcd and etcd-proxy into two daemonsets
2018-02-06 15:37:16 -06:00
Ryan Zenker
ad9049a49e baremetal tweaks
* allow installs to not have the hostname overridden with the fqdn from inventory
* calico-config no longer requires a local AS and will default to global
* when cloudprovider is not defined, use the inventory_hostname for cni-calico
* allow reset to not restart network (buggy nodes die with this cmd)
* default kube_override_hostname to inventory_hostname instead of ansible_hostname
2018-02-06 13:52:22 -05:00
Sébastien Han
dfcd60a9e2 vagrant: use "centos/7" box to support libvirt provider
The "centos/7" box is the official centos box and supports all the major
providers:

virtualbox Externally hosted (cloud.centos.org)
vmware_desktop Externally hosted (cloud.centos.org)
libvirt Externally hosted (cloud.centos.org)
hyperv Externally hosted (cloud.centos.org)

Where  bento/centos-7.3 only supports:

parallels Hosted by Vagrant Cloud (570 MB)
virtualbox Hosted by Vagrant Cloud (525 MB)
vmware_desktop Hosted by Vagrant Cloud (608 MB)

Signed-off-by: Sébastien Han <seb@redhat.com>
2018-02-06 18:20:11 +01:00
Sébastien Han
0684df804d vagrant: libvirt add vm_memory support
Signed-off-by: Sébastien Han <seb@redhat.com>
2018-02-06 18:20:11 +01:00
Sébastien Han
f8d6b84cb6 vagrant: add ability to use drives in the VMs
When testing deployments of SDS, it is quite useful to get a Kubernetes
env with nodes having dedicated drives.
You can now enable this by setting: kube_node_instances_with_disks: true

Also, you can choose the number of drives per machine and their respective
size:

* kube_node_instances_with_disks_number: 10
* kube_node_instances_with_disks_size: "20G"

Signed-off-by: Sébastien Han <seb@redhat.com>
2018-02-06 18:20:11 +01:00
Sébastien Han
a980731bed vagrant: remove trailing line
Signed-off-by: Sébastien Han <seb@redhat.com>
2018-02-06 18:20:10 +01:00
Erwan Miran
b4e264251f JSON/YAML syntax fix 2018-02-06 17:17:10 +01:00
Erwan Miran
8006a6cd82 local_volumes_enabled replaced by local_volume_provisioner_enabled 2018-02-06 17:12:09 +01:00
Antoine Legrand
a69db4169b Merge pull request #2259 from woopstar/patch-5
Adding missing defaults for weave
2018-02-06 17:01:29 +01:00
Andreas Krüger
5cd6b0c753 Adding missing defaults for weave
PR #2203 adds missing defaults for weave, but has no signed CLA. This PR fixes that.
2018-02-06 14:25:07 +01:00
Antoine Legrand
36ead3a720 Merge pull request #2258 from woopstar/patch-4
Set default registry_enabled to false
2018-02-06 14:19:24 +01:00
Andreas Krüger
bb339265fc Set default registry_enabled to false
In PR #2244, `registry_enabled` is missing from the defaults, causing a deployment to fail if it is not set in k8s-cluster.yml
2018-02-06 14:17:06 +01:00
Antoine Legrand
bb4446e94c Merge pull request #2226 from manics/supplemental-addresses
Enable additional addresses to be added to certificates
2018-02-06 13:51:54 +01:00
Antoine Legrand
d2102671cd Merge pull request #2214 from woopstar/patch-3
Loadbalancer Apiserver Address is missing
2018-02-06 13:47:55 +01:00
Antoine Legrand
138e0c2301 Merge pull request #2250 from woopstar/weave-mtu-patch
Added option to set MTU on Weave
2018-02-06 12:13:54 +01:00
Antoine Legrand
37cfd289d8 Merge pull request #2248 from hswong3i/dashboard.yml.j2
Dashboard template should not be suffixed with .yml.j2
2018-02-06 11:25:02 +01:00
Antoine Legrand
9f3081580a Merge pull request #2249 from hswong3i/kubedns-deploy.yml.j2
KubeDNS template should not be suffixed with .yml.j2
2018-02-06 11:24:19 +01:00
Antoine Legrand
2b6781bc65 Merge pull request #2232 from hswong3i/local_volume_provisioner
Update and cleanup Local volume provisioner
2018-02-06 09:28:41 +01:00
Antoine Legrand
a3248379db Merge branch 'master' into local_volume_provisioner 2018-02-06 09:28:27 +01:00
Antoine Legrand
0774c8385c Merge pull request #2244 from hswong3i/registry
Migrate Kubernetes v1.9.1 cluster/addons/registry to Kubespray
2018-02-06 09:20:48 +01:00
woopstar
b2d30d68e7 Rename CN for aggregator back. Add flags to apiserver when version is >= 1.9 2018-02-05 20:37:14 +01:00
woopstar
82d10b882c Added fixes from whereismyjetpack 2018-02-05 20:07:12 +01:00
Spencer Smith
24ae85fa56 Merge pull request #2255 from rsmitty/kubespray-cli
remove references to deprecated kubespray-cli
2018-02-05 11:34:11 -05:00
Spencer Smith
1869aa3985 remove references to deprecated kubespray cli 2018-02-05 10:00:21 -05:00
Maxim Krasilnikov
95b8ac5f62 Added optional controller and scheduler extra args to kubeadm config (#2205) 2018-02-05 16:49:13 +03:00
woopstar
0b4168cad4 WIP. Adding metrics-server support for K8s version 1.9 2018-02-05 10:37:41 +01:00
woopstar
3289472e31 Added option to set MTU on Weave 2018-02-05 10:23:48 +01:00
Wong Hoi Sing Edison
4ad53339f6 KubeDNS template should not be suffixed with .yml.j2 2018-02-05 16:26:54 +08:00
Wong Hoi Sing Edison
a4d3da6a8e Dashboard template should not be suffixed with .yml.j2 2018-02-05 16:18:21 +08:00
Wong Hoi Sing Edison
7954ea2525 Migrate Kubernetes v1.9.1 cluster/addons/registry to Kubespray 2018-02-05 12:21:09 +08:00
Chad Swenson
bd1f0bcfd7 Merge pull request #2201 from riverzhang/ipvs
Support ipvs mode for kube-proxy
2018-02-01 22:29:52 -06:00
Wong Hoi Sing Edison
bc2e26d7ef update apiVersion 2018-02-01 14:16:32 +08:00
Wong Hoi Sing Edison
fd80013917 lint and cleanup local_volume_provisioner 2018-02-01 14:14:18 +08:00
Chad Swenson
f7d52564aa Merge pull request #2084 from riverzhang/devicemapper
Fix: cannot use devicemapper driver
2018-01-31 20:52:22 -06:00
Spencer Smith
f7e8d1149a Merge pull request #2229 from whereismyjetpack/etcd-quorum-read
--etcd-quorum-read is deprecated in kube >= 1.9
2018-01-31 17:10:10 -05:00
Spencer Smith
bd091caaf9 Merge pull request #2200 from riverzhang/hyperkube
Upgrade to Kubernetes v1.9.2
2018-01-31 16:08:22 -05:00
Spencer Smith
b455a1bf76 Merge pull request #2212 from mattymo/missing_defaults
Add missing group var default values to kubespray-defaults
2018-01-31 16:07:53 -05:00
Spencer Smith
c0a3bcf9b3 Merge pull request #2221 from Xuxe/patch-vcp-v1.9.2
Updated vSphere cloud provider config for Kubernetes >= v1.9.2 and added resource pool deployment variable
2018-01-31 16:06:07 -05:00
Spencer Smith
5eedb5562f Merge pull request #2228 from mattymo/vault_etcd_secure
Vault should use cert auth for etcd
2018-01-31 16:05:28 -05:00
Dann Bohn
dc6c703741 --etcd-quorum-read is deprecated in kube >= 1.9 2018-01-31 15:49:52 -05:00
Matthew Mosesohn
16629d0b8e Vault should use cert auth for etcd 2018-01-31 20:37:14 +03:00
Julian Hübenthal
7f79210ed1 reworked vsphere-cloud-config template 2018-01-31 16:51:23 +01:00
Simon Li
27a1a697e7 supplementary_addresses_in_ssl_keys can be a hostname 2018-01-31 15:16:08 +00:00
Aivars Sterns
c1267004ef Merge pull request #2130 from ArchiFleKs/simplify_os_provider
Simplify and update OpenStack cloud provider
2018-01-31 12:02:02 +02:00
Julian Hübenthal
9cdd2214f9 render vsphere_resource_pool only if defined 2018-01-31 09:56:43 +01:00
Julian Hübenthal
fc29764911 fixed broken variables table 2018-01-31 09:27:45 +01:00
Julian Hübenthal
989e9174c2 Added vSphere cloud provider config update for Kubernetes >= 1.9.2 2018-01-31 09:15:46 +01:00
rong.zhang
3993e12335 Fix: the devicemapper driver cannot be used
2018-01-31 15:51:11 +08:00
Brad Beam
ac4d782937 Merge pull request #2074 from fangzhen/fix-domains-split
Make splitting system_search_domains more robust
2018-01-30 21:01:19 -06:00
rong.zhang
32d18ca992 remove trailing space 2018-01-31 09:50:41 +08:00
Matthew Mosesohn
2df4b6c5d2 Rename default_resolver to cloud_resolver (#2209)
Cloud resolvers are mandatory for hosts on GCE and OpenStack
clouds. The 8.8.8.8 alternative resolver was dropped because
there is already a default nameserver. The new var name
reflects the purpose better.

Also restart apiserver when modifying dns settings.
2018-01-31 00:26:07 +03:00
Andreas Krüger
088d36da09 Increase the idx counter
Fix the idx counter to increase as well, or you will end up with two identical indexes.
2018-01-30 21:48:13 +01:00
Andreas Krüger
6f36faa4f9 Loadbalancer Apiserver Address is missing
If you configure your external loadbalancer to do a simple tcp pass-through to the api servers, and you do not use a DNS FQDN but just the IP, then you need to add the IP address to the certificates too.

Example config:

```
## External LB example config
apiserver_loadbalancer_domain_name: "10.50.63.10"
loadbalancer_apiserver:
  address: 10.50.63.10
  port: 8383
```
2018-01-30 17:33:00 +01:00
RongZhang
3846384d56 Bump kube-dns to 1.14.8 (#2204)
Bump kube-dns to 1.14.8
2018-01-30 19:23:37 +03:00
Dmitri Rubinstein
331f141f63 Fix DNS entries in etcd's openssl.conf by adding a newline. (#2208)
DNS entries generated from 'etcd_cert_alt_names' variable in etcd's
openssl.conf are not terminated by a newline.

This fixes issue #2207.
2018-01-30 16:26:58 +03:00
Matthew Mosesohn
62dd3d2a9d Add missing group var default values to kubespray-defaults 2018-01-30 16:04:00 +03:00
Sébastien Han
fa8a128e49 etcd: ability to enable/disable ETCD_PEER_CLIENT_CERT_AUTH
Some installations fail to authenticate with peers due to
etcd picking up/resolving the wrong node.

By setting 'etcd_peer_client_auth' to "False" you can disable peer client cert
authentication.

Signed-off-by: Sébastien Han <seb@redhat.com>
2018-01-30 11:19:12 +01:00
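Per the commit message, the toggle is a single group_vars variable; a one-line sketch:

```
# Sketch: disable etcd peer client certificate authentication
# (the default presumably leaves it enabled).
etcd_peer_client_auth: false
```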
rong.zhang
b10c308a5a Support ipvs mode for kube-proxy
2018-01-30 13:09:01 +08:00
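A sketch of selecting the new mode, assuming the toggle is a kube_proxy_mode group_vars variable (the exact name is an assumption):

```
# Sketch: switch kube-proxy from the iptables default to ipvs.
kube_proxy_mode: ipvs
```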
rong.zhang
e22c70e431 Upgrade to Kubernetes v1.9.2 2018-01-30 13:04:38 +08:00
Chad Swenson
f4fe9e3421 Merge pull request #2171 from ArchiFleKs/kubeproxy-lvs
Add lib/modules to kube-proxy to enable LVS
2018-01-29 22:58:02 -06:00
Brad Beam
da173615e4 Merge pull request #2048 from xizhibei/master
Fix: only one container ever got synced after download
2018-01-29 16:01:11 -06:00
Matthew Mosesohn
dc6a17e092 Use include/import tasks (#2192)
import_tasks will consume far less memory, so it should be
used whenever it is compatible.
2018-01-29 14:37:48 +03:00
Antoine Legrand
f4180503c8 Merge pull request #2196 from Miouge1/network-size-large-deploy
Network size large deploy documentation
2018-01-26 15:26:03 +01:00
Miouge1
240d4193ae Update information about network sizes 2018-01-26 15:23:21 +01:00
Matthew Mosesohn
ac66e98ae9 Upgrade to Kubernetes v1.9.1 (#2152)
Raise drain timeout to 5m
2018-01-25 18:44:44 +03:00
Matthew Mosesohn
d2935ffed0 Optionally ignore the presence of extra calico pools (#2190) 2018-01-25 18:44:20 +03:00
Chad Swenson
c6e0fcea31 Merge pull request #1948 from sgmitchell/secured-etcd
Enable etcd secure client to prevent etcdctl access without cert and key
2018-01-25 09:35:51 -06:00
Chad Swenson
5d014d986b Merge pull request #1992 from manics/flannel-hairpin
Enable flannel hairpin mode
2018-01-24 21:20:03 -06:00
mirwan
714994cad8 iptables: flush nat table as well as filter table upon reset (#2174)
* iptables: flush nat table as well as filter table upon reset

* Indentation fix
2018-01-24 20:22:49 -06:00
Brad Beam
08fe61e058 Merge pull request #2071 from riverzhang/dashboard
Update dashboard version to v1.8.1
2018-01-24 20:10:05 -06:00
Brad Beam
0c8bed21ee Merge pull request #2019 from chadswen/disable-api-insecure-port
Support for disabling apiserver insecure port (the sequel)
2018-01-24 19:58:53 -06:00
Brad Beam
98eb845f8c Merge pull request #2173 from mirwan/hardcoded_dnsmasq-autoscaler_image
Dnsmasq autoscaler image should be a variable
2018-01-24 16:15:59 -06:00
Brad Beam
98300e3165 Merge pull request #2155 from brutus333/fix/pvc
Fix for Issue #2141
2018-01-24 16:15:33 -06:00
Cornelius Keller
e22759d8f0 fix nodePort for weave 2018-01-24 10:31:51 +01:00
Matthew Mosesohn
bf1411060e Add optional manual dns_mode (#2178) 2018-01-23 14:28:42 +01:00
Virgil Chereches
a4d142368b Renamed variable from disable_volume_zone_conflict to volume_cross_zone_attachment and removed cloud provider condition; fix identation 2018-01-23 13:14:00 +00:00
Brad Beam
eb80f9b606 Merge pull request #2154 from tdihp/proxy-conf-restart-docker
Restart docker when http-proxy.conf changed.
2018-01-22 08:39:05 -06:00
Stanislav Makar
ae47b617e3 Fix 'no such host' problem (#2148)
Fix 'no such host' problem reported by commands *kubectl logs* and *kubectl exec*
when cloud_provider is OpenStack

Closes: #2147
2018-01-22 16:08:24 +03:00
Bogdan Dobrelya
c116b8022e Update rpm spec and pbr setup configs (#2170)
* Update rpm spec and pbr setup configs

* Rename package to kubespray
* Do not break Fedora's FHS and install to /usr/share instead
* Remove the vendor tag
* Update source0 for better artifacts' names
* Fix missing files build errors
* Make version/release to auto match from git and fit PEP 440

Co-authored-by: Matthias Runge <mrunge@redhat.com>
Signed-off-by: Bogdan Dobrelya <bogdando@mail.ru>

* Add package paths to roles search in ansible conf

Signed-off-by: Bogdan Dobrelya <bogdando@mail.ru>

* Poke jinja2 requirements in rpm spec file

Signed-off-by: Bogdan Dobrelya <bogdando@mail.ru>
2018-01-18 16:22:37 +01:00
Erwan Miran
5b98e15613 Merge branch 'hardcoded_dnsmasq-autoscaler_image' of github.com:mirwan/kubespray into hardcoded_dnsmasq-autoscaler_image 2018-01-18 16:04:35 +01:00
Erwan Miran
e5b4011aa4 move hardcoded dnsmasq autoscaler image to its own variable 2018-01-18 16:04:29 +01:00
Virgil Chereches
3125f93b3f Added disable_volume_zone_conflict variable 2018-01-18 10:55:23 +00:00
Spencer Smith
f19c8e8c1d Merge pull request #2132 from PhilippeChepy/flex-volumes
Add support for flex volumes plugins.
2018-01-17 15:00:45 -05:00
ArchiFleKs
20779df686 remove kube-node from master node by default 2018-01-17 19:02:02 +01:00
Dave Carley
752fba1691 Fix spelling mistakes in group_vars (#2166) 2018-01-17 18:42:27 +03:00
ArchiFleKs
637604d08f Add lib/modules to kube-proxy to enable LVS
kube-proxy is complaining about missing modules at startup. There is a plan
to also support an LVS implementation of kube-proxy in addition to
userspace and iptables
2018-01-17 16:35:53 +01:00
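A sketch of the mount described, as it might appear in the kube-proxy manifest (names are illustrative):

```
# Sketch: expose the host's kernel modules to kube-proxy so IPVS can
# load the modules it needs.
volumeMounts:
  - name: lib-modules
    mountPath: /lib/modules
    readOnly: true
volumes:
  - name: lib-modules
    hostPath:
      path: /lib/modules
```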
ArchiFleKs
ba7b1d74d0 remove user_data which may mess up /etc/hosts 2018-01-17 14:29:23 +01:00
Erwan Miran
1a9989ade9 move hardcoded dnsmasq autoscaler image to its own variable 2018-01-16 09:11:59 +01:00
Jonas Kongslund
11844c987c Make the Kubelet read-only port configurable and disable it by default. Fixes #2159. 2018-01-16 11:11:41 +04:00
Virgil Chereches
8c45c88d15 Fix for Issue #2141 - added policy file 2018-01-12 07:15:35 +00:00
Virgil Chereches
c87bb2f239 Fix for Issue #2141 2018-01-12 07:07:02 +00:00
heping
32eeb9a0e0 Restart docker when http-proxy.conf changed. 2018-01-12 10:56:25 +08:00
rong.zhang
df21fc8643 Remove initContainer 2018-01-10 12:17:17 +08:00
Spencer Smith
ffbdf31ac4 Merge pull request #2135 from riverron/master
Updated with correct syntax to access default_tags variable.
2018-01-09 17:22:12 -05:00
Spencer Smith
ccd9cc3dce Merge pull request #2146 from abelgana/master
Manage deprecated kubelet option
2018-01-09 17:19:42 -05:00
Spencer Smith
81867402f6 Merge pull request #2145 from pslijkhuis/master
Add kubelet_custom_flags to kubelet.kubeadm.env.j2
2018-01-09 17:19:09 -05:00
Spencer Smith
4f5d61212b Merge pull request #2144 from neith00/weave-2.1.3
updated weave to 2.1.3
2018-01-09 17:18:26 -05:00
Spencer Smith
ef96123482 Merge pull request #2068 from chadswen/remove-container-retries
Retry kube container removal during upgrade
2018-01-09 15:03:50 -05:00
Spencer Smith
ee27ab0052 Merge pull request #2124 from riverzhang/patch-3
Remove blank lines
2018-01-09 14:58:49 -05:00
Spencer Smith
57f87ba083 Merge pull request #2142 from trilogy-group/hotfix/fluentd-template
fix fluentd template
2018-01-09 14:44:50 -05:00
abelgana
a9bb72c6fd require-kubeconfig is deprecated since k8s v1.8 2018-01-09 14:35:42 -05:00
abelgana
9506c2e597 require-kubeconfig is deprecated since K8s v1.8 2018-01-09 14:33:05 -05:00
Peter Slijkhuis
32884357ff Add kubelet_custom_flags to kubelet.kubeadm.env.j2 2018-01-09 14:04:36 +01:00
Bogdan Dobrelya
278ac08087 Fix HA docs API access endpoints explained (#2126)
* Fix HA docs API access endpoints explained

Follow-up commit 81347298a3
and fix the endpoint value provided in HA docs.

Signed-off-by: Bogdan Dobrelya <bogdando@mail.ru>

* Clarify internal LB with external LB use case

* Clarify how to use internal and external LB solutions together when
  they are non-cluster-aware and not managed with Kubespray.
* Clarify the requirements, like TLS/SSL termination, for such an external LB.
  Unlike the 'cluster-aware' external LB config, endpoint security must be
  managed by that non-cluster-aware external LB.
* Note that masters always contact their local apiservers via https://bip:sp.
  A local apiserver is highly unlikely to go down, and this reduces the
  latency that might be introduced when going host->lb->host. Only compute
  nodes take that path.

Signed-off-by: Bogdan Dobrelya <bogdando@mail.ru>

* Add a note for supplementary_addresses_in_ssl_keys

Explain how to benefit from supplementary_addresses_in_ssl_keys

Signed-off-by: Bogdan Dobrelya <bogdando@mail.ru>
2018-01-09 16:01:50 +03:00
neith00
88204642b7 updated weave to 2.1.3 2018-01-09 13:50:42 +01:00
Matthew Mosesohn
1401286910 Add support for cert alt names for etcd (#2139)
* Add support for cert alt names for etcd

* Update gen_certs_vault.yml
2018-01-09 14:37:34 +03:00
Lukasz Piatkowski
12eb242224 fix fluentd template 2018-01-08 13:40:47 +00:00
Ronald Rivera
8f36a02998 Merge branch 'master' of https://github.com/riverron/kubespray 2018-01-07 15:40:34 +00:00
Ronald Rivera
88f9e25f76 Updated with correct syntax to access default_tags variable. 2018-01-07 15:39:58 +00:00
Ron Rivera
dba1c13954 Updated with correct syntax to access default_tags variable. 2018-01-07 14:57:14 +00:00
Philippe Chepy
df9faa1743 Add support for flex volumes plugins. 2018-01-05 17:56:36 +01:00
ArchiFleKs
74fd975b57 run terraform FMT for readability 2018-01-05 12:09:04 +01:00
ArchiFleKs
ce85bcaee7 Simplify and update OpenStack cloud provider
Reduce the number of variables necessary to "just" enable the OpenStack
cloud provider. Also add the new options available in K8s 1.9.
2018-01-05 12:05:24 +01:00
ArchiFleKs
6eb6e806e7 Update Terraform docs and authentication method
Hardcoded variables are removed from the variables.tf file because they
might not be suitable for all OpenStack clouds, depending on the
available Identity API version (v2 or v3) and the preferred
authentication method.
2018-01-05 11:25:37 +01:00
rong.zhang
6ed2a60978 fix run dashboard error 2018-01-04 13:13:36 +08:00
Brad Beam
fd04c14260 Merge pull request #2127 from spiffxp/follow-cla-doc
Follow CLA doc to kubernetes/community
2018-01-03 19:19:34 -06:00
Aaron Crickenberger
10a5273f07 Follow CLA doc to kubernetes/community 2018-01-03 16:48:53 -08:00
Bogdan Dobrelya
bac3bf1a5f Fix auto-evaluated API access endpoint for bind IP (#2086)
Auto-configure the API access endpoint with a custom bind IP, if provided.
Fix the HA docs' http URLs, which are in fact https, and clarify the
insecure vs secure API access modes as well.

Closes: #2051

Signed-off-by: Bogdan Dobrelya <bogdando@mail.ru>
2018-01-03 17:40:21 +01:00
RongZhang
e3b684df21 Remove blank lines
2018-01-03 00:54:04 -06:00
Steve Mitchell
e45b30d033 Add etcd key and cert environment variables for use with client auth 2018-01-02 13:52:17 -05:00
Matthew Mosesohn
ad6fecefa8 Update Kubernetes to v1.9.0 (#2100)
Update checksum for kubeadm
Use v1.9.0 kubeadm params
Include hash of ca.crt for kubeadm join
Update tag for testing upgrades
Add workaround for testing upgrades
Remove scale CI scenarios because of slow inventory parsing
in ansible 2.4.x.

Change region for tests to us-central1 to
improve ansible performance
2017-12-25 08:57:45 +00:00
Jan Jungnickel
3fdb2ccf55 Revert back to using an empty var as default to exclude hostname (#2110) 2017-12-22 22:09:59 +00:00
Matthew Mosesohn
29f5b55d42 remove unwanted whitespace for kube_override_hostname (#2105) 2017-12-22 11:31:18 +00:00
rong.zhang
5aef52e8c0 fix dashboard certs secret 2017-12-22 11:17:05 +08:00
Brad Beam
336e0cbf70 Merge pull request #2102 from spiffxp/update-code-of-conduct
Update code-of-conduct.md
2017-12-20 20:00:47 -06:00
Aaron Crickenberger
3cd06b0eb4 Update code-of-conduct.md
Refer to kubernetes/community as authoritative source for code of conduct
2017-12-20 14:12:38 -05:00
Matthew Mosesohn
6bb46e3ecb Fix param names in preparation for Kubernetes v1.9.0 (#2098)
This does not update v1.9.0, but fixes two incompatibilities
when trying to deploy v1.9.0.
2017-12-20 10:48:09 +00:00
Matthew Mosesohn
127bc01857 Do not override kubelet hostname if cloud_provider is used (#2095)
Starting with Kubernetes v1.8.4, kubelet ignores the AWS cloud
provider string and uses the override hostname, which fails
Node admission checks.

Fixes #2094
2017-12-19 20:18:20 +00:00
Evan Zeimet
a6975c1850 Rename runtime docker_version (#2082)
Renaming runtime docker_version to prevent setting that
value on the command line from breaking the play run.

This fixes #2081
2017-12-19 14:47:54 +00:00
Stanislav Makar
b2cb0725ac Default OpenStack Cinder Storage Class (#2083)
Add possibility to create default OpenStack Cinder Storage Class

Closes: #1609
2017-12-19 14:47:00 +00:00
rong.zhang
b974b144a8 Add RBAC binding for Dashboard UI 2017-12-18 23:07:19 +08:00
Matthew Mosesohn
bfb25fa47b Change vault cert ttl to 8y (#2013) 2017-12-15 13:34:00 +00:00
Wei Tie
3bb505d43f Remove unrequired mounts 2017-12-14 14:59:40 -08:00
Matthew Mosesohn
b135bcb9d9 Split download container task for delegate and non-delegate modes (#2077)
Ansible cannot seem to handle omitting delegate_to since v2.4.0.0.

Possibly related: https://github.com/ansible/ansible/issues/30760
2017-12-14 16:45:54 +00:00
Wei Tie
4e97225424 Add quote for etcd endpoints 2017-12-13 18:35:12 -08:00
rong.zhang
0771cd8599 Remove dashboard_tls_key and dashboard_tls_cert 2017-12-13 15:42:20 +08:00
Fang Zhen
91d848f98a Make splitting system_search_domains more robust
The search line in /etc/resolv.conf could have
multiple spaces or tabs between domains.
split(' ') will give wrong results in some cases;
use split() without an argument instead.

e.g.
>>> 'domain.tld	cluster.tld '.split(' ')
['domain.tld\tcluster.tld', '']
>>> 'domain.tld cluster.tld '.split()
['domain.tld', 'cluster.tld']
2017-12-13 15:39:38 +08:00
rong.zhang
40edf8c6f5 Update dashboard version to v1.8.0
Update dependencies to be compatible with Kubernetes v1.8
2017-12-13 12:50:44 +08:00
Chad Swenson
e78562830f Retry kube container removal during upgrade
As we have seen with other containers, sometimes container removal fails on the first attempt due to some Docker bugs. Retrying typically corrects the issue.
2017-12-12 12:06:41 -06:00
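A sketch of the retry-on-removal pattern (task shape and the source list are illustrative, not the exact Kubespray task):

```
# Sketch: retry docker rm a few times to ride out transient failures.
- name: Remove old kube containers
  command: docker rm -f "{{ item }}"
  with_items: "{{ old_containers.stdout_lines }}"  # hypothetical list
  register: removal
  until: removal.rc == 0
  retries: 4
  delay: 5
```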
Simon Li
bef259a6eb Always set net.bridge.bridge-nf-call-* sysctl 2017-12-12 17:11:35 +00:00
Brad Beam
39ce1bd8be Merge pull request #2059 from bradbeam/vaultalt
Fixing alt_names for vault cert generation
2017-12-12 09:28:51 -06:00
Spencer Smith
6291881943 Merge pull request #2057 from rsmitty/master
set docker_version fact regardless of docker_dns in use
2017-12-12 10:28:14 -05:00
Brad Beam
802fd94dad Merge pull request #2054 from ArchiFleKs/os-cloud-provider-domain-fix
Fix domain id for OpenStack provider
2017-12-11 21:06:16 -06:00
Xu Zhipei
66f38a1b31 fix: only one docker image ever got synced after download 2017-12-12 09:51:03 +08:00
Brad Beam
d3850a4da5 Fixing alt_names for vault cert generation 2017-12-11 17:28:18 -06:00
Spencer Smith
53a4355e60 set docker_version fact regardless of docker_dns in use 2017-12-11 17:48:11 -05:00
Spencer Smith
18a616f57c Merge pull request #2052 from ArchiFleKs/os-terraform-fix-inventory
Change OpenStack inventory to python2
2017-12-11 13:42:05 -05:00
Spencer Smith
32333eb627 Merge pull request #2035 from brutus333/fix/proxy
Added proxy_env to scale and upgrade playbooks
2017-12-11 12:43:06 -05:00
Brad Beam
19def41fdf Merge pull request #2047 from bradbeam/vaulttime
Adding retries for vault-temp to come online
2017-12-11 09:04:57 -06:00
ArchiFleKs
44b9dce134 Fix domain id for OpenStack provider
OpenStack authentication does not support using a mix of DomainID and
DomainName; only one or the other should be used.
2017-12-11 15:57:33 +01:00
Brad Beam
fa5a538fe5 Merge pull request #2050 from jbonachera/fix-vault-tls-validation
append newline char to vault generated certs
2017-12-11 08:41:34 -06:00
ArchiFleKs
5e3fd2253f Change OpenStack inventory to python2
For distributions that ship python3 as the default python, the
inventory script breaks, as it is not compatible with python3.
2017-12-11 14:25:05 +01:00
Brad Beam
9643c2c1e3 Fixes to reset (#2046)
- adding additional directories to cleanup (rkt/vault)
- targeting kubespray ansible groups instead of all
2017-12-11 12:49:21 +00:00
Brad Beam
93f3614382 Fixes #2039 - changing alt_names to be string instead of list (#2043) 2017-12-11 12:48:07 +00:00
Brad Beam
cbc8a7d679 Merge pull request #1995 from b0r1sp/patch-1
Update main.yml
2017-12-10 21:45:02 -06:00
Julien BONACHERA
290bc993a5 append newline char to vault generated certs 2017-12-10 13:06:28 +01:00
Brad Beam
3694657eb6 Adding retries for vault-init to come online 2017-12-09 17:40:44 -06:00
Thomas Sarboni
79417e07ca Fix systemd service unit for docker >= 17.03 (#1844) 2017-12-08 13:12:45 +00:00
Wei Tie
dad95c873b Remove templating for etcd members
Use an etcd-initer init container to generate etcd args; it determines
the etcd name by comparing its IP with the etcd cluster IPs. This makes
the etcd configuration independent of the Ansible templating, which
should make adding master nodes easier.
2017-12-07 23:33:29 -08:00
Spencer Smith
626b35e1b0 Merge pull request #2005 from riverzhang/patch-1
Delete helm home
2017-12-07 11:23:30 -05:00
Wei Tie
5881ba43f8 Split contiv etcd and etcd-proxy into two daemonsets
Putting contiv etcd and etcd-proxy into the same daemonset and managing
the difference via an env file is not good for scaling (adding nodes).
This commit splits them into two daemonsets so that when adding nodes,
k8s can automatically start an etcd-proxy on the new nodes without
needing to run the play that puts the env file in place.
2017-12-06 22:21:50 -08:00
Brad Beam
fed7b97dcb Merge pull request #2030 from mattymo/removerbaccheck
Remove RBAC from boolean checks
2017-12-06 23:41:13 -06:00
Spencer Smith
c4458c9d9a Merge pull request #1997 from mrbobbytables/feature-keepalived-cloud-provider
Add minimal keepalived-cloud-provider support
2017-12-06 23:28:27 -05:00
Virgil Chereches
7bae2a4547 Added proxy_env to scale and upgrade playbooks 2017-12-06 15:06:34 +00:00
riverzhang
aeb3e647d4 Remove the network device created by flannel (#2006)
* Remove the network device created by flannel

* Modify flannel.1 device path

* remove trailing spaces
2017-12-06 14:15:39 +00:00
Kuldip Madnani
fe036cbe77 Adding changes to handle updating of the yum management cache in RHEL. (#2026)
* Adding changes to handle updating of the yum cache in RHEL.

* Removed the redundant spaces
2017-12-06 09:00:41 +00:00
Matthew Mosesohn
952ec65a40 Remove RBAC from boolean checks 2017-12-06 11:57:40 +03:00
Chad Swenson
b8788421d5 Support for disabling apiserver insecure port
This allows `kube_apiserver_insecure_port` to be set to 0 (disabled).

Rework of #1937 with kubeadm support

Also, fixed an issue in `kubeadm-migrate-certs` where the old apiserver cert was copied as the kubeadm key
2017-12-05 09:13:45 -06:00
Brad Beam
c2347db934 Merge pull request #1953 from chadswen/dashboard-refactor
Kubernetes Dashboard v1.7.1 Refactor
2017-12-05 08:50:55 -06:00
Brad Beam
27ead5d4fa Merge pull request #2003 from abelgana/master
Change altnames to alt_names
2017-12-05 08:48:32 -06:00
BenGalewsky
591ae700ce Update OpenStack Terraform: Modules, Bastions, and New Floating IP config (#1958)
* Adding bastion and private network provisioning for openstack terraform

* Remove usage of floating-ip property

* Combine openstack instances + floating ips

* Fix relating floating IPs to hosts for openstack builds

* Tighten up security groups

Allow ssh into all instances with floating IP

* Add the gluster hosts to the no-floating group

* Break terraform into modules

* Update README and var descriptions to match current config

* Remove volume property in gluster compute def

* Include cluster name in internal network and router names

* Make dns_nameservers a variable
2017-12-05 12:48:47 +00:00
Stanislav Makar
6ade7c0a8d Update k8s version to 1.8.4 (#2015)
* Update k8s version to 1.8.4

* Update main.yml
2017-12-04 16:23:04 +00:00
Jan Jungnickel
b3745f2614 contrib/terraform/aws: Tag instances and remove loadbalancer ip (#2023)
* Properly tag instances and subnets with `kubernetes.io/cluster/$cluster_name`

This is required by kubernetes to support multiple clusters in a single vpc/az

* Get rid of loadbalancer_apiserver_address as it is no longer needed
2017-12-04 14:31:46 +00:00
Jean-Marie F
ca8a9c600a Terraform - Remove the need for region specific reference data (#1962)
* Dynamically retrieve the latest aws_bastion_ami reference by querying AWS rather than hardcoding it

* Dynamically retrieve the list of availability_zones instead of needing to have them hardcoded

* Limit availability zones to the first 2, using the slice interpolation function

* Replace the need for the hardcoded variable "aws_cluster_ami" with data provided by Terraform

* Move AMI selection to vars, so people don't need to edit the create-infrastructure config if they want another vendor image (as suggested by @atoms)

* Make the name of the data block agnostic of distribution, given there is more than one distribution supported

* Add documentation about other distros being supported and what to change in which location to make these changes
2017-11-30 15:27:52 +00:00
Matthew Mosesohn
a0225507a0 Set helm deployment type to host (#2012) 2017-11-29 19:52:54 +00:00
Steven Hardy
d39a88d63f Allow setting --bind-address for apiserver hyperkube (#1985)
* Allow setting --bind-address for apiserver hyperkube

This is required if you wish to configure a loadbalancer (e.g haproxy)
running on the master nodes without choosing a different port for the
vip from that used by the API - in this case you need the API to bind to
a specific interface, then haproxy can bind the same port on the VIP:

root@overcloud-controller-0 ~]# netstat -taupen | grep 6443
tcp        0      0 192.168.24.6:6443       0.0.0.0:*               LISTEN      0          680613     134504/haproxy
tcp        0      0 192.168.24.16:6443      0.0.0.0:*               LISTEN      0          653329     131423/hyperkube
tcp        0      0 192.168.24.16:6443      192.168.24.16:58404     ESTABLISHED 0          652991     131423/hyperkube
tcp        0      0 192.168.24.16:58404     192.168.24.16:6443      ESTABLISHED 0          652986     131423/hyperkube

This can be achieved e.g via:

kube_apiserver_bind_address: 192.168.24.16

* Address code review feedback

* Update kube-apiserver.manifest.j2
2017-11-29 15:24:02 +00:00
unclejack
e5d353d0a7 contiv network support (#1914)
* Add Contiv support

Contiv is a network plugin for Kubernetes and Docker. It supports
vlan/vxlan/BGP/Cisco ACI technologies. It supports firewall policies,
multiple networks, and bridging pods onto physical networks.

* Update contiv version to 1.1.4

Update contiv version to 1.1.4 and added SVC_SUBNET in contiv-config.

* Load the openvswitch module to work around an issue on CentOS 7.4

* Set contiv cni version to 0.1.0

Correct contiv CNI version to 0.1.0.

* Use kube_apiserver_endpoint for K8S_API_SERVER

Use kube_apiserver_endpoint as K8S_API_SERVER to make contiv talk
to an available endpoint whether or not there's a loadbalancer.

* Make contiv use its own etcd

Before this commit, contiv used an etcd proxy to the k8s etcd.
This works fine when the etcd hosts are co-located with the contiv etcd
proxy; however, the k8s peering certs exist only in the etcd group, so
the etcd-proxy is not able to peer with the k8s etcd on the etcd group.
In addition, the netplugin always tries to find the etcd endpoint on
localhost, which causes problems for all netplugins not running on etcd
group nodes.
This commit makes contiv use its own etcd, separate from the k8s one:
on kube-master nodes (where net-master runs) it runs in leader mode,
and on all other nodes it runs in proxy mode.

* Use cp instead of rsync to copy cni binaries

Since rsync has been removed from hyperkube, this commit changes it
to use cp instead.

* Make contiv-etcd able to run on master nodes

* Add rbac_enabled flag for contiv pods

* Add contiv into CNI network plugin lists

* migrate contiv test to tests/files

Signed-off-by: Cristian Staretu <cristian.staretu@gmail.com>

* Add required rules for contiv netplugin

* Better handling of the JSON return of fwdMode

* Make contiv etcd port configurable

* Use default var instead of templating

* roles/download/defaults/main.yml: use contiv 1.1.7

Signed-off-by: Cristian Staretu <cristian.staretu@gmail.com>
2017-11-29 14:24:16 +00:00
Di Xu
de422c822d update nginx tag to use multi-arch docker image (#2009) 2017-11-29 10:39:52 +00:00
Matthew Mosesohn
4d3326b542 Raise default vault lease TTL to 10y (#2008) 2017-11-29 10:38:59 +00:00
riverzhang
1b82138142 Delete helm home
2017-11-29 13:27:09 +08:00
Christopher Randles
208ff8e350 Allow for more customization of the tiller deploy (#1946) 2017-11-28 18:33:57 +00:00
Matthew Mosesohn
ec54b36e05 add retries for calico/canal etcd commands (#2007) 2017-11-28 16:39:55 +00:00
Spencer Smith
38e8522cbf Merge pull request #1983 from tomdee/bump-flannel-ver
Bump flannel version to v0.9.1
2017-11-28 11:38:55 -05:00
Spencer Smith
52f8687397 Merge pull request #1977 from mattymo/initializers
Disable initializers feature gate if istio is not used
2017-11-28 11:37:41 -05:00
Spencer Smith
43600ffcf8 Merge pull request #1972 from chadswen/master-static-pod-flush
Additional flush for static pod master upgrade
2017-11-28 11:36:38 -05:00
Christopher Randles
938d2d9e6e update helm/tiller to v2.7.2 -- security bugfix (#1986) 2017-11-28 14:52:42 +00:00
Kevin Lefevre
9368dbe0e7 update calico to 2.6.2 (#1874)
Move RS to deployment so no need to take care of the revision history
limits :
  - Delete the old RS
  - Make Calico manifest a deployment
  - move deployments to apps/v1beta2 API since Kubernetes 1.8
2017-11-28 12:01:30 +00:00
abelgana
fe3290601a The variable altnames is used by this task.
Since the value will change in the defaults, it needs to change here also.
2017-11-27 06:57:16 -05:00
abelgana
e7173e1d62 Change altnames to alt_names
Hi,

Could you please check if it was a typo?

https://www.vaultproject.io/api/secret/pki/

Regards,
2017-11-25 17:29:21 -05:00
Bogdan Dobrelya
8aafe64397 Defaults for apiserver_loadbalancer_domain_name (#1993)
* Defaults for apiserver_loadbalancer_domain_name

When loadbalancer_apiserver is defined, use the
apiserver_loadbalancer_domain_name with a given default value.

Fix inconsistencies around checking whether apiserver_loadbalancer_domain_name
is defined while also providing a default value for it at the same time.

Signed-off-by: Bogdan Dobrelya <bogdando@mail.ru>

* Define defaults for LB modes in common defaults

Adjust the defaults for apiserver_loadbalancer_domain_name and
loadbalancer_apiserver_localhost to come from a single source, which is
kubespray-defaults. This removes some confusion and simplifies the code.

Signed-off-by: Bogdan Dobrelya <bogdando@mail.ru>
2017-11-23 16:15:48 +00:00
Bob Killen
2140303fcc add minimal keepalived-cloud-provider support 2017-11-23 08:43:36 -05:00
brx
b80ded63ca Update main.yml
just a small spelling mistake
2017-11-21 22:37:52 +01:00
Simon Li
7be2521a31 Add flannel hairpin mode 2017-11-21 10:43:50 +00:00
Tom Denham
15b9d54a32 Bump flannel version to v0.9.1 2017-11-16 12:52:18 -07:00
Spencer Smith
bc1a4e12ad fix broken variable in ansible 2.4.1.0 and ensure tasks for calico-rr (#1982) 2017-11-16 18:44:15 +00:00
Matthew Mosesohn
67419e8d0a Run rotate_tokens role only once (#1970) 2017-11-15 18:50:23 +00:00
Chad Swenson
849aaf7435 Update to k8s 1.8.3 (#1971) 2017-11-15 17:43:22 +00:00
Chad Swenson
a89ee8c406 Add ability to use custom cert secret instead of init container provisioned self-signed certs 2017-11-15 10:05:52 -06:00
Chad Swenson
0c6f172e75 Kubernetes Dashboard v1.7.1 Refactor
This version required changing the previous access model for dashboard completely but it's a change for the better. Docs were updated.

* New login/auth options that use apiserver auth proxying by default
* Requires RBAC in `authorization_modes`
* Only serves over https
* No longer available at https://first_master:6443/ui until apiserver is updated with the https proxy URL:
* You can access it from https://first_master:6443/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/#!/login, where you will be prompted for credentials
* Or you can run 'kubectl proxy' from your local machine to access dashboard in your browser from: http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/
* It is recommended to access dashboard from behind a gateway that enforces an authentication token, details and other access options here: https://github.com/kubernetes/dashboard/wiki/Accessing-Dashboard---1.7.X-and-above
2017-11-15 10:05:48 -06:00
Matthew Mosesohn
a67349b076 Disable initializers feature gate if istio is not used 2017-11-15 12:56:36 +00:00
Matthew Mosesohn
f9b68a5d17 Revert "Support for disabling apiserver insecure port" (#1974) 2017-11-14 13:41:28 +00:00
chenhonggc
c7910b51a1 --peers DEPRECATED - --endpoints should be used instead (#1943) 2017-11-14 11:28:35 +00:00
Chad Swenson
1f99710b21 Additional flush for static pod master upgrade
Thought this wasn't required at first but I forgot there's no auto flush at the end of these tasks since the `kubernetes/master` role is not the end of the play.
2017-11-13 18:11:57 -06:00
Aivars Sterns
5e558c361b update weave-net to 2.0.5 version (#1877) 2017-11-13 16:11:47 +00:00
neith00
5f39efcdfd adding mount for kubelet to enable rbd mounts (#1957)
* adding mount for kubelet to enable rbd mounts

* fix conditional variable name
2017-11-13 14:04:13 +00:00
Stanislav Makar
037edf1215 Fix failed task of setting up bash completion for helm (#1968)
Closes: #1967
2017-11-13 10:15:53 +00:00
Hyunsun Moon
37125866ca Make calico_node_ignorelooserpf have an effect (#1945) 2017-11-13 09:35:13 +00:00
Günther Grill
421e73b87c Add missing exclamation mark in shebang line (#1966) 2017-11-13 09:34:21 +00:00
Maxim Krasilnikov
0d8de289dd Revert "Change deprecated vagrant ansible flag 'sudo' to 'become'" (#1960) 2017-11-12 09:20:30 +00:00
Brad Beam
00916dec38 Merge pull request #1954 from abelgana/patch-1
fix a typo
2017-11-10 11:04:57 -05:00
Brad Beam
c115e5677e Merge pull request #1828 from hzamani/patch-1
Use etcd_access_addresses for vault_etcd_url
2017-11-10 10:56:37 -05:00
abelgana
56047c1c83 fix a typo 2017-11-10 09:30:27 -05:00
Spencer Smith
09d85631dc Merge pull request #1944 from chadswen/reload-master-pods
Master component and kubelet container upgrade fixes
2017-11-08 22:23:12 -05:00
Brad Beam
f25e4dc3ed Merge pull request #1937 from chadswen/disable-api-insecure-port
Support for disabling apiserver insecure port
2017-11-08 18:13:49 -05:00
Spencer Smith
a3a7c2d24e Merge pull request #1947 from rsmitty/rkt-proxy
provide environment for rkt trust and run with etcd
2017-11-08 15:26:47 -05:00
Spencer Smith
0126168472 provide environment for rkt trust and run with etcd 2017-11-08 12:57:22 -05:00
Chad Swenson
e9f795c5ce Master component and kubelet container upgrade fixes
* Fixes an issue where the apiserver and friends (controller manager, scheduler) were prevented from restarting after manifests/secrets were changed. This occurred when a replaced kubelet didn't reconcile new master manifests, which caused old master component versions to linger during deployment. In my case this was causing upgrades from k8s 1.6/1.7 -> k8s 1.8 to fail
* Improves transitions from the kubelet container to the host kubelet by preventing issues where the kubelet container reappeared during the deployment
2017-11-08 01:40:33 -06:00
Chad Swenson
0c7e1889e4 Support for disabling apiserver insecure port
This allows `kube_apiserver_insecure_port` to be set to 0 (disabled). It's working, but so far I have had to:

1. Make the `uri` module "Wait for apiserver up" checks use `kube_apiserver_port` (HTTPS)
2. Add apiserver client cert/key to the "Wait for apiserver up" checks
3. Update apiserver liveness probe to use HTTPS ports
4. Set `kube_api_anonymous_auth` to true to allow liveness probe to hit apiserver's /healthz over HTTPS (livenessProbes can't use client cert/key unfortunately)
5. RBAC has to be enabled. Anonymous requests are in the `system:unauthenticated` group which is granted access to /healthz by one of RBAC's default ClusterRoleBindings. An equivalent ABAC rule could allow this as well.

Changes 1 and 2 should work for everyone, but 3, 4, and 5 require new coupling of currently independent configuration settings. So I also added a new settings check.

Options:

1. The problem goes away if you have both anonymous-auth and RBAC enabled. This is how kubeadm does it. This may be the best way to go since RBAC is already on by default but anonymous auth is not.
2. Include conditional templates to set a different liveness probe for possible combinations of `kube_apiserver_insecure_port = 0`, RBAC, and `kube_api_anonymous_auth` (won't be possible to cover every case without a guaranteed authorizer for the secure port)
3. Use basic auth headers for the liveness probe (I really don't like this, it adds a new dependency on basic auth which I'd also like to leave independently configurable, and it requires encoded passwords in the apiserver manifest)

Option 1 seems like the clear winner to me, but is there a reason we wouldn't want anonymous-auth on by default? The apiserver binary defaults anonymous-auth to true, but kubespray's default was false.
2017-11-06 14:01:10 -06:00
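A sketch of "option 1" above, combining the settings the author names (values illustrative):

```
# Sketch: disable the insecure port and satisfy the HTTPS liveness
# probe via anonymous auth plus RBAC's default /healthz binding.
kube_apiserver_insecure_port: 0
kube_api_anonymous_auth: true
authorization_modes: ["RBAC"]
```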
Aivars Sterns
8b2bec700a add bastion role to scale (#1882) 2017-11-06 13:51:36 +00:00
Amit Kumar Jaiswal
125267544e Fix Typo (#1935) 2017-11-06 13:51:22 +00:00
Günther Grill
0d55ed3600 Avoid that some read-only tasks cause an ansible-change (#1910) 2017-11-06 13:51:07 +00:00
Haiwei Liu
ad0cd6939a Add support for cAdvisor (#1908)
Signed-off-by: Haiwei Liu <carllhw@gmail.com>
2017-11-06 13:50:28 +00:00
Rob Hirschfeld
a1244d7bd3 update link to latest Digital Rebar integration (#1933) 2017-11-06 13:49:54 +00:00
Stanislav Makar
33adb334cd Fix openstack tenant id variable name (#1932) 2017-11-05 08:40:41 +00:00
Spencer Smith
ef87a8a1f0 Merge pull request #1916 from vtomasr5/master
Fix bad handler directory name in kubeadm role
2017-11-03 18:14:48 -04:00
Spencer Smith
5223a80ab8 Merge pull request #1925 from chadswen/proxy-fixes
Remove proxy settings from etcd and kubernetes/master roles
2017-11-03 18:13:36 -04:00
Spencer Smith
a595c84f7e Merge pull request #1928 from chadswen/flannel-rbac-fix
Flannel RBAC Fix
2017-11-03 18:12:16 -04:00
Spencer Smith
adcfcc1178 Merge pull request #1931 from chadswen/docker-update
Docker Version Update
2017-11-03 18:11:33 -04:00
Chad Swenson
b158dbcf79 Docker Version Update
Update default docker version to 17.03.1
2017-11-03 12:34:45 -05:00
Matthew Mosesohn
ab3832f3e7 Set host IP for kubelet always (#1924)
* Set host IP for kubelet always

Use ansible default IP if ip var is not set.

* Update main.yml
2017-11-03 10:19:37 +00:00
Kevin Lefevre
9bf415f749 update helm to v2.7.0 (#1875)
* update helm to v2.7.0

* Update main.yml
2017-11-03 07:15:00 +00:00
Günther Grill
a2bda9e5f1 Eliminate jinja2 template expression warning and rename coreos-python var (#1911)
* Change deprecated vagrant ansible flag 'sudo' to 'become'

* Emphasize that pip_python_modules is only considered on CoreOS

* Remove useless unused variable

* Fix warning when jinja2 template-delimiters used in when statement

There is no need for jinja2 template-delimiters like {{ }} or {% %}
any more. They can just be omitted as described in https://github.com/ansible/ansible/issues/22397

* Fix broken link in getting-started guide
2017-11-03 07:11:36 +00:00
Günther Grill
0195725563 Work around an Ansible bug where accessing a var via the vars dict doesn't get the real value (#1912)
* Change deprecated vagrant ansible flag 'sudo' to 'become'

* Work around an Ansible bug where accessing a var via the vars dict doesn't get the real value

When accessing a variable via its name "{{ foo }}", its value is
retrieved. But when the variable value is retrieved via the vars dict
"{{ vars['foo'] }}", the variable's expression is no longer resolved,
due to a bug. So e.g. an expression foo="{{ 1 == 1 }}" is no longer
resolved but just returned as the string "1 == 1".

* Make file yamllint compliant
2017-11-03 07:11:14 +00:00
Spencer Smith
ec1170bd37 only mount volumes if local_volumes_enabled is true. fix mount flags in rkt. (#1923) 2017-11-03 07:10:37 +00:00
Matthew Mosesohn
66c67dbe73 Add optional helm deployment mode for host (#1920) 2017-11-03 07:09:24 +00:00
Chad Swenson
e5d8d8234d Remove proxy settings from etcd and kubernetes/master roles
When proxy vars are set, `uri` module tasks will attempt to route traffic through the proxy. This causes the "Wait for" tasks in the `etcd` and `kubernetes/master` roles to hang, as localhost connections struggle with a proxy.

As far as I know these roles only need local/cluster networking, so a proxy doesn't apply here anyway.
2017-11-03 01:41:17 -05:00
Chad Swenson
16ae2c1809 Flannel RBAC Fix
Fixes a bug that can occur if `cni-flannel-rbac.yml` was written but the playbook failed before it was applied. Uses the same approach as calico.
2017-11-02 23:20:23 -05:00
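Since `kubectl apply` is idempotent, the fix amounts to not gating the apply step on whether the file changed; a sketch under that assumption (path and names are illustrative):

```
# Sketch: always apply the manifest so a write-then-fail run is healed
# on the next pass.
- name: Apply flannel RBAC manifest
  command: kubectl apply -f /etc/kubernetes/cni-flannel-rbac.yml
  when: rbac_enabled
  run_once: true
```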
Spencer Smith
5c5e879c2c Merge pull request #1904 from guenhter/master
Change deprecated vagrant ansible flag 'sudo' to 'become'
2017-11-02 12:02:32 -04:00
Spencer Smith
4771716ab2 Merge pull request #1907 from mattymo/disable_anon_auth
Block anonymous auth requests to kubelet
2017-11-02 12:01:39 -04:00
Spencer Smith
b156585739 Merge pull request #1917 from chadswen/docker-daemon-graph
Fix kubelet container with alternate Docker data paths
2017-11-02 11:58:55 -04:00
Spencer Smith
7a77b5c419 Merge pull request #1919 from mattymo/fix_rkt_local_vols
Fix local volume provisioner mount point for rkt
2017-11-02 11:32:30 -04:00
Spencer Smith
9872b594bf Merge pull request #1921 from pipo02mix/patch-2
Typo in apt-get command
2017-11-02 11:29:32 -04:00
Aivars Sterns
e6c88db0a0 change how terraform generates apiserver variables (#1922) 2017-11-02 12:26:11 +00:00
Fernando Ripoll
257280a050 Typo in apt-get command
2017-11-02 11:40:08 +01:00
Matthew Mosesohn
520103df78 Change namespace for provisioner account 2017-11-02 10:16:08 +00:00
Matthew Mosesohn
3e3787de15 Fix local volume provisioner mount point for rkt 2017-11-02 09:45:26 +00:00
Chad Swenson
0c824d5ef1 Fix kubelet container with alternate Docker data paths
Some time ago I think the hardcoded `/var/lib/docker` was required, but kubelet running in a container has been aware of the Docker path since at least as far back as k8s 1.6.

Without this change, you see a large number of errors in the kubelet logs if you installed with a non-default `docker_daemon_graph`
2017-11-01 13:25:15 -05:00
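The fix presumably boils down to mounting the configured `docker_daemon_graph` path rather than a hardcoded one; a one-line sketch of such a volume entry:

```
# Sketch: bind-mount the configured Docker data path into the kubelet
# container instead of a hardcoded /var/lib/docker.
- "{{ docker_daemon_graph }}:{{ docker_daemon_graph }}:rw"
```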
Matthew Mosesohn
c0e989b17c New addon: local_volume_provisioner (#1909) 2017-11-01 14:25:35 +00:00
Vicenç Juan Tomàs Montserrat
5218b3af82 Fix bad handler directory name in kubeadm role 2017-11-01 14:36:28 +01:00
Spencer Smith
ef0a91da27 Merge pull request #1891 from rsmitty/proxy-fixes
Improved proxy support
2017-10-31 14:32:12 -04:00
Spencer Smith
8412181746 Merge pull request #1899 from skyscooby/update_kube182
Update to Kubernetes 1.8.2
2017-10-31 14:30:56 -04:00
Spencer Smith
400ee2aa57 Merge pull request #1898 from skyscooby/update_kubedns
Update kubedns to 1.14.7 release
2017-10-31 14:30:36 -04:00
Spencer Smith
05b8466f87 Merge pull request #1890 from chadswen/apt-repo-params
Parameterize dockerproject apt repo endpoints
2017-10-31 14:29:19 -04:00
Spencer Smith
6061c691e6 Merge pull request #1902 from pipo02mix/patch-1
Typo in the apt-get command
2017-10-31 12:30:41 -04:00
guenhter
3ac967a7b6 Merge branch 'master' of https://github.com/kubernetes-incubator/kubespray 2017-10-31 15:15:39 +01:00
Spencer Smith
19962f6b6a fix indentation for master template (#1906) 2017-10-31 06:43:54 +00:00
Matthew Mosesohn
f7703dbca3 Block anonymous auth requests to kubelet 2017-10-30 19:06:54 +00:00
Spencer Smith
74a9eedb93 helm template check for http/https_proxy 2017-10-30 13:11:04 -04:00
Spencer Smith
6df104b275 don't check for no_proxy, only http/https_proxy. fix linting issues. 2017-10-30 11:42:14 -04:00
Spencer Smith
b27453d8d8 improved proxy support 2017-10-30 11:42:14 -04:00
Spencer Smith
4470ee4ccf Merge pull request #1887 from mattymo/fix_indent_apiserver
fix indentation for network policy option
2017-10-30 11:33:13 -04:00
Andrew Greenwood
df27fd1e9c Update README.md 2017-10-30 09:39:02 -04:00
guenhter
97c68810e0 Change deprecated vagrant ansible flag 'sudo' to 'become' 2017-10-30 14:37:06 +01:00
Andrew Greenwood
8a86acf75d Update kubespray-defaults kubernetes to v1.8.2 2017-10-30 09:34:32 -04:00
Fernando Ripoll
160e479f8d Typo in the apt-get command
Typo in the apt-get command
2017-10-30 13:47:39 +01:00
abelgana
d738acf638 Update kubelet.kubeadm.env.j2 (#1901) 2017-10-30 11:33:02 +00:00
tanshanshan
84d92aa3c7 fix-bug (#1900) 2017-10-30 11:23:24 +00:00
Andrew Greenwood
dd01cabcdc Update to kubernetes 1.8.2 2017-10-29 22:13:06 -04:00
Andrew Greenwood
e196adb98c Update kubernetes 1.8.2 2017-10-29 22:09:22 -04:00
Andrew Greenwood
c383c7e2c1 Update kubedns image to latest 2017-10-29 21:58:05 -04:00
Andrew Greenwood
958bb5285d Update kubedns image to latest 2017-10-29 21:57:32 -04:00
Spencer Smith
f0317ae70b Merge pull request #1876 from ArchiFleKs/update_flannel
update flannel
2017-10-27 15:22:54 -04:00
Spencer Smith
591941bd39 Merge pull request #1884 from abelgana/master
Sysctl reload if needed after IP forward enabling
2017-10-27 15:12:08 -04:00
Spencer Smith
e90769c869 Merge pull request #1888 from chapsuk/issue_1885
Disable swap in vagrant vms
2017-10-27 15:10:16 -04:00
Chad Swenson
256bbb1a8a Parameterize apt repo endpoints
This allows overriding of apt repo endpoints when internet sources are not accessible. Additionally, switch to using the dockerproject.org gpg key url for apt instead of keyservers.net
2017-10-27 13:48:11 -05:00
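(An override of this kind would look roughly like the following; the variable names are illustrative, and the defaults shown are the public dockerproject.org endpoints:)

# group_vars sketch: point these at an internal mirror when internet sources are unreachable
docker_repo_base_url: "https://apt.dockerproject.org/repo"
docker_repo_gpgkey_url: "https://apt.dockerproject.org/gpg"

# and the repository definition consumes them (sketch)
- name: Configure docker apt repository
  apt_repository:
    repo: "deb {{ docker_repo_base_url }} ubuntu-{{ ansible_distribution_release }} main"
    state: present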
mkrasilnikov
2c7c956be9 Disable swap in vagrant vms 2017-10-27 19:57:54 +03:00
Matthew Mosesohn
fe81bba08d Force kubelet certificates to be generated as lowercase (#1886)
All nodes get converted to lowercase, so certs should set
CN with lowercase as well.
2017-10-27 15:58:25 +01:00
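(The essence of the change is forcing the inventory hostname to lowercase wherever it feeds a certificate CN; a hedged sketch, paths and variable names are illustrative:)

- name: Generate kubelet CSR with a lowercase CN (sketch)
  command: >-
    openssl req -new
    -key {{ kube_cert_dir }}/node-{{ inventory_hostname | lower }}-key.pem
    -subj "/CN=kube-node-{{ inventory_hostname | lower }}"
    -out {{ kube_cert_dir }}/node-{{ inventory_hostname | lower }}.csr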
Matthew Mosesohn
564de07963 fix indentation for network policy option 2017-10-27 14:56:22 +01:00
Aivars Sterns
84cf6fbe83 change ssh_args/bastion configuration (#1883) 2017-10-27 12:18:39 +01:00
abelgana
d9160f19c0 Sysctl reload if needed after IP forward enabling
Add reload yes to reload sysctl if the value of net.ipv4.ip_forward changes.

- name: Enable ip forwarding
  sysctl:
    sysctl_file: "{{sysctl_file_path}}"
    name: net.ipv4.ip_forward
    value: 1
    state: present
    reload: yes
  tags:
    - bootstrap-os
2017-10-26 13:06:21 -04:00
ArchiFleKs
6e949bf951 update flannel 2017-10-26 11:18:06 +02:00
Hassan Zamani
3acc42c5b3 Use etcd_access_addresses for vault_etcd_url 2017-10-17 19:27:36 +03:30
430 changed files with 8805 additions and 2010 deletions

.gitignore (vendored): 3 changes

@@ -1,6 +1,7 @@
 .vagrant
 *.retry
 inventory/vagrant_ansible_inventory
+inventory/credentials/
 inventory/group_vars/fake_hosts.yml
 inventory/host_vars/
 temp
@@ -23,7 +24,7 @@ __pycache__/
 # Distribution / packaging
 .Python
-artifacts/
+inventory/*/artifacts/
 env/
 build/
 credentials/

.gitlab-ci.yml

@@ -1,14 +1,31 @@
 stages:
-  - moderator
   - unit-tests
-  - deploy-gce-part1
-  - deploy-gce-part2
-  - deploy-gce-special
+  - moderator
+  - deploy-part1
+  - deploy-part2
+  - deploy-special
 
 variables:
   FAILFASTCI_NAMESPACE: 'kargo-ci'
+  GITLAB_REPOSITORY: 'kargo-ci/kubernetes-incubator__kubespray'
 # DOCKER_HOST: tcp://localhost:2375
   ANSIBLE_FORCE_COLOR: "true"
+  MAGIC: "ci check this"
+  TEST_ID: "$CI_PIPELINE_ID-$CI_BUILD_ID"
+  CI_TEST_VARS: "./tests/files/${CI_JOB_NAME}.yml"
+  GS_ACCESS_KEY_ID: $GS_KEY
+  GS_SECRET_ACCESS_KEY: $GS_SECRET
+  CONTAINER_ENGINE: docker
+  SSH_USER: root
+  GCE_PREEMPTIBLE: "false"
+  ANSIBLE_KEEP_REMOTE_FILES: "1"
+  ANSIBLE_CONFIG: ./tests/ansible.cfg
+  ANSIBLE_INVENTORY: ./inventory/sample/${CI_JOB_NAME}-${BUILD_NUMBER}.ini
+  IDEMPOT_CHECK: "false"
+  RESET_CHECK: "false"
+  UPGRADE_TEST: "false"
+  KUBEADM_ENABLED: "false"
+  LOG_LEVEL: "-vv"
 
 # asia-east1-a
 # asia-northeast1-a
@@ -18,14 +35,14 @@ variables:
 # us-west1-a
 
 before_script:
-  - pip install -r tests/requirements.txt
+  - /usr/bin/python -m pip install -r tests/requirements.txt
   - mkdir -p /.ssh
 
 .job: &job
   tags:
     - kubernetes
     - docker
-  image: quay.io/ant31/kargo:master
+  image: quay.io/kubespray/kubespray:latest
 
 .docker_service: &docker_service
   services:
@@ -38,24 +55,17 @@ before_script:
 .gce_variables: &gce_variables
   GCE_USER: travis
   SSH_USER: $GCE_USER
-  TEST_ID: "$CI_PIPELINE_ID-$CI_BUILD_ID"
-  CI_TEST_VARS: "./tests/files/${CI_JOB_NAME}.yml"
-  CONTAINER_ENGINE: docker
-  PRIVATE_KEY: $GCE_PRIVATE_KEY
-  GS_ACCESS_KEY_ID: $GS_KEY
-  GS_SECRET_ACCESS_KEY: $GS_SECRET
   CLOUD_MACHINE_TYPE: "g1-small"
-  GCE_PREEMPTIBLE: "false"
-  ANSIBLE_KEEP_REMOTE_FILES: "1"
-  ANSIBLE_CONFIG: ./tests/ansible.cfg
-  IDEMPOT_CHECK: "false"
-  RESET_CHECK: "false"
-  UPGRADE_TEST: "false"
-  KUBEADM_ENABLED: "false"
-  LOG_LEVEL: "-vv"
-  MAGIC: "ci check this"
+  CI_PLATFORM: "gce"
+  PRIVATE_KEY: $GCE_PRIVATE_KEY
 
-.gce: &gce
+.do_variables: &do_variables
+  PRIVATE_KEY: $DO_PRIVATE_KEY
+  CI_PLATFORM: "do"
+  SSH_USER: root
+
+.testcases: &testcases
   <<: *job
   <<: *docker_service
   cache:
@@ -65,13 +75,10 @@ before_script:
     - $HOME/.cache
   before_script:
     - docker info
-    - pip install -r tests/requirements.txt
+    - /usr/bin/python -m pip install -r requirements.txt
+    - /usr/bin/python -m pip install -r tests/requirements.txt
     - mkdir -p /.ssh
     - mkdir -p $HOME/.ssh
-    - echo $PRIVATE_KEY | base64 -d > $HOME/.ssh/id_rsa
-    - echo $GCE_PEM_FILE | base64 -d > $HOME/.ssh/gce
-    - echo $GCE_CREDENTIALS > $HOME/.ssh/gce.json
-    - chmod 400 $HOME/.ssh/id_rsa
     - ansible-playbook --version
    - export PYPATH=$([[ ! "$CI_JOB_NAME" =~ "coreos" ]] && echo /usr/bin/python || echo /opt/bin/python)
     - echo "CI_JOB_NAME is $CI_JOB_NAME"
@@ -81,35 +88,28 @@ before_script:
     - ls
     - echo ${PWD}
     - echo "${STARTUP_SCRIPT}"
-    - >
-      ansible-playbook tests/cloud_playbooks/create-gce.yml -i tests/local_inventory/hosts.cfg -c local
-      ${LOG_LEVEL}
-      -e gce_credentials_file=${HOME}/.ssh/gce.json
-      -e gce_project_id=${GCE_PROJECT_ID}
-      -e gce_service_account_email=${GCE_ACCOUNT}
-      -e inventory_path=${PWD}/inventory/inventory.ini
-      -e test_id=${TEST_ID}
-      -e preemptible=$GCE_PREEMPTIBLE
+    - cd tests && make create-${CI_PLATFORM} -s ; cd -
 
     # Check out latest tag if testing upgrade
-    # Uncomment when gitlab kargo repo has tags
+    # Uncomment when gitlab kubespray repo has tags
     #- test "${UPGRADE_TEST}" != "false" && git fetch --all && git checkout $(git describe --tags $(git rev-list --tags --max-count=1))
-    - test "${UPGRADE_TEST}" != "false" && git checkout 72ae7638bcc94c66afa8620dfa4ad9a9249327ea
+    - test "${UPGRADE_TEST}" != "false" && git checkout f7d52564aad2ff8e337634951beb4a881c0e8aa6
     # Checkout the CI vars file so it is available
     - test "${UPGRADE_TEST}" != "false" && git checkout "${CI_BUILD_REF}" tests/files/${CI_JOB_NAME}.yml
+    # Workaround https://github.com/kubernetes-incubator/kubespray/issues/2021
+    - 'sh -c "echo ignore_assert_errors: true | tee -a tests/files/${CI_JOB_NAME}.yml"'
 
     # Create cluster
     - >
       ansible-playbook
-      -i inventory/inventory.ini
+      -i ${ANSIBLE_INVENTORY}
      -b --become-user=root
      --private-key=${HOME}/.ssh/id_rsa
      -u $SSH_USER
      ${SSH_ARGS}
      ${LOG_LEVEL}
      -e @${CI_TEST_VARS}
-      -e ansible_python_interpreter=${PYPATH}
      -e ansible_ssh_user=${SSH_USER}
      -e local_release_dir=${PWD}/downloads
      --limit "all:!fake_hosts"
@@ -122,14 +122,13 @@ before_script:
      test "${UPGRADE_TEST}" == "graceful" && PLAYBOOK="upgrade-cluster.yml";
      git checkout "${CI_BUILD_REF}";
      ansible-playbook
-      -i inventory/inventory.ini
+      -i ${ANSIBLE_INVENTORY}
      -b --become-user=root
      --private-key=${HOME}/.ssh/id_rsa
      -u $SSH_USER
      ${SSH_ARGS}
      ${LOG_LEVEL}
      -e @${CI_TEST_VARS}
-      -e ansible_python_interpreter=${PYPATH}
      -e ansible_ssh_user=${SSH_USER}
      -e local_release_dir=${PWD}/downloads
      --limit "all:!fake_hosts"
@@ -139,20 +138,20 @@ before_script:
    # Tests Cases
    ## Test Master API
    - >
-      ansible-playbook -i inventory/inventory.ini -e ansible_python_interpreter=${PYPATH} -u $SSH_USER -e ansible_ssh_user=$SSH_USER $SSH_ARGS -b --become-user=root --limit "all:!fake_hosts" tests/testcases/010_check-apiserver.yml $LOG_LEVEL
+      ansible-playbook -i ${ANSIBLE_INVENTORY} -e ansible_python_interpreter=${PYPATH} -u $SSH_USER -e ansible_ssh_user=$SSH_USER $SSH_ARGS -b --become-user=root --limit "all:!fake_hosts" tests/testcases/010_check-apiserver.yml $LOG_LEVEL
      -e "{kubeadm_enabled: ${KUBEADM_ENABLED}}"
 
    ## Ping the between 2 pod
-    - ansible-playbook -i inventory/inventory.ini -e ansible_python_interpreter=${PYPATH} -u $SSH_USER -e ansible_ssh_user=$SSH_USER $SSH_ARGS -b --become-user=root --limit "all:!fake_hosts" tests/testcases/030_check-network.yml $LOG_LEVEL
+    - ansible-playbook -i ${ANSIBLE_INVENTORY} -e ansible_python_interpreter=${PYPATH} -u $SSH_USER -e ansible_ssh_user=$SSH_USER $SSH_ARGS -b --become-user=root --limit "all:!fake_hosts" tests/testcases/030_check-network.yml $LOG_LEVEL
 
    ## Advanced DNS checks
-    - ansible-playbook -i inventory/inventory.ini -e ansible_python_interpreter=${PYPATH} -u $SSH_USER -e ansible_ssh_user=$SSH_USER $SSH_ARGS -b --become-user=root --limit "all:!fake_hosts" tests/testcases/040_check-network-adv.yml $LOG_LEVEL
+    - ansible-playbook -i ${ANSIBLE_INVENTORY} -e ansible_python_interpreter=${PYPATH} -u $SSH_USER -e ansible_ssh_user=$SSH_USER $SSH_ARGS -b --become-user=root --limit "all:!fake_hosts" tests/testcases/040_check-network-adv.yml $LOG_LEVEL
 
    ## Idempotency checks 1/5 (repeat deployment)
    - >
      if [ "${IDEMPOT_CHECK}" = "true" ]; then
        ansible-playbook
-        -i inventory/inventory.ini
+        -i ${ANSIBLE_INVENTORY}
        -b --become-user=root
        --private-key=${HOME}/.ssh/id_rsa
        -u $SSH_USER
@@ -169,7 +168,7 @@ before_script:
    - >
      if [ "${IDEMPOT_CHECK}" = "true" ]; then
        ansible-playbook
-        -i inventory/inventory.ini
+        -i ${ANSIBLE_INVENTORY}
        -b --become-user=root
        --private-key=${HOME}/.ssh/id_rsa
        -u $SSH_USER
@@ -184,7 +183,7 @@ before_script:
    - >
      if [ "${IDEMPOT_CHECK}" = "true" -a "${RESET_CHECK}" = "true" ]; then
        ansible-playbook
-        -i inventory/inventory.ini
+        -i ${ANSIBLE_INVENTORY}
        -b --become-user=root
        --private-key=${HOME}/.ssh/id_rsa
        -u $SSH_USER
@@ -201,7 +200,7 @@ before_script:
    - >
      if [ "${IDEMPOT_CHECK}" = "true" -a "${RESET_CHECK}" = "true" ]; then
        ansible-playbook
-        -i inventory/inventory.ini
+        -i ${ANSIBLE_INVENTORY}
        -b --become-user=root
        --private-key=${HOME}/.ssh/id_rsa
        -u $SSH_USER
@@ -217,107 +216,119 @@ before_script:
    ## Idempotency checks 5/5 (Advanced DNS checks)
    - >
      if [ "${IDEMPOT_CHECK}" = "true" -a "${RESET_CHECK}" = "true" ]; then
-        ansible-playbook -i inventory/inventory.ini -e ansible_python_interpreter=${PYPATH}
+        ansible-playbook -i ${ANSIBLE_INVENTORY} -e ansible_python_interpreter=${PYPATH}
        -u $SSH_USER -e ansible_ssh_user=$SSH_USER $SSH_ARGS -b --become-user=root
        --limit "all:!fake_hosts"
        tests/testcases/040_check-network-adv.yml $LOG_LEVEL;
      fi
 
  after_script:
-    - >
-      ansible-playbook -i inventory/inventory.ini tests/cloud_playbooks/delete-gce.yml -c local $LOG_LEVEL
-      -e @${CI_TEST_VARS}
-      -e test_id=${TEST_ID}
-      -e gce_project_id=${GCE_PROJECT_ID}
-      -e gce_service_account_email=${GCE_ACCOUNT}
-      -e gce_credentials_file=${HOME}/.ssh/gce.json
-      -e inventory_path=${PWD}/inventory/inventory.ini
+    - cd tests && make delete-${CI_PLATFORM} -s ; cd -
+
+.gce: &gce
+  <<: *testcases
+  variables:
+    <<: *gce_variables
+
+.do: &do
+  variables:
+    <<: *do_variables
+  <<: *testcases
 
 # Test matrix. Leave the comments for markup scripts.
 .coreos_calico_aio_variables: &coreos_calico_aio_variables
-# stage: deploy-gce-part1
+# stage: deploy-part1
  MOVED_TO_GROUP_VARS: "true"
 
 .ubuntu_canal_ha_variables: &ubuntu_canal_ha_variables
-# stage: deploy-gce-part1
+# stage: deploy-part1
  UPGRADE_TEST: "graceful"
 
 .centos_weave_kubeadm_variables: &centos_weave_kubeadm_variables
-# stage: deploy-gce-part1
+# stage: deploy-part1
  UPGRADE_TEST: "graceful"
 
 .ubuntu_canal_kubeadm_variables: &ubuntu_canal_kubeadm_variables
-# stage: deploy-gce-part1
+# stage: deploy-part1
  MOVED_TO_GROUP_VARS: "true"
 
+.ubuntu_contiv_sep_variables: &ubuntu_contiv_sep_variables
+# stage: deploy-special
+  MOVED_TO_GROUP_VARS: "true"
+
+.coreos_cilium_variables: &coreos_cilium_variables
+# stage: deploy-special
+  MOVED_TO_GROUP_VARS: "true"
+
+.ubuntu_cilium_sep_variables: &ubuntu_cilium_sep_variables
+# stage: deploy-special
+  MOVED_TO_GROUP_VARS: "true"
+
 .rhel7_weave_variables: &rhel7_weave_variables
-# stage: deploy-gce-part1
+# stage: deploy-part1
  MOVED_TO_GROUP_VARS: "true"
 
 .centos7_flannel_addons_variables: &centos7_flannel_addons_variables
-# stage: deploy-gce-part2
+# stage: deploy-part2
  MOVED_TO_GROUP_VARS: "true"
 
 .debian8_calico_variables: &debian8_calico_variables
-# stage: deploy-gce-part2
+# stage: deploy-part2
  MOVED_TO_GROUP_VARS: "true"
 
 .coreos_canal_variables: &coreos_canal_variables
-# stage: deploy-gce-part2
+# stage: deploy-part2
  MOVED_TO_GROUP_VARS: "true"
 
 .rhel7_canal_sep_variables: &rhel7_canal_sep_variables
-# stage: deploy-gce-special
+# stage: deploy-special
  MOVED_TO_GROUP_VARS: "true"
 
 .ubuntu_weave_sep_variables: &ubuntu_weave_sep_variables
-# stage: deploy-gce-special
+# stage: deploy-special
  MOVED_TO_GROUP_VARS: "true"
 
 .centos7_calico_ha_variables: &centos7_calico_ha_variables
-# stage: deploy-gce-special
+# stage: deploy-special
  MOVED_TO_GROUP_VARS: "true"
 
 .coreos_alpha_weave_ha_variables: &coreos_alpha_weave_ha_variables
-# stage: deploy-gce-special
+# stage: deploy-special
  MOVED_TO_GROUP_VARS: "true"
 
 .ubuntu_rkt_sep_variables: &ubuntu_rkt_sep_variables
-# stage: deploy-gce-part1
+# stage: deploy-part1
  MOVED_TO_GROUP_VARS: "true"
 
 .ubuntu_vault_sep_variables: &ubuntu_vault_sep_variables
-# stage: deploy-gce-part1
+# stage: deploy-part1
  MOVED_TO_GROUP_VARS: "true"
 
 .ubuntu_flannel_variables: &ubuntu_flannel_variables
-# stage: deploy-gce-special
+# stage: deploy-special
  MOVED_TO_GROUP_VARS: "true"
 
+.opensuse_canal_variables: &opensuse_canal_variables
+# stage: deploy-part2
+  MOVED_TO_GROUP_VARS: "true"
+
 # Builds for PRs only (premoderated by unit-tests step) and triggers (auto)
-coreos-calico-aio:
-  stage: deploy-gce-part1
+### PR JOBS PART1
+gce_coreos-calico-aio:
+  stage: deploy-part1
   <<: *job
   <<: *gce
   variables:
-    <<: *gce_variables
     <<: *coreos_calico_aio_variables
+    <<: *gce_variables
   when: on_success
   except: ['triggers']
   only: [/^pr-.*$/]
 
-coreos-calico-sep-triggers:
-  stage: deploy-gce-part1
-  <<: *job
-  <<: *gce
-  variables:
-    <<: *gce_variables
-    <<: *coreos_calico_aio_variables
-  when: on_success
-  only: ['triggers']
-
-centos7-flannel-addons:
-  stage: deploy-gce-part2
+### PR JOBS PART2
+gce_centos7-flannel-addons:
+  stage: deploy-part2
   <<: *job
   <<: *gce
   variables:
@@ -327,18 +338,8 @@ centos7-flannel-addons:
   except: ['triggers']
   only: [/^pr-.*$/]
 
-centos7-flannel-addons-triggers:
-  stage: deploy-gce-part1
-  <<: *job
-  <<: *gce
-  variables:
-    <<: *gce_variables
-    <<: *centos7_flannel_addons_variables
-  when: on_success
-  only: ['triggers']
-
-ubuntu-weave-sep:
-  stage: deploy-gce-special
+gce_ubuntu-weave-sep:
+  stage: deploy-part2
   <<: *job
   <<: *gce
   variables:
@@ -348,8 +349,40 @@ ubuntu-weave-sep:
   except: ['triggers']
   only: [/^pr-.*$/]
 
-ubuntu-weave-sep-triggers:
-  stage: deploy-gce-part1
+### MANUAL JOBS
+gce_coreos-calico-sep-triggers:
+  stage: deploy-part2
+  <<: *job
+  <<: *gce
+  variables:
+    <<: *gce_variables
+    <<: *coreos_calico_aio_variables
+  when: on_success
+  only: ['triggers']
+
+gce_ubuntu-canal-ha-triggers:
+  stage: deploy-part2
+  <<: *job
+  <<: *gce
+  variables:
+    <<: *gce_variables
+    <<: *ubuntu_canal_ha_variables
+  when: on_success
+  only: ['triggers']
+
+gce_centos7-flannel-addons-triggers:
+  stage: deploy-part2
+  <<: *job
+  <<: *gce
+  variables:
+    <<: *gce_variables
+    <<: *centos7_flannel_addons_variables
+  when: on_success
+  only: ['triggers']
+
+gce_ubuntu-weave-sep-triggers:
+  stage: deploy-part2
   <<: *job
   <<: *gce
   variables:
@@ -359,8 +392,18 @@ ubuntu-weave-sep-triggers:
   only: ['triggers']
 
 # More builds for PRs/merges (manual) and triggers (auto)
-ubuntu-canal-ha:
-  stage: deploy-gce-part1
+do_ubuntu-canal-ha:
+  stage: deploy-part2
+  <<: *job
+  <<: *do
+  variables:
+    <<: *do_variables
+  when: manual
+  except: ['triggers']
+  only: ['master', /^pr-.*$/]
+
+gce_ubuntu-canal-ha:
+  stage: deploy-part2
   <<: *job
   <<: *gce
   variables:
@@ -370,18 +413,8 @@ ubuntu-canal-ha:
   except: ['triggers']
   only: ['master', /^pr-.*$/]
 
-ubuntu-canal-ha-triggers:
-  stage: deploy-gce-part1
-  <<: *job
-  <<: *gce
-  variables:
-    <<: *gce_variables
-    <<: *ubuntu_canal_ha_variables
-  when: on_success
-  only: ['triggers']
-
-ubuntu-canal-kubeadm:
-  stage: deploy-gce-part1
+gce_ubuntu-canal-kubeadm:
+  stage: deploy-part2
   <<: *job
   <<: *gce
   variables:
@@ -391,8 +424,8 @@ ubuntu-canal-kubeadm:
   except: ['triggers']
   only: ['master', /^pr-.*$/]
 
-ubuntu-canal-kubeadm-triggers:
-  stage: deploy-gce-part1
+gce_ubuntu-canal-kubeadm-triggers:
+  stage: deploy-part2
   <<: *job
   <<: *gce
   variables:
@@ -401,8 +434,8 @@ ubuntu-canal-kubeadm-triggers:
   when: on_success
   only: ['triggers']
 
-centos-weave-kubeadm:
-  stage: deploy-gce-part1
+gce_centos-weave-kubeadm:
+  stage: deploy-part2
   <<: *job
   <<: *gce
   variables:
@@ -412,8 +445,8 @@ centos-weave-kubeadm:
   except: ['triggers']
   only: ['master', /^pr-.*$/]
 
-centos-weave-kubeadm-triggers:
-  stage: deploy-gce-part1
+gce_centos-weave-kubeadm-triggers:
+  stage: deploy-part2
   <<: *job
   <<: *gce
   variables:
@@ -422,8 +455,41 @@ centos-weave-kubeadm-triggers:
   when: on_success
   only: ['triggers']
 
-rhel7-weave:
-  stage: deploy-gce-part1
+gce_ubuntu-contiv-sep:
+  stage: deploy-special
+  <<: *job
+  <<: *gce
+  variables:
+    <<: *gce_variables
+    <<: *ubuntu_contiv_sep_variables
+  when: manual
+  except: ['triggers']
+  only: ['master', /^pr-.*$/]
+
+gce_coreos-cilium:
+  stage: deploy-special
+  <<: *job
+  <<: *gce
+  variables:
+    <<: *gce_variables
+    <<: *coreos_cilium_variables
+  when: manual
+  except: ['triggers']
+  only: ['master', /^pr-.*$/]
+
+gce_ubuntu-cilium-sep:
+  stage: deploy-special
+  <<: *job
+  <<: *gce
+  variables:
+    <<: *gce_variables
+    <<: *ubuntu_cilium_sep_variables
+  when: manual
+  except: ['triggers']
+  only: ['master', /^pr-.*$/]
+
+gce_rhel7-weave:
+  stage: deploy-part2
   <<: *job
   <<: *gce
   variables:
@@ -433,8 +499,8 @@ rhel7-weave:
   except: ['triggers']
   only: ['master', /^pr-.*$/]
 
-rhel7-weave-triggers:
-  stage: deploy-gce-part1
+gce_rhel7-weave-triggers:
+  stage: deploy-part2
   <<: *job
   <<: *gce
   variables:
@@ -443,8 +509,8 @@ rhel7-weave-triggers:
   when: on_success
   only: ['triggers']
 
-debian8-calico-upgrade:
-  stage: deploy-gce-part2
+gce_debian8-calico-upgrade:
+  stage: deploy-part2
   <<: *job
   <<: *gce
   variables:
@@ -454,8 +520,8 @@ debian8-calico-upgrade:
   except: ['triggers']
   only: ['master', /^pr-.*$/]
 
-debian8-calico-triggers:
-  stage: deploy-gce-part1
+gce_debian8-calico-triggers:
+  stage: deploy-part2
   <<: *job
   <<: *gce
   variables:
@@ -464,8 +530,8 @@ debian8-calico-triggers:
   when: on_success
   only: ['triggers']
 
-coreos-canal:
-  stage: deploy-gce-part2
+gce_coreos-canal:
+  stage: deploy-part2
   <<: *job
   <<: *gce
   variables:
@@ -475,8 +541,8 @@ coreos-canal:
   except: ['triggers']
   only: ['master', /^pr-.*$/]
 
-coreos-canal-triggers:
-  stage: deploy-gce-part1
+gce_coreos-canal-triggers:
+  stage: deploy-part2
   <<: *job
   <<: *gce
   variables:
@@ -485,8 +551,8 @@ coreos-canal-triggers:
   when: on_success
   only: ['triggers']
 
-rhel7-canal-sep:
-  stage: deploy-gce-special
+gce_rhel7-canal-sep:
+  stage: deploy-special
   <<: *job
   <<: *gce
   variables:
@@ -496,8 +562,8 @@ rhel7-canal-sep:
   except: ['triggers']
   only: ['master', /^pr-.*$/,]
 
-rhel7-canal-sep-triggers:
-  stage: deploy-gce-part1
+gce_rhel7-canal-sep-triggers:
+  stage: deploy-part2
   <<: *job
   <<: *gce
   variables:
@@ -506,8 +572,8 @@ rhel7-canal-sep-triggers:
   when: on_success
   only: ['triggers']
 
-centos7-calico-ha:
-  stage: deploy-gce-special
+gce_centos7-calico-ha:
+  stage: deploy-special
   <<: *job
   <<: *gce
   variables:
@@ -517,8 +583,8 @@ centos7-calico-ha:
   except: ['triggers']
   only: ['master', /^pr-.*$/]
 
-centos7-calico-ha-triggers:
-  stage: deploy-gce-part1
+gce_centos7-calico-ha-triggers:
+  stage: deploy-part2
   <<: *job
   <<: *gce
   variables:
@@ -527,9 +593,20 @@ centos7-calico-ha-triggers:
   when: on_success
   only: ['triggers']
 
+gce_opensuse-canal:
+  stage: deploy-part2
+  <<: *job
+  <<: *gce
+  variables:
+    <<: *gce_variables
+    <<: *opensuse_canal_variables
+  when: manual
+  except: ['triggers']
+  only: ['master', /^pr-.*$/]
+
 # no triggers yet https://github.com/kubernetes-incubator/kargo/issues/613
-coreos-alpha-weave-ha:
-  stage: deploy-gce-special
+gce_coreos-alpha-weave-ha:
+  stage: deploy-special
   <<: *job
   <<: *gce
   variables:
@@ -539,8 +616,8 @@ coreos-alpha-weave-ha:
   except: ['triggers']
   only: ['master', /^pr-.*$/]
 
-ubuntu-rkt-sep:
-  stage: deploy-gce-part1
+gce_ubuntu-rkt-sep:
+  stage: deploy-part2
   <<: *job
   <<: *gce
   variables:
@@ -550,8 +627,8 @@ ubuntu-rkt-sep:
   except: ['triggers']
   only: ['master', /^pr-.*$/]
 
-ubuntu-vault-sep:
-  stage: deploy-gce-part1
+gce_ubuntu-vault-sep:
+  stage: deploy-part2
   <<: *job
   <<: *gce
   variables:
@@ -561,8 +638,8 @@ ubuntu-vault-sep:
   except: ['triggers']
   only: ['master', /^pr-.*$/]
 
-ubuntu-flannel-sep:
-  stage: deploy-gce-special
+gce_ubuntu-flannel-sep:
+  stage: deploy-special
   <<: *job
   <<: *gce
   variables:

Dockerfile (new file): 16 changes

@@ -0,0 +1,16 @@
+FROM ubuntu:16.04
+RUN mkdir /kubespray
+WORKDIR /kubespray
+RUN apt update -y && \
+    apt install -y \
+    libssl-dev python-dev sshpass apt-transport-https \
+    ca-certificates curl gnupg2 software-properties-common python-pip
+RUN curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add - && \
+    add-apt-repository \
+    "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
+    $(lsb_release -cs) \
+    stable" \
+    && apt update -y && apt-get install docker-ce -y
+COPY . .
+RUN /usr/bin/python -m pip install pip -U && /usr/bin/python -m pip install -r tests/requirements.txt && python -m pip install -r requirements.txt

README.md: 188 changes

@@ -1,67 +1,93 @@
 ![Kubernetes Logo](https://s28.postimg.org/lf3q4ocpp/k8s.png)
 
-## Deploy a production ready kubernetes cluster
+Deploy a Production Ready Kubernetes Cluster
+============================================
 
-If you have questions, join us on the [kubernetes slack](https://kubernetes.slack.com), channel **#kubespray**.
+If you have questions, join us on the [kubernetes slack](https://kubernetes.slack.com), channel **\#kubespray**.
 
-- Can be deployed on **AWS, GCE, Azure, OpenStack or Baremetal**
+-   Can be deployed on **AWS, GCE, Azure, OpenStack, vSphere or Baremetal**
 - **High available** cluster
 - **Composable** (Choice of the network plugin for instance)
 - Support most popular **Linux distributions**
 - **Continuous integration tests**
 
+Quick Start
+-----------
+
 To deploy the cluster you can use :
 
-[**kubespray-cli**](https://github.com/kubespray/kubespray-cli) <br>
-**Ansible** usual commands and [**inventory builder**](https://github.com/kubernetes-incubator/kubespray/blob/master/contrib/inventory_builder/inventory.py) <br>
-**vagrant** by simply running `vagrant up` (for tests purposes) <br>
+### Ansible
 
-* [Requirements](#requirements)
-* [Kubespray vs ...](docs/comparisons.md)
-* [Getting started](docs/getting-started.md)
-* [Ansible inventory and tags](docs/ansible.md)
-* [Integration with existing ansible repo](docs/integration.md)
-* [Deployment data variables](docs/vars.md)
-* [DNS stack](docs/dns-stack.md)
-* [HA mode](docs/ha-mode.md)
-* [Network plugins](#network-plugins)
-* [Vagrant install](docs/vagrant.md)
-* [CoreOS bootstrap](docs/coreos.md)
-* [Debian Jessie setup](docs/debian.md)
-* [Downloaded artifacts](docs/downloads.md)
-* [Cloud providers](docs/cloud.md)
-* [OpenStack](docs/openstack.md)
-* [AWS](docs/aws.md)
-* [Azure](docs/azure.md)
-* [vSphere](docs/vsphere.md)
-* [Large deployments](docs/large-deployments.md)
-* [Upgrades basics](docs/upgrades.md)
-* [Roadmap](docs/roadmap.md)
+    # Copy ``inventory/sample`` as ``inventory/mycluster``
+    cp -rfp inventory/sample inventory/mycluster
 
-Supported Linux distributions
-===============
+    # Update Ansible inventory file with inventory builder
+    declare -a IPS=(10.10.1.3 10.10.1.4 10.10.1.5)
+    CONFIG_FILE=inventory/mycluster/hosts.ini python3 contrib/inventory_builder/inventory.py ${IPS[@]}
 
-* **Container Linux by CoreOS**
-* **Debian** Jessie
-* **Ubuntu** 16.04
-* **CentOS/RHEL** 7
+    # Review and change parameters under ``inventory/mycluster/group_vars``
+    cat inventory/mycluster/group_vars/all.yml
+    cat inventory/mycluster/group_vars/k8s-cluster.yml
+
+    # Deploy Kubespray with Ansible Playbook
+    ansible-playbook -i inventory/mycluster/hosts.ini cluster.yml
+
+### Vagrant
+
+    # Simply running `vagrant up` (for tests purposes)
+    vagrant up
+
+Documents
+---------
+
+- [Requirements](#requirements)
+- [Kubespray vs ...](docs/comparisons.md)
+- [Getting started](docs/getting-started.md)
+- [Ansible inventory and tags](docs/ansible.md)
+- [Integration with existing ansible repo](docs/integration.md)
+- [Deployment data variables](docs/vars.md)
+- [DNS stack](docs/dns-stack.md)
+- [HA mode](docs/ha-mode.md)
+- [Network plugins](#network-plugins)
+- [Vagrant install](docs/vagrant.md)
+- [CoreOS bootstrap](docs/coreos.md)
+- [Debian Jessie setup](docs/debian.md)
+- [openSUSE setup](docs/opensuse.md)
+- [Downloaded artifacts](docs/downloads.md)
+- [Cloud providers](docs/cloud.md)
+- [OpenStack](docs/openstack.md)
+- [AWS](docs/aws.md)
+- [Azure](docs/azure.md)
+- [vSphere](docs/vsphere.md)
+- [Large deployments](docs/large-deployments.md)
+- [Upgrades basics](docs/upgrades.md)
+- [Roadmap](docs/roadmap.md)
+
+Supported Linux Distributions
+-----------------------------
+
+- **Container Linux by CoreOS**
+- **Debian** Jessie, Stretch, Wheezy
+- **Ubuntu** 16.04
+- **CentOS/RHEL** 7
+- **Fedora/CentOS** Atomic
+- **openSUSE** Leap 42.3/Tumbleweed
 
 Note: Upstart/SysV init based OS types are not supported.
 
 Versions of supported components
 --------------------------------
 
-[kubernetes](https://github.com/kubernetes/kubernetes/releases) v1.8.1 <br>
-[etcd](https://github.com/coreos/etcd/releases) v3.2.4 <br>
-[flanneld](https://github.com/coreos/flannel/releases) v0.8.0 <br>
-[calico](https://docs.projectcalico.org/v2.5/releases/) v2.5.0 <br>
-[canal](https://github.com/projectcalico/canal) (given calico/flannel versions) <br>
-[weave](http://weave.works/) v2.0.1 <br>
-[docker](https://www.docker.com/) v1.13 (see note)<br>
-[rkt](https://coreos.com/rkt/docs/latest/) v1.21.0 (see Note 2)<br>
+- [kubernetes](https://github.com/kubernetes/kubernetes/releases) v1.9.5
+- [etcd](https://github.com/coreos/etcd/releases) v3.2.4
+- [flanneld](https://github.com/coreos/flannel/releases) v0.10.0
+- [calico](https://docs.projectcalico.org/v2.6/releases/) v2.6.8
+- [canal](https://github.com/projectcalico/canal) (given calico/flannel versions)
+- [cilium](https://github.com/cilium/cilium) v1.0.0-rc8
+- [contiv](https://github.com/contiv/install/releases) v1.1.7
+- [weave](http://weave.works/) v2.2.1
+- [docker](https://www.docker.com/) v17.03 (see note)
+- [rkt](https://coreos.com/rkt/docs/latest/) v1.21.0 (see Note 2)
 
 Note: kubernetes doesn't support newer docker versions. Among other things kubelet currently breaks on docker's non-standard version numbering (it no longer uses semantic versioning). To ensure auto-updates don't break your cluster look into e.g. yum versionlock plugin or apt pin).
@@ -71,52 +97,64 @@ plugins' related OS services. Also note, only one of the supported network
 plugins can be deployed for a given single cluster.
 
 Requirements
---------------
+------------
 
-* **Ansible v2.4 (or newer) and python-netaddr is installed on the machine
+- **Ansible v2.4 (or newer) and python-netaddr is installed on the machine
   that will run Ansible commands**
-* **Jinja 2.9 (or newer) is required to run the Ansible Playbooks**
+- **Jinja 2.9 (or newer) is required to run the Ansible Playbooks**
-* The target servers must have **access to the Internet** in order to pull docker images.
+- The target servers must have **access to the Internet** in order to pull docker images.
-* The target servers are configured to allow **IPv4 forwarding**.
+- The target servers are configured to allow **IPv4 forwarding**.
-* **Your ssh key must be copied** to all the servers part of your inventory.
+- **Your ssh key must be copied** to all the servers part of your inventory.
-* The **firewalls are not managed**, you'll need to implement your own rules the way you used to.
+- The **firewalls are not managed**, you'll need to implement your own rules the way you used to.
   in order to avoid any issue during deployment you should disable your firewall.
+- If kubespray is ran from non-root user account, correct privilege escalation method
+  should be configured in the target servers. Then the `ansible_become` flag
+  or command parameters `--become or -b` should be specified.
 
-## Network plugins
+Network Plugins
+---------------
 
-You can choose between 4 network plugins. (default: `calico`, except Vagrant uses `flannel`)
+You can choose between 6 network plugins. (default: `calico`, except Vagrant uses `flannel`)
 
-* [**flannel**](docs/flannel.md): gre/vxlan (layer 2) networking.
+- [flannel](docs/flannel.md): gre/vxlan (layer 2) networking.
-* [**calico**](docs/calico.md): bgp (layer 3) networking.
+- [calico](docs/calico.md): bgp (layer 3) networking.
-* [**canal**](https://github.com/projectcalico/canal): a composition of calico and flannel plugins.
+- [canal](https://github.com/projectcalico/canal): a composition of calico and flannel plugins.
-* [**weave**](docs/weave.md): Weave is a lightweight container overlay network that doesn't require an external K/V database cluster. <br>
-  (Please refer to `weave` [troubleshooting documentation](http://docs.weave.works/weave/latest_release/troubleshooting.html)).
+- [cilium](http://docs.cilium.io/en/latest/): layer 3/4 networking (as well as layer 7 to protect and secure application protocols), supports dynamic insertion of BPF bytecode into the Linux kernel to implement security services, networking and visibility logic.
+- [contiv](docs/contiv.md): supports vlan, vxlan, bgp and Cisco SDN networking. This plugin is able to
+  apply firewall policies, segregate containers in multiple network and bridging pods onto physical networks.
+- [weave](docs/weave.md): Weave is a lightweight container overlay network that doesn't require an external K/V database cluster.
+  (Please refer to `weave` [troubleshooting documentation](http://docs.weave.works/weave/latest_release/troubleshooting.html)).
 
 The choice is defined with the variable `kube_network_plugin`. There is also an
 option to leverage built-in cloud provider networking instead.
 See also [Network checker](docs/netcheck.md).
 
-## Community docs and resources
+Community docs and resources
+----------------------------
 
 - [kubernetes.io/docs/getting-started-guides/kubespray/](https://kubernetes.io/docs/getting-started-guides/kubespray/)
 - [kubespray, monitoring and logging](https://github.com/gregbkr/kubernetes-kargo-logging-monitoring) by @gregbkr
 - [Deploy Kubernetes w/ Ansible & Terraform](https://rsmitty.github.io/Terraform-Ansible-Kubernetes/) by @rsmitty
 - [Deploy a Kubernetes Cluster with Kubespray (video)](https://www.youtube.com/watch?v=N9q51JgbWu8)
 
-## Tools and projects on top of Kubespray
+Tools and projects on top of Kubespray
+--------------------------------------
 
-- [Digital Rebar](https://github.com/digitalrebar/digitalrebar)
-- [Kubespray-cli](https://github.com/kubespray/kubespray-cli)
+- [Digital Rebar Provision](https://github.com/digitalrebar/provision/blob/master/doc/integrations/ansible.rst)
 - [Fuel-ccp-installer](https://github.com/openstack/fuel-ccp-installer)
 - [Terraform Contrib](https://github.com/kubernetes-incubator/kubespray/tree/master/contrib/terraform)
 
-## CI Tests
+CI Tests
+--------
 
 ![Gitlab Logo](https://s27.postimg.org/wmtaig1wz/gitlabci.png)
 
-[![Build graphs](https://gitlab.com/kubespray-ci/kubernetes-incubator__kubespray/badges/master/build.svg)](https://gitlab.com/kubespray-ci/kubernetes-incubator__kubespray/pipelines) </br>
+[![Build graphs](https://gitlab.com/kubespray-ci/kubernetes-incubator__kubespray/badges/master/build.svg)](https://gitlab.com/kubespray-ci/kubernetes-incubator__kubespray/pipelines)
 
-CI/end-to-end tests sponsored by Google (GCE), DigitalOcean, [teuto.net](https://teuto.net/) (openstack).
+CI/end-to-end tests sponsored by Google (GCE)
 See the [test matrix](docs/test_cases.md) for details.

Vagrantfile (vendored): 48 changes

@@ -3,18 +3,23 @@
 require 'fileutils'
 
-Vagrant.require_version ">= 1.9.0"
+Vagrant.require_version ">= 2.0.0"
 
 CONFIG = File.join(File.dirname(__FILE__), "vagrant/config.rb")
 
 COREOS_URL_TEMPLATE = "https://storage.googleapis.com/%s.release.core-os.net/amd64-usr/current/coreos_production_vagrant.json"
 
+# Uniq disk UUID for libvirt
+DISK_UUID = Time.now.utc.to_i
+
 SUPPORTED_OS = {
   "coreos-stable" => {box: "coreos-stable", bootstrap_os: "coreos", user: "core", box_url: COREOS_URL_TEMPLATE % ["stable"]},
   "coreos-alpha" => {box: "coreos-alpha", bootstrap_os: "coreos", user: "core", box_url: COREOS_URL_TEMPLATE % ["alpha"]},
   "coreos-beta" => {box: "coreos-beta", bootstrap_os: "coreos", user: "core", box_url: COREOS_URL_TEMPLATE % ["beta"]},
   "ubuntu" => {box: "bento/ubuntu-16.04", bootstrap_os: "ubuntu", user: "vagrant"},
-  "centos" => {box: "bento/centos-7.3", bootstrap_os: "centos", user: "vagrant"},
+  "centos" => {box: "centos/7", bootstrap_os: "centos", user: "vagrant"},
+  "opensuse" => {box: "opensuse/openSUSE-42.3-x86_64", bootstrap_os: "opensuse", use: "vagrant"},
+  "opensuse-tumbleweed" => {box: "opensuse/openSUSE-Tumbleweed-x86_64", bootstrap_os: "opensuse", use: "vagrant"},
 }
@@ -34,6 +39,11 @@ $etcd_instances = $num_instances
 $kube_master_instances = $num_instances == 1 ? $num_instances : ($num_instances - 1)
 # All nodes are kube nodes
 $kube_node_instances = $num_instances
+# The following only works when using the libvirt provider
+$kube_node_instances_with_disks = false
+$kube_node_instances_with_disks_size = "20G"
+$kube_node_instances_with_disks_number = 2
 
 $local_release_dir = "/vagrant/temp"
 
 host_vars = {}
@@ -44,7 +54,7 @@ end
 $box = SUPPORTED_OS[$os][:box]
 # if $inventory is not set, try to use example
-$inventory = File.join(File.dirname(__FILE__), "inventory") if ! $inventory
+$inventory = File.join(File.dirname(__FILE__), "inventory", "sample") if ! $inventory
 
 # if $inventory has a hosts file use it, otherwise copy over vars etc
 # to where vagrant expects dynamic inventory to be.
@@ -53,7 +63,7 @@ if ! File.exist?(File.join(File.dirname($inventory), "hosts"))
                               "provisioners", "ansible")
   FileUtils.mkdir_p($vagrant_ansible) if ! File.exist?($vagrant_ansible)
   if ! File.exist?(File.join($vagrant_ansible,"inventory"))
-    FileUtils.ln_s($inventory, $vagrant_ansible)
+    FileUtils.ln_s($inventory, File.join($vagrant_ansible,"inventory"))
   end
 end
@@ -76,7 +86,6 @@ Vagrant.configure("2") do |config|
   if Vagrant.has_plugin?("vagrant-vbguest") then
     config.vbguest.auto_update = false
   end
-
   (1..$num_instances).each do |i|
     config.vm.define vm_name = "%s-%02d" % [$instance_name_prefix, i] do |config|
       config.vm.hostname = vm_name
@@ -102,8 +111,10 @@ Vagrant.configure("2") do |config|
         end
       end
 
+      config.vm.synced_folder ".", "/vagrant", type: "rsync", rsync__args: ['--verbose', '--archive', '--delete', '-z']
+
       $shared_folders.each do |src, dst|
-        config.vm.synced_folder src, dst
+        config.vm.synced_folder src, dst, type: "rsync", rsync__args: ['--verbose', '--archive', '--delete', '-z']
       end
 
       config.vm.provider :virtualbox do |vb|
@@ -112,6 +123,10 @@ Vagrant.configure("2") do |config|
         vb.cpus = $vm_cpus
       end
 
+      config.vm.provider :libvirt do |lv|
+        lv.memory = $vm_memory
+      end
+
       ip = "#{$subnet}.#{i+100}"
       host_vars[vm_name] = {
         "ip": ip,
@@ -122,11 +137,20 @@ Vagrant.configure("2") do |config|
       }
 
       config.vm.network :private_network, ip: ip
 
-      # workaround for Vagrant 1.9.1 and centos vm
-      # https://github.com/hashicorp/vagrant/issues/8096
-      if Vagrant::VERSION == "1.9.1" && $os == "centos"
-        config.vm.provision "shell", inline: "service network restart", run: "always"
+      # Disable swap for each vm
+      config.vm.provision "shell", inline: "swapoff -a"
+
+      if $kube_node_instances_with_disks
+        # Libvirt
+        driverletters = ('a'..'z').to_a
+        config.vm.provider :libvirt do |lv|
+          # always make /dev/sd{a/b/c} so that CI can ensure that
+          # virtualbox and libvirt will have the same devices to use for OSDs
+          (1..$kube_node_instances_with_disks_number).each do |d|
+            lv.storage :file, :device => "hd#{driverletters[d]}", :path => "disk-#{i}-#{d}-#{DISK_UUID}.disk", :size => $kube_node_instances_with_disks_size, :bus => "ide"
+          end
+        end
       end
 
       # Only execute once the Ansible provisioner,
@@ -137,7 +161,7 @@ Vagrant.configure("2") do |config|
         if File.exist?(File.join(File.dirname($inventory), "hosts"))
           ansible.inventory_path = $inventory
         end
-        ansible.sudo = true
+        ansible.become = true
        ansible.limit = "all"
        ansible.host_key_checking = false
        ansible.raw_arguments = ["--forks=#{$num_instances}", "--flush-cache"]

ansible.cfg

@@ -1,7 +1,6 @@
 [ssh_connection]
 pipelining=True
-ansible_ssh_common_args = -o ControlMaster=auto -o ControlPersist=30m -o ConnectionAttempts=100
-#ansible_ssh_common_args = -F {{ inventory_dir|quote }}/ssh-bastion.conf -o ControlMaster=auto -o ControlPersist=30m -o ConnectionAttempts=100
+ssh_args = -o ControlMaster=auto -o ControlPersist=30m -o ConnectionAttempts=100 -o UserKnownHostsFile=/dev/null
 #control_path = ~/.ssh/ansible-%%r@%%h:%%p
 [defaults]
 host_key_checking=False
@@ -11,4 +10,6 @@ fact_caching_connection = /tmp
 stdout_callback = skippy
 library = ./library
 callback_whitelist = profile_tasks
-roles_path = roles:$VIRTUAL_ENV/usr/local/share/kubespray/roles:$VIRTUAL_ENV/usr/local/share/ansible/roles
+roles_path = roles:$VIRTUAL_ENV/usr/local/share/kubespray/roles:$VIRTUAL_ENV/usr/local/share/ansible/roles:/usr/share/kubespray/roles
+deprecation_warnings=False
+inventory_ignore_extensions = ~, .orig, .bak, .ini, .cfg, .retry, .pyc, .pyo, .creds

cluster.yml

@@ -21,6 +21,12 @@
   vars:
     ansible_ssh_pipelining: true
   gather_facts: true
+  pre_tasks:
+    - name: gather facts from all instances
+      setup:
+      delegate_to: "{{item}}"
+      delegate_facts: True
+      with_items: "{{ groups['k8s-cluster'] + groups['etcd'] + groups['calico-rr']|default([]) }}"
 
 - hosts: k8s-cluster:etcd:calico-rr
   any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
@@ -32,12 +38,14 @@
       tags: rkt
       when: "'rkt' in [etcd_deployment_type, kubelet_deployment_type, vault_deployment_type]"
     - { role: download, tags: download, skip_downloads: false }
+  environment: "{{proxy_env}}"
 
-- hosts: etcd:k8s-cluster:vault
+- hosts: etcd:k8s-cluster:vault:calico-rr
   any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
   roles:
     - { role: kubespray-defaults, when: "cert_management == 'vault'" }
     - { role: vault, tags: vault, vault_bootstrap: true, when: "cert_management == 'vault'" }
+  environment: "{{proxy_env}}"
 
 - hosts: etcd
   any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
@@ -45,23 +53,25 @@
     - { role: kubespray-defaults}
     - { role: etcd, tags: etcd, etcd_cluster_setup: true }
 
-- hosts: k8s-cluster
+- hosts: k8s-cluster:calico-rr
   any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
   roles:
     - { role: kubespray-defaults}
     - { role: etcd, tags: etcd, etcd_cluster_setup: false }
 
-- hosts: etcd:k8s-cluster:vault
+- hosts: etcd:k8s-cluster:vault:calico-rr
   any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
   roles:
     - { role: kubespray-defaults}
     - { role: vault, tags: vault, when: "cert_management == 'vault'"}
+  environment: "{{proxy_env}}"
 
 - hosts: k8s-cluster
   any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
   roles:
     - { role: kubespray-defaults}
     - { role: kubernetes/node, tags: node }
+  environment: "{{proxy_env}}"
 
 - hosts: kube-master
   any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
@@ -78,13 +88,20 @@
     - { role: kubernetes/kubeadm, tags: kubeadm, when: "kubeadm_enabled" }
     - { role: network_plugin, tags: network }
 
-- hosts: kube-master
+- hosts: kube-master[0]
   any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
   roles:
     - { role: kubespray-defaults}
     - { role: kubernetes-apps/rotate_tokens, tags: rotate_tokens, when: "secret_changed|default(false)" }
+
+- hosts: kube-master
+  any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
+  roles:
+    - { role: kubespray-defaults}
     - { role: kubernetes-apps/network_plugin, tags: network }
     - { role: kubernetes-apps/policy_controller, tags: policy-controller }
+    - { role: kubernetes-apps/ingress_controller, tags: ingress-controller }
+    - { role: kubernetes-apps/external_provisioner, tags: external-provisioner }
 
 - hosts: calico-rr
   any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
@@ -98,6 +115,7 @@
     - { role: kubespray-defaults}
     - { role: dnsmasq, when: "dns_mode == 'dnsmasq_kubedns'", tags: dnsmasq }
     - { role: kubernetes/preinstall, when: "dns_mode != 'none' and resolvconf_mode == 'host_resolvconf'", tags: resolvconf }
+  environment: "{{proxy_env}}"
 
 - hosts: kube-master[0]
   any_errors_fatal: "{{ any_errors_fatal | default(true) }}"

code-of-conduct.md

@@ -1,58 +1,3 @@
-## Kubernetes Community Code of Conduct
+# Kubernetes Community Code of Conduct
 
-### Contributor Code of Conduct
+Please refer to our [Kubernetes Community Code of Conduct](https://git.k8s.io/community/code-of-conduct.md)
-
-As contributors and maintainers of this project, and in the interest of fostering
-an open and welcoming community, we pledge to respect all people who contribute
-through reporting issues, posting feature requests, updating documentation,
-submitting pull requests or patches, and other activities.
-
-We are committed to making participation in this project a harassment-free experience for
-everyone, regardless of level of experience, gender, gender identity and expression,
-sexual orientation, disability, personal appearance, body size, race, ethnicity, age,
-religion, or nationality.
-
-Examples of unacceptable behavior by participants include:
-
-* The use of sexualized language or imagery
-* Personal attacks
-* Trolling or insulting/derogatory comments
-* Public or private harassment
-* Publishing other's private information, such as physical or electronic addresses,
-  without explicit permission
-* Other unethical or unprofessional conduct.
-
-Project maintainers have the right and responsibility to remove, edit, or reject
-comments, commits, code, wiki edits, issues, and other contributions that are not
-aligned to this Code of Conduct. By adopting this Code of Conduct, project maintainers
-commit themselves to fairly and consistently applying these principles to every aspect
-of managing this project. Project maintainers who do not follow or enforce the Code of
-Conduct may be permanently removed from the project team.
-
-This code of conduct applies both within project spaces and in public spaces
-when an individual is representing the project or its community.
-
-Instances of abusive, harassing, or otherwise unacceptable behavior may be reported by contacting a Kubernetes maintainer, Sarah Novotny <sarahnovotny@google.com>, and/or Dan Kohn <dan@linuxfoundation.org>.
-
-This Code of Conduct is adapted from the Contributor Covenant
-(http://contributor-covenant.org), version 1.2.0, available at
-http://contributor-covenant.org/version/1/2/0/
-
-### Kubernetes Events Code of Conduct
-
-Kubernetes events are working conferences intended for professional networking and collaboration in the
-Kubernetes community. Attendees are expected to behave according to professional standards and in accordance
-with their employer's policies on appropriate workplace behavior.
-
-While at Kubernetes events or related social networking opportunities, attendees should not engage in
-discriminatory or offensive speech or actions regarding gender, sexuality, race, or religion. Speakers should
-be especially aware of these concerns.
-
-The Kubernetes team does not condone any statements by speakers contrary to these standards. The Kubernetes
-team reserves the right to deny entrance and/or eject from an event (without refund) any individual found to
-be engaging in discriminatory or offensive speech or actions.
-
-Please bring any concerns to the immediate attention of Kubernetes event staff.
-
-[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/code-of-conduct.md?pixel)]()

contrib/azurerm/README.md

@@ -59,6 +59,6 @@ It will create the file ./inventory which can then be used with kubespray, e.g.:
 
 ```shell
 $ cd kubespray-root-dir
-$ ansible-playbook -i contrib/azurerm/inventory -u devops --become -e "@inventory/group_vars/all.yml" cluster.yml
+$ ansible-playbook -i contrib/azurerm/inventory -u devops --become -e "@inventory/sample/group_vars/all.yml" cluster.yml
 ```

contrib/azurerm (inventory template)

@@ -1,6 +1,6 @@
 {% for vm in vm_ip_list %}
-{% if not use_bastion or vm.virtualMachinename == 'bastion' %}
+{% if not use_bastion or vm.virtualMachine.name == 'bastion' %}
 {{ vm.virtualMachine.name }} ansible_ssh_host={{ vm.virtualMachine.network.publicIpAddresses[0].ipAddress }} ip={{ vm.virtualMachine.network.privateIpAddresses[0] }}
 {% else %}
 {{ vm.virtualMachine.name }} ansible_ssh_host={{ vm.virtualMachine.network.privateIpAddresses[0] }}

View File

@@ -54,7 +54,7 @@ def get_var_as_bool(name, default):
# Configurable as shell vars start # Configurable as shell vars start
CONFIG_FILE = os.environ.get("CONFIG_FILE", "./inventory.cfg") CONFIG_FILE = os.environ.get("CONFIG_FILE", "./inventory/sample/hosts.ini")
# Reconfigures cluster distribution at scale # Reconfigures cluster distribution at scale
SCALE_THRESHOLD = int(os.environ.get("SCALE_THRESHOLD", 50)) SCALE_THRESHOLD = int(os.environ.get("SCALE_THRESHOLD", 50))
MASSIVE_SCALE_THRESHOLD = int(os.environ.get("SCALE_THRESHOLD", 200)) MASSIVE_SCALE_THRESHOLD = int(os.environ.get("SCALE_THRESHOLD", 200))
@@ -318,7 +318,7 @@ Delete a host by id: inventory.py -node1
Configurable env vars: Configurable env vars:
DEBUG Enable debug printing. Default: True DEBUG Enable debug printing. Default: True
CONFIG_FILE File to write config to Default: ./inventory.cfg CONFIG_FILE File to write config to Default: ./inventory/sample/hosts.ini
HOST_PREFIX Host prefix for generated hosts. Default: node HOST_PREFIX Host prefix for generated hosts. Default: node
SCALE_THRESHOLD Separate ETCD role if # of nodes >= 50 SCALE_THRESHOLD Separate ETCD role if # of nodes >= 50
MASSIVE_SCALE_THRESHOLD Separate K8s master and ETCD if # of nodes >= 200 MASSIVE_SCALE_THRESHOLD Separate K8s master and ETCD if # of nodes >= 200
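For instance, a minimal sketch of driving the inventory builder with these variables; the script path `contrib/inventory_builder/inventory.py` and the node IPs here are illustrative assumptions:
```ShellSession
# Write a three-node inventory to the new default location
$ CONFIG_FILE=./inventory/sample/hosts.ini HOST_PREFIX=node \
  python2 contrib/inventory_builder/inventory.py 10.10.1.3 10.10.1.4 10.10.1.5
```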

View File

@@ -6,16 +6,16 @@ You can either deploy using Ansible on its own by supplying your own inventory f
In the same directory of this ReadMe file you should find a file named `inventory.example` which contains an example setup. Please note that, additionally to the Kubernetes nodes/masters, we define a set of machines for GlusterFS and we add them to the group `[gfs-cluster]`, which in turn is added to the larger `[network-storage]` group as a child group. In the same directory of this ReadMe file you should find a file named `inventory.example` which contains an example setup. Please note that, additionally to the Kubernetes nodes/masters, we define a set of machines for GlusterFS and we add them to the group `[gfs-cluster]`, which in turn is added to the larger `[network-storage]` group as a child group.
Change that file to reflect your local setup (adding more machines or removing them and setting the appropriate IP addresses), and save it to `inventory/k8s_gfs_inventory`. Make sure that the settings in `inventory/group_vars/all.yml` make sense with your deployment. Then change to the kubespray root folder and execute (assuming the machines all run Ubuntu): Change that file to reflect your local setup (adding more machines or removing them and setting the appropriate IP addresses), and save it to `inventory/sample/k8s_gfs_inventory`. Make sure that the settings in `inventory/sample/group_vars/all.yml` make sense with your deployment. Then change to the kubespray root folder and execute (assuming the machines all run Ubuntu):
``` ```
ansible-playbook -b --become-user=root -i inventory/k8s_gfs_inventory --user=ubuntu ./cluster.yml ansible-playbook -b --become-user=root -i inventory/sample/k8s_gfs_inventory --user=ubuntu ./cluster.yml
``` ```
This will provision your Kubernetes cluster. Then, to provision and configure the GlusterFS cluster, from the same directory execute: This will provision your Kubernetes cluster. Then, to provision and configure the GlusterFS cluster, from the same directory execute:
``` ```
ansible-playbook -b --become-user=root -i inventory/k8s_gfs_inventory --user=ubuntu ./contrib/network-storage/glusterfs/glusterfs.yml ansible-playbook -b --become-user=root -i inventory/sample/k8s_gfs_inventory --user=ubuntu ./contrib/network-storage/glusterfs/glusterfs.yml
``` ```
If your machines are not using Ubuntu, you need to change the `--user=ubuntu` to the correct user. Alternatively, if your Kubernetes machines are using one OS and your GlusterFS a different one, you can instead specify the `ansible_ssh_user=<correct-user>` variable in the inventory file that you just created, for each machine/VM: If your machines are not using Ubuntu, you need to change the `--user=ubuntu` to the correct user. Alternatively, if your Kubernetes machines are using one OS and your GlusterFS a different one, you can instead specify the `ansible_ssh_user=<correct-user>` variable in the inventory file that you just created, for each machine/VM:

View File

@@ -1,60 +0,0 @@
%global srcname ansible_kubespray
%{!?upstream_version: %global upstream_version %{version}%{?milestone}}
Name: ansible-kubespray
Version: XXX
Release: XXX
Summary: Ansible modules for installing Kubernetes
Group: System Environment/Libraries
License: ASL 2.0
Vendor: Kubespray <smainklh@gmail.com>
Url: https://github.com/kubernetes-incubator/kubespray
Source0: https://github.com/kubernetes-incubator/kubespray/archive/%{upstream_version}.tar.gz
BuildArch: noarch
BuildRequires: git
BuildRequires: python2-devel
BuildRequires: python-setuptools
BuildRequires: python-d2to1
BuildRequires: python-pbr
Requires: ansible
Requires: python-jinja2
Requires: python-netaddr
%description
Ansible-kubespray is a set of Ansible modules and playbooks for
installing a Kubernetes cluster. If you have questions, join us
on the https://slack.k8s.io, channel '#kubespray'.
%prep
%autosetup -n %{name}-%{upstream_version} -S git
%build
%{__python2} setup.py build
%install
export PBR_VERSION=%{version}
export SKIP_PIP_INSTALL=1
%{__python2} setup.py install --skip-build --root %{buildroot}
%files
%doc README.md
%doc inventory/inventory.example
%config /etc/kubespray/ansible.cfg
%config /etc/kubespray/inventory/group_vars/all.yml
%config /etc/kubespray/inventory/group_vars/k8s-cluster.yml
%license LICENSE
%{python2_sitelib}/%{srcname}-%{version}-py%{python2_version}.egg-info
/usr/local/share/kubespray/roles/
/usr/local/share/kubespray/playbooks/
%defattr(-,root,root)
%changelog

View File

@@ -0,0 +1,62 @@
%global srcname kubespray
%{!?upstream_version: %global upstream_version %{version}%{?milestone}}
Name: kubespray
Version: master
Release: %(git describe | sed -r 's/v(\S+-?)-(\S+)-(\S+)/\1.dev\2+\3/')
Summary: Ansible modules for installing Kubernetes
Group: System Environment/Libraries
License: ASL 2.0
Url: https://github.com/kubernetes-incubator/kubespray
Source0: https://github.com/kubernetes-incubator/kubespray/archive/%{upstream_version}.tar.gz#/%{name}-%{release}.tar.gz
BuildArch: noarch
BuildRequires: git
BuildRequires: python2
BuildRequires: python2-devel
BuildRequires: python2-setuptools
BuildRequires: python-d2to1
BuildRequires: python2-pbr
Requires: ansible >= 2.4.0
Requires: python-jinja2 >= 2.10
Requires: python-netaddr
Requires: python-pbr
%description
Ansible-kubespray is a set of Ansible modules and playbooks for
installing a Kubernetes cluster. If you have questions, join us
on the https://slack.k8s.io, channel '#kubespray'.
%prep
%autosetup -n %{name}-%{upstream_version} -S git
%build
export PBR_VERSION=%{release}
%{__python2} setup.py build bdist_rpm
%install
export PBR_VERSION=%{release}
export SKIP_PIP_INSTALL=1
%{__python2} setup.py install --skip-build --root %{buildroot} bdist_rpm
%files
%doc %{_docdir}/%{name}/README.md
%doc %{_docdir}/%{name}/inventory/sample/hosts.ini
%config %{_sysconfdir}/%{name}/ansible.cfg
%config %{_sysconfdir}/%{name}/inventory/sample/group_vars/all.yml
%config %{_sysconfdir}/%{name}/inventory/sample/group_vars/k8s-cluster.yml
%license %{_docdir}/%{name}/LICENSE
%{python2_sitelib}/%{srcname}-%{release}-py%{python2_version}.egg-info
%{_datarootdir}/%{name}/roles/
%{_datarootdir}/%{name}/playbooks/
%defattr(-,root,root)
%changelog
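For reference, a minimal sketch of building the package locally under the same assumptions this spec makes (python2 with setuptools and pbr available; the PBR_VERSION expression mirrors the Release tag above):
```ShellSession
$ git clone https://github.com/kubernetes-incubator/kubespray && cd kubespray
$ export PBR_VERSION=$(git describe | sed -r 's/v(\S+-?)-(\S+)-(\S+)/\1.dev\2+\3/')
$ python2 setup.py bdist_rpm
```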

View File

@@ -24,7 +24,7 @@ export AWS_DEFAULT_REGION="zzz"
``` ```
- Rename `contrib/terraform/aws/terraform.tfvars.example` to `terraform.tfvars` - Rename `contrib/terraform/aws/terraform.tfvars.example` to `terraform.tfvars`
- Update `contrib/terraform/aws/terraform.tfvars` with your data - Update `contrib/terraform/aws/terraform.tfvars` with your data. By default, the Terraform scripts use CoreOS as the base image. If you want to change this behaviour, see the note "Using other distributions than CoreOS" below.
- Allocate a new AWS Elastic IP. Use this for your `loadbalancer_apiserver_address` value (below) - Allocate a new AWS Elastic IP. Use this for your `loadbalancer_apiserver_address` value (below)
- Create an AWS EC2 SSH Key - Create an AWS EC2 SSH Key
- Run with `terraform apply --var-file="credentials.tfvars"` or `terraform apply` depending if you exported your AWS credentials - Run with `terraform apply --var-file="credentials.tfvars"` or `terraform apply` depending if you exported your AWS credentials
@@ -36,17 +36,72 @@ terraform apply -var-file=credentials.tfvars -var 'loadbalancer_apiserver_addres
- Terraform automatically creates an Ansible Inventory file called `hosts` with the created infrastructure in the directory `inventory` - Terraform automatically creates an Ansible Inventory file called `hosts` with the created infrastructure in the directory `inventory`
- Ansible will automatically generate an ssh config file for your bastion hosts. To make use of it, make sure you have a line in your `ansible.cfg` file that looks like the following: - Ansible will automatically generate an ssh config file for your bastion hosts. To connect to hosts over ssh through the bastion, use the generated ssh-bastion.conf; Ansible automatically detects the bastion and adjusts ssh_args.
```commandline ```commandline
ssh_args = -F ./ssh-bastion.conf -o ControlMaster=auto -o ControlPersist=30m ssh -F ./ssh-bastion.conf user@$ip
``` ```
- Once the infrastructure is created, you can run the kubespray playbooks and supply inventory/hosts with the `-i` flag. - Once the infrastructure is created, you can run the kubespray playbooks and supply inventory/hosts with the `-i` flag.
Example (this one assumes you are using CoreOS) Example (this one assumes you are using CoreOS)
```commandline ```commandline
ansible-playbook -i ./inventory/hosts ./cluster.yml -e ansible_ssh_user=core -e bootstrap_os=coreos -b --become-user=root --flush-cache ansible-playbook -i ./inventory/hosts ./cluster.yml -e ansible_ssh_user=core -e bootstrap_os=coreos -b --become-user=root --flush-cache
``` ```
***Using other distributions than CoreOS***
If you want to use a distribution other than CoreOS, you can modify the search filters of the 'data "aws_ami" "distro"' block in variables.tf.
For example, to use:
- Debian Jessie, replace 'data "aws_ami" "distro"' in variables.tf with
data "aws_ami" "distro" {
most_recent = true
filter {
name = "name"
values = ["debian-jessie-amd64-hvm-*"]
}
filter {
name = "virtualization-type"
values = ["hvm"]
}
owners = ["379101102735"]
}
- Ubuntu 16.04, replace 'data "aws_ami" "distro"' in variables.tf with
data "aws_ami" "distro" {
most_recent = true
filter {
name = "name"
values = ["ubuntu/images/hvm-ssd/ubuntu-xenial-16.04-amd64-*"]
}
filter {
name = "virtualization-type"
values = ["hvm"]
}
owners = ["099720109477"]
}
- Centos 7, replace 'data "aws_ami" "distro"' in variables.tf with
data "aws_ami" "distro" {
most_recent = true
filter {
name = "name"
values = ["dcos-centos7-*"]
}
filter {
name = "virtualization-type"
values = ["hvm"]
}
owners = ["688023202711"]
}
**Troubleshooting** **Troubleshooting**

View File

@@ -8,6 +8,8 @@ provider "aws" {
region = "${var.AWS_DEFAULT_REGION}" region = "${var.AWS_DEFAULT_REGION}"
} }
data "aws_availability_zones" "available" {}
/* /*
* Calling modules who create the initial AWS VPC / AWS ELB * Calling modules who create the initial AWS VPC / AWS ELB
* and AWS IAM Roles for Kubernetes Deployment * and AWS IAM Roles for Kubernetes Deployment
@@ -18,7 +20,7 @@ module "aws-vpc" {
aws_cluster_name = "${var.aws_cluster_name}" aws_cluster_name = "${var.aws_cluster_name}"
aws_vpc_cidr_block = "${var.aws_vpc_cidr_block}" aws_vpc_cidr_block = "${var.aws_vpc_cidr_block}"
aws_avail_zones="${var.aws_avail_zones}" aws_avail_zones="${slice(data.aws_availability_zones.available.names,0,2)}"
aws_cidr_subnets_private="${var.aws_cidr_subnets_private}" aws_cidr_subnets_private="${var.aws_cidr_subnets_private}"
aws_cidr_subnets_public="${var.aws_cidr_subnets_public}" aws_cidr_subnets_public="${var.aws_cidr_subnets_public}"
default_tags="${var.default_tags}" default_tags="${var.default_tags}"
@@ -31,7 +33,7 @@ module "aws-elb" {
aws_cluster_name="${var.aws_cluster_name}" aws_cluster_name="${var.aws_cluster_name}"
aws_vpc_id="${module.aws-vpc.aws_vpc_id}" aws_vpc_id="${module.aws-vpc.aws_vpc_id}"
aws_avail_zones="${var.aws_avail_zones}" aws_avail_zones="${slice(data.aws_availability_zones.available.names,0,2)}"
aws_subnet_ids_public="${module.aws-vpc.aws_subnet_ids_public}" aws_subnet_ids_public="${module.aws-vpc.aws_subnet_ids_public}"
aws_elb_api_port = "${var.aws_elb_api_port}" aws_elb_api_port = "${var.aws_elb_api_port}"
k8s_secure_api_port = "${var.k8s_secure_api_port}" k8s_secure_api_port = "${var.k8s_secure_api_port}"
@@ -49,12 +51,13 @@ module "aws-iam" {
* Create Bastion Instances in AWS * Create Bastion Instances in AWS
* *
*/ */
resource "aws_instance" "bastion-server" { resource "aws_instance" "bastion-server" {
ami = "${var.aws_bastion_ami}" ami = "${data.aws_ami.distro.id}"
instance_type = "${var.aws_bastion_size}" instance_type = "${var.aws_bastion_size}"
count = "${length(var.aws_cidr_subnets_public)}" count = "${length(var.aws_cidr_subnets_public)}"
associate_public_ip_address = true associate_public_ip_address = true
availability_zone = "${element(var.aws_avail_zones,count.index)}" availability_zone = "${element(slice(data.aws_availability_zones.available.names,0,2),count.index)}"
subnet_id = "${element(module.aws-vpc.aws_subnet_ids_public,count.index)}" subnet_id = "${element(module.aws-vpc.aws_subnet_ids_public,count.index)}"
@@ -76,13 +79,13 @@ resource "aws_instance" "bastion-server" {
*/ */
resource "aws_instance" "k8s-master" { resource "aws_instance" "k8s-master" {
ami = "${var.aws_cluster_ami}" ami = "${data.aws_ami.distro.id}"
instance_type = "${var.aws_kube_master_size}" instance_type = "${var.aws_kube_master_size}"
count = "${var.aws_kube_master_num}" count = "${var.aws_kube_master_num}"
availability_zone = "${element(var.aws_avail_zones,count.index)}" availability_zone = "${element(slice(data.aws_availability_zones.available.names,0,2),count.index)}"
subnet_id = "${element(module.aws-vpc.aws_subnet_ids_private,count.index)}" subnet_id = "${element(module.aws-vpc.aws_subnet_ids_private,count.index)}"
@@ -95,7 +98,7 @@ resource "aws_instance" "k8s-master" {
tags = "${merge(var.default_tags, map( tags = "${merge(var.default_tags, map(
"Name", "kubernetes-${var.aws_cluster_name}-master${count.index}", "Name", "kubernetes-${var.aws_cluster_name}-master${count.index}",
"Cluster", "${var.aws_cluster_name}", "kubernetes.io/cluster/${var.aws_cluster_name}", "member",
"Role", "master" "Role", "master"
))}" ))}"
} }
@@ -108,13 +111,13 @@ resource "aws_elb_attachment" "attach_master_nodes" {
resource "aws_instance" "k8s-etcd" { resource "aws_instance" "k8s-etcd" {
ami = "${var.aws_cluster_ami}" ami = "${data.aws_ami.distro.id}"
instance_type = "${var.aws_etcd_size}" instance_type = "${var.aws_etcd_size}"
count = "${var.aws_etcd_num}" count = "${var.aws_etcd_num}"
availability_zone = "${element(var.aws_avail_zones,count.index)}" availability_zone = "${element(slice(data.aws_availability_zones.available.names,0,2),count.index)}"
subnet_id = "${element(module.aws-vpc.aws_subnet_ids_private,count.index)}" subnet_id = "${element(module.aws-vpc.aws_subnet_ids_private,count.index)}"
@@ -124,7 +127,7 @@ resource "aws_instance" "k8s-etcd" {
tags = "${merge(var.default_tags, map( tags = "${merge(var.default_tags, map(
"Name", "kubernetes-${var.aws_cluster_name}-etcd${count.index}", "Name", "kubernetes-${var.aws_cluster_name}-etcd${count.index}",
"Cluster", "${var.aws_cluster_name}", "kubernetes.io/cluster/${var.aws_cluster_name}", "member",
"Role", "etcd" "Role", "etcd"
))}" ))}"
@@ -132,12 +135,12 @@ resource "aws_instance" "k8s-etcd" {
resource "aws_instance" "k8s-worker" { resource "aws_instance" "k8s-worker" {
ami = "${var.aws_cluster_ami}" ami = "${data.aws_ami.distro.id}"
instance_type = "${var.aws_kube_worker_size}" instance_type = "${var.aws_kube_worker_size}"
count = "${var.aws_kube_worker_num}" count = "${var.aws_kube_worker_num}"
availability_zone = "${element(var.aws_avail_zones,count.index)}" availability_zone = "${element(slice(data.aws_availability_zones.available.names,0,2),count.index)}"
subnet_id = "${element(module.aws-vpc.aws_subnet_ids_private,count.index)}" subnet_id = "${element(module.aws-vpc.aws_subnet_ids_private,count.index)}"
vpc_security_group_ids = [ "${module.aws-vpc.aws_security_group}" ] vpc_security_group_ids = [ "${module.aws-vpc.aws_security_group}" ]
@@ -148,7 +151,7 @@ resource "aws_instance" "k8s-worker" {
tags = "${merge(var.default_tags, map( tags = "${merge(var.default_tags, map(
"Name", "kubernetes-${var.aws_cluster_name}-worker${count.index}", "Name", "kubernetes-${var.aws_cluster_name}-worker${count.index}",
"Cluster", "${var.aws_cluster_name}", "kubernetes.io/cluster/${var.aws_cluster_name}", "member",
"Role", "worker" "Role", "worker"
))}" ))}"
@@ -162,7 +165,7 @@ resource "aws_instance" "k8s-worker" {
*/ */
data "template_file" "inventory" { data "template_file" "inventory" {
template = "${file("${path.module}/templates/inventory.tpl")}" template = "${file("${path.module}/templates/inventory.tpl")}"
vars { vars {
public_ip_address_bastion = "${join("\n",formatlist("bastion ansible_host=%s" , aws_instance.bastion-server.*.public_ip))}" public_ip_address_bastion = "${join("\n",formatlist("bastion ansible_host=%s" , aws_instance.bastion-server.*.public_ip))}"
connection_strings_master = "${join("\n",formatlist("%s ansible_host=%s",aws_instance.k8s-master.*.tags.Name, aws_instance.k8s-master.*.private_ip))}" connection_strings_master = "${join("\n",formatlist("%s ansible_host=%s",aws_instance.k8s-master.*.tags.Name, aws_instance.k8s-master.*.private_ip))}"
@@ -172,8 +175,6 @@ data "template_file" "inventory" {
list_node = "${join("\n",aws_instance.k8s-worker.*.tags.Name)}" list_node = "${join("\n",aws_instance.k8s-worker.*.tags.Name)}"
list_etcd = "${join("\n",aws_instance.k8s-etcd.*.tags.Name)}" list_etcd = "${join("\n",aws_instance.k8s-etcd.*.tags.Name)}"
elb_api_fqdn = "apiserver_loadbalancer_domain_name=\"${module.aws-elb.aws_elb_api_fqdn}\"" elb_api_fqdn = "apiserver_loadbalancer_domain_name=\"${module.aws-elb.aws_elb_api_fqdn}\""
elb_api_port = "loadbalancer_apiserver.port=${var.aws_elb_api_port}"
loadbalancer_apiserver_address = "loadbalancer_apiserver.address=${var.loadbalancer_apiserver_address}"
} }
} }

View File

@@ -43,7 +43,7 @@ resource "aws_elb" "aws-elb-api" {
healthy_threshold = 2 healthy_threshold = 2
unhealthy_threshold = 2 unhealthy_threshold = 2
timeout = 3 timeout = 3
target = "HTTP:8080/" target = "TCP:${var.k8s_secure_api_port}"
interval = 30 interval = 30
} }

View File

@@ -34,7 +34,8 @@ resource "aws_subnet" "cluster-vpc-subnets-public" {
cidr_block = "${element(var.aws_cidr_subnets_public, count.index)}" cidr_block = "${element(var.aws_cidr_subnets_public, count.index)}"
tags = "${merge(var.default_tags, map( tags = "${merge(var.default_tags, map(
"Name", "kubernetes-${var.aws_cluster_name}-${element(var.aws_avail_zones, count.index)}-public" "Name", "kubernetes-${var.aws_cluster_name}-${element(var.aws_avail_zones, count.index)}-public",
"kubernetes.io/cluster/${var.aws_cluster_name}", "member"
))}" ))}"
} }

View File

@@ -16,6 +16,6 @@ output "aws_security_group" {
} }
output "default_tags" { output "default_tags" {
value = "${default_tags}" value = "${var.default_tags}"
} }

View File

@@ -24,5 +24,5 @@ output "inventory" {
} }
output "default_tags" { output "default_tags" {
value = "${default_tags}" value = "${var.default_tags}"
} }

View File

@@ -24,6 +24,4 @@ kube-master
[k8s-cluster:vars] [k8s-cluster:vars]
${elb_api_fqdn} ${elb_api_fqdn}
${elb_api_port}
${loadbalancer_apiserver_address}

View File

@@ -5,10 +5,8 @@ aws_cluster_name = "devtest"
aws_vpc_cidr_block = "10.250.192.0/18" aws_vpc_cidr_block = "10.250.192.0/18"
aws_cidr_subnets_private = ["10.250.192.0/20","10.250.208.0/20"] aws_cidr_subnets_private = ["10.250.192.0/20","10.250.208.0/20"]
aws_cidr_subnets_public = ["10.250.224.0/20","10.250.240.0/20"] aws_cidr_subnets_public = ["10.250.224.0/20","10.250.240.0/20"]
aws_avail_zones = ["us-west-2a","us-west-2b"]
#Bastion Host #Bastion Host
aws_bastion_ami = "ami-db56b9a3"
aws_bastion_size = "t2.medium" aws_bastion_size = "t2.medium"
@@ -23,8 +21,6 @@ aws_etcd_size = "t2.medium"
aws_kube_worker_num = 4 aws_kube_worker_num = 4
aws_kube_worker_size = "t2.medium" aws_kube_worker_size = "t2.medium"
aws_cluster_ami = "ami-db56b9a3"
#Settings AWS ELB #Settings AWS ELB
aws_elb_api_port = 6443 aws_elb_api_port = 6443

View File

@@ -20,6 +20,21 @@ variable "aws_cluster_name" {
description = "Name of AWS Cluster" description = "Name of AWS Cluster"
} }
data "aws_ami" "distro" {
most_recent = true
filter {
name = "name"
values = ["CoreOS-stable-*"]
}
filter {
name = "virtualization-type"
values = ["hvm"]
}
owners = ["595879546273"] #CoreOS
}
//AWS VPC Variables //AWS VPC Variables
@@ -27,11 +42,6 @@ variable "aws_vpc_cidr_block" {
description = "CIDR Block for VPC" description = "CIDR Block for VPC"
} }
variable "aws_avail_zones" {
description = "Availability Zones Used"
type = "list"
}
variable "aws_cidr_subnets_private" { variable "aws_cidr_subnets_private" {
description = "CIDR Blocks for private subnets in Availability Zones" description = "CIDR Blocks for private subnets in Availability Zones"
type = "list" type = "list"
@@ -44,10 +54,6 @@ variable "aws_cidr_subnets_public" {
//AWS EC2 Settings //AWS EC2 Settings
variable "aws_bastion_ami" {
description = "AMI ID for Bastion Host in chosen AWS Region"
}
variable "aws_bastion_size" { variable "aws_bastion_size" {
description = "EC2 Instance Size of Bastion Host" description = "EC2 Instance Size of Bastion Host"
} }
@@ -81,9 +87,6 @@ variable "aws_kube_worker_size" {
description = "Instance size of Kubernetes Worker Nodes" description = "Instance size of Kubernetes Worker Nodes"
} }
variable "aws_cluster_ami" {
description = "AMI ID for Kubernetes Cluster"
}
/* /*
* AWS ELB Settings * AWS ELB Settings
* *
@@ -96,10 +99,6 @@ variable "k8s_secure_api_port" {
description = "Secure Port of K8S API Server" description = "Secure Port of K8S API Server"
} }
variable "loadbalancer_apiserver_address" {
description= "Bind Address for ELB of K8s API Server"
}
variable "default_tags" { variable "default_tags" {
description = "Default tags for all resources" description = "Default tags for all resources"
type = "map" type = "map"

View File

@@ -0,0 +1,4 @@
.terraform
*.tfvars
*.tfstate
*.tfstate.backup

View File

@@ -5,55 +5,287 @@ Openstack.
## Status ## Status
This will install a Kubernetes cluster on an Openstack Cloud. It has been tested on a This will install a Kubernetes cluster on an Openstack Cloud. It should work on
OpenStack Cloud provided by [BlueBox](https://www.blueboxcloud.com/) and on OpenStack at [EMBL-EBI's](http://www.ebi.ac.uk/) [EMBASSY Cloud](http://www.embassycloud.org/). This should work on most modern installs of OpenStack that support the basic most modern installs of OpenStack that support the basic services.
services.
There are some assumptions made to try and ensure it will work on your openstack cluster. ## Approach
The terraform configuration inspects variables found in
[variables.tf](variables.tf) to create resources in your OpenStack cluster.
There is a [python script](../terraform.py) that reads the generated`.tfstate`
file to generate a dynamic inventory that is consumed by the main ansible script
to actually install kubernetes and stand up the cluster.
* floating-ips are used for access, but you can have masters and nodes that don't use floating-ips if needed. You need currently at least 1 floating ip, which needs to be used on a master. If using more than one, at least one should be on a master for bastions to work fine. ### Networking
* you already have a suitable OS image in glance The configuration includes creating a private subnet with a router to the
* you already have both an internal network and a floating-ip pool created external net. It will allocate floating IPs from a pool and assign them to the
* you have security-groups enabled hosts where that makes sense. You have the option of creating bastion hosts
inside the private subnet to access the nodes there. Alternatively, a node with
a floating IP can be used as a jump host to nodes without.
### Kubernetes Nodes
You can create many different kubernetes topologies by setting the number of
different classes of hosts. For each class there are options for allocating
floating IP addresses or not.
- Master nodes with etcd
- Master nodes without etcd
- Standalone etcd hosts
- Kubernetes worker nodes
Note that the Ansible script will report an invalid configuration if you wind up
with an even number of etcd instances, since etcd needs an odd number of members
to maintain quorum.
### GlusterFS
The Terraform configuration supports provisioning of an optional GlusterFS
shared file system based on a separate set of VMs. To enable this, you need to
specify:
- the number of Gluster hosts (minimum 2)
- Size of the non-ephemeral volumes to be attached to store the GlusterFS bricks
- Other properties related to provisioning the hosts
Even if you are using Container Linux by CoreOS for your cluster, you will still
need the GlusterFS VMs to be based on either Debian or RedHat images.
Container Linux by CoreOS cannot serve GlusterFS, but can connect to it through
binaries available on hyperkube v1.4.3_coreos.0 or higher.
## Requirements ## Requirements
- [Install Terraform](https://www.terraform.io/intro/getting-started/install.html) - [Install Terraform](https://www.terraform.io/intro/getting-started/install.html)
- [Install Ansible](http://docs.ansible.com/ansible/latest/intro_installation.html)
- you already have a suitable OS image in Glance
- you already have a floating IP pool created
- you have security groups enabled
- you have a pair of keys generated that can be used to secure the new hosts
## Module Architecture
The configuration is divided into three modules:
- Network
- IPs
- Compute
The main reason for splitting the configuration up in this way is to easily
accommodate situations where floating IPs are limited by quota, or where you have
external references to the floating IPs (e.g. DNS) that would otherwise have
to be updated.
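As a sketch of what this split enables, you could stand up just the network module first using Terraform's standard `-target` flag; the module names here match the `module` blocks declared in `kubespray.tf`:
```ShellSession
$ cd inventory/$CLUSTER
$ terraform apply -target=module.network -var-file=cluster.tf ../../contrib/terraform/openstack
```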
You can force your existing IPs by modifying the compute variables in
`kubespray.tf` as follows:
```
k8s_master_fips = ["151.101.129.67"]
k8s_node_fips = ["151.101.129.68"]
```
## Terraform ## Terraform
Terraform will be used to provision all of the OpenStack resources with base software as appropriate.
Terraform will be used to provision all of the OpenStack resources. It is also used to deploy and provision the software ### Configuration
requirements.
### Prep #### Inventory files
#### OpenStack Create an inventory directory for your cluster by copying the existing sample and linking the `hosts` script (used to build the inventory based on Terraform state):
Ensure your OpenStack **Identity v2** credentials are loaded in environment variables. This can be done by downloading a credentials .rc file from your OpenStack dashboard and sourcing it: ```ShellSession
$ cp -LRp contrib/terraform/openstack/sample-inventory inventory/$CLUSTER
``` $ cd inventory/$CLUSTER
$ source ~/.stackrc $ ln -s ../../contrib/terraform/openstack/hosts
``` ```
> You must set **OS_REGION_NAME** and **OS_TENANT_ID** environment variables not required by openstack CLI This will be the base for subsequent Terraform commands.
You will need two networks before installing, an internal network and #### OpenStack access and credentials
an external (floating IP Pool) network. The internet network can be shared as
we use security groups to provide network segregation. Due to the many
differences between OpenStack installs the Terraform does not attempt to create
these for you.
By default Terraform will expect that your networks are called `internal` and No provider variables are hardcoded inside `variables.tf` because Terraform
`external`. You can change this by altering the Terraform variables `network_name` and `floatingip_pool`. This can be done on a new variables file or through environment variables. supports various authentication methods for OpenStack: the older script and
environment method (using `openrc`) as well as a newer declarative method, and
different OpenStack environments may support Identity API version 2 or 3.
A full list of variables you can change can be found at [variables.tf](variables.tf). These are examples and may vary depending on your OpenStack cloud provider;
for an exhaustive list of ways to authenticate on OpenStack with Terraform,
please read the [OpenStack provider documentation](https://www.terraform.io/docs/providers/openstack/).
All OpenStack resources will use the Terraform variable `cluster_name` ( ##### Declarative method (recommended)
default `example`) in their name to make it easier to track. For example the
first compute resource will be named `example-kubernetes-1`.
#### Terraform The recommended authentication method is to describe credentials in a YAML file `clouds.yaml` that can be stored in:
* the current directory
* `~/.config/openstack`
* `/etc/openstack`
`clouds.yaml`:
```
clouds:
mycloud:
auth:
auth_url: https://openstack:5000/v3
username: "username"
project_name: "projectname"
project_id: projectid
user_domain_name: "Default"
password: "password"
region_name: "RegionOne"
interface: "public"
identity_api_version: 3
```
If you have multiple clouds defined in your `clouds.yaml` file you can choose
the one you want to use with the environment variable `OS_CLOUD`:
```
export OS_CLOUD=mycloud
```
##### Openrc method (deprecated)
When using classic environment variables, Terraform uses default `OS_*`
environment variables. A script suitable for your environment may be available
from Horizon under *Project* -> *Compute* -> *Access & Security* -> *API Access*.
With identity v2:
```
source openrc
env | grep OS
OS_AUTH_URL=https://openstack:5000/v2.0
OS_PROJECT_ID=projectid
OS_PROJECT_NAME=projectname
OS_USERNAME=username
OS_PASSWORD=password
OS_REGION_NAME=RegionOne
OS_INTERFACE=public
OS_IDENTITY_API_VERSION=2
```
With identity v3:
```
source openrc
env | grep OS
OS_AUTH_URL=https://openstack:5000/v3
OS_PROJECT_ID=projectid
OS_PROJECT_NAME=projectname
OS_PROJECT_DOMAIN_ID=default
OS_USERNAME=username
OS_PASSWORD=password
OS_REGION_NAME=RegionOne
OS_INTERFACE=public
OS_IDENTITY_API_VERSION=3
OS_USER_DOMAIN_NAME=Default
```
Terraform does not support a mix of DomainName and DomainID; choose one or the
other:
```
* provider.openstack: You must provide exactly one of DomainID or DomainName to authenticate by Username
```
```
unset OS_USER_DOMAIN_NAME
export OS_USER_DOMAIN_ID=default
or
unset OS_PROJECT_DOMAIN_ID
export OS_PROJECT_DOMAIN_NAME=Default
```
#### Cluster variables
The construction of the cluster is driven by values found in
[variables.tf](variables.tf).
For your cluster, edit `inventory/$CLUSTER/cluster.tf`.
|Variable | Description |
|---------|-------------|
|`cluster_name` | All OpenStack resources will use the Terraform variable `cluster_name` (default `example`) in their name to make it easier to track. For example, the first compute resource will be named `example-kubernetes-1`. |
|`network_name` | The name to be given to the internal network that will be generated |
|`dns_nameservers`| An array of DNS name servers to be used by hosts in the internal subnet. |
|`floatingip_pool` | Name of the pool from which floating IPs will be allocated |
|`external_net` | UUID of the external network that will be routed to |
|`flavor_k8s_master`,`flavor_k8s_node`,`flavor_etcd`, `flavor_bastion`,`flavor_gfs_node` | Flavor depends on your OpenStack installation; you can list available flavor IDs with `nova flavor-list` |
|`image`,`image_gfs` | Name of the image to use in provisioning the compute resources. Should already be loaded into Glance. |
|`ssh_user`,`ssh_user_gfs` | The username to ssh into the image with. This usually depends on the image you have selected |
|`public_key_path` | Path on your local workstation to the public key file you wish to use in creating the key pairs |
|`number_of_k8s_masters`, `number_of_k8s_masters_no_floating_ip` | Number of nodes that serve as both master and etcd. These can be provisioned with or without floating IP addresses|
|`number_of_k8s_masters_no_etcd`, `number_of_k8s_masters_no_floating_ip_no_etcd` | Number of nodes that serve as just master with no etcd. These can be provisioned with or without floating IP addresses |
|`number_of_etcd` | Number of pure etcd nodes |
|`number_of_k8s_nodes`, `number_of_k8s_nodes_no_floating_ip` | Kubernetes worker nodes. These can be provisioned with or without floating ip addresses. |
|`number_of_bastions` | Number of bastion hosts to create. Scripts assume this is really just zero or one |
|`number_of_gfs_nodes_no_floating_ip` | Number of gluster servers to provision. |
| `gfs_volume_size_in_gb` | Size of the non-ephemeral volumes to be attached to store the GlusterFS bricks |
#### Terraform state files
In the cluster's inventory folder, the following files might be created (either by Terraform
or manually). To prevent you from pushing them accidentally, they are listed in a
`.gitignore` file in the `terraform/openstack` directory:
* `.terraform`
* `.tfvars`
* `.tfstate`
* `.tfstate.backup`
You can still track them manually if you want to (e.g. with `git add -f`).
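If you want to double-check which patterns are ignored, you can simply inspect that file:
```ShellSession
$ cat ../../contrib/terraform/openstack/.gitignore
.terraform
*.tfvars
*.tfstate
*.tfstate.backup
```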
### Initialization
Before Terraform can operate on your cluster you need to install the required
plugins. This is accomplished as follows:
```ShellSession
$ cd inventory/$CLUSTER
$ terraform init ../../contrib/terraform/openstack
```
This should finish fairly quickly, telling you that Terraform has successfully initialized and loaded the necessary modules.
### Provisioning cluster
You can apply the Terraform configuration to your cluster with the following command
issued from your cluster's inventory directory (`inventory/$CLUSTER`):
```ShellSession
$ terraform apply -var-file=cluster.tf ../../contrib/terraform/openstack
```
If you chose to create a bastion host, this script will create
`contrib/terraform/openstack/k8s-cluster.yml` with an ssh command that lets Ansible
access your machines by tunneling through the bastion's IP address. If you want to
handle the ssh tunneling to these machines manually, delete or move that file;
otherwise leave it in place and Ansible will pick it up automatically.
### Destroying cluster
You can destroy your new cluster with the following command issued from the cluster's inventory directory:
```ShellSession
$ terraform destroy -var-file=cluster.tf ../../contrib/terraform/openstack
```
If you've started the Ansible run, it may also be a good idea to do some manual cleanup:
* remove SSH keys from the destroyed cluster from your `~/.ssh/known_hosts` file (see the sketch after this list)
* clean up any temporary cache files: `rm /tmp/$CLUSTER-*`
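A minimal sketch of that cleanup, where `MASTER_IP` stands in for each floating IP you had connected to:
```ShellSession
$ ssh-keygen -R MASTER_IP   # repeat for every destroyed host you had connected to
$ rm /tmp/$CLUSTER-*
```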
### Debugging
You can enable debugging output from Terraform by setting
`OS_DEBUG` to 1 and `TF_LOG` to `DEBUG` before running the Terraform command.
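For example, from the cluster's inventory directory:
```ShellSession
$ export OS_DEBUG=1
$ export TF_LOG=DEBUG
$ terraform apply -var-file=cluster.tf ../../contrib/terraform/openstack
```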
### Terraform output
Terraform can output values that are useful for configuring Neutron/Octavia LBaaS or Cinder persistent volume provisioning as part of your Kubernetes deployment:
- `private_subnet_id`: the subnet your instances run in; use it for `openstack_lbaas_subnet_id`
- `floating_network_id`: the network ID from which floating IPs are provisioned; use it for `openstack_lbaas_floating_network_id`
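These values can be read back at any time from the cluster's inventory directory with `terraform output`:
```ShellSession
$ cd inventory/$CLUSTER
$ terraform output private_subnet_id
$ terraform output floating_network_id
```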
## Ansible
### Node access
#### SSH
Ensure your local ssh-agent is running and your ssh key has been added. This Ensure your local ssh-agent is running and your ssh key has been added. This
step is required by the terraform provisioner: step is required by the terraform provisioner:
@@ -63,143 +295,22 @@ $ eval $(ssh-agent -s)
$ ssh-add ~/.ssh/id_rsa $ ssh-add ~/.ssh/id_rsa
``` ```
If you have deployed and destroyed a previous iteration of your cluster, you will need to clear out any stale keys from your SSH "known hosts" file (`~/.ssh/known_hosts`).
Ensure that you have your Openstack credentials loaded into Terraform #### Bastion host
environment variables. Likely via a command similar to:
If you are not using a bastion host, but not all of your nodes have floating IPs, create a file `inventory/$CLUSTER/group_vars/no-floating.yml` with the following content. Use one of your nodes with a floating IP (this should have been output at the end of the Terraform step) and the appropriate user for that OS, or if you have another jump host, use that.
``` ```
$ echo Setting up Terraform creds && \ ansible_ssh_common_args: '-o ProxyCommand="ssh -o StrictHostKeyChecking=no -W %h:%p -q USER@MASTER_IP"'
export TF_VAR_username=${OS_USERNAME} && \
export TF_VAR_password=${OS_PASSWORD} && \
export TF_VAR_tenant=${OS_TENANT_NAME} && \
export TF_VAR_auth_url=${OS_AUTH_URL}
``` ```
##### Alternative: etcd inside masters #### Test access
If you want to provision master or node VMs that don't use floating ips and where etcd is inside masters, write on a `my-terraform-vars.tfvars` file, for example: Make sure you can connect to the hosts. Note that Container Linux by CoreOS will have a state `FAILED` due to Python not being present. This is okay, because Python will be installed during bootstrapping, so long as the hosts are not `UNREACHABLE`.
``` ```
number_of_k8s_masters = "1" $ ansible -i inventory/$CLUSTER/hosts -m ping all
number_of_k8s_masters_no_floating_ip = "2"
number_of_k8s_nodes_no_floating_ip = "1"
number_of_k8s_nodes = "0"
```
This will provision one VM as master using a floating ip, two additional masters using no floating ips (these will only have private ips inside your tenancy) and one VM as node, again without a floating ip.
##### Alternative: etcd on separate machines
If you want to provision master or node VMs that don't use floating ips and where **etcd is on separate nodes from Kubernetes masters**, write on a `my-terraform-vars.tfvars` file, for example:
```
number_of_etcd = "3"
number_of_k8s_masters = "0"
number_of_k8s_masters_no_etcd = "1"
number_of_k8s_masters_no_floating_ip = "0"
number_of_k8s_masters_no_floating_ip_no_etcd = "2"
number_of_k8s_nodes_no_floating_ip = "1"
number_of_k8s_nodes = "2"
flavor_k8s_node = "desired-flavor-id"
flavor_k8s_master = "desired-flavor-id"
flavor_etcd = "desired-flavor-id"
```
This will provision one VM as master using a floating ip, two additional masters using no floating ips (these will only have private ips inside your tenancy), two VMs as nodes with floating ips, one VM as node without floating ip and three VMs for etcd.
##### Alternative: add GlusterFS
Additionally, now the terraform based installation supports provisioning of a GlusterFS shared file system based on a separate set of VMs, running either a Debian or RedHat based set of VMs. To enable this, you need to add to your `my-terraform-vars.tfvars` the following variables:
```
# Flavour depends on your openstack installation, you can get available flavours through `nova flavor-list`
flavor_gfs_node = "af659280-5b8a-42b5-8865-a703775911da"
# This is the name of an image already available in your openstack installation.
image_gfs = "Ubuntu 15.10"
number_of_gfs_nodes_no_floating_ip = "3"
# This is the size of the non-ephemeral volumes to be attached to store the GlusterFS bricks.
gfs_volume_size_in_gb = "50"
# The user needed for the image choosen for GlusterFS.
ssh_user_gfs = "ubuntu"
```
If these variables are provided, this will give rise to a new ansible group called `gfs-cluster`, for which we have added ansible roles to execute in the ansible provisioning step. If you are using Container Linux by CoreOS, these GlusterFS VM necessarily need to be either Debian or RedHat based VMs, Container Linux by CoreOS cannot serve GlusterFS, but can connect to it through binaries available on hyperkube v1.4.3_coreos.0 or higher.
GlusterFS is not deployed by the standard `cluster.yml` playbook, see the [glusterfs playbook documentation](../../network-storage/glusterfs/README.md) for instructions.
# Configure Cluster variables
Edit `inventory/group_vars/all.yml`:
- Set variable **bootstrap_os** according selected image
```
# Valid bootstrap options (required): ubuntu, coreos, centos, none
bootstrap_os: coreos
```
- **bin_dir**
```
# Directory where the binaries will be installed
# Default:
# bin_dir: /usr/local/bin
# For Container Linux by CoreOS:
bin_dir: /opt/bin
```
- and **cloud_provider**
```
cloud_provider: openstack
```
Edit `inventory/group_vars/k8s-cluster.yml`:
- Set variable **kube_network_plugin** according selected networking
```
# Choose network plugin (calico, weave or flannel)
# Can also be set to 'cloud', which lets the cloud provider setup appropriate routing
kube_network_plugin: flannel
```
> flannel works out-of-the-box
> calico requires allowing service's and pod's subnets on according OpenStack Neutron ports
- Set variable **resolvconf_mode**
```
# Can be docker_dns, host_resolvconf or none
# Default:
# resolvconf_mode: docker_dns
# For Container Linux by CoreOS:
resolvconf_mode: host_resolvconf
```
For calico configure OpenStack Neutron ports: [OpenStack](/docs/openstack.md)
# Provision a Kubernetes Cluster on OpenStack
If not using a tfvars file for your setup, then execute:
```
terraform apply -state=contrib/terraform/openstack/terraform.tfstate contrib/terraform/openstack
openstack_compute_secgroup_v2.k8s_master: Creating...
description: "" => "example - Kubernetes Master"
name: "" => "example-k8s-master"
rule.#: "" => "<computed>"
...
...
Apply complete! Resources: 9 added, 0 changed, 0 destroyed.
The state of your infrastructure has been saved to the path
below. This state is required to modify and destroy your
infrastructure, so keep it safe. To inspect the complete state
use the `terraform show` command.
State path: contrib/terraform/openstack/terraform.tfstate
```
Alternatively, if you wrote your terraform variables on a file `my-terraform-vars.tfvars`, your command would look like:
```
terraform apply -state=contrib/terraform/openstack/terraform.tfstate -var-file=my-terraform-vars.tfvars contrib/terraform/openstack
```
if you choose to add masters or nodes without floating ips (only internal ips on your OpenStack tenancy), this script will create as well a file `contrib/terraform/openstack/k8s-cluster.yml` with an ssh command for ansible to be able to access your machines tunneling through the first floating ip used. If you want to manually handling the ssh tunneling to these machines, please delete or move that file. If you want to use this, just leave it there, as ansible will pick it up automatically.
Make sure you can connect to the hosts:
```
$ ansible -i contrib/terraform/openstack/hosts -m ping all
example-k8s_node-1 | SUCCESS => { example-k8s_node-1 | SUCCESS => {
"changed": false, "changed": false,
"ping": "pong" "ping": "pong"
@@ -214,20 +325,59 @@ example-k8s-master-1 | SUCCESS => {
} }
``` ```
If you are deploying a system that needs bootstrapping, like Container Linux by CoreOS, these might have a state `FAILED` due to Container Linux by CoreOS not having Python. As long as the state is not `UNREACHABLE`, this is fine. If it fails, try to connect manually via SSH; it could be something as simple as a stale host key.
if it fails try to connect manually via SSH ... it could be somthing as simple as a stale host key. ### Configure cluster variables
Deploy kubernetes:
Edit `inventory/$CLUSTER/group_vars/all.yml`:
- Set variable **bootstrap_os** appropriately for your desired image:
``` ```
$ ansible-playbook --become -i contrib/terraform/openstack/hosts cluster.yml # Valid bootstrap options (required): ubuntu, coreos, centos, none
bootstrap_os: coreos
```
- **bin_dir**:
```
# Directory where the binaries will be installed
# Default:
# bin_dir: /usr/local/bin
# For Container Linux by CoreOS:
bin_dir: /opt/bin
```
- and **cloud_provider**:
```
cloud_provider: openstack
```
Edit `inventory/$CLUSTER/group_vars/k8s-cluster.yml`:
- Set variable **kube_network_plugin** to your desired networking plugin.
- **flannel** works out-of-the-box
- **calico** requires [configuring OpenStack Neutron ports](/docs/openstack.md) to allow service and pod subnets
```
# Choose network plugin (calico, weave or flannel)
# Can also be set to 'cloud', which lets the cloud provider setup appropriate routing
kube_network_plugin: flannel
```
- Set variable **resolvconf_mode**
```
# Can be docker_dns, host_resolvconf or none
# Default:
# resolvconf_mode: docker_dns
# For Container Linux by CoreOS:
resolvconf_mode: host_resolvconf
``` ```
# Set up local kubectl ### Deploy Kubernetes
1. Install kubectl on your workstation:
[Install and Set Up kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/) ```
2. Add route to internal IP of master node (if needed): $ ansible-playbook --become -i inventory/$CLUSTER/hosts cluster.yml
```
This will take some time as there are many tasks to run.
## Kubernetes
### Set up kubectl
1. [Install kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/) on your workstation
2. Add a route to the internal IP of a master node (if needed):
``` ```
sudo route add [master-internal-ip] gw [router-ip] sudo route add [master-internal-ip] gw [router-ip]
``` ```
@@ -235,48 +385,53 @@ or
``` ```
sudo route add -net [internal-subnet]/24 gw [router-ip] sudo route add -net [internal-subnet]/24 gw [router-ip]
``` ```
3. List Kubernetes certs&keys: 3. List Kubernetes certificates & keys:
``` ```
ssh [os-user]@[master-ip] sudo ls /etc/kubernetes/ssl/ ssh [os-user]@[master-ip] sudo ls /etc/kubernetes/ssl/
``` ```
4. Get admin's certs&key: 4. Get `admin`'s certificates and keys:
``` ```
ssh [os-user]@[master-ip] sudo cat /etc/kubernetes/ssl/admin-[cluster_name]-k8s-master-1-key.pem > admin-key.pem ssh [os-user]@[master-ip] sudo cat /etc/kubernetes/ssl/admin-[cluster_name]-k8s-master-1-key.pem > admin-key.pem
ssh [os-user]@[master-ip] sudo cat /etc/kubernetes/ssl/admin-[cluster_name]-k8s-master-1.pem > admin.pem ssh [os-user]@[master-ip] sudo cat /etc/kubernetes/ssl/admin-[cluster_name]-k8s-master-1.pem > admin.pem
ssh [os-user]@[master-ip] sudo cat /etc/kubernetes/ssl/ca.pem > ca.pem ssh [os-user]@[master-ip] sudo cat /etc/kubernetes/ssl/ca.pem > ca.pem
``` ```
5. Edit OpenStack Neutron master's Security Group to allow TCP connections to port 6443 5. Configure kubectl:
6. Configure kubectl: ```ShellSession
``` $ kubectl config set-cluster default-cluster --server=https://[master-internal-ip]:6443 \
kubectl config set-cluster default-cluster --server=https://[master-internal-ip]:6443 \ --certificate-authority=ca.pem
--certificate-authority=ca.pem
kubectl config set-credentials default-admin \ $ kubectl config set-credentials default-admin \
--certificate-authority=ca.pem \ --certificate-authority=ca.pem \
--client-key=admin-key.pem \ --client-key=admin-key.pem \
--client-certificate=admin.pem --client-certificate=admin.pem
kubectl config set-context default-system --cluster=default-cluster --user=default-admin $ kubectl config set-context default-system --cluster=default-cluster --user=default-admin
kubectl config use-context default-system $ kubectl config use-context default-system
``` ```
7. Check it: 7. Check it:
``` ```
kubectl version kubectl version
``` ```
# What's next If you are using floating ip addresses then you may get this error:
[Start Hello Kubernetes Service](https://kubernetes.io/docs/tasks/access-application-cluster/service-access-application-cluster/)
# clean up:
``` ```
$ terraform destroy Unable to connect to the server: x509: certificate is valid for 10.0.0.6, 10.0.0.6, 10.233.0.1, 127.0.0.1, not 132.249.238.25
Do you really want to destroy?
Terraform will delete all your managed infrastructure.
There is no undo. Only 'yes' will be accepted to confirm.
Enter a value: yes
...
...
Apply complete! Resources: 0 added, 0 changed, 12 destroyed.
``` ```
You can tell kubectl to ignore this condition by adding the
`--insecure-skip-tls-verify` option.
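For example, to confirm connectivity despite the certificate mismatch:
```ShellSession
$ kubectl --insecure-skip-tls-verify version
```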
## GlusterFS
GlusterFS is not deployed by the standard `cluster.yml` playbook; see the
[GlusterFS playbook documentation](../../network-storage/glusterfs/README.md)
for instructions.
Basically, you will install GlusterFS as follows:
```ShellSession
$ ansible-playbook --become -i inventory/$CLUSTER/hosts ./contrib/network-storage/glusterfs/glusterfs.yml
```
## What's next
Try out your new Kubernetes cluster with the [Hello Kubernetes service](https://kubernetes.io/docs/tasks/access-application-cluster/service-access-application-cluster/).

View File

@@ -1 +1 @@
ansible_ssh_common_args: '-o ProxyCommand="ssh -o StrictHostKeyChecking=no -W %h:%p -q USER@BASTION_ADDRESS"' ansible_ssh_common_args: "-o ProxyCommand='ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -W %h:%p -q USER@BASTION_ADDRESS {% if ansible_ssh_private_key_file is defined %}-i {{ ansible_ssh_private_key_file }}{% endif %}'"

View File

@@ -1 +0,0 @@
../../../inventory/group_vars

View File

@@ -1,226 +1,77 @@
resource "openstack_networking_floatingip_v2" "k8s_master" { module "network" {
count = "${var.number_of_k8s_masters + var.number_of_k8s_masters_no_etcd}" source = "modules/network"
pool = "${var.floatingip_pool}"
external_net = "${var.external_net}"
network_name = "${var.network_name}"
cluster_name = "${var.cluster_name}"
dns_nameservers = "${var.dns_nameservers}"
} }
resource "openstack_networking_floatingip_v2" "k8s_node" { module "ips" {
count = "${var.number_of_k8s_nodes}" source = "modules/ips"
pool = "${var.floatingip_pool}"
number_of_k8s_masters = "${var.number_of_k8s_masters}"
number_of_k8s_masters_no_etcd = "${var.number_of_k8s_masters_no_etcd}"
number_of_k8s_nodes = "${var.number_of_k8s_nodes}"
floatingip_pool = "${var.floatingip_pool}"
number_of_bastions = "${var.number_of_bastions}"
external_net = "${var.external_net}"
network_name = "${var.network_name}"
router_id = "${module.network.router_id}"
} }
module "compute" {
source = "modules/compute"
resource "openstack_compute_keypair_v2" "k8s" { cluster_name = "${var.cluster_name}"
name = "kubernetes-${var.cluster_name}" number_of_k8s_masters = "${var.number_of_k8s_masters}"
public_key = "${file(var.public_key_path)}" number_of_k8s_masters_no_etcd = "${var.number_of_k8s_masters_no_etcd}"
number_of_etcd = "${var.number_of_etcd}"
number_of_k8s_masters_no_floating_ip = "${var.number_of_k8s_masters_no_floating_ip}"
number_of_k8s_masters_no_floating_ip_no_etcd = "${var.number_of_k8s_masters_no_floating_ip_no_etcd}"
number_of_k8s_nodes = "${var.number_of_k8s_nodes}"
number_of_bastions = "${var.number_of_bastions}"
number_of_k8s_nodes_no_floating_ip = "${var.number_of_k8s_nodes_no_floating_ip}"
number_of_gfs_nodes_no_floating_ip = "${var.number_of_gfs_nodes_no_floating_ip}"
gfs_volume_size_in_gb = "${var.gfs_volume_size_in_gb}"
public_key_path = "${var.public_key_path}"
image = "${var.image}"
image_gfs = "${var.image_gfs}"
ssh_user = "${var.ssh_user}"
ssh_user_gfs = "${var.ssh_user_gfs}"
flavor_k8s_master = "${var.flavor_k8s_master}"
flavor_k8s_node = "${var.flavor_k8s_node}"
flavor_etcd = "${var.flavor_etcd}"
flavor_gfs_node = "${var.flavor_gfs_node}"
network_name = "${var.network_name}"
flavor_bastion = "${var.flavor_bastion}"
k8s_master_fips = "${module.ips.k8s_master_fips}"
k8s_node_fips = "${module.ips.k8s_node_fips}"
bastion_fips = "${module.ips.bastion_fips}"
network_id = "${module.network.router_id}"
} }
resource "openstack_compute_secgroup_v2" "k8s_master" { output "private_subnet_id" {
name = "${var.cluster_name}-k8s-master" value = "${module.network.subnet_id}"
description = "${var.cluster_name} - Kubernetes Master"
} }
resource "openstack_compute_secgroup_v2" "k8s" { output "floating_network_id" {
name = "${var.cluster_name}-k8s" value = "${var.external_net}"
description = "${var.cluster_name} - Kubernetes"
rule {
ip_protocol = "tcp"
from_port = "22"
to_port = "22"
cidr = "0.0.0.0/0"
}
rule {
ip_protocol = "icmp"
from_port = "-1"
to_port = "-1"
cidr = "0.0.0.0/0"
}
rule {
ip_protocol = "tcp"
from_port = "1"
to_port = "65535"
self = true
}
rule {
ip_protocol = "udp"
from_port = "1"
to_port = "65535"
self = true
}
rule {
ip_protocol = "icmp"
from_port = "-1"
to_port = "-1"
self = true
}
} }
resource "openstack_compute_instance_v2" "k8s_master" { output "router_id" {
name = "${var.cluster_name}-k8s-master-${count.index+1}" value = "${module.network.router_id}"
count = "${var.number_of_k8s_masters}"
image_name = "${var.image}"
flavor_id = "${var.flavor_k8s_master}"
key_pair = "${openstack_compute_keypair_v2.k8s.name}"
network {
name = "${var.network_name}"
}
security_groups = [ "${openstack_compute_secgroup_v2.k8s_master.name}",
"${openstack_compute_secgroup_v2.k8s.name}" ]
floating_ip = "${element(openstack_networking_floatingip_v2.k8s_master.*.address, count.index)}"
metadata = {
ssh_user = "${var.ssh_user}"
kubespray_groups = "etcd,kube-master,kube-node,k8s-cluster,vault"
}
} }
resource "openstack_compute_instance_v2" "k8s_master_no_etcd" { output "k8s_master_fips" {
name = "${var.cluster_name}-k8s-master-ne-${count.index+1}" value = "${module.ips.k8s_master_fips}"
count = "${var.number_of_k8s_masters_no_etcd}"
image_name = "${var.image}"
flavor_id = "${var.flavor_k8s_master}"
key_pair = "${openstack_compute_keypair_v2.k8s.name}"
network {
name = "${var.network_name}"
}
security_groups = [ "${openstack_compute_secgroup_v2.k8s_master.name}",
"${openstack_compute_secgroup_v2.k8s.name}" ]
floating_ip = "${element(openstack_networking_floatingip_v2.k8s_master.*.address, count.index + var.number_of_k8s_masters)}"
metadata = {
ssh_user = "${var.ssh_user}"
kubespray_groups = "kube-master,kube-node,k8s-cluster,vault"
}
} }
resource "openstack_compute_instance_v2" "etcd" { output "k8s_node_fips" {
name = "${var.cluster_name}-etcd-${count.index+1}" value = "${module.ips.k8s_node_fips}"
count = "${var.number_of_etcd}"
image_name = "${var.image}"
flavor_id = "${var.flavor_etcd}"
key_pair = "${openstack_compute_keypair_v2.k8s.name}"
network {
name = "${var.network_name}"
}
security_groups = [ "${openstack_compute_secgroup_v2.k8s.name}" ]
metadata = {
ssh_user = "${var.ssh_user}"
kubespray_groups = "etcd,vault,no-floating"
}
provisioner "local-exec" {
command = "sed s/USER/${var.ssh_user}/ contrib/terraform/openstack/ansible_bastion_template.txt | sed s/BASTION_ADDRESS/${element(openstack_networking_floatingip_v2.k8s_master.*.address, 0)}/ > contrib/terraform/openstack/group_vars/no-floating.yml"
}
} }
output "bastion_fips" {
resource "openstack_compute_instance_v2" "k8s_master_no_floating_ip" { value = "${module.ips.bastion_fips}"
name = "${var.cluster_name}-k8s-master-nf-${count.index+1}"
count = "${var.number_of_k8s_masters_no_floating_ip}"
image_name = "${var.image}"
flavor_id = "${var.flavor_k8s_master}"
key_pair = "${openstack_compute_keypair_v2.k8s.name}"
network {
name = "${var.network_name}"
}
security_groups = [ "${openstack_compute_secgroup_v2.k8s_master.name}",
"${openstack_compute_secgroup_v2.k8s.name}" ]
metadata = {
ssh_user = "${var.ssh_user}"
kubespray_groups = "etcd,kube-master,kube-node,k8s-cluster,vault,no-floating"
}
provisioner "local-exec" {
command = "sed s/USER/${var.ssh_user}/ contrib/terraform/openstack/ansible_bastion_template.txt | sed s/BASTION_ADDRESS/${element(openstack_networking_floatingip_v2.k8s_master.*.address, 0)}/ > contrib/terraform/openstack/group_vars/no-floating.yml"
}
} }
resource "openstack_compute_instance_v2" "k8s_master_no_floating_ip_no_etcd" {
name = "${var.cluster_name}-k8s-master-ne-nf-${count.index+1}"
count = "${var.number_of_k8s_masters_no_floating_ip_no_etcd}"
image_name = "${var.image}"
flavor_id = "${var.flavor_k8s_master}"
key_pair = "${openstack_compute_keypair_v2.k8s.name}"
network {
name = "${var.network_name}"
}
security_groups = [ "${openstack_compute_secgroup_v2.k8s_master.name}",
"${openstack_compute_secgroup_v2.k8s.name}" ]
metadata = {
ssh_user = "${var.ssh_user}"
kubespray_groups = "kube-master,kube-node,k8s-cluster,vault,no-floating"
}
provisioner "local-exec" {
command = "sed s/USER/${var.ssh_user}/ contrib/terraform/openstack/ansible_bastion_template.txt | sed s/BASTION_ADDRESS/${element(openstack_networking_floatingip_v2.k8s_master.*.address, 0)}/ > contrib/terraform/openstack/group_vars/no-floating.yml"
}
}
resource "openstack_compute_instance_v2" "k8s_node" {
name = "${var.cluster_name}-k8s-node-${count.index+1}"
count = "${var.number_of_k8s_nodes}"
image_name = "${var.image}"
flavor_id = "${var.flavor_k8s_node}"
key_pair = "${openstack_compute_keypair_v2.k8s.name}"
network {
name = "${var.network_name}"
}
security_groups = ["${openstack_compute_secgroup_v2.k8s.name}" ]
floating_ip = "${element(openstack_networking_floatingip_v2.k8s_node.*.address, count.index)}"
metadata = {
ssh_user = "${var.ssh_user}"
kubespray_groups = "kube-node,k8s-cluster,vault"
}
}
resource "openstack_compute_instance_v2" "k8s_node_no_floating_ip" {
name = "${var.cluster_name}-k8s-node-nf-${count.index+1}"
count = "${var.number_of_k8s_nodes_no_floating_ip}"
image_name = "${var.image}"
flavor_id = "${var.flavor_k8s_node}"
key_pair = "${openstack_compute_keypair_v2.k8s.name}"
network {
name = "${var.network_name}"
}
security_groups = ["${openstack_compute_secgroup_v2.k8s.name}" ]
metadata = {
ssh_user = "${var.ssh_user}"
kubespray_groups = "kube-node,k8s-cluster,vault,no-floating"
}
provisioner "local-exec" {
command = "sed s/USER/${var.ssh_user}/ contrib/terraform/openstack/ansible_bastion_template.txt | sed s/BASTION_ADDRESS/${element(openstack_networking_floatingip_v2.k8s_master.*.address, 0)}/ > contrib/terraform/openstack/group_vars/no-floating.yml"
}
}
resource "openstack_blockstorage_volume_v2" "glusterfs_volume" {
name = "${var.cluster_name}-gfs-nephe-vol-${count.index+1}"
count = "${var.number_of_gfs_nodes_no_floating_ip}"
description = "Non-ephemeral volume for GlusterFS"
size = "${var.gfs_volume_size_in_gb}"
}
resource "openstack_compute_instance_v2" "glusterfs_node_no_floating_ip" {
name = "${var.cluster_name}-gfs-node-nf-${count.index+1}"
count = "${var.number_of_gfs_nodes_no_floating_ip}"
image_name = "${var.image_gfs}"
flavor_id = "${var.flavor_gfs_node}"
key_pair = "${openstack_compute_keypair_v2.k8s.name}"
network {
name = "${var.network_name}"
}
security_groups = ["${openstack_compute_secgroup_v2.k8s.name}" ]
metadata = {
ssh_user = "${var.ssh_user_gfs}"
kubespray_groups = "gfs-cluster,network-storage"
}
volume {
volume_id = "${element(openstack_blockstorage_volume_v2.glusterfs_volume.*.id, count.index)}"
}
provisioner "local-exec" {
command = "sed s/USER/${var.ssh_user}/ contrib/terraform/openstack/ansible_bastion_template.txt | sed s/BASTION_ADDRESS/${element(openstack_networking_floatingip_v2.k8s_master.*.address, 0)}/ > contrib/terraform/openstack/group_vars/gfs-cluster.yml"
}
}
#output "msg" {
# value = "Your hosts are ready to go!\nYour ssh hosts are: ${join(", ", openstack_networking_floatingip_v2.k8s_master.*.address )}"
#}

View File

@@ -0,0 +1,306 @@
resource "openstack_compute_keypair_v2" "k8s" {
name = "kubernetes-${var.cluster_name}"
public_key = "${chomp(file(var.public_key_path))}"
}
resource "openstack_compute_secgroup_v2" "k8s_master" {
name = "${var.cluster_name}-k8s-master"
description = "${var.cluster_name} - Kubernetes Master"
rule {
ip_protocol = "tcp"
from_port = "6443"
to_port = "6443"
cidr = "0.0.0.0/0"
}
}
resource "openstack_compute_secgroup_v2" "bastion" {
name = "${var.cluster_name}-bastion"
description = "${var.cluster_name} - Bastion Server"
rule {
ip_protocol = "tcp"
from_port = "22"
to_port = "22"
cidr = "0.0.0.0/0"
}
}
resource "openstack_compute_secgroup_v2" "k8s" {
name = "${var.cluster_name}-k8s"
description = "${var.cluster_name} - Kubernetes"
rule {
ip_protocol = "icmp"
from_port = "-1"
to_port = "-1"
cidr = "0.0.0.0/0"
}
rule {
ip_protocol = "tcp"
from_port = "1"
to_port = "65535"
self = true
}
rule {
ip_protocol = "udp"
from_port = "1"
to_port = "65535"
self = true
}
rule {
ip_protocol = "icmp"
from_port = "-1"
to_port = "-1"
self = true
}
}
resource "openstack_compute_instance_v2" "bastion" {
name = "${var.cluster_name}-bastion-${count.index+1}"
count = "${var.number_of_bastions}"
image_name = "${var.image}"
flavor_id = "${var.flavor_bastion}"
key_pair = "${openstack_compute_keypair_v2.k8s.name}"
network {
name = "${var.network_name}"
}
security_groups = ["${openstack_compute_secgroup_v2.k8s.name}",
"${openstack_compute_secgroup_v2.bastion.name}",
"default",
]
metadata = {
ssh_user = "${var.ssh_user}"
kubespray_groups = "bastion"
depends_on = "${var.network_id}"
}
provisioner "local-exec" {
command = "sed s/USER/${var.ssh_user}/ contrib/terraform/openstack/ansible_bastion_template.txt | sed s/BASTION_ADDRESS/${var.bastion_fips[0]}/ > contrib/terraform/openstack/group_vars/no-floating.yml"
}
}
resource "openstack_compute_instance_v2" "k8s_master" {
name = "${var.cluster_name}-k8s-master-${count.index+1}"
count = "${var.number_of_k8s_masters}"
image_name = "${var.image}"
flavor_id = "${var.flavor_k8s_master}"
key_pair = "${openstack_compute_keypair_v2.k8s.name}"
network {
name = "${var.network_name}"
}
security_groups = ["${openstack_compute_secgroup_v2.k8s_master.name}",
"${openstack_compute_secgroup_v2.bastion.name}",
"${openstack_compute_secgroup_v2.k8s.name}",
"default",
]
metadata = {
ssh_user = "${var.ssh_user}"
kubespray_groups = "etcd,kube-master,k8s-cluster,vault"
depends_on = "${var.network_id}"
}
}
resource "openstack_compute_instance_v2" "k8s_master_no_etcd" {
name = "${var.cluster_name}-k8s-master-ne-${count.index+1}"
count = "${var.number_of_k8s_masters_no_etcd}"
image_name = "${var.image}"
flavor_id = "${var.flavor_k8s_master}"
key_pair = "${openstack_compute_keypair_v2.k8s.name}"
network {
name = "${var.network_name}"
}
security_groups = ["${openstack_compute_secgroup_v2.k8s_master.name}",
"${openstack_compute_secgroup_v2.k8s.name}",
]
metadata = {
ssh_user = "${var.ssh_user}"
kubespray_groups = "kube-master,k8s-cluster,vault"
depends_on = "${var.network_id}"
}
}
resource "openstack_compute_instance_v2" "etcd" {
name = "${var.cluster_name}-etcd-${count.index+1}"
count = "${var.number_of_etcd}"
image_name = "${var.image}"
flavor_id = "${var.flavor_etcd}"
key_pair = "${openstack_compute_keypair_v2.k8s.name}"
network {
name = "${var.network_name}"
}
security_groups = ["${openstack_compute_secgroup_v2.k8s.name}"]
metadata = {
ssh_user = "${var.ssh_user}"
kubespray_groups = "etcd,vault,no-floating"
depends_on = "${var.network_id}"
}
}
resource "openstack_compute_instance_v2" "k8s_master_no_floating_ip" {
name = "${var.cluster_name}-k8s-master-nf-${count.index+1}"
count = "${var.number_of_k8s_masters_no_floating_ip}"
image_name = "${var.image}"
flavor_id = "${var.flavor_k8s_master}"
key_pair = "${openstack_compute_keypair_v2.k8s.name}"
network {
name = "${var.network_name}"
}
security_groups = ["${openstack_compute_secgroup_v2.k8s_master.name}",
"${openstack_compute_secgroup_v2.k8s.name}",
"default",
]
metadata = {
ssh_user = "${var.ssh_user}"
kubespray_groups = "etcd,kube-master,k8s-cluster,vault,no-floating"
depends_on = "${var.network_id}"
}
}
resource "openstack_compute_instance_v2" "k8s_master_no_floating_ip_no_etcd" {
name = "${var.cluster_name}-k8s-master-ne-nf-${count.index+1}"
count = "${var.number_of_k8s_masters_no_floating_ip_no_etcd}"
image_name = "${var.image}"
flavor_id = "${var.flavor_k8s_master}"
key_pair = "${openstack_compute_keypair_v2.k8s.name}"
network {
name = "${var.network_name}"
}
security_groups = ["${openstack_compute_secgroup_v2.k8s_master.name}",
"${openstack_compute_secgroup_v2.k8s.name}",
]
metadata = {
ssh_user = "${var.ssh_user}"
kubespray_groups = "kube-master,k8s-cluster,vault,no-floating"
depends_on = "${var.network_id}"
}
}
resource "openstack_compute_instance_v2" "k8s_node" {
name = "${var.cluster_name}-k8s-node-${count.index+1}"
count = "${var.number_of_k8s_nodes}"
image_name = "${var.image}"
flavor_id = "${var.flavor_k8s_node}"
key_pair = "${openstack_compute_keypair_v2.k8s.name}"
network {
name = "${var.network_name}"
}
security_groups = ["${openstack_compute_secgroup_v2.k8s.name}",
"${openstack_compute_secgroup_v2.bastion.name}",
"default",
]
metadata = {
ssh_user = "${var.ssh_user}"
kubespray_groups = "kube-node,k8s-cluster"
depends_on = "${var.network_id}"
}
}
resource "openstack_compute_instance_v2" "k8s_node_no_floating_ip" {
name = "${var.cluster_name}-k8s-node-nf-${count.index+1}"
count = "${var.number_of_k8s_nodes_no_floating_ip}"
image_name = "${var.image}"
flavor_id = "${var.flavor_k8s_node}"
key_pair = "${openstack_compute_keypair_v2.k8s.name}"
network {
name = "${var.network_name}"
}
security_groups = ["${openstack_compute_secgroup_v2.k8s.name}",
"default",
]
metadata = {
ssh_user = "${var.ssh_user}"
kubespray_groups = "kube-node,k8s-cluster,no-floating"
depends_on = "${var.network_id}"
}
}
resource "openstack_compute_floatingip_associate_v2" "bastion" {
count = "${var.number_of_bastions}"
floating_ip = "${var.bastion_fips[count.index]}"
instance_id = "${element(openstack_compute_instance_v2.bastion.*.id, count.index)}"
}
resource "openstack_compute_floatingip_associate_v2" "k8s_master" {
count = "${var.number_of_k8s_masters}"
instance_id = "${element(openstack_compute_instance_v2.k8s_master.*.id, count.index)}"
floating_ip = "${var.k8s_master_fips[count.index]}"
}
resource "openstack_compute_floatingip_associate_v2" "k8s_node" {
count = "${var.number_of_k8s_nodes}"
floating_ip = "${var.k8s_node_fips[count.index]}"
instance_id = "${element(openstack_compute_instance_v2.k8s_node.*.id, count.index)}"
}
resource "openstack_blockstorage_volume_v2" "glusterfs_volume" {
name = "${var.cluster_name}-glusterfs_volume-${count.index+1}"
count = "${var.number_of_gfs_nodes_no_floating_ip}"
description = "Non-ephemeral volume for GlusterFS"
size = "${var.gfs_volume_size_in_gb}"
}
resource "openstack_compute_instance_v2" "glusterfs_node_no_floating_ip" {
name = "${var.cluster_name}-gfs-node-nf-${count.index+1}"
count = "${var.number_of_gfs_nodes_no_floating_ip}"
image_name = "${var.image_gfs}"
flavor_id = "${var.flavor_gfs_node}"
key_pair = "${openstack_compute_keypair_v2.k8s.name}"
network {
name = "${var.network_name}"
}
security_groups = ["${openstack_compute_secgroup_v2.k8s.name}",
"default",
]
metadata = {
ssh_user = "${var.ssh_user_gfs}"
kubespray_groups = "gfs-cluster,network-storage,no-floating"
depends_on = "${var.network_id}"
}
}
resource "openstack_compute_volume_attach_v2" "glusterfs_volume" {
count = "${var.number_of_gfs_nodes_no_floating_ip}"
instance_id = "${element(openstack_compute_instance_v2.glusterfs_node_no_floating_ip.*.id, count.index)}"
volume_id = "${element(openstack_blockstorage_volume_v2.glusterfs_volume.*.id, count.index)}"
}

View File

@@ -0,0 +1,57 @@
variable "cluster_name" {}
variable "number_of_k8s_masters" {}
variable "number_of_k8s_masters_no_etcd" {}
variable "number_of_etcd" {}
variable "number_of_k8s_masters_no_floating_ip" {}
variable "number_of_k8s_masters_no_floating_ip_no_etcd" {}
variable "number_of_k8s_nodes" {}
variable "number_of_k8s_nodes_no_floating_ip" {}
variable "number_of_bastions" {}
variable "number_of_gfs_nodes_no_floating_ip" {}
variable "gfs_volume_size_in_gb" {}
variable "public_key_path" {}
variable "image" {}
variable "image_gfs" {}
variable "ssh_user" {}
variable "ssh_user_gfs" {}
variable "flavor_k8s_master" {}
variable "flavor_k8s_node" {}
variable "flavor_etcd" {}
variable "flavor_gfs_node" {}
variable "network_name" {}
variable "flavor_bastion" {}
variable "network_id" {}
variable "k8s_master_fips" {
type = "list"
}
variable "k8s_node_fips" {
type = "list"
}
variable "bastion_fips" {
type = "list"
}

View File

@@ -0,0 +1,23 @@
resource "null_resource" "dummy_dependency" {
triggers {
dependency_id = "${var.router_id}"
}
}
resource "openstack_networking_floatingip_v2" "k8s_master" {
count = "${var.number_of_k8s_masters}"
pool = "${var.floatingip_pool}"
depends_on = ["null_resource.dummy_dependency"]
}
resource "openstack_networking_floatingip_v2" "k8s_node" {
count = "${var.number_of_k8s_nodes}"
pool = "${var.floatingip_pool}"
depends_on = ["null_resource.dummy_dependency"]
}
resource "openstack_networking_floatingip_v2" "bastion" {
count = "${var.number_of_bastions}"
pool = "${var.floatingip_pool}"
depends_on = ["null_resource.dummy_dependency"]
}

View File

@@ -0,0 +1,11 @@
output "k8s_master_fips" {
value = ["${openstack_networking_floatingip_v2.k8s_master.*.address}"]
}
output "k8s_node_fips" {
value = ["${openstack_networking_floatingip_v2.k8s_node.*.address}"]
}
output "bastion_fips" {
value = ["${openstack_networking_floatingip_v2.bastion.*.address}"]
}

View File

@@ -0,0 +1,15 @@
variable "number_of_k8s_masters" {}
variable "number_of_k8s_masters_no_etcd" {}
variable "number_of_k8s_nodes" {}
variable "floatingip_pool" {}
variable "number_of_bastions" {}
variable "external_net" {}
variable "network_name" {}
variable "router_id" {}

View File

@@ -0,0 +1,23 @@
resource "openstack_networking_router_v2" "k8s" {
name = "${var.cluster_name}-router"
admin_state_up = "true"
external_network_id = "${var.external_net}"
}
resource "openstack_networking_network_v2" "k8s" {
name = "${var.network_name}"
admin_state_up = "true"
}
resource "openstack_networking_subnet_v2" "k8s" {
name = "${var.cluster_name}-internal-network"
network_id = "${openstack_networking_network_v2.k8s.id}"
cidr = "10.0.0.0/24"
ip_version = 4
dns_nameservers = "${var.dns_nameservers}"
}
resource "openstack_networking_router_interface_v2" "k8s" {
router_id = "${openstack_networking_router_v2.k8s.id}"
subnet_id = "${openstack_networking_subnet_v2.k8s.id}"
}

View File

@@ -0,0 +1,7 @@
output "router_id" {
value = "${openstack_networking_router_interface_v2.k8s.id}"
}
output "subnet_id" {
value = "${openstack_networking_subnet_v2.k8s.id}"
}

View File

@@ -0,0 +1,9 @@
variable "external_net" {}
variable "network_name" {}
variable "cluster_name" {}
variable "dns_nameservers" {
type = "list"
}

View File

@@ -0,0 +1,45 @@
# your Kubernetes cluster name here
cluster_name = "i-didnt-read-the-docs"
# SSH key to use for access to nodes
public_key_path = "~/.ssh/id_rsa.pub"
# image to use for bastion, masters, standalone etcd instances, and nodes
image = "<image name>"
# user on the node (ex. core on Container Linux, ubuntu on Ubuntu, etc.)
ssh_user = "<cloud-provisioned user>"
# 0|1 bastion nodes
number_of_bastions = 0
#flavor_bastion = "<UUID>"
# standalone etcds
number_of_etcd = 0
# masters
number_of_k8s_masters = 1
number_of_k8s_masters_no_etcd = 0
number_of_k8s_masters_no_floating_ip = 0
number_of_k8s_masters_no_floating_ip_no_etcd = 0
flavor_k8s_master = "<UUID>"
# nodes
number_of_k8s_nodes = 2
number_of_k8s_nodes_no_floating_ip = 4
#flavor_k8s_node = "<UUID>"
# GlusterFS
# either 0 or more than one
#number_of_gfs_nodes_no_floating_ip = 0
#gfs_volume_size_in_gb = 150
# Container Linux does not support GlusterFS
#image_gfs = "<image name>"
# May be different from other nodes
#ssh_user_gfs = "ubuntu"
#flavor_gfs_node = "<UUID>"
# networking
network_name = "<network>"
external_net = "<UUID>"
floatingip_pool = "<pool>"

View File

@@ -0,0 +1 @@
../../../../inventory/sample/group_vars

View File

@@ -2,6 +2,10 @@ variable "cluster_name" {
default = "example" default = "example"
} }
variable "number_of_bastions" {
default = 1
}
variable "number_of_k8s_masters" { variable "number_of_k8s_masters" {
default = 2 default = 2
} }
@@ -40,67 +44,70 @@ variable "gfs_volume_size_in_gb" {
variable "public_key_path" { variable "public_key_path" {
description = "The path of the ssh pub key" description = "The path of the ssh pub key"
default = "~/.ssh/id_rsa.pub" default = "~/.ssh/id_rsa.pub"
} }
variable "image" { variable "image" {
description = "the image to use" description = "the image to use"
default = "ubuntu-14.04" default = "ubuntu-14.04"
} }
variable "image_gfs" { variable "image_gfs" {
description = "Glance image to use for GlusterFS" description = "Glance image to use for GlusterFS"
default = "ubuntu-16.04" default = "ubuntu-16.04"
} }
variable "ssh_user" { variable "ssh_user" {
description = "used to fill out tags for ansible inventory" description = "used to fill out tags for ansible inventory"
default = "ubuntu" default = "ubuntu"
} }
variable "ssh_user_gfs" { variable "ssh_user_gfs" {
description = "used to fill out tags for ansible inventory" description = "used to fill out tags for ansible inventory"
default = "ubuntu" default = "ubuntu"
}
variable "flavor_bastion" {
description = "Use 'nova flavor-list' command to see what your OpenStack instance uses for IDs"
default = 3
} }
variable "flavor_k8s_master" { variable "flavor_k8s_master" {
default = 3 description = "Use 'nova flavor-list' command to see what your OpenStack instance uses for IDs"
default = 3
} }
variable "flavor_k8s_node" { variable "flavor_k8s_node" {
default = 3 description = "Use 'nova flavor-list' command to see what your OpenStack instance uses for IDs"
default = 3
} }
variable "flavor_etcd" { variable "flavor_etcd" {
default = 3 description = "Use 'nova flavor-list' command to see what your OpenStack instance uses for IDs"
default = 3
} }
variable "flavor_gfs_node" { variable "flavor_gfs_node" {
default = 3 description = "Use 'nova flavor-list' command to see what your OpenStack instance uses for IDs"
default = 3
} }
variable "network_name" { variable "network_name" {
description = "name of the internal network to use" description = "name of the internal network to use"
default = "internal" default = "internal"
}
variable "dns_nameservers" {
description = "An array of DNS name server names used by hosts in this subnet."
type = "list"
default = []
} }
variable "floatingip_pool" { variable "floatingip_pool" {
description = "name of the floating ip pool to use" description = "name of the floating ip pool to use"
default = "external" default = "external"
} }
variable "username" { variable "external_net" {
description = "Your openstack username" description = "uuid of the external/public network"
}
variable "password" {
description = "Your openstack password"
}
variable "tenant" {
description = "Your openstack tenant/project"
}
variable "auth_url" {
description = "Your openstack auth URL"
} }

View File

@@ -1,4 +1,4 @@
#!/usr/bin/env python2
#
# Copyright 2015 Cisco Systems, Inc.
#
@@ -70,6 +70,14 @@ def iterhosts(resources):
yield parser(resource, module_name)
def iterips(resources):
'''yield ip tuples of (instance_id, ip)'''
for module_name, key, resource in resources:
resource_type, name = key.split('.', 1)
if resource_type == 'openstack_compute_floatingip_associate_v2':
yield openstack_floating_ips(resource)
def parses(prefix):
def inner(func):
PARSERS[prefix] = func
@@ -298,6 +306,17 @@ def softlayer_host(resource, module_name):
return name, attrs, groups
def openstack_floating_ips(resource):
raw_attrs = resource['primary']['attributes']
attrs = {
'ip': raw_attrs['floating_ip'],
'instance_id': raw_attrs['instance_id'],
}
return attrs
def openstack_floating_ips(resource):
raw_attrs = resource['primary']['attributes']
return raw_attrs['instance_id'], raw_attrs['floating_ip']
@parses('openstack_compute_instance_v2')
@calculate_mantl_vars
@@ -343,6 +362,8 @@ def openstack_host(resource, module_name):
except (KeyError, ValueError):
attrs.update({'ansible_ssh_host': '', 'publicly_routable': False})
# Handling of floating IPs has changed: https://github.com/terraform-providers/terraform-provider-openstack/blob/master/CHANGELOG.md#010-june-21-2017
# attrs specific to Ansible
if 'metadata.ssh_user' in raw_attrs:
attrs['ansible_ssh_user'] = raw_attrs['metadata.ssh_user']
@@ -656,6 +677,19 @@ def clc_server(resource, module_name):
return name, attrs, groups
def iter_host_ips(hosts, ips):
'''Update hosts that have an entry in the floating IP list'''
for host in hosts:
host_id = host[1]['id']
if host_id in ips:
ip = ips[host_id]
host[1].update({
'access_ip_v4': ip,
'public_ipv4': ip,
'ansible_ssh_host': ip,
})
yield host
## QUERY TYPES
def query_host(hosts, target):
@@ -727,6 +761,13 @@ def main():
parser.exit()
hosts = iterhosts(iterresources(tfstates(args.root)))
# Perform a second pass on the file to pick up floating_ip entries to update the ip address of referenced hosts
ips = dict(iterips(iterresources(tfstates(args.root))))
if ips:
hosts = iter_host_ips(hosts, ips)
if args.list:
output = query_list(hosts)
if args.nometa:

View File

@@ -27,7 +27,7 @@ not _kube-node_.
There are also two special groups:
* **calico-rr** : explained for [advanced Calico networking cases](calico.md)
* **bastion** : configure a bastion host if your nodes are not directly reachable
Below is a complete inventory example:
@@ -66,10 +66,10 @@ kube-master
Group vars and overriding variables precedence
----------------------------------------------
The group variables to control main deployment options are located in the directory ``inventory/sample/group_vars``.
Optional variables are located in the `inventory/sample/group_vars/all.yml`.
Mandatory variables that are common for at least one role (or a node group) can be found in the
`inventory/sample/group_vars/k8s-cluster.yml`.
There are also role vars for docker, rkt, kubernetes preinstall and master roles.
According to the [ansible docs](http://docs.ansible.com/ansible/playbooks_variables.html#variable-precedence-where-should-i-put-a-variable),
those cannot be overridden from the group vars. In order to override, one should use
@@ -153,16 +153,16 @@ Example command to filter and apply only DNS configuration tasks and skip
everything else related to host OS configuration and downloading images of containers:
```
ansible-playbook -i inventory/sample/hosts.ini cluster.yml --tags preinstall,dnsmasq,facts --skip-tags=download,bootstrap-os
```
And this play only removes the K8s cluster DNS resolver IP from hosts' /etc/resolv.conf files:
```
ansible-playbook -i inventory/sample/hosts.ini -e dnsmasq_dns_server='' cluster.yml --tags resolvconf
```
And this prepares all container images locally (at the ansible runner node) without installing
or upgrading related stuff or trying to upload containers to K8s cluster nodes:
```
ansible-playbook -i inventory/sample/hosts.ini cluster.yml \
-e download_run_once=true -e download_localhost=true \
--tags download --skip-tags upload,upgrade
```

View File

@@ -7,7 +7,7 @@ Note: Flannel is the only plugin that has currently been tested with atomic
### Vagrant
* For bootstrapping with Vagrant, use box centos/atomic-host or fedora/atomic-host
* Update VagrantFile variable `local_release_dir` to `/var/vagrant/temp`.
* Update `vm_memory = 2048` and `vm_cpus = 2`
* Networking on vagrant hosts has to be brought up manually once they are booted.
@@ -17,6 +17,7 @@ Note: Flannel is the only plugin that has currently been tested with atomic
sudo /sbin/ifup enp0s8
```
* For users of vagrant-libvirt download centos/atomic-host qcow2 format from https://wiki.centos.org/SpecialInterestGroup/Atomic/Download/
* For users of vagrant-libvirt download fedora/atomic-host qcow2 format from https://getfedora.org/en/atomic/download/
Then you can proceed to [cluster deployment](#run-deployment)

View File

@@ -5,7 +5,7 @@ To deploy kubespray on [AWS](https://aws.amazon.com/) uncomment the `cloud_provi
Prior to creating your instances, you **must** ensure that you have created IAM roles and policies for both "kubernetes-master" and "kubernetes-node". You can find the IAM policies [here](https://github.com/kubernetes-incubator/kubespray/tree/master/contrib/aws_iam/). See the [IAM Documentation](https://aws.amazon.com/documentation/iam/) if guidance is needed on how to set these up. When you bring your instances online, associate them with the respective IAM role. Nodes that are only to be used for Etcd do not need a role.
You would also need to tag the resources in your VPC accordingly for the aws provider to utilize them. Tag the subnets and all instances that kubernetes will be run on with key `kubernetes.io/cluster/$cluster_name` (`$cluster_name` must be a unique identifier for the cluster). Tag the subnets that must be targeted by external ELBs with the key `kubernetes.io/role/elb` and internal ELBs with the key `kubernetes.io/role/internal-elb`.
Make sure your VPC has both DNS Hostnames support and Private DNS enabled.

View File

@@ -3,20 +3,11 @@ Cloud providers
#### Provisioning
You can deploy instances in your cloud environment in several different ways. Examples include Terraform, Ansible (ec2 and gce modules), and manual creation.
#### Deploy kubernetes
With the ansible-playbook command:
```
ansible-playbook -u smana -e ansible_ssh_user=admin -e cloud_provider=[aws|gce] -b --become-user=root -i inventory/single.cfg cluster.yml
```

74
docs/contiv.md Normal file
View File

@@ -0,0 +1,74 @@
Contiv
======
Here is the [Contiv documentation](http://contiv.github.io/documents/).
## Administrate Contiv
There are two ways to manage Contiv:
* a web UI managed by the api proxy service
* a CLI named `netctl`
### Interfaces
#### The Web Interface
This UI is hosted on all kubernetes master nodes. The service is available at `https://<one of your master nodes>:10000`.
You can configure the api proxy by overriding the following variables:
```yaml
contiv_enable_api_proxy: true
contiv_api_proxy_port: 10000
contiv_generate_certificate: true
```
The default credentials to log in are: admin/admin.
#### The Command Line Interface
The second way to modify the Contiv configuration is to use the CLI. To do this, you have to connect to the server and export an environment variable to tell netctl how to connect to the cluster:
```bash
export NETMASTER=http://127.0.0.1:9999
```
The port can be changed by overriding the following variable:
```yaml
contiv_netmaster_port: 9999
```
The CLI doesn't use the authentication process needed by the web interface.
### Network configuration
The default configuration uses VXLAN to create an overlay. Two networks are created by default:
* `contivh1`: an infrastructure network. It allows nodes to access the pod IPs. It is mandatory in a Kubernetes environment that uses VXLAN.
* `default-net` : the default network that hosts pods.
You can change the default network configuration by overriding the `contiv_networks` variable.
The default forward mode is set to routing:
```yaml
contiv_fwd_mode: routing
```
The following is an example of how you can use VLAN instead of VXLAN:
```yaml
contiv_fwd_mode: bridge
contiv_vlan_interface: eth0
contiv_networks:
- name: default-net
subnet: "{{ kube_pods_subnet }}"
gateway: "{{ kube_pods_subnet|ipaddr('net')|ipaddr(1)|ipaddr('address') }}"
encap: vlan
pkt_tag: 10
```

View File

@@ -1,13 +1,7 @@
CoreOS bootstrap
================
Example with Ansible:
Before running the cluster playbook you must satisfy the following requirements:

View File

@@ -26,13 +26,13 @@ Debian Jessie installation Notes:
```
sudo add-apt-repository ppa:ansible/ansible
sudo apt-get update
sudo apt-get install ansible
```
- Install Jinja2 and Python-Netaddr
```sudo apt-get install python-jinja2=2.8-1~bpo8+1 python-netaddr```
Now you can continue with [Preparing your deployment](getting-started.md#starting-custom-deployment)

View File

@@ -50,7 +50,7 @@ DNS modes supported by Kubespray
You can modify how Kubespray sets up DNS for your cluster with the variables ``dns_mode`` and ``resolvconf_mode``.
## dns_mode
``dns_mode`` configures how Kubespray will set up cluster DNS. The following modes are available:
#### dnsmasq_kubedns (default)
This installs an additional dnsmasq DaemonSet which gives more flexibility and lifts some
@@ -62,6 +62,20 @@ other queries are forwarded to the nameservers found in ``upstream_dns_servers``
This does not install the dnsmasq DaemonSet and instructs kubelet to directly use kubedns/skydns for
all queries.
#### coredns
This does not install the dnsmasq DaemonSet and instructs kubelet to directly use CoreDNS for
all queries.
#### coredns_dual
This does not install the dnsmasq DaemonSet and instructs kubelet to directly use CoreDNS for
all queries. It will also deploy a secondary CoreDNS stack.
#### manual
This does not install dnsmasq or kubedns, but allows you to specify
`manual_dns_server`, which will be configured on nodes for handling Pod DNS.
Use this method if you plan to install your own DNS server in the cluster after
initial deployment.
#### none
This does not install any of dnsmasq and kubedns/skydns. This basically disables cluster DNS completely and
leaves you with a non-functional cluster.
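For example, to pick one of these modes you would override `dns_mode` in your group vars; a minimal sketch (the server IP is purely illustrative) might be:
```yaml
# group_vars/k8s-cluster.yml -- illustrative values only
dns_mode: coredns
# or, if you plan to run your own DNS server in the cluster:
# dns_mode: manual
# manual_dns_server: 10.233.0.3
```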

View File

@@ -1,29 +1,12 @@
Getting started
===============
Building your own inventory
---------------------------
Ansible inventory can be stored in 3 formats: YAML, JSON, or INI-like. There is
an example inventory located
[here](https://github.com/kubernetes-incubator/kubespray/blob/master/inventory/sample/hosts.ini).
You can use an
[inventory generator](https://github.com/kubernetes-incubator/kubespray/blob/master/contrib/inventory_builder/inventory.py)
@@ -35,11 +18,9 @@ certain threshold. Run `python3 contrib/inventory_builder/inventory.py help` hel
Example inventory generator usage:
```
cp -r inventory/sample inventory/mycluster
declare -a IPS=(10.10.1.3 10.10.1.4 10.10.1.5)
CONFIG_FILE=inventory/mycluster/hosts.ini python3 contrib/inventory_builder/inventory.py ${IPS[@]}
```
Starting custom deployment
--------------------------
@@ -47,12 +28,10 @@ Starting custom deployment
Once you have an inventory, you may want to customize deployment data vars
and start the deployment:
**IMPORTANT**: Edit `inventory/mycluster/group_vars/*.yaml` to override data vars:
```
ansible-playbook -i inventory/mycluster/hosts.ini cluster.yml -b -v \
--private-key=~/.ssh/private_key
```
See more details in the [ansible guide](ansible.md).
@@ -61,31 +40,43 @@ Adding nodes
You may want to add **worker** nodes to your existing cluster. This can be done by re-running the `cluster.yml` playbook, or you can target the bare minimum needed to get kubelet installed on the worker and talking to your masters. This is especially helpful when doing something like autoscaling your clusters.
- Add the new worker node to your inventory under kube-node (or utilize a [dynamic inventory](https://docs.ansible.com/ansible/intro_dynamic_inventory.html)).
- Run the ansible-playbook command, substituting `scale.yml` for `cluster.yml`:
```
ansible-playbook -i inventory/mycluster/hosts.ini scale.yml -b -v \
--private-key=~/.ssh/private_key
```
Remove nodes
------------
You may want to remove **worker** nodes from your existing cluster. This can be done by re-running the `remove-node.yml` playbook. First, the nodes will be drained; then some Kubernetes services are stopped and certificates deleted; finally, the kubectl command is executed to delete these nodes. This can be combined with the add-node function, which is generally helpful when doing something like autoscaling your clusters. Of course, if a node is not working, you can remove it and install it again.
- Add worker nodes to the list under kube-node if you want to delete them (or utilize a [dynamic inventory](https://docs.ansible.com/ansible/intro_dynamic_inventory.html)).
- Run the ansible-playbook command, substituting `remove-node.yml`:
```
ansible-playbook -i inventory/mycluster/hosts.ini remove-node.yml -b -v \
--private-key=~/.ssh/private_key
```
Connecting to Kubernetes
------------------------
By default, Kubespray configures kube-master hosts with insecure access to
kube-apiserver via port 8080. A kubeconfig file is not necessary in this case,
because kubectl will use <http://localhost:8080> to connect. The kubeconfig files
generated will point to localhost (on kube-masters) and kube-node hosts will
connect either to a localhost nginx proxy or to a loadbalancer if configured.
More details on this process are in the [HA guide](ha-mode.md).
Kubespray permits connecting to the cluster remotely on any IP of any
kube-master host on port 6443 by default. However, this requires
authentication. One could generate a kubeconfig based on one of the installed
kube-master hosts (needs improvement) or connect with a username and password.
By default, a user with admin rights is created, named `kube`.
The password can be viewed after deployment by looking at the file
`PATH_TO_KUBESPRAY/credentials/kube_user.creds`. This contains a randomly generated
password. If you wish to set your own password, just precreate/modify this
file yourself.
For more information on kubeconfig and accessing a Kubernetes cluster, refer to
the Kubernetes [documentation](https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/).
@@ -93,29 +84,34 @@ the Kubernetes [documentation](https://kubernetes.io/docs/tasks/access-applicati
Accessing Kubernetes Dashboard
------------------------------
As of kubernetes-dashboard v1.7.x:
- New login options that use apiserver auth proxying of token/basic/kubeconfig by default
- Requires RBAC in authorization_modes
- Only serves over https
- No longer available at <https://first_master:6443/ui> until apiserver is updated with the https proxy URL
If the variable `dashboard_enabled` is set (default is true), then you can access the Kubernetes Dashboard at the following URL; you will be prompted for credentials:
<https://first_master:6443/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/#!/login>
Or you can run 'kubectl proxy' from your local machine to access the dashboard in your browser from:
<http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/#!/login>
It is recommended to access the dashboard from behind a gateway (like an Ingress Controller) that enforces an authentication token. Details and other access options here: <https://github.com/kubernetes/dashboard/wiki/Accessing-Dashboard---1.7.X-and-above>
Accessing Kubernetes API
------------------------
The main client of Kubernetes is `kubectl`. It is installed on each kube-master
host and can optionally be configured on your ansible host by setting
`kubectl_localhost: true` and `kubeconfig_localhost: true` in the configuration:
- If `kubectl_localhost` is enabled, `kubectl` will be downloaded to `/usr/local/bin/` and set up with bash completion. A helper script `inventory/mycluster/artifacts/kubectl.sh` is also created for use with the `admin.conf` below.
- If `kubeconfig_localhost` is enabled, `admin.conf` will appear in the `inventory/mycluster/artifacts/` directory after deployment.
You can see a list of nodes by running the following commands:
cd inventory/mycluster/artifacts
./kubectl.sh get nodes
If desired, copy admin.conf to ~/.kube/config.
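For example, a minimal group vars override enabling both artifacts might look like this (a sketch; the inventory path follows the `inventory/mycluster` layout used above):
```yaml
# inventory/mycluster/group_vars/k8s-cluster.yml
kubectl_localhost: true
kubeconfig_localhost: true
```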

View File

@@ -27,19 +27,21 @@ non-master Kubernetes node. This is referred to as localhost loadbalancing. It
is less efficient than a dedicated load balancer because it creates extra
health checks on the Kubernetes apiserver, but is more practical for scenarios
where an external LB or virtual IP management is inconvenient. This option is
configured by the variable `loadbalancer_apiserver_localhost` (defaults to
`True`, or `False` if there is an external `loadbalancer_apiserver` defined).
You may also define the port the local internal loadbalancer uses by changing
`nginx_kube_apiserver_port`. This defaults to the value of
`kube_apiserver_port`. It is also important to note that Kubespray will only
configure kubelet and kube-proxy on non-master nodes to use the local internal
loadbalancer.
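For instance, keeping the localhost loadbalancer but moving it off the default port could be expressed as follows (a sketch; the port value is illustrative):
```yaml
loadbalancer_apiserver_localhost: true
# local nginx proxy port; defaults to kube_apiserver_port
nginx_kube_apiserver_port: 8443
```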
If you choose to NOT use the local internal loadbalancer, you will need to
configure your own loadbalancer to achieve HA. Note that deploying a
loadbalancer is up to a user and is not covered by ansible roles in Kubespray.
By default, it only configures a non-HA endpoint, which points to the
`access_ip` or IP address of the first server node in the `kube-master` group.
It can also configure clients to use endpoints for a given loadbalancer type.
The following diagram shows how traffic to the apiserver is directed.
![Image](figures/loadbalancer_localhost.png?raw=true)
@@ -66,40 +68,72 @@ listen kubernetes-apiserver-https
balance roundrobin
```
Note: That's an example config managed elsewhere outside of Kubespray.
And the corresponding example global vars for such a "cluster-aware"
external LB with the cluster API access modes configured in Kubespray:
```
apiserver_loadbalancer_domain_name: "my-apiserver-lb.example.com"
loadbalancer_apiserver:
address: <VIP>
port: 8383
```
Note: The default kubernetes apiserver configuration binds to all interfaces,
so you will need to use a different port for the VIP from the one the API is
listening on, or set `kube_apiserver_bind_address` so that the API only
listens on a specific interface (to avoid a conflict with haproxy binding the
port on the VIP address).
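As an illustration of that note, binding the apiserver to the node's own address so HAProxy can take the VIP might look like the following sketch (all addresses are placeholders):
```yaml
# apiserver listens only on the node's primary interface,
# leaving the VIP free for haproxy to bind the same port
kube_apiserver_bind_address: 192.0.2.11
loadbalancer_apiserver:
  address: 192.0.2.100  # the VIP managed by haproxy/keepalived
  port: 6443
```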
This domain name, or default "lb-apiserver.kubernetes.local", will be inserted
into the `/etc/hosts` file of all servers in the `k8s-cluster` group and wired
into the generated self-signed TLS/SSL certificates as well. Note that
the HAProxy service should also be HA and requires VIP management, which
is out of scope of this doc.
There is a special case for an internal and an externally configured (not with
Kubespray) LB used simultaneously. Keep in mind that the cluster is not aware
of such an external LB and you need not specify any configuration variables
for it.
Note: TLS/SSL termination for externally accessed API endpoints will **not**
be covered by Kubespray for that case. Make sure your external LB provides it.
Alternatively, you may specify externally load balanced VIPs in the
`supplementary_addresses_in_ssl_keys` list. Then kubespray will add them into
the generated cluster certificates as well.
Aside from that specific case, `loadbalancer_apiserver` is considered mutually
exclusive with `loadbalancer_apiserver_localhost`.
Access API endpoints are evaluated automagically, as the following:
| Endpoint type | kube-master | non-master | external |
|------------------------------|----------------|---------------------|---------------------|
| Local LB (default) | https://bip:sp | https://lc:nsp | https://m[0].aip:sp |
| Local LB + Unmanaged here LB | https://bip:sp | https://lc:nsp | https://ext |
| External LB, no internal | https://bip:sp | https://lb:lp | https://lb:lp |
| No ext/int LB | https://bip:sp | https://m[0].aip:sp | https://m[0].aip:sp |
Where:
* `m[0]` - the first node in the `kube-master` group;
* `lb` - LB FQDN, `apiserver_loadbalancer_domain_name`;
* `ext` - Externally load balanced VIP:port and FQDN, not managed by Kubespray;
* `lc` - localhost;
* `bip` - a custom bind IP or localhost for the default bind IP '0.0.0.0';
* `nsp` - nginx secure port, `nginx_kube_apiserver_port`, defers to `sp`;
* `sp` - secure port, `kube_apiserver_port`;
* `lp` - LB port, `loadbalancer_apiserver.port`, defers to the secure port;
* `ip` - the node IP, defers to the ansible IP;
* `aip` - `access_ip`, defers to the ip.
The second and third columns represent internal cluster access modes. The last
column illustrates an example URI to access the cluster APIs externally.
Kubespray has nothing to do with it; this is informational only.
As you can see, the masters' internal API endpoints are always
contacted via the local bind IP, which is `https://bip:sp`.
**Note** that for some cases, like healthchecks of applications deployed by
Kubespray, the masters' APIs are accessed via the insecure endpoint, which
consists of the local `kube_apiserver_insecure_bind_address` and
`kube_apiserver_insecure_port`.
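Pulling the external-LB settings together, a hedged group vars sketch (addresses are documentation placeholders, not recommendations) could be:
```yaml
apiserver_loadbalancer_domain_name: "my-apiserver-lb.example.com"
loadbalancer_apiserver:
  address: 192.0.2.10
  port: 8383
# extra VIPs/FQDNs to wire into the generated certificates
supplementary_addresses_in_ssl_keys:
  - 192.0.2.10
```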

View File

@@ -84,7 +84,7 @@ Other members of your team should use ```git submodule sync```, ```git submodule
# Contributing
If you made useful changes or fixed a bug in the existing kubespray repo, use this flow for PRs to the original kubespray repo.
0. Sign the [CNCF CLA](https://git.k8s.io/community/CLA.md).
1. Change working directory to the git submodule directory (3d/kubespray).

View File

@@ -3,8 +3,7 @@ Large deployments of K8s
For large scale deployments, consider the following configuration changes:
* Tune [ansible settings](http://docs.ansible.com/ansible/intro_configuration.html)
for `forks` and `timeout` vars to fit large numbers of nodes being deployed.
* Override containers' `foo_image_repo` vars to point to an intranet registry.
@@ -34,6 +33,9 @@ For a large scaled deployments, consider the following configuration changes:
``kube_controller_pod_eviction_timeout`` for better Kubernetes reliability.
Check out [Kubernetes Reliability](kubernetes-reliability.md)
* Tune network prefix sizes. Those are ``kube_network_node_prefix``,
``kube_service_addresses`` and ``kube_pods_subnet``.
* Add calico-rr nodes if you are deploying with Calico or Canal. Nodes recover
from host/network interruption much quicker with calico-rr. Note that
calico-rr role must be on a host without kube-master or kube-node role (but
@@ -44,5 +46,8 @@ For a large scaled deployments, consider the following configuration changes:
section of the Getting started guide for tips on creating a large scale
Ansible inventory.
* Override ``etcd_events_cluster_setup: true`` to store events in a separate
dedicated etcd instance.
For example, when deploying 200 nodes, you may want to run ansible with
``--forks=50``, ``--timeout=600`` and define the ``retry_stagger: 60``.
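Collected into group vars, that tuning might look like the following sketch; every value is a starting point to adjust for your node count, not a recommendation:
```yaml
retry_stagger: 60
etcd_events_cluster_setup: true
# widen the pod/service ranges to fit the cluster size
kube_network_node_prefix: 24
kube_service_addresses: 10.233.0.0/18
kube_pods_subnet: 10.233.64.0/18
```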

19
docs/opensuse.md Normal file
View File

@@ -0,0 +1,19 @@
openSUSE Leap 42.3 and Tumbleweed
===============
openSUSE Leap installation Notes:
- Install Ansible
```
sudo zypper ref
sudo zypper -n install ansible
```
- Install Jinja2 and Python-Netaddr
```sudo zypper -n install python-Jinja2 python-netaddr```
Now you can continue with [Preparing your deployment](getting-started.md#starting-custom-deployment)

View File

@@ -24,13 +24,13 @@ If you wanted to upgrade just kube_version from v1.4.3 to v1.4.6, you could
deploy the following way:
```
ansible-playbook cluster.yml -i inventory/sample/hosts.ini -e kube_version=v1.4.3
```
And then repeat with v1.4.6 as kube_version:
```
ansible-playbook cluster.yml -i inventory/sample/hosts.ini -e kube_version=v1.4.6
```
#### Graceful upgrade
@@ -44,7 +44,7 @@ deployed.
```
git fetch origin
git checkout origin/master
ansible-playbook upgrade-cluster.yml -b -i inventory/sample/hosts.ini -e kube_version=v1.6.0
```
After a successful upgrade, the Server Version should be updated:

View File

@@ -1,7 +1,7 @@
Vagrant Install
=================
Assuming you have Vagrant (2.0+) installed with virtualbox (it may work
with vmware, but is untested) you should be able to launch a 3 node
Kubernetes cluster by simply running `$ vagrant up`.<br />

View File

@@ -63,7 +63,8 @@ following default cluster parameters:
bits in kube_pods_subnet dictates how many kube-nodes can be in cluster.
* *dns_setup* - Enables dnsmasq
* *dnsmasq_dns_server* - Cluster IP for dnsmasq (default is 10.233.0.2)
* *skydns_server* - Cluster IP for DNS (default is 10.233.0.3)
* *skydns_server_secondary* - Secondary Cluster IP for CoreDNS used with coredns_dual deployment (default is 10.233.0.4)
* *cloud_provider* - Enable extra Kubelet option if operating inside GCE or
OpenStack (default is unset)
* *kube_hostpath_dynamic_provisioner* - Required for use of PetSets type in
@@ -105,9 +106,9 @@ Stack](https://github.com/kubernetes-incubator/kubespray/blob/master/docs/dns-st
* *http_proxy/https_proxy/no_proxy* - Proxy variables for deploying behind a * *http_proxy/https_proxy/no_proxy* - Proxy variables for deploying behind a
proxy. Note that no_proxy defaults to all internal cluster IPs and hostnames proxy. Note that no_proxy defaults to all internal cluster IPs and hostnames
that correspond to each node. that correspond to each node.
* *kubelet_deployment_type* - Controls which platform to deploy kubelet on. * *kubelet_deployment_type* - Controls which platform to deploy kubelet on.
Available options are ``host``, ``rkt``, and ``docker``. ``docker`` mode Available options are ``host``, ``rkt``, and ``docker``. ``docker`` mode
is unlikely to work on newer releases. Starting with Kubernetes v1.7 is unlikely to work on newer releases. Starting with Kubernetes v1.7
series, this now defaults to ``host``. Before v1.7, the default was Docker. series, this now defaults to ``host``. Before v1.7, the default was Docker.
This is because of cgroup [issues](https://github.com/kubernetes/kubernetes/issues/43704). This is because of cgroup [issues](https://github.com/kubernetes/kubernetes/issues/43704).
* *kubelet_load_modules* - For some things, kubelet needs to load kernel modules. For example, * *kubelet_load_modules* - For some things, kubelet needs to load kernel modules. For example,
@@ -117,6 +118,14 @@ Stack](https://github.com/kubernetes-incubator/kubespray/blob/master/docs/dns-st
* *kubelet_cgroup_driver* - Allows manual override of the
  cgroup-driver option for Kubelet. By default autodetection is used
  to match Docker configuration.
* *node_labels* - Labels applied to nodes via kubelet --node-labels parameter.
  For example, labels can be set in the inventory as variables or more widely in group_vars.
  *node_labels* must be defined as a dict:

```
node_labels:
  label1_name: label1_value
  label2_name: label2_value
```
##### Custom flags for Kube Components

For all kube components, custom flags can be passed in. This allows for edge cases where users need changes to the default deployment that may not be applicable to all deployments. This can be done by providing a list of flags.
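The concrete example is collapsed out of this compare view; a hedged reconstruction, assuming the `apiserver_custom_flags` variable from the list that follows:

```yml
# Illustrative only -- the flag value is an example, and the variable name is
# taken from the "possible vars" list referenced below.
apiserver_custom_flags:
  - --runtime-config=batch/v2alpha1
```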
@@ -136,6 +145,6 @@ The possible vars are:
By default, a user with admin rights is created, named `kube`.
The password can be viewed after deployment by looking at the file
`PATH_TO_KUBESPRAY/credentials/kube_user.creds`. This contains a randomly generated
password. If you wish to set your own password, just precreate/modify this
file yourself or change the `kube_api_pwd` var.
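A minimal sketch of the override route; the value is illustrative and `kube_api_pwd` is the variable named above:

```yml
# In your inventory group_vars (illustrative value -- choose your own secret)
kube_api_pwd: "MyOwnStrongPassword123"
```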

View File

@@ -24,7 +24,7 @@ hardcoded to only create a Vault role for Etcd.
This step is where the long-term Vault cluster is started and configured. Its
first task is to stop any temporary instances of Vault, to free the port for
the long-term cluster. At the end of this task, the entire Vault cluster should be up
and ready to go.

Keys to the Kingdom
-------------------

View File

@@ -16,7 +16,7 @@ After this step you should have:
## Kubespray configuration

First you must define the cloud provider in `inventory/sample/group_vars/all.yml` and set it to `vsphere`.

```yml
cloud_provider: vsphere
```
@@ -34,10 +34,12 @@ Then, in the same file, you need to declare your vCenter credential following th
| vsphere_datastore | TRUE | string | | | Datastore name to use |
| vsphere_working_dir | TRUE | string | | | Working directory from the "VMs and Templates" view in vCenter where the VMs are placed |
| vsphere_scsi_controller_type | TRUE | string | buslogic, pvscsi, parallel | pvscsi | SCSI controller name. Commonly "pvscsi". |
| vsphere_vm_uuid | FALSE | string | | | VM Instance UUID of the virtual machine that hosts the K8s master. Can be retrieved from the instanceUuid property in VmConfigInfo, as vc.uuid in the VMX file, or from `/sys/class/dmi/id/product_serial` (Optional, only used for Kubernetes <= 1.9.2) |
| vsphere_public_network | FALSE | string | | Blank | Name of the network the VMs are joined to |
| vsphere_resource_pool | FALSE | string | | Blank | Name of the Resource pool where the VMs are located (Optional, only used for Kubernetes >= 1.9.2) |

Example configuration:

```yml
vsphere_vcenter_ip: "myvcenter.domain.com"
vsphere_vcenter_port: 443
@@ -48,6 +50,7 @@ vsphere_datacenter: "DATACENTER_name"
vsphere_datastore: "DATASTORE_name"
vsphere_working_dir: "Docker_hosts"
vsphere_scsi_controller_type: "pvscsi"
vsphere_resource_pool: "K8s-Pool"
```

## Deployment
@@ -55,7 +58,7 @@ vsphere_scsi_controller_type: "pvscsi"
Once the configuration is set, you can execute the playbook again to apply the new configuration:

```
cd kubespray
ansible-playbook -i inventory/sample/hosts.ini -b -v cluster.yml
```

You'll find some useful examples [here](https://github.com/kubernetes/kubernetes/tree/master/examples/volumes/vsphere) to test your configuration.
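For instance, one of the linked examples boils down to a StorageClass along these lines; the class name and diskformat value are illustrative, while `kubernetes.io/vsphere-volume` is the standard in-tree provisioner:

```yml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: fast                         # illustrative name
provisioner: kubernetes.io/vsphere-volume
parameters:
  diskformat: zeroedthick            # thin | zeroedthick | eagerzeroedthick
```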

View File

@@ -12,7 +12,7 @@ Weave encryption is supported for all communication
* To use Weave encryption, specify a strong password (if no password is set, there is no encryption)

```
# In file ./inventory/sample/group_vars/k8s-cluster.yml
weave_password: EnterPasswordHere
```

@@ -77,21 +77,21 @@ The seed mode also allows multi-clouds and hybrid on-premise/cloud clusters depl
* Switch from consensus mode to seed mode

```
# In file ./inventory/sample/group_vars/k8s-cluster.yml
weave_mode_seed: true
```

These two variables are only used when `weave_mode_seed` is set to `true` (**/!\ do not manually change these values**)

```
# In file ./inventory/sample/group_vars/k8s-cluster.yml
weave_seed: uninitialized
weave_peers: uninitialized
```

The first variable, `weave_seed`, contains the initial nodes of the weave network.
The second variable, `weave_peers`, saves the IPs of all nodes joined to the weave network.
These two variables are used to connect a new node to the weave network: the new node needs to know the first nodes (the seed) and the list of IPs of all nodes.

View File

@@ -0,0 +1,54 @@
---
- hosts: localhost
  tasks:
    - name: CephFS Provisioner | Install pip packages
      pip:
        name: "{{ item.name }}"
        version: "{{ item.version }}"
        state: "{{ item.state }}"
      with_items:
        - { state: "present", name: "docker", version: "2.7.0" }
        - { state: "present", name: "docker-compose", version: "1.18.0" }

    - name: CephFS Provisioner | Check Go version
      shell: |
        go version
      ignore_errors: yes
      register: go_version_result

    - name: CephFS Provisioner | Install Go 1.9
      shell: |
        add-apt-repository -y ppa:gophers/archive
        apt-get update
        apt-get install -y golang-1.9
        ln -fs /usr/lib/go-1.9/bin/* /usr/local/bin/
      when: 'go_version_result.rc != 0 or "go version go1.9" not in go_version_result.stdout'

    - name: CephFS Provisioner | Check if image exists
      shell: |
        docker image list | grep 'cephfs-provisioner'
      ignore_errors: yes
      register: check_image_result

    - block:
        - name: CephFS Provisioner | Clone repo
          git:
            repo: https://github.com/kubernetes-incubator/external-storage.git
            # dest must be the checkout itself so the build step below can cd into it
            dest: "~/go/src/github.com/kubernetes-incubator/external-storage"
            version: 92295a30
            clone: no
            update: yes

        - name: CephFS Provisioner | Build image
          shell: |
            cd ~/go/src/github.com/kubernetes-incubator/external-storage
            REGISTRY=quay.io/kubespray/ VERSION=92295a30 make ceph/cephfs

        - name: CephFS Provisioner | Push image
          docker_image:
            name: quay.io/kubespray/cephfs-provisioner:92295a30
            push: yes
          retries: 10
      when: check_image_result.rc != 0
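For context, a hedged sketch of how the image pushed above would be consumed; the variable names follow kubespray's usual `*_image_repo`/`*_image_tag` download-role convention and are an assumption here:

```yml
# group_vars override (assumed variable names, mirroring other *_image_repo vars)
cephfs_provisioner_image_repo: "quay.io/kubespray/cephfs-provisioner"
cephfs_provisioner_image_tag: "92295a30"
```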

1
inventory/local/group_vars Symbolic link
View File

@@ -0,0 +1 @@
../sample/group_vars

View File

@@ -17,9 +17,13 @@ bin_dir: /usr/local/bin
### LOADBALANCING AND ACCESS MODES

## Enable multiaccess to configure etcd clients to access all of the etcd members directly
## as the "http://hostX:port, http://hostY:port, ..." and ignore the proxy loadbalancers.
## This may be the case if clients support and loadbalance multiple etcd servers natively.
#etcd_multiaccess: true

### ETCD: disable peer client cert authentication.
# This affects the ETCD_PEER_CLIENT_CERT_AUTH variable
#etcd_peer_client_auth: true

## External LB example config
## apiserver_loadbalancer_domain_name: "elb.some.domain"
#loadbalancer_apiserver:
@@ -38,7 +42,7 @@ bin_dir: /usr/local/bin
## for mounting persistent volumes into containers. These may not be loaded by preinstall kubernetes
## processes. For example, ceph and rbd backed volumes. Set to true to allow kubelet to load kernel
## modules.
#kubelet_load_modules: false

## Internal network total size. This is the prefix of the
## entire network. Must be unused in your environment.
@@ -56,7 +60,7 @@ bin_dir: /usr/local/bin
## There are some changes specific to the cloud providers
## for instance we need to encapsulate packets with some network plugins
## If set the possible values are either 'gce', 'aws', 'azure', 'openstack', 'vsphere', or 'external'
## When openstack is used make sure to source in the openstack credentials
## like you would do when using nova-client before starting the playbook.
#cloud_provider:
@@ -72,14 +76,20 @@ bin_dir: /usr/local/bin
#azure_subnet_name:
#azure_security_group_name:
#azure_vnet_name:
#azure_vnet_resource_group:
#azure_route_table_name:

## When OpenStack is used, Cinder version can be explicitly specified if autodetection fails (Fixed in 1.9: https://github.com/kubernetes/kubernetes/issues/50461)
#openstack_blockstorage_version: "v1/v2/auto (default)"
## When OpenStack is used, if LBaaSv2 is available you can enable it with the following 2 variables.
#openstack_lbaas_enabled: True
#openstack_lbaas_subnet_id: "Neutron subnet ID (not network ID) to create LBaaS VIP"
## To enable automatic floating ip provisioning, specify a subnet.
#openstack_lbaas_floating_network_id: "Neutron network ID (not subnet ID) to get floating IP from, disabled by default"
## Override default LBaaS behavior
#openstack_lbaas_use_octavia: False
#openstack_lbaas_method: "ROUND_ROBIN"
#openstack_lbaas_provider: "haproxy"
#openstack_lbaas_create_monitor: "yes"
#openstack_lbaas_monitor_delay: "1m"
#openstack_lbaas_monitor_timeout: "30s"
@@ -87,10 +97,6 @@ bin_dir: /usr/local/bin
## Uncomment to enable experimental kubeadm deployment mode
#kubeadm_enabled: false
#kubeadm_token_first: "{{ lookup('password', 'credentials/kubeadm_token_first length=6 chars=ascii_lowercase,digits') }}"
#kubeadm_token_second: "{{ lookup('password', 'credentials/kubeadm_token_second length=16 chars=ascii_lowercase,digits') }}"
#kubeadm_token: "{{ kubeadm_token_first }}.{{ kubeadm_token_second }}"

## Set these proxy values in order to update package manager and docker daemon to use proxies
#http_proxy: ""
#https_proxy: ""
@@ -106,7 +112,7 @@ bin_dir: /usr/local/bin
## Default packages to install within the cluster, f.e:
#kpm_packages:
# - name: kube-system/grafana

## Certificate Management
## This setting determines whether certs are generated via scripts or whether a
@@ -122,3 +128,10 @@ bin_dir: /usr/local/bin
## Set level of detail for etcd exported metrics, specify 'extensive' to include histogram metrics.
#etcd_metrics: basic

## Etcd is restricted by default to 512M on systems under 4GB RAM; 512MB is not enough for much more than testing.
## Set this if your etcd nodes have less than 4GB but you want more RAM for etcd. Set to 0 for unrestricted RAM.
#etcd_memory_limit: "512M"

# The read-only port for the Kubelet to serve on with no authentication/authorization. Uncomment to enable.
#kube_read_only_port: 10255

View File

@@ -1,15 +1,11 @@
# Kubernetes configuration dirs and system namespace.
# Those are where all the additional config stuff goes
# the kubernetes normally puts in /srv/kubernetes.
# This puts them in a sane location and namespace.
# Editing those values will almost surely break something.
kube_config_dir: /etc/kubernetes
kube_script_dir: "{{ bin_dir }}/kubernetes-scripts"
kube_manifest_dir: "{{ kube_config_dir }}/manifests"
system_namespace: kube-system

# Logging directory (sysvinit systems)
kube_log_dir: "/var/log/kubernetes"

# This is where all the cert scripts and certs will be located
kube_cert_dir: "{{ kube_config_dir }}/ssl"
@@ -20,10 +16,10 @@ kube_token_dir: "{{ kube_config_dir }}/tokens"
# This is where to save basic auth file
kube_users_dir: "{{ kube_config_dir }}/users"

kube_api_anonymous_auth: true

## Change this to use another Kubernetes version, e.g. a current beta release
kube_version: v1.9.5

# Where the binaries will be downloaded.
# Note: ensure that you've enough disk space (about 1G)
@@ -32,7 +28,7 @@ local_release_dir: "/tmp/releases"
retry_stagger: 5

# This is the group that the cert creation scripts chgrp the
# cert files to. Not really changeable...
kube_cert_group: kube-cert

# Cluster Loglevel configuration
@@ -40,7 +36,7 @@ kube_log_level: 2
# Users to create for basic auth in Kubernetes API via HTTP
# Optionally add groups for user
kube_api_pwd: "{{ lookup('password', inventory_dir + '/credentials/kube_user.creds length=15 chars=ascii_letters,digits') }}"
kube_users:
  kube:
    pass: "{{kube_api_pwd}}"
@@ -65,7 +61,7 @@ kube_users:
#  kube_oidc_groups_claim: groups

# Choose network plugin (cilium, calico, contiv, weave or flannel)
# Can also be set to 'cloud', which lets the cloud provider setup appropriate routing
kube_network_plugin: calico
@@ -86,6 +82,9 @@ weave_mode_seed: false
weave_seed: uninitialized
weave_peers: uninitialized

# Set the MTU of Weave (default 1376, Jumbo Frames: 8916)
weave_mtu: 1376

# Enable kubernetes network policies
enable_network_policy: false
@@ -106,20 +105,33 @@ kube_network_node_prefix: 24
kube_apiserver_ip: "{{ kube_service_addresses|ipaddr('net')|ipaddr(1)|ipaddr('address') }}"
kube_apiserver_port: 6443 # (https)
kube_apiserver_insecure_port: 8080 # (http)
# Set to 0 to disable insecure port - Requires RBAC in authorization_modes and kube_api_anonymous_auth: true
#kube_apiserver_insecure_port: 0 # (disabled)

# Kube-proxy proxyMode configuration.
# Can be ipvs, iptables
kube_proxy_mode: iptables

## Encrypting Secret Data at Rest (experimental)
kube_encrypt_secret_data: false

# DNS configuration.
# Kubernetes cluster name, also will be used as DNS domain
cluster_name: cluster.local
# Subdomains of DNS domain to be resolved via /etc/resolv.conf for hostnet pods
ndots: 2
# Can be dnsmasq_kubedns, kubedns, coredns, coredns_dual, manual or none
dns_mode: kubedns
# Set manual server if using a custom cluster DNS server
#manual_dns_server: 10.x.x.x
# Can be docker_dns, host_resolvconf or none
resolvconf_mode: docker_dns
# Deploy netchecker app to verify DNS resolve as an HTTP service
deploy_netchecker: false
# IP address of the kubernetes skydns service
skydns_server: "{{ kube_service_addresses|ipaddr('net')|ipaddr(3)|ipaddr('address') }}"
skydns_server_secondary: "{{ kube_service_addresses|ipaddr('net')|ipaddr(4)|ipaddr('address') }}"
dnsmasq_dns_server: "{{ kube_service_addresses|ipaddr('net')|ipaddr(2)|ipaddr('address') }}"
dns_domain: "{{ cluster_name }}"
@@ -138,11 +150,13 @@ docker_bin_dir: "/usr/bin"
etcd_deployment_type: docker
kubelet_deployment_type: host
vault_deployment_type: docker
helm_deployment_type: host

# K8s image pull policy (imagePullPolicy)
k8s_image_pull_policy: IfNotPresent

# Kubernetes dashboard
# RBAC required. see docs/getting-started.md for access details.
dashboard_enabled: true

# Monitoring apps for k8s
@@ -151,12 +165,58 @@ efk_enabled: false
# Helm deployment
helm_enabled: false

# Istio deployment
istio_enabled: false

# Registry deployment
registry_enabled: false
# registry_namespace: "{{ system_namespace }}"
# registry_storage_class: ""
# registry_disk_size: "10Gi"

# Local volume provisioner deployment
local_volume_provisioner_enabled: false
# local_volume_provisioner_namespace: "{{ system_namespace }}"
# local_volume_provisioner_base_dir: /mnt/disks
# local_volume_provisioner_mount_dir: /mnt/disks
# local_volume_provisioner_storage_class: local-storage

# CephFS provisioner deployment
cephfs_provisioner_enabled: false
# cephfs_provisioner_namespace: "{{ system_namespace }}"
# cephfs_provisioner_cluster: ceph
# cephfs_provisioner_monitors:
#   - 172.24.0.1:6789
#   - 172.24.0.2:6789
#   - 172.24.0.3:6789
# cephfs_provisioner_admin_id: admin
# cephfs_provisioner_secret: secret
# cephfs_provisioner_storage_class: cephfs

# Nginx ingress controller deployment
ingress_nginx_enabled: false
# ingress_nginx_host_network: false
# ingress_nginx_namespace: "ingress-nginx"
# ingress_nginx_insecure_port: 80
# ingress_nginx_secure_port: 443
# ingress_nginx_configmap:
#   map-hash-bucket-size: "128"
#   ssl-protocols: "SSLv2"
# ingress_nginx_configmap_tcp_services:
#   9000: "default/example-go:8080"
# ingress_nginx_configmap_udp_services:
#   53: "kube-system/kube-dns:53"

# Cert manager deployment
cert_manager_enabled: false
# cert_manager_namespace: "cert-manager"

# Add Persistent Volumes Storage Class for corresponding cloud provider (only OpenStack is supported for now)
persistent_volumes_enabled: false

# Make a copy of kubeconfig on the host that runs Ansible in {{ inventory_dir }}/artifacts
# kubeconfig_localhost: false
# Download kubectl onto the host that runs Ansible in {{ bin_dir }}
# kubectl_localhost: false

# dnsmasq
@@ -168,9 +228,14 @@ istio_enabled: false
# kubelet_cgroups_per_qos: true

# A comma separated list of levels of node allocatable enforcement to be enforced by kubelet.
# Acceptable options are 'pods', 'system-reserved', 'kube-reserved' and ''. Default is "".
# kubelet_enforce_node_allocatable: pods

## Supplementary addresses that can be added in kubernetes ssl keys.
## That can be useful, for example, to set up a keepalived virtual IP
# supplementary_addresses_in_ssl_keys: [10.0.0.1, 10.0.0.2, 10.0.0.3]

## Running on top of openstack vms with cinder enabled may lead to unschedulable pods due to the NoVolumeZoneConflict restriction in kube-scheduler.
## See https://github.com/kubernetes-incubator/kubespray/issues/2141
## Set this variable to true to get rid of this issue
volume_cross_zone_attachment: false

View File

@@ -26,6 +26,11 @@
# node5
# node6

# [kube-ingress]
# node2
# node3

# [k8s-cluster:children]
# kube-master
# kube-node
# kube-ingress

View File

@@ -18,7 +18,9 @@ options:
    required: false
    default: null
    description:
      - The path and filename of the resource(s) definition file(s).
      - To operate on several files this can accept a comma separated list of files or a list of files.
    aliases: [ 'files', 'file', 'filenames' ]
  kubectl:
    required: false
    default: null
@@ -86,6 +88,15 @@ EXAMPLES = """
- name: test nginx is present
  kube: filename=/tmp/nginx.yml

- name: test nginx and postgresql are present
  kube: files=/tmp/nginx.yml,/tmp/postgresql.yml

- name: test nginx and postgresql are present
  kube:
    files:
      - /tmp/nginx.yml
      - /tmp/postgresql.yml
"""
@@ -112,7 +123,7 @@ class KubeManager(object):
        self.all = module.params.get('all')
        self.force = module.params.get('force')
        self.name = module.params.get('name')
        self.filename = [f.strip() for f in module.params.get('filename') or []]
        self.resource = module.params.get('resource')
        self.label = module.params.get('label')
@@ -122,7 +133,7 @@ class KubeManager(object):
            rc, out, err = self.module.run_command(args)
            if rc != 0:
                self.module.fail_json(
                    msg='error running kubectl (%s) command (rc=%d), out=\'%s\', err=\'%s\'' % (' '.join(args), rc, out, err))
        except Exception as exc:
            self.module.fail_json(
                msg='error running kubectl (%s) command: %s' % (' '.join(args), str(exc)))
@@ -147,7 +158,7 @@ class KubeManager(object):
        if not self.filename:
            self.module.fail_json(msg='filename required to create')

        cmd.append('--filename=' + ','.join(self.filename))

        return self._execute(cmd)
@@ -161,7 +172,7 @@ class KubeManager(object):
        if not self.filename:
            self.module.fail_json(msg='filename required to reload')

        cmd.append('--filename=' + ','.join(self.filename))

        return self._execute(cmd)
@@ -173,7 +184,7 @@ class KubeManager(object):
        cmd = ['delete']

        if self.filename:
            cmd.append('--filename=' + ','.join(self.filename))
        else:
            if not self.resource:
                self.module.fail_json(msg='resource required to delete without filename')
@@ -197,27 +208,31 @@ class KubeManager(object):
    def exists(self):
        cmd = ['get']

        if self.filename:
            cmd.append('--filename=' + ','.join(self.filename))
        else:
            if not self.resource:
                self.module.fail_json(msg='resource required without filename')

            cmd.append(self.resource)

            if self.name:
                cmd.append(self.name)

        cmd.append('--no-headers')

        if self.label:
            cmd.append('--selector=' + self.label)

        if self.all:
            cmd.append('--all-namespaces')

        result = self._execute_nofail(cmd)
        if not result:
            return False
        return True

    # TODO: This is currently unused, perhaps convert to 'scale' with a replicas param?
    def stop(self):

        if not self.force and not self.exists():
@@ -226,7 +241,7 @@ class KubeManager(object):
        cmd = ['stop']

        if self.filename:
            cmd.append('--filename=' + ','.join(self.filename))
        else:
            if not self.resource:
                self.module.fail_json(msg='resource required to stop without filename')
@@ -253,7 +268,7 @@ def main():
    module = AnsibleModule(
        argument_spec=dict(
            name=dict(),
            filename=dict(type='list', aliases=['files', 'file', 'filenames']),
            namespace=dict(),
            resource=dict(),
            label=dict(),
@@ -263,7 +278,8 @@ def main():
            all=dict(default=False, type='bool'),
            log_level=dict(default=0, type='int'),
            state=dict(default='present', choices=['present', 'absent', 'latest', 'reloaded', 'stopped']),
        ),
        mutually_exclusive=[['filename', 'list']]
    )

    changed = False

29
remove-node.yml Normal file
View File

@@ -0,0 +1,29 @@
---
- hosts: all
  gather_facts: true

- hosts: etcd:k8s-cluster:vault:calico-rr
  vars_prompt:
    name: "delete_nodes_confirmation"
    prompt: "Are you sure you want to delete nodes state? Type 'yes' to delete nodes."
    default: "no"
    private: no

  pre_tasks:
    - name: check confirmation
      fail:
        msg: "Delete nodes confirmation failed"
      when: delete_nodes_confirmation != "yes"

- hosts: kube-master
  roles:
    - { role: remove-node/pre-remove, tags: pre-remove }

- hosts: kube-node
  roles:
    - { role: reset, tags: reset }

- hosts: kube-master
  roles:
    - { role: remove-node/post-remove, tags: post-remove }

View File

@@ -1,6 +1,9 @@
---
- hosts: all
  gather_facts: true

- hosts: etcd:k8s-cluster:vault:calico-rr
  vars_prompt:
    name: "reset_confirmation"
    prompt: "Are you sure you want to reset cluster state? Type 'yes' to reset your cluster."

View File

@@ -16,7 +16,5 @@ Host {{ bastion_ip }}
  ControlPersist 5m

Host {{ vars['hosts'] }}
  ProxyCommand ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -W %h:%p {{ real_user }}@{{ bastion_ip }} {% if ansible_ssh_private_key_file is defined %}-i {{ ansible_ssh_private_key_file }}{% endif %}
{% endif %}

View File

@@ -1,5 +1,6 @@
---
pip_python_coreos_modules:
  - httplib2
  - six
override_system_hostname: true

View File

@@ -1,4 +1,4 @@
#!/bin/bash
set -e

BINDIR="/opt/bin"

View File

@@ -3,6 +3,7 @@
  raw: stat /opt/bin/.bootstrapped
  register: need_bootstrap
  failed_when: false
  changed_when: false
  tags:
    - facts
@@ -51,4 +52,4 @@
- name: Install required python modules
  pip:
    name: "{{ item }}"
  with_items: "{{pip_python_coreos_modules}}"

View File

@@ -5,6 +5,7 @@
  raw: which "{{ item }}"
  register: need_bootstrap
  failed_when: false
  changed_when: false
  with_items:
    - python
    - pip
@@ -16,7 +17,7 @@
    apt-get update && \
    DEBIAN_FRONTEND=noninteractive apt-get install -y python-minimal python-pip dbus
  when:
    need_bootstrap.results | map(attribute='rc') | sort | last | bool

- set_fact:
    ansible_python_interpreter: "/usr/bin/python"

View File

@@ -0,0 +1,7 @@
---
- name: Install required packages (SUSE)
  package:
    name: "{{ item }}"
    state: present
  with_items:
    - python-cryptography

View File

@@ -5,6 +5,7 @@
  raw: which "{{ item }}"
  register: need_bootstrap
  failed_when: false
  changed_when: false
  with_items:
    - python
    - pip

View File

@@ -1,17 +1,20 @@
---
- import_tasks: bootstrap-ubuntu.yml
  when: bootstrap_os == "ubuntu"

- import_tasks: bootstrap-debian.yml
  when: bootstrap_os == "debian"

- import_tasks: bootstrap-coreos.yml
  when: bootstrap_os == "coreos"

- import_tasks: bootstrap-centos.yml
  when: bootstrap_os == "centos"

- import_tasks: bootstrap-opensuse.yml
  when: bootstrap_os == "opensuse"

- import_tasks: setup-pipelining.yml

- name: check if atomic host
  stat:
@@ -26,18 +29,25 @@
    gather_subset: '!all'
    filter: ansible_*

- name: Assign inventory name to unconfigured hostnames (non-CoreOS and Tumbleweed)
  hostname:
    name: "{{inventory_hostname}}"
  when:
    - override_system_hostname
    - ansible_distribution not in ['openSUSE Tumbleweed']
    - ansible_os_family not in ['CoreOS', 'Container Linux by CoreOS']

- name: Assign inventory name to unconfigured hostnames (CoreOS and Tumbleweed only)
  command: "hostnamectl set-hostname {{inventory_hostname}}"
  register: hostname_changed
  when:
    - ansible_hostname == 'localhost'
    - ansible_distribution in ['openSUSE Tumbleweed'] or ansible_os_family in ['CoreOS', 'Container Linux by CoreOS']
    - override_system_hostname

- name: Update hostname fact (CoreOS and Tumbleweed only)
  setup:
    gather_subset: '!all'
    filter: ansible_hostname
  when:
    - hostname_changed.changed

View File

@@ -91,7 +91,7 @@
- name: Start Resources
  kube:
    name: "{{item.item.name}}"
    namespace: "kube-system"
    kubectl: "{{bin_dir}}/kubectl"
    resource: "{{item.item.type}}"
    filename: "{{kube_config_dir}}/{{item.item.file}}"

View File

@@ -22,7 +22,7 @@ server={{ srv }}
{% endfor %}
{% elif resolvconf_mode == 'host_resolvconf' %}
{# The default resolver is only needed when the hosts resolv.conf was modified by us. If it was not modified, we can rely on dnsmasq to reuse the systems resolv.conf #}
server={{ cloud_resolver }}
{% endif %}

{% if kube_log_level == '4' %}

View File

@@ -39,7 +39,7 @@ spec:
          operator: Exists
      containers:
        - name: autoscaler
          image: "{{ dnsmasqautoscaler_image_repo }}:{{ dnsmasqautoscaler_image_tag }}"
          resources:
            requests:
              cpu: "20m"

View File

@@ -3,11 +3,11 @@ kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: dnsmasq
  namespace: "kube-system"
subjects:
  - kind: ServiceAccount
    name: dnsmasq
    namespace: "kube-system"
roleRef:
  kind: ClusterRole
  name: cluster-admin

View File

@@ -3,7 +3,7 @@ apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: dnsmasq
  namespace: "kube-system"
  labels:
    k8s-app: dnsmasq
    kubernetes.io/cluster-service: "true"

View File

@@ -3,6 +3,6 @@ apiVersion: v1
kind: ServiceAccount
metadata:
  name: dnsmasq
  namespace: "kube-system"
  labels:
    kubernetes.io/cluster-service: "true"

View File

@@ -6,7 +6,7 @@ metadata:
    kubernetes.io/cluster-service: 'true'
    k8s-app: dnsmasq
  name: dnsmasq
  namespace: kube-system
spec:
  ports:
    - port: 53

View File

@@ -1,5 +1,6 @@
---
docker_version: '17.03'
docker_selinux_version: '17.03'

docker_package_info:
  pkgs:
@@ -10,9 +11,31 @@ docker_repo_key_info:
docker_repo_info:
  repos:

dockerproject_repo_key_info:
  repo_keys:

dockerproject_repo_info:
  repos:

docker_dns_servers_strict: yes

docker_container_storage_setup: false

# Used to override obsoletes=0
yum_conf: /etc/yum.conf
docker_yum_conf: /etc/yum_docker.conf

# CentOS/RedHat docker-ce repo
docker_rh_repo_base_url: 'https://download.docker.com/linux/centos/7/$basearch/stable'
docker_rh_repo_gpgkey: 'https://download.docker.com/linux/centos/gpg'
# Ubuntu docker-ce repo
docker_ubuntu_repo_base_url: "https://download.docker.com/linux/ubuntu"
docker_ubuntu_repo_gpgkey: 'https://download.docker.com/linux/ubuntu/gpg'
# Debian docker-ce repo
docker_debian_repo_base_url: "https://download.docker.com/linux/debian"
docker_debian_repo_gpgkey: 'https://download.docker.com/linux/debian/gpg'
# dockerproject repo
dockerproject_rh_repo_base_url: 'https://yum.dockerproject.org/repo/main/centos/7'
dockerproject_rh_repo_gpgkey: 'https://yum.dockerproject.org/gpg'
dockerproject_apt_repo_base_url: 'https://apt.dockerproject.org/repo'
dockerproject_apt_repo_gpgkey: 'https://apt.dockerproject.org/gpg'

View File

@@ -3,6 +3,9 @@ docker_container_storage_setup_version: v0.6.0
docker_container_storage_setup_profile_name: kubespray
docker_container_storage_setup_storage_driver: devicemapper
docker_container_storage_setup_container_thinpool: docker-pool
# A disk path must be defined via docker_container_storage_setup_devs;
# otherwise docker-storage-setup will be executed incorrectly.
# docker_container_storage_setup_devs: /dev/vdb
docker_container_storage_setup_data_size: 40%FREE
docker_container_storage_setup_min_data_size: 2G
docker_container_storage_setup_chunk_size: 512K

View File

@@ -23,7 +23,7 @@
  copy:
    dest: /etc/systemd/system/docker.service.d/override.conf
    content: |-
      ### This file is managed by Ansible
      [Service]
      EnvironmentFile=-/etc/sysconfig/docker-storage
@@ -31,6 +31,12 @@
    group: root
    mode: 0644

# https://docs.docker.com/engine/installation/linux/docker-ce/centos/#install-using-the-repository
- name: docker-storage-setup | install lvm2
  yum:
    name: lvm2
    state: present

- name: docker-storage-setup | install and run container-storage-setup
  become: yes
  script: install_container_storage_setup.sh {{ docker_container_storage_setup_version }} {{ docker_container_storage_setup_profile_name }}

View File

@@ -15,7 +15,15 @@
  tags:
    - facts

# https://yum.dockerproject.org/repo/main/opensuse/ contains packages for an EOL
# openSUSE version so we can't use it. The only alternative is to use the docker
# packages from the distribution repositories.
- name: Warn about Docker version on SUSE
  debug:
    msg: "SUSE distributions always install Docker from the distro repos"
  when: ansible_pkg_mgr == 'zypper'

- include_tasks: set_facts_dns.yml
  when: dns_mode != 'none' and resolvconf_mode == 'docker_dns'
  tags:
    - facts
@@ -30,27 +38,49 @@
  tags:
    - facts

- import_tasks: pre-upgrade.yml

- name: ensure docker-ce repository public key is installed
  action: "{{ docker_repo_key_info.pkg_key }}"
  args:
    id: "{{item}}"
    url: "{{docker_repo_key_info.url}}"
    state: present
  register: keyserver_task_result
  until: keyserver_task_result|succeeded
  retries: 4
  delay: "{{ retry_stagger | random + 3 }}"
  environment: "{{ proxy_env }}"
  with_items: "{{ docker_repo_key_info.repo_keys }}"
  when: not (ansible_os_family in ["CoreOS", "Container Linux by CoreOS", "RedHat", "Suse"] or is_atomic)

- name: ensure docker-ce repository is enabled
  action: "{{ docker_repo_info.pkg_repo }}"
  args:
    repo: "{{item}}"
    state: present
  with_items: "{{ docker_repo_info.repos }}"
  when: not (ansible_os_family in ["CoreOS", "Container Linux by CoreOS", "RedHat", "Suse"] or is_atomic) and (docker_repo_info.repos|length > 0)

- name: ensure docker-engine repository public key is installed
  action: "{{ dockerproject_repo_key_info.pkg_key }}"
  args:
    id: "{{item}}"
    url: "{{dockerproject_repo_key_info.url}}"
    state: present
  register: keyserver_task_result
  until: keyserver_task_result|succeeded
  retries: 4
  delay: "{{ retry_stagger | random + 3 }}"
  with_items: "{{ dockerproject_repo_key_info.repo_keys }}"
  when: not (ansible_os_family in ["CoreOS", "Container Linux by CoreOS", "RedHat", "Suse"] or is_atomic)

- name: ensure docker-engine repository is enabled
  action: "{{ dockerproject_repo_info.pkg_repo }}"
  args:
    repo: "{{item}}"
    state: present
  with_items: "{{ dockerproject_repo_info.repos }}"
  when: not (ansible_os_family in ["CoreOS", "Container Linux by CoreOS", "RedHat", "Suse"] or is_atomic) and (dockerproject_repo_info.repos|length > 0)

- name: Configure docker repository on RedHat/CentOS
  template:
@@ -58,30 +88,60 @@
    dest: "/etc/yum.repos.d/docker.repo"
  when: ansible_distribution in ["CentOS","RedHat"] and not is_atomic

- name: Copy yum.conf for editing
  copy:
    src: "{{ yum_conf }}"
    dest: "{{ docker_yum_conf }}"
    remote_src: yes
  when: ansible_distribution in ["CentOS","RedHat"] and not is_atomic

- name: Edit copy of yum.conf to set obsoletes=0
  lineinfile:
    path: "{{ docker_yum_conf }}"
    state: present
    regexp: '^obsoletes='
    line: 'obsoletes=0'
  when: ansible_distribution in ["CentOS","RedHat"] and not is_atomic

- name: ensure docker packages are installed
  action: "{{ docker_package_info.pkg_mgr }}"
  args:
    pkg: "{{item.name}}"
    force: "{{item.force|default(omit)}}"
    conf_file: "{{item.yum_conf|default(omit)}}"
    state: present
  register: docker_task_result
  until: docker_task_result|succeeded
  retries: 4
  delay: "{{ retry_stagger | random + 3 }}"
  environment: "{{ proxy_env }}"
  with_items: "{{ docker_package_info.pkgs }}"
  notify: restart docker
  when: not (ansible_os_family in ["CoreOS", "Container Linux by CoreOS"] or is_atomic) and (docker_package_info.pkgs|length > 0)

- name: ensure service is started if docker packages are already present
  service:
    name: docker
    state: started
  when: docker_task_result is not changed

- name: flush handlers so we can wait for docker to come up
  meta: flush_handlers

- name: set fact for docker_version
  command: "docker version -f '{{ '{{' }}.Client.Version{{ '}}' }}'"
  register: installed_docker_version
  changed_when: false

- name: check minimum docker version for docker_dns mode. You need at least docker version >= 1.12 for resolvconf_mode=docker_dns
  fail:
    msg: "You need at least docker version >= 1.12 for resolvconf_mode=docker_dns"
  when: >
    dns_mode != 'none' and
    resolvconf_mode == 'docker_dns' and
    installed_docker_version.stdout|version_compare('1.12', '<')

- name: Set docker systemd config
  import_tasks: systemd.yml

- name: ensure docker service is started and enabled
  service:

View File

@@ -0,0 +1,25 @@
---
- name: Ensure old versions of Docker are not installed. | Debian
  package:
    name: '{{ item }}'
    state: absent
  with_items:
    - docker
    - docker-engine
  when:
    - ansible_os_family == 'Debian'
    - (docker_versioned_pkg[docker_version | string] | search('docker-ce'))

- name: Ensure old versions of Docker are not installed. | RedHat
  package:
    name: '{{ item }}'
    state: absent
  with_items:
    - docker
    - docker-common
    - docker-engine
    - docker-selinux
  when:
    - ansible_os_family == 'RedHat'
    - (docker_versioned_pkg[docker_version | string] | search('docker-ce'))
    - not is_atomic

View File

@@ -3,10 +3,14 @@
- name: set dns server for docker
  set_fact:
    docker_dns_servers: |-
      {%- if dns_mode in ['kubedns', 'coredns'] -%}
      {{ [ skydns_server ] }}
      {%- elif dns_mode == 'coredns_dual' -%}
      {{ [ skydns_server ] + [ skydns_server_secondary ] }}
      {%- elif dns_mode == 'dnsmasq_kubedns' -%}
      {{ [ dnsmasq_dns_server ] }}
      {%- elif dns_mode == 'manual' -%}
      {{ [ manual_dns_server ] }}
      {%- endif -%}

- name: set base docker dns facts
@@ -22,7 +26,7 @@
- name: add upstream dns servers (only when dnsmasq is not used)
  set_fact:
    docker_dns_servers: "{{ docker_dns_servers + upstream_dns_servers|default([]) }}"
  when: dns_mode in ['kubedns', 'coredns', 'coredns_dual']

- name: add global searchdomains
  set_fact:
@@ -47,7 +51,7 @@
- name: add system search domains to docker options
  set_fact:
    docker_dns_search_domains: "{{ docker_dns_search_domains | union(system_search_domains.stdout.split()|default([])) | unique }}"
  when: system_search_domains.stdout != ""

- name: check number of nameservers

View File

@@ -8,12 +8,13 @@
  template:
    src: http-proxy.conf.j2
    dest: /etc/systemd/system/docker.service.d/http-proxy.conf
  notify: restart docker
  when: http_proxy is defined or https_proxy is defined

- name: get systemd version
  shell: systemctl --version | head -n 1 | cut -d " " -f 2
  register: systemd_version
  when: not is_atomic
  changed_when: false

- name: Write docker.service systemd file

View File

@@ -7,6 +7,9 @@ Wants=docker-storage-setup.service
{% elif ansible_os_family == "Debian" %}
After=network.target docker.socket
Wants=docker.socket
{% elif ansible_os_family == "Suse" %}
After=network.target containerd.socket containerd.service
Requires=containerd.socket containerd.service
{% endif %}

[Service]
@@ -18,20 +21,26 @@ Environment=GOTRACEBACK=crash
ExecReload=/bin/kill -s HUP $MAINPID
Delegate=yes
KillMode=process
ExecStart={{ docker_bin_dir }}/docker{% if installed_docker_version.stdout|version_compare('17.03', '<') %} daemon{% else %}d{% endif %} \
{% if ansible_os_family == "Suse" %}
          --containerd /run/containerd/containerd.sock --add-runtime oci=/usr/bin/docker-runc \
{% endif %}
          $DOCKER_OPTS \
          $DOCKER_STORAGE_OPTIONS \
          $DOCKER_NETWORK_OPTIONS \
          $DOCKER_DNS_OPTIONS \
          $INSECURE_REGISTRY
{% if not is_atomic and systemd_version.stdout|int >= 226 %}
TasksMax=infinity
{% endif %}
LimitNOFILE=1048576
LimitNPROC=1048576
LimitCORE=infinity
TimeoutStartSec=1min
# restart the docker process if it exits prematurely
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s

[Install]
WantedBy=multi-user.target

View File

@@ -1,7 +1,15 @@
[docker-ce]
name=Docker-CE Repository
baseurl={{ docker_rh_repo_base_url }}
enabled=1
gpgcheck=1
gpgkey={{ docker_rh_repo_gpgkey }}
{% if http_proxy is defined %}proxy={{ http_proxy }}{% endif %}

[docker-engine]
name=Docker-Engine Repository
baseurl={{ dockerproject_rh_repo_base_url }}
enabled=1
gpgcheck=1
gpgkey={{ dockerproject_rh_repo_gpgkey }}
{% if http_proxy is defined %}proxy={{ http_proxy }}{% endif %}

View File

@@ -1,14 +1,16 @@
---
docker_kernel_min_version: '3.10'

# https://download.docker.com/linux/debian/
# https://apt.dockerproject.org/repo/dists/debian-wheezy/main/filelist
docker_versioned_pkg:
  'latest': docker-ce
  '1.11': docker-engine=1.11.2-0~{{ ansible_distribution_release|lower }}
  '1.12': docker-engine=1.12.6-0~debian-{{ ansible_distribution_release|lower }}
  '1.13': docker-engine=1.13.1-0~debian-{{ ansible_distribution_release|lower }}
  '17.03': docker-ce=17.03.2~ce-0~debian-{{ ansible_distribution_release|lower }}
  'stable': docker-ce=17.03.2~ce-0~debian-{{ ansible_distribution_release|lower }}
  'edge': docker-ce=17.12.1~ce-0~debian-{{ ansible_distribution_release|lower }}

docker_package_info:
  pkg_mgr: apt
@@ -18,14 +20,28 @@ docker_package_info:
docker_repo_key_info:
  pkg_key: apt_key
  url: '{{ docker_debian_repo_gpgkey }}'
  repo_keys:
    - 9DC858229FC7DD38854AE2D88D81803C0EBFCD88

docker_repo_info:
  pkg_repo: apt_repository
  repos:
    - >
      deb {{ docker_debian_repo_base_url }}
      {{ ansible_distribution_release|lower }}
      stable

dockerproject_repo_key_info:
  pkg_key: apt_key
  url: '{{ dockerproject_apt_repo_gpgkey }}'
  repo_keys:
    - 58118E89F3A912897C070ADBF76221572C52609D

dockerproject_repo_info:
  pkg_repo: apt_repository
  repos:
    - >
      deb {{ dockerproject_apt_repo_base_url }}
      {{ ansible_distribution|lower }}-{{ ansible_distribution_release|lower }}
      main

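These version maps are indirected through `docker_package_info`, so the installing task never names a package directly. A sketch of how a consuming task might resolve them (task shape assumed for illustration, not quoted from the role):

# With docker_version: '17.03' on Debian stretch, item.name resolves to
# "docker-ce=17.03.2~ce-0~debian-stretch" and pkg_mgr selects the apt module.
- name: ensure docker packages are installed
  action: "{{ docker_package_info.pkg_mgr }}"
  args:
    name: "{{ item.name }}"
    state: present
  with_items: "{{ docker_package_info.pkgs }}"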

@@ -1,27 +0,0 @@
----
-docker_kernel_min_version: '0'
-
-# https://docs.docker.com/engine/installation/linux/fedora/#install-from-a-package
-# https://download.docker.com/linux/fedora/7/x86_64/stable/
-# the package names below are guesses;
-# docs mention `sudo dnf config-manager --enable docker-ce-edge` for edge
-docker_versioned_pkg:
-  'latest': docker
-  '1.11': docker-1:1.11.2
-  '1.12': docker-1:1.12.5
-  '1.13': docker-1.13.1
-  'stable': docker-ce
-  'edge': docker-ce-edge
-
-docker_package_info:
-  pkg_mgr: dnf
-  pkgs:
-    - name: "{{ docker_versioned_pkg[docker_version | string] }}"
-
-docker_repo_key_info:
-  pkg_key: ''
-  repo_keys: []
-
-docker_repo_info:
-  pkg_repo: ''
-  repos: []


@@ -1,23 +1,36 @@
 ---
 docker_kernel_min_version: '0'

-# https://yum.dockerproject.org/repo/main/centos/7/Packages/
+# https://docs.docker.com/engine/installation/linux/centos/#install-from-a-package
+# https://download.docker.com/linux/centos/7/x86_64/stable/Packages/
+# https://yum.dockerproject.org/repo/main/centos/7
 # or do 'yum --showduplicates list docker-engine'
 docker_versioned_pkg:
-  'latest': docker-engine
+  'latest': docker-ce
   '1.11': docker-engine-1.11.2-1.el7.centos
   '1.12': docker-engine-1.12.6-1.el7.centos
   '1.13': docker-engine-1.13.1-1.el7.centos
-  'stable': docker-engine-17.03.0.ce-1.el7.centos
-  'edge': docker-engine-17.03.0.ce-1.el7.centos
+  '17.03': docker-ce-17.03.2.ce-1.el7.centos
+  'stable': docker-ce-17.03.2.ce-1.el7.centos
+  'edge': docker-ce-17.12.1.ce-1.el7.centos
+
+docker_selinux_versioned_pkg:
+  'latest': docker-ce-selinux
+  '1.11': docker-engine-selinux-1.11.2-1.el7.centos
+  '1.12': docker-engine-selinux-1.12.6-1.el7.centos
+  '1.13': docker-engine-selinux-1.13.1-1.el7.centos
+  '17.03': docker-ce-selinux-17.03.2.ce-1.el7.centos
+  'stable': docker-ce-selinux-17.03.2.ce-1.el7.centos
+  'edge': docker-ce-selinux-17.03.2.ce-1.el7.centos

+# https://docs.docker.com/engine/installation/linux/centos/#install-from-a-package
+# https://download.docker.com/linux/centos/7/x86_64/stable/Packages/
 docker_package_info:
   pkg_mgr: yum
   pkgs:
+    - name: "{{ docker_selinux_versioned_pkg[docker_selinux_version | string] }}"
+      yum_conf: "{{ docker_yum_conf }}"
     - name: "{{ docker_versioned_pkg[docker_version | string] }}"
+      yum_conf: "{{ docker_yum_conf }}"

 docker_repo_key_info:
   pkg_key: ''

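On CentOS/RHEL the CE packages now come in pairs: the engine plus a matching `docker-ce-selinux` policy package, both installed through the yum config referenced by `docker_yum_conf`. Resolved with `docker_version: '17.03'` (and assuming `docker_selinux_version` defaults to the same key), the package list would look roughly like this:

# Illustrative resolution only; docker_yum_conf comes from role defaults
# and the path shown here is a placeholder.
docker_package_info:
  pkg_mgr: yum
  pkgs:
    - name: docker-ce-selinux-17.03.2.ce-1.el7.centos
      yum_conf: /etc/yum_docker.conf  # placeholder for {{ docker_yum_conf }}
    - name: docker-ce-17.03.2.ce-1.el7.centos
      yum_conf: /etc/yum_docker.conf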

@@ -1,12 +1,10 @@
 ---
 docker_kernel_min_version: '0'
-# versioning: docker-io itself is pinned at docker 1.5

 docker_package_info:
-  pkg_mgr: yum
+  pkg_mgr: zypper
   pkgs:
-    - name: docker-io
+    - name: docker

 docker_repo_key_info:
   pkg_key: ''


@@ -1,14 +1,15 @@
 ---
 docker_kernel_min_version: '3.10'
-# https://apt.dockerproject.org/repo/dists/ubuntu-xenial/main/filelist
+# https://download.docker.com/linux/ubuntu/
 docker_versioned_pkg:
-  'latest': docker-engine
-  '1.11': docker-engine=1.11.1-0~{{ ansible_distribution_release|lower }}
+  'latest': docker-ce
+  '1.11': docker-engine=1.11.2-0~{{ ansible_distribution_release|lower }}
   '1.12': docker-engine=1.12.6-0~ubuntu-{{ ansible_distribution_release|lower }}
   '1.13': docker-engine=1.13.1-0~ubuntu-{{ ansible_distribution_release|lower }}
-  'stable': docker-engine=17.03.0~ce-0~ubuntu-{{ ansible_distribution_release|lower }}
-  'edge': docker-engine=17.03.0~ce-0~ubuntu-{{ ansible_distribution_release|lower }}
+  '17.03': docker-ce=17.03.2~ce-0~ubuntu-{{ ansible_distribution_release|lower }}
+  'stable': docker-ce=17.03.2~ce-0~ubuntu-{{ ansible_distribution_release|lower }}
+  'edge': docker-ce=17.12.1~ce-0~ubuntu-{{ ansible_distribution_release|lower }}

 docker_package_info:
   pkg_mgr: apt
@@ -18,14 +19,28 @@ docker_package_info:

 docker_repo_key_info:
   pkg_key: apt_key
-  keyserver: hkp://p80.pool.sks-keyservers.net:80
+  url: '{{ docker_ubuntu_repo_gpgkey }}'
   repo_keys:
-    - 58118E89F3A912897C070ADBF76221572C52609D
+    - 9DC858229FC7DD38854AE2D88D81803C0EBFCD88

 docker_repo_info:
   pkg_repo: apt_repository
   repos:
     - >
-      deb https://apt.dockerproject.org/repo
+      deb {{ docker_ubuntu_repo_base_url }}
+      {{ ansible_distribution_release|lower }}
+      stable
+
+dockerproject_repo_key_info:
+  pkg_key: apt_key
+  url: '{{ dockerproject_apt_repo_gpgkey }}'
+  repo_keys:
+    - 58118E89F3A912897C070ADBF76221572C52609D
+
+dockerproject_repo_info:
+  pkg_repo: apt_repository
+  repos:
+    - >
+      deb {{ dockerproject_apt_repo_base_url }}
       {{ ansible_distribution|lower }}-{{ ansible_distribution_release|lower }}
       main


@@ -1,5 +1,5 @@
 ---
-local_release_dir: /tmp
+local_release_dir: /tmp/releases

 # Used to only evaluate vars from download role
 skip_downloads: false
@@ -24,38 +24,34 @@ download_always_pull: False
 download_delegate: "{% if download_localhost %}localhost{% else %}{{groups['kube-master'][0]}}{% endif %}"

 # Versions
-kube_version: v1.8.1
+kube_version: v1.9.5
 kubeadm_version: "{{ kube_version }}"
 etcd_version: v3.2.4
 # TODO(mattymo): Move calico versions to roles/network_plugins/calico/defaults
 # after migration to container download
-calico_version: "v2.5.0"
-calico_ctl_version: "v1.5.0"
-calico_cni_version: "v1.10.0"
-calico_policy_version: "v0.7.0"
-calico_rr_version: "v0.4.0"
-weave_version: 2.0.4
-flannel_version: "v0.8.0"
-flannel_cni_version: "v0.2.0"
+calico_version: "v2.6.8"
+calico_ctl_version: "v1.6.3"
+calico_cni_version: "v1.11.4"
+calico_policy_version: "v1.0.3"
+calico_rr_version: "v0.4.2"
+flannel_version: "v0.10.0"
+flannel_cni_version: "v0.3.0"
+istio_version: "0.2.6"
+vault_version: 0.8.1
+weave_version: 2.2.1
 pod_infra_version: 3.0
+contiv_version: 1.1.7
+cilium_version: "v1.0.0-rc8"

 # Download URLs
+istioctl_download_url: "https://storage.googleapis.com/istio-release/releases/{{ istio_version }}/istioctl/istioctl-linux"
 kubeadm_download_url: "https://storage.googleapis.com/kubernetes-release/release/{{ kubeadm_version }}/bin/linux/amd64/kubeadm"
+vault_download_url: "https://releases.hashicorp.com/vault/{{ vault_version }}/vault_{{ vault_version }}_linux_amd64.zip"

 # Checksums
+kubeadm_checksum: "93246027cc225b4fd7ec57bf1f562dbc78f2ed9f2b77a1468976c266a104cf4d"
-istio_version: "0.2.6"
-istioctl_download_url: "https://storage.googleapis.com/istio-release/releases/{{ istio_version }}/istioctl/istioctl-linux"
 istioctl_checksum: fd703063c540b8c0ab943f478c05ab257d88ae27224c746a27d0526ddbf7c370
-kubeadm_checksum: 12b6e9ac1624852b7c978bde70b9bde9ca0e4fc6581d09bddfb117bb41f93c74
-vault_version: 0.8.1
 vault_binary_checksum: 3c4d70ba71619a43229e65c67830e30e050eab7a81ac6b28325ff707e5914188
-vault_download_url: "https://releases.hashicorp.com/vault/{{ vault_version }}/vault_{{ vault_version }}_linux_amd64.zip"
-vault_image_repo: "vault"
-vault_image_tag: "{{ vault_version }}"

 # Containers
 etcd_image_repo: "quay.io/coreos/etcd"
@@ -70,12 +66,28 @@ calico_node_image_repo: "quay.io/calico/node"
 calico_node_image_tag: "{{ calico_version }}"
 calico_cni_image_repo: "quay.io/calico/cni"
 calico_cni_image_tag: "{{ calico_cni_version }}"
-calico_policy_image_repo: "quay.io/calico/kube-policy-controller"
+calico_policy_image_repo: "quay.io/calico/kube-controllers"
 calico_policy_image_tag: "{{ calico_policy_version }}"
 calico_rr_image_repo: "quay.io/calico/routereflector"
 calico_rr_image_tag: "{{ calico_rr_version }}"
-hyperkube_image_repo: "quay.io/coreos/hyperkube"
-hyperkube_image_tag: "{{ kube_version }}_coreos.0"
+istio_proxy_image_repo: docker.io/istio/proxy
+istio_proxy_image_tag: "{{ istio_version }}"
+istio_proxy_init_image_repo: docker.io/istio/proxy_init
+istio_proxy_init_image_tag: "{{ istio_version }}"
+istio_ca_image_repo: docker.io/istio/istio-ca
+istio_ca_image_tag: "{{ istio_version }}"
+istio_mixer_image_repo: docker.io/istio/mixer
+istio_mixer_image_tag: "{{ istio_version }}"
+istio_pilot_image_repo: docker.io/istio/pilot
+istio_pilot_image_tag: "{{ istio_version }}"
+istio_proxy_debug_image_repo: docker.io/istio/proxy_debug
+istio_proxy_debug_image_tag: "{{ istio_version }}"
+istio_sidecar_initializer_image_repo: docker.io/istio/sidecar_initializer
+istio_sidecar_initializer_image_tag: "{{ istio_version }}"
+istio_statsd_image_repo: prom/statsd-exporter
+istio_statsd_image_tag: latest
+hyperkube_image_repo: "gcr.io/google-containers/hyperkube"
+hyperkube_image_tag: "{{ kube_version }}"
 pod_infra_image_repo: "gcr.io/google_containers/pause-amd64"
 pod_infra_image_tag: "{{ pod_infra_version }}"
 install_socat_image_repo: "xueshanf/install-socat"
@@ -89,20 +101,31 @@ weave_kube_image_repo: "weaveworks/weave-kube"
 weave_kube_image_tag: "{{ weave_version }}"
 weave_npc_image_repo: "weaveworks/weave-npc"
 weave_npc_image_tag: "{{ weave_version }}"
+contiv_image_repo: "contiv/netplugin"
+contiv_image_tag: "{{ contiv_version }}"
+contiv_auth_proxy_image_repo: "contiv/auth_proxy"
+contiv_auth_proxy_image_tag: "{{ contiv_version }}"
+cilium_image_repo: "docker.io/cilium/cilium"
+cilium_image_tag: "{{ cilium_version }}"
 nginx_image_repo: nginx
-nginx_image_tag: 1.11.4-alpine
+nginx_image_tag: 1.13
 dnsmasq_version: 2.78
 dnsmasq_image_repo: "andyshinn/dnsmasq"
 dnsmasq_image_tag: "{{ dnsmasq_version }}"
-kubedns_version: 1.14.5
+kubedns_version: 1.14.8
 kubedns_image_repo: "gcr.io/google_containers/k8s-dns-kube-dns-amd64"
 kubedns_image_tag: "{{ kubedns_version }}"
+coredns_version: 1.1.0
+coredns_image_repo: "docker.io/coredns/coredns"
+coredns_image_tag: "{{ coredns_version }}"
 dnsmasq_nanny_image_repo: "gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64"
 dnsmasq_nanny_image_tag: "{{ kubedns_version }}"
 dnsmasq_sidecar_image_repo: "gcr.io/google_containers/k8s-dns-sidecar-amd64"
 dnsmasq_sidecar_image_tag: "{{ kubedns_version }}"
-kubednsautoscaler_version: 1.1.1
+dnsmasqautoscaler_version: 1.1.2
+dnsmasqautoscaler_image_repo: "gcr.io/google_containers/cluster-proportional-autoscaler-amd64"
+dnsmasqautoscaler_image_tag: "{{ dnsmasqautoscaler_version }}"
+kubednsautoscaler_version: 1.1.2
 kubednsautoscaler_image_repo: "gcr.io/google_containers/cluster-proportional-autoscaler-amd64"
 kubednsautoscaler_image_tag: "{{ kubednsautoscaler_version }}"
 test_image_repo: busybox
@@ -116,13 +139,30 @@ fluentd_image_tag: "{{ fluentd_version }}"
 kibana_version: "v4.6.1"
 kibana_image_repo: "gcr.io/google_containers/kibana"
 kibana_image_tag: "{{ kibana_version }}"
-helm_version: "v2.2.2"
+helm_version: "v2.8.1"
 helm_image_repo: "lachlanevenson/k8s-helm"
 helm_image_tag: "{{ helm_version }}"
-tiller_version: "{{ helm_version }}"
 tiller_image_repo: "gcr.io/kubernetes-helm/tiller"
-tiller_image_tag: "{{ tiller_version }}"
+tiller_image_tag: "{{ helm_version }}"
+vault_image_repo: "vault"
+vault_image_tag: "{{ vault_version }}"
+registry_image_repo: "registry"
+registry_image_tag: "2.6"
+registry_proxy_image_repo: "gcr.io/google_containers/kube-registry-proxy"
+registry_proxy_image_tag: "0.4"
+local_volume_provisioner_image_repo: "quay.io/external_storage/local-volume-provisioner"
+local_volume_provisioner_image_tag: "v2.0.0"
+cephfs_provisioner_image_repo: "quay.io/kubespray/cephfs-provisioner"
+cephfs_provisioner_image_tag: "92295a30"
+ingress_nginx_controller_image_repo: "quay.io/kubernetes-ingress-controller/nginx-ingress-controller"
+ingress_nginx_controller_image_tag: "0.12.0"
+ingress_nginx_default_backend_image_repo: "gcr.io/google_containers/defaultbackend"
+ingress_nginx_default_backend_image_tag: "1.4"
+cert_manager_version: "v0.2.3"
+cert_manager_controller_image_repo: "quay.io/jetstack/cert-manager-controller"
+cert_manager_controller_image_tag: "{{ cert_manager_version }}"
+cert_manager_ingress_shim_image_repo: "quay.io/jetstack/cert-manager-ingress-shim"
+cert_manager_ingress_shim_image_tag: "{{ cert_manager_version }}"

 downloads:
   netcheck_server:
@@ -131,18 +171,24 @@ downloads:
repo: "{{ netcheck_server_img_repo }}" repo: "{{ netcheck_server_img_repo }}"
tag: "{{ netcheck_server_tag }}" tag: "{{ netcheck_server_tag }}"
sha256: "{{ netcheck_server_digest_checksum|default(None) }}" sha256: "{{ netcheck_server_digest_checksum|default(None) }}"
groups:
- k8s-cluster
netcheck_agent: netcheck_agent:
enabled: "{{ deploy_netchecker }}" enabled: "{{ deploy_netchecker }}"
container: true container: true
repo: "{{ netcheck_agent_img_repo }}" repo: "{{ netcheck_agent_img_repo }}"
tag: "{{ netcheck_agent_tag }}" tag: "{{ netcheck_agent_tag }}"
sha256: "{{ netcheck_agent_digest_checksum|default(None) }}" sha256: "{{ netcheck_agent_digest_checksum|default(None) }}"
groups:
- k8s-cluster
etcd: etcd:
enabled: true enabled: true
container: true container: true
repo: "{{ etcd_image_repo }}" repo: "{{ etcd_image_repo }}"
tag: "{{ etcd_image_tag }}" tag: "{{ etcd_image_tag }}"
sha256: "{{ etcd_digest_checksum|default(None) }}" sha256: "{{ etcd_digest_checksum|default(None) }}"
groups:
- etcd
kubeadm: kubeadm:
enabled: "{{ kubeadm_enabled }}" enabled: "{{ kubeadm_enabled }}"
file: true file: true
@@ -154,6 +200,8 @@ downloads:
unarchive: false unarchive: false
owner: "root" owner: "root"
mode: "0755" mode: "0755"
groups:
- k8s-cluster
istioctl: istioctl:
enabled: "{{ istio_enabled }}" enabled: "{{ istio_enabled }}"
file: true file: true
@@ -165,116 +213,250 @@ downloads:
     unarchive: false
     owner: "root"
     mode: "0755"
+    groups:
+      - kube-master
+  istio_proxy:
+    enabled: "{{ istio_enabled }}"
+    container: true
+    repo: "{{ istio_proxy_image_repo }}"
+    tag: "{{ istio_proxy_image_tag }}"
+    sha256: "{{ istio_proxy_digest_checksum|default(None) }}"
+    groups:
+      - kube-node
+  istio_proxy_init:
+    enabled: "{{ istio_enabled }}"
+    container: true
+    repo: "{{ istio_proxy_init_image_repo }}"
+    tag: "{{ istio_proxy_init_image_tag }}"
+    sha256: "{{ istio_proxy_init_digest_checksum|default(None) }}"
+    groups:
+      - kube-node
+  istio_ca:
+    enabled: "{{ istio_enabled }}"
+    container: true
+    repo: "{{ istio_ca_image_repo }}"
+    tag: "{{ istio_ca_image_tag }}"
+    sha256: "{{ istio_ca_digest_checksum|default(None) }}"
+    groups:
+      - kube-node
+  istio_mixer:
+    enabled: "{{ istio_enabled }}"
+    container: true
+    repo: "{{ istio_mixer_image_repo }}"
+    tag: "{{ istio_mixer_image_tag }}"
+    sha256: "{{ istio_mixer_digest_checksum|default(None) }}"
+    groups:
+      - kube-node
+  istio_pilot:
+    enabled: "{{ istio_enabled }}"
+    container: true
+    repo: "{{ istio_pilot_image_repo }}"
+    tag: "{{ istio_pilot_image_tag }}"
+    sha256: "{{ istio_pilot_digest_checksum|default(None) }}"
+    groups:
+      - kube-node
+  istio_proxy_debug:
+    enabled: "{{ istio_enabled }}"
+    container: true
+    repo: "{{ istio_proxy_debug_image_repo }}"
+    tag: "{{ istio_proxy_debug_image_tag }}"
+    sha256: "{{ istio_proxy_debug_digest_checksum|default(None) }}"
+    groups:
+      - kube-node
+  istio_sidecar_initializer:
+    enabled: "{{ istio_enabled }}"
+    container: true
+    repo: "{{ istio_sidecar_initializer_image_repo }}"
+    tag: "{{ istio_sidecar_initializer_image_tag }}"
+    sha256: "{{ istio_sidecar_initializer_digest_checksum|default(None) }}"
+    groups:
+      - kube-node
+  istio_statsd:
+    enabled: "{{ istio_enabled }}"
+    container: true
+    repo: "{{ istio_statsd_image_repo }}"
+    tag: "{{ istio_statsd_image_tag }}"
+    sha256: "{{ istio_statsd_digest_checksum|default(None) }}"
+    groups:
+      - kube-node
   hyperkube:
     enabled: true
     container: true
     repo: "{{ hyperkube_image_repo }}"
     tag: "{{ hyperkube_image_tag }}"
     sha256: "{{ hyperkube_digest_checksum|default(None) }}"
+    groups:
+      - k8s-cluster
+  cilium:
+    enabled: "{{ kube_network_plugin == 'cilium' }}"
+    container: true
+    repo: "{{ cilium_image_repo }}"
+    tag: "{{ cilium_image_tag }}"
+    sha256: "{{ cilium_digest_checksum|default(None) }}"
+    groups:
+      - k8s-cluster
   flannel:
     enabled: "{{ kube_network_plugin == 'flannel' or kube_network_plugin == 'canal' }}"
     container: true
     repo: "{{ flannel_image_repo }}"
     tag: "{{ flannel_image_tag }}"
     sha256: "{{ flannel_digest_checksum|default(None) }}"
+    groups:
+      - k8s-cluster
   flannel_cni:
     enabled: "{{ kube_network_plugin == 'flannel' }}"
     container: true
     repo: "{{ flannel_cni_image_repo }}"
     tag: "{{ flannel_cni_image_tag }}"
     sha256: "{{ flannel_cni_digest_checksum|default(None) }}"
+    groups:
+      - k8s-cluster
   calicoctl:
     enabled: "{{ kube_network_plugin == 'calico' or kube_network_plugin == 'canal' }}"
     container: true
     repo: "{{ calicoctl_image_repo }}"
     tag: "{{ calicoctl_image_tag }}"
     sha256: "{{ calicoctl_digest_checksum|default(None) }}"
+    groups:
+      - k8s-cluster
   calico_node:
     enabled: "{{ kube_network_plugin == 'calico' or kube_network_plugin == 'canal' }}"
     container: true
     repo: "{{ calico_node_image_repo }}"
     tag: "{{ calico_node_image_tag }}"
     sha256: "{{ calico_node_digest_checksum|default(None) }}"
+    groups:
+      - k8s-cluster
   calico_cni:
     enabled: "{{ kube_network_plugin == 'calico' or kube_network_plugin == 'canal' }}"
     container: true
     repo: "{{ calico_cni_image_repo }}"
     tag: "{{ calico_cni_image_tag }}"
     sha256: "{{ calico_cni_digest_checksum|default(None) }}"
+    groups:
+      - k8s-cluster
   calico_policy:
     enabled: "{{ enable_network_policy or kube_network_plugin == 'canal' }}"
     container: true
     repo: "{{ calico_policy_image_repo }}"
     tag: "{{ calico_policy_image_tag }}"
     sha256: "{{ calico_policy_digest_checksum|default(None) }}"
+    groups:
+      - k8s-cluster
   calico_rr:
-    enabled: "{{ peer_with_calico_rr is defined and peer_with_calico_rr}} and kube_network_plugin == 'calico'"
+    enabled: "{{ peer_with_calico_rr is defined and peer_with_calico_rr and kube_network_plugin == 'calico' }}"
     container: true
     repo: "{{ calico_rr_image_repo }}"
     tag: "{{ calico_rr_image_tag }}"
     sha256: "{{ calico_rr_digest_checksum|default(None) }}"
+    groups:
+      - calico-rr
   weave_kube:
     enabled: "{{ kube_network_plugin == 'weave' }}"
     container: true
     repo: "{{ weave_kube_image_repo }}"
     tag: "{{ weave_kube_image_tag }}"
     sha256: "{{ weave_kube_digest_checksum|default(None) }}"
+    groups:
+      - k8s-cluster
   weave_npc:
     enabled: "{{ kube_network_plugin == 'weave' }}"
     container: true
     repo: "{{ weave_npc_image_repo }}"
     tag: "{{ weave_npc_image_tag }}"
     sha256: "{{ weave_npc_digest_checksum|default(None) }}"
+    groups:
+      - k8s-cluster
+  contiv:
+    enabled: "{{ kube_network_plugin == 'contiv' }}"
+    container: true
+    repo: "{{ contiv_image_repo }}"
+    tag: "{{ contiv_image_tag }}"
+    sha256: "{{ contiv_digest_checksum|default(None) }}"
+    groups:
+      - k8s-cluster
+  contiv_auth_proxy:
+    enabled: "{{ kube_network_plugin == 'contiv' }}"
+    container: true
+    repo: "{{ contiv_auth_proxy_image_repo }}"
+    tag: "{{ contiv_auth_proxy_image_tag }}"
+    sha256: "{{ contiv_auth_proxy_digest_checksum|default(None) }}"
+    groups:
+      - k8s-cluster
   pod_infra:
     enabled: true
     container: true
     repo: "{{ pod_infra_image_repo }}"
     tag: "{{ pod_infra_image_tag }}"
     sha256: "{{ pod_infra_digest_checksum|default(None) }}"
+    groups:
+      - k8s-cluster
   install_socat:
     enabled: "{{ ansible_os_family in ['CoreOS', 'Container Linux by CoreOS'] }}"
     container: true
     repo: "{{ install_socat_image_repo }}"
     tag: "{{ install_socat_image_tag }}"
     sha256: "{{ install_socat_digest_checksum|default(None) }}"
+    groups:
+      - k8s-cluster
   nginx:
-    enabled: true
+    enabled: "{{ loadbalancer_apiserver_localhost }}"
     container: true
     repo: "{{ nginx_image_repo }}"
     tag: "{{ nginx_image_tag }}"
     sha256: "{{ nginx_digest_checksum|default(None) }}"
+    groups:
+      - kube-node
   dnsmasq:
     enabled: "{{ dns_mode == 'dnsmasq_kubedns' }}"
     container: true
     repo: "{{ dnsmasq_image_repo }}"
     tag: "{{ dnsmasq_image_tag }}"
     sha256: "{{ dnsmasq_digest_checksum|default(None) }}"
+    groups:
+      - kube-node
   kubedns:
-    enabled: true
+    enabled: "{{ dns_mode in ['kubedns', 'dnsmasq_kubedns'] }}"
     container: true
     repo: "{{ kubedns_image_repo }}"
     tag: "{{ kubedns_image_tag }}"
     sha256: "{{ kubedns_digest_checksum|default(None) }}"
+    groups:
+      - kube-node
+  coredns:
+    enabled: "{{ dns_mode in ['coredns', 'coredns_dual'] }}"
+    container: true
+    repo: "{{ coredns_image_repo }}"
+    tag: "{{ coredns_image_tag }}"
+    sha256: "{{ coredns_digest_checksum|default(None) }}"
+    groups:
+      - kube-node
   dnsmasq_nanny:
-    enabled: true
+    enabled: "{{ dns_mode in ['kubedns', 'dnsmasq_kubedns'] }}"
     container: true
     repo: "{{ dnsmasq_nanny_image_repo }}"
     tag: "{{ dnsmasq_nanny_image_tag }}"
     sha256: "{{ dnsmasq_nanny_digest_checksum|default(None) }}"
+    groups:
+      - kube-node
   dnsmasq_sidecar:
-    enabled: true
+    enabled: "{{ dns_mode in ['kubedns', 'dnsmasq_kubedns'] }}"
     container: true
     repo: "{{ dnsmasq_sidecar_image_repo }}"
     tag: "{{ dnsmasq_sidecar_image_tag }}"
     sha256: "{{ dnsmasq_sidecar_digest_checksum|default(None) }}"
+    groups:
+      - kube-node
   kubednsautoscaler:
-    enabled: true
+    enabled: "{{ dns_mode in ['kubedns', 'dnsmasq_kubedns'] }}"
     container: true
     repo: "{{ kubednsautoscaler_image_repo }}"
     tag: "{{ kubednsautoscaler_image_tag }}"
     sha256: "{{ kubednsautoscaler_digest_checksum|default(None) }}"
+    groups:
+      - kube-node
   testbox:
-    enabled: true
+    enabled: false
     container: true
     repo: "{{ test_image_repo }}"
     tag: "{{ test_image_tag }}"
@@ -285,30 +467,40 @@ downloads:
repo: "{{ elasticsearch_image_repo }}" repo: "{{ elasticsearch_image_repo }}"
tag: "{{ elasticsearch_image_tag }}" tag: "{{ elasticsearch_image_tag }}"
sha256: "{{ elasticsearch_digest_checksum|default(None) }}" sha256: "{{ elasticsearch_digest_checksum|default(None) }}"
groups:
- kube-node
fluentd: fluentd:
enabled: "{{ efk_enabled }}" enabled: "{{ efk_enabled }}"
container: true container: true
repo: "{{ fluentd_image_repo }}" repo: "{{ fluentd_image_repo }}"
tag: "{{ fluentd_image_tag }}" tag: "{{ fluentd_image_tag }}"
sha256: "{{ fluentd_digest_checksum|default(None) }}" sha256: "{{ fluentd_digest_checksum|default(None) }}"
groups:
- kube-node
kibana: kibana:
enabled: "{{ efk_enabled }}" enabled: "{{ efk_enabled }}"
container: true container: true
repo: "{{ kibana_image_repo }}" repo: "{{ kibana_image_repo }}"
tag: "{{ kibana_image_tag }}" tag: "{{ kibana_image_tag }}"
sha256: "{{ kibana_digest_checksum|default(None) }}" sha256: "{{ kibana_digest_checksum|default(None) }}"
groups:
- kube-node
helm: helm:
enabled: "{{ helm_enabled }}" enabled: "{{ helm_enabled }}"
container: true container: true
repo: "{{ helm_image_repo }}" repo: "{{ helm_image_repo }}"
tag: "{{ helm_image_tag }}" tag: "{{ helm_image_tag }}"
sha256: "{{ helm_digest_checksum|default(None) }}" sha256: "{{ helm_digest_checksum|default(None) }}"
groups:
- kube-node
tiller: tiller:
enabled: "{{ helm_enabled }}" enabled: "{{ helm_enabled }}"
container: true container: true
repo: "{{ tiller_image_repo }}" repo: "{{ tiller_image_repo }}"
tag: "{{ tiller_image_tag }}" tag: "{{ tiller_image_tag }}"
sha256: "{{ tiller_digest_checksum|default(None) }}" sha256: "{{ tiller_digest_checksum|default(None) }}"
groups:
- kube-node
vault: vault:
enabled: "{{ cert_management == 'vault' }}" enabled: "{{ cert_management == 'vault' }}"
container: "{{ vault_deployment_type != 'host' }}" container: "{{ vault_deployment_type != 'host' }}"
@@ -323,6 +515,72 @@ downloads:
     unarchive: true
     url: "{{ vault_download_url }}"
     version: "{{ vault_version }}"
+    groups:
+      - vault
+  registry:
+    enabled: "{{ registry_enabled }}"
+    container: true
+    repo: "{{ registry_image_repo }}"
+    tag: "{{ registry_image_tag }}"
+    sha256: "{{ registry_digest_checksum|default(None) }}"
+    groups:
+      - kube-node
+  registry_proxy:
+    enabled: "{{ registry_enabled }}"
+    container: true
+    repo: "{{ registry_proxy_image_repo }}"
+    tag: "{{ registry_proxy_image_tag }}"
+    sha256: "{{ registry_proxy_digest_checksum|default(None) }}"
+    groups:
+      - kube-node
+  local_volume_provisioner:
+    enabled: "{{ local_volume_provisioner_enabled }}"
+    container: true
+    repo: "{{ local_volume_provisioner_image_repo }}"
+    tag: "{{ local_volume_provisioner_image_tag }}"
+    sha256: "{{ local_volume_provisioner_digest_checksum|default(None) }}"
+    groups:
+      - kube-node
+  cephfs_provisioner:
+    enabled: "{{ cephfs_provisioner_enabled }}"
+    container: true
+    repo: "{{ cephfs_provisioner_image_repo }}"
+    tag: "{{ cephfs_provisioner_image_tag }}"
+    sha256: "{{ cephfs_provisioner_digest_checksum|default(None) }}"
+    groups:
+      - kube-node
+  ingress_nginx_controller:
+    enabled: "{{ ingress_nginx_enabled }}"
+    container: true
+    repo: "{{ ingress_nginx_controller_image_repo }}"
+    tag: "{{ ingress_nginx_controller_image_tag }}"
+    sha256: "{{ ingress_nginx_controller_digest_checksum|default(None) }}"
+    groups:
+      - kube-ingress
+  ingress_nginx_default_backend:
+    enabled: "{{ ingress_nginx_enabled }}"
+    container: true
+    repo: "{{ ingress_nginx_default_backend_image_repo }}"
+    tag: "{{ ingress_nginx_default_backend_image_tag }}"
+    sha256: "{{ ingress_nginx_default_backend_digest_checksum|default(None) }}"
+    groups:
+      - kube-ingress
+  cert_manager_controller:
+    enabled: "{{ cert_manager_enabled }}"
+    container: true
+    repo: "{{ cert_manager_controller_image_repo }}"
+    tag: "{{ cert_manager_controller_image_tag }}"
+    sha256: "{{ cert_manager_controller_digest_checksum|default(None) }}"
+    groups:
+      - kube-node
+  cert_manager_ingress_shim:
+    enabled: "{{ cert_manager_enabled }}"
+    container: true
+    repo: "{{ cert_manager_ingress_shim_image_repo }}"
+    tag: "{{ cert_manager_ingress_shim_image_tag }}"
+    sha256: "{{ cert_manager_ingress_shim_digest_checksum|default(None) }}"
+    groups:
+      - kube-node

 download_defaults:
   container: false

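Each download entry now carries a `groups` list, and the download tasks (shown further below) gate on it with `group_names | intersect(download.groups) | length`, so a host only fetches the items relevant to its role. A minimal, self-contained illustration of that condition, with the inventory values invented:

# On a host whose group_names is ['kube-master', 'etcd'] this prints true;
# on a worker in ['kube-node'] it prints false.
- name: would this host pull the etcd image?
  debug:
    msg: "{{ (group_names | intersect(download.groups) | length) > 0 }}"
  vars:
    download:
      groups:
        - etcd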

@@ -1,8 +1,8 @@
 ---
 - name: container_download | Make download decision if pull is required by tag or sha256
-  include: set_docker_image_facts.yml
+  include_tasks: set_docker_image_facts.yml
   delegate_to: "{{ download_delegate if download_run_once or omit }}"
-  delegate_facts: no
+  delegate_facts: yes
   run_once: "{{ download_run_once }}"
   when:
     - download.enabled
@@ -10,17 +10,32 @@
   tags:
     - facts

-- name: container_download | Download containers if pull is required or told to always pull
+# FIXME(mattymo): In Ansible 2.4 omitting download delegate is broken. Move back
+# to one task in the future.
+- name: container_download | Download containers if pull is required or told to always pull (delegate)
   command: "{{ docker_bin_dir }}/docker pull {{ pull_args }}"
   register: pull_task_result
   until: pull_task_result|succeeded
   retries: 4
   delay: "{{ retry_stagger | random + 3 }}"
+  environment: "{{ proxy_env }}"
   when:
+    - download_run_once
     - download.enabled
     - download.container
     - pull_required|default(download_always_pull)
-  delegate_to: "{{ download_delegate if download_run_once or omit }}"
-  delegate_facts: no
-  run_once: "{{ download_run_once }}"
+  delegate_to: "{{ download_delegate }}"
+  delegate_facts: yes
+  run_once: yes
+
+- name: container_download | Download containers if pull is required or told to always pull (all nodes)
+  command: "{{ docker_bin_dir }}/docker pull {{ pull_args }}"
+  register: pull_task_result
+  until: pull_task_result|succeeded
+  retries: 4
+  delay: "{{ retry_stagger | random + 3 }}"
+  when:
+    - not download_run_once
+    - download.enabled
+    - download.container
+    - pull_required|default(download_always_pull)
+    - group_names | intersect(download.groups) | length


@@ -13,6 +13,7 @@
   when:
     - download.enabled
     - download.file
+    - group_names | intersect(download.groups) | length

 - name: file_download | Download item
   get_url:
@@ -25,10 +26,10 @@
   until: "'OK' in get_url_result.msg or 'file already exists' in get_url_result.msg"
   retries: 4
   delay: "{{ retry_stagger | random + 3 }}"
-  environment: "{{ proxy_env }}"
   when:
     - download.enabled
     - download.file
+    - group_names | intersect(download.groups) | length

 - name: file_download | Extract archives
   unarchive:
@@ -41,3 +42,4 @@
     - download.enabled
     - download.file
     - download.unarchive|default(False)
+    - group_names | intersect(download.groups) | length

Some files were not shown because too many files have changed in this diff.