Compare commits


25 Commits

Author SHA1 Message Date
R. P. Taylor
b5f3f2de38 [2.26] Update to last patch versions (runc, nerdctl, etcd) (#11576) (#12817)
* Update etcd, containerd, runc, nerdctl versions

* Update etcd and containerd versions in README

Co-authored-by: janosbabik <143906591+janosbabik@users.noreply.github.com>
2026-01-02 14:26:35 +05:30
k8s-infra-cherrypick-robot
d744e3fb92 Fix: Change "empty" definition for PodSecurity Admission configuration (#12477)
Fixes a bug where `kube-apiserver` fails to start if the PodSecurity
configuration file doesn't have the `apiVersion` and `kind` keys.
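For illustration, a minimal sketch of an admission configuration where even an otherwise empty PodSecurity entry still carries the two required keys (the file path and wrapper layout are assumptions for the example, not the kubespray-managed file):

```shell
# Illustrative only: a minimal admission config whose PodSecurity entry keeps
# apiVersion and kind even when no defaults or exemptions are set.
cat <<'EOF' > /tmp/admission-controls.yaml
apiVersion: apiserver.config.k8s.io/v1
kind: AdmissionConfiguration
plugins:
  - name: PodSecurity
    configuration:
      apiVersion: pod-security.admission.config.k8s.io/v1
      kind: PodSecurityConfiguration
EOF
```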

Signed-off-by: Alejandro Macedo <alex.macedopereira@gmail.com>
Co-authored-by: Alejandro Macedo <alex.macedopereira@gmail.com>
2025-08-19 03:45:36 -07:00
k8s-infra-cherrypick-robot
6921b70a22 Add version pinning for AWS tf provider to fix CI (#12325)
Co-authored-by: Chad Swenson <chadswen@gmail.com>
2025-06-19 19:24:51 -07:00
k8s-infra-cherrypick-robot
6f9d003999 fix manage-offline-container-images.sh get image_id (#12315)
Co-authored-by: DearJay <zhongtianjieyi143@gmail.com>
2025-06-15 08:00:57 -07:00
Max Gautier
8a685bd9b6 Move 'pretend certificates' **after** cert distribution (#12222)
The link target will only exist after we distribute the certs on each node.
2025-05-16 20:23:13 -07:00
ChengHao Yang
1d6a6c86f9 Bump ingress-nginx to 1.11.5 and certgen to 1.5.2 (#12081)
Signed-off-by: ChengHao Yang <17496418+tico88612@users.noreply.github.com>
2025-05-12 01:41:14 -07:00
k8s-infra-cherrypick-robot
0b2488dfaa Workaround missing etcd certs on control plane node (#12193)
Co-authored-by: Max Gautier <mg@max.gautier.name>
2025-05-06 09:15:14 -07:00
Max Gautier
30e1878d27 Revert "Limit nodes in gather ansible_default_ipv4 (#11370)" (#12063)
This reverts commit 2d194af85e.
2025-03-21 14:26:31 -07:00
k8s-infra-cherrypick-robot
9a506636e3 fix: kubecontrollersconfigurations list permission (#12038)
[WARNING][1] kube-controllers/runconfig.go 193: unable to list KubeControllersConfiguration(default) error=connection is unauthorized: kubecontrollersconfigurations.crd.projectcalico.org "default" is forbidden: User "system:serviceaccount:kube-system:calico-kube-controllers" cannot list resource "kubecontrollersconfigurations" in API group "crd.projectcalico.org" at the cluster scope
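A quick, hypothetical way to verify the permission on a running cluster (not part of the commit itself):

```shell
# Should print "yes" once the ClusterRole allows listing the CRD cluster-wide.
kubectl auth can-i list kubecontrollersconfigurations.crd.projectcalico.org \
  --as=system:serviceaccount:kube-system:calico-kube-controllers
```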

Co-authored-by: darkobas <marko@datafund.io>
2025-03-15 09:15:47 -07:00
ChengHao Yang
f7af33fac4 [release-2.26] Remove: tox testing from pre-commit (#12029)
* Remove amazon-linux2 from CI: issue with vm creation

Co-authored-by: ant31 <2t.antoine@gmail.com>

* Remove tox related-test from pre-commit

---------

Co-authored-by: ant31 <2t.antoine@gmail.com>
2025-03-13 02:21:46 -07:00
k8s-infra-cherrypick-robot
184b15f8ae Revert "apiserver: fix incorrect path to admission plugins config files (#11779)" (#11811)
This reverts commit 742409e663.

Co-authored-by: Max Gautier <mg@max.gautier.name>
2024-12-18 18:10:10 +01:00
k8s-infra-cherrypick-robot
ee2d3916f2 apiserver: fix incorrect path to admission plugins config files (#11800)
Co-authored-by: Max Gautier <mg@max.gautier.name>
2024-12-16 10:18:51 +01:00
k8s-infra-cherrypick-robot
75e12e8981 containerd: add After=dbus.service (#11784)
This is needed for shutdown ordering: at startup it's not a problem that
containerd starts before dbus (the dbus socket already exists), but
containerd needs to shut down before dbus so it can do its cleanup
(asking systemd via dbus to clean up cgroups).
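A hedged way to check the ordering on a deployed node (the actual change lives in kubespray's containerd systemd unit template):

```shell
# systemctl cat includes drop-ins; After= should now list dbus.service.
systemctl cat containerd.service | grep -E '^After='
```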

Co-authored-by: Max Gautier <mg@max.gautier.name>
2024-12-11 16:32:03 +00:00
k8s-infra-cherrypick-robot
b35a3ff924 Bump version of community.general (#11726)
We use the persistent option of the community.general.modprobe module, which was introduced in version 7.0.0.
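As a sketch, an ad-hoc call exercising that option (module name and option are real; the inventory path and module parameters are illustrative):

```shell
# 'persistent: present' records the module in /etc/modules-load.d so it loads
# on reboot; this parameter requires community.general >= 7.0.0.
ansible -i inventory/mycluster/hosts.yaml all --become \
  -m community.general.modprobe -a "name=br_netfilter state=present persistent=present"
```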

Co-authored-by: Max Gautier <mg@max.gautier.name>
2024-11-18 14:00:53 +00:00
k8s-infra-cherrypick-robot
cd26b379fe Fix the pretty-printing of (core|nodelocal)dns (#11723)
When using
dns_upstream_forward_extra_opts:
  prefer_udp: "" # the option has no value, so use an empty string to just
                 # put the key

This is rendered in the dns configmap as ($ for end-of-line)

...
  prefer_udp $
...

Note the trailing space.
This triggers https://github.com/kubernetes/kubernetes/issues/36222,
which makes the configmaps hard to read when editing them manually or
simply dumping them to a YAML file for inspection.

Trim the concatenation of option + value to get rid of any trailing
space.
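To see the effect described here, a small illustrative reproduction (GNU `cat -E` prints `$` at end-of-line, matching the notation above):

```shell
# Before the fix: the rendered line keeps a trailing space before end-of-line.
printf '  prefer_udp \n' | cat -E
# After trimming the option + value concatenation, the trailing space is gone.
printf '  prefer_udp\n' | cat -E
```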

Co-authored-by: Max Gautier <mg@max.gautier.name>
2024-11-18 08:12:54 +00:00
k8s-infra-cherrypick-robot
0b9872cd27 fix task naming in bootstrap-os (#11719)
Signed-off-by: bo.jiang <bo.jiang@daocloud.io>
Co-authored-by: bo.jiang <bo.jiang@daocloud.io>
2024-11-17 18:06:52 +00:00
k8s-infra-cherrypick-robot
6b487d513e Include scripts and conf files in the ansible collection (#11717)
We use shell scripts and conf files in some roles (notably certificate
provisioning), so we need to include them for the collection to work
with configurations that depend on those roles.
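A hypothetical way to check the result after building the collection locally (the artifact name is assumed from the kubernetes_sigs.kubespray namespace):

```shell
# Build the collection and confirm shell scripts / conf files are packaged.
ansible-galaxy collection build --force
tar -tzf kubernetes_sigs-kubespray-*.tar.gz | grep -E '\.(sh|conf)$' | head
```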

Co-authored-by: Max Gautier <mg@max.gautier.name>
2024-11-16 20:36:52 +00:00
Robert Volkmann
e5bdb3b0b7 Add hashes for Kubernetes 1.30.6 (new default), 1.30.5, 1.29.8-10, 1.28.13-15 (#11673) 2024-10-31 08:25:26 +00:00
k8s-infra-cherrypick-robot
552c6bb975 update calico template (#11676)
Signed-off-by: KubeKyrie <shaolong.qin@daocloud.io>
Co-authored-by: KubeKyrie <shaolong.qin@daocloud.io>
2024-10-31 08:21:26 +00:00
k8s-infra-cherrypick-robot
5ad38a4d0d [release-2.26] Fix debian11-custom-cni failing test & upgrade debian12-custom-cni-helm chart version (#11658)
* Test: update custom_cni values

Signed-off-by: ChengHao Yang <17496418+tico88612@users.noreply.github.com>

* Test: fix cilium require kube_owner set to root

Signed-off-by: ChengHao Yang <17496418+tico88612@users.noreply.github.com>

* Test: update custom_cni render manifests

Signed-off-by: ChengHao Yang <17496418+tico88612@users.noreply.github.com>

* Test: fix render template pre-commit

Signed-off-by: ChengHao Yang <17496418+tico88612@users.noreply.github.com>

* Test: update debian12-custom-cni-helm chart version to 1.16.3

Signed-off-by: ChengHao Yang <17496418+tico88612@users.noreply.github.com>

---------

Signed-off-by: ChengHao Yang <17496418+tico88612@users.noreply.github.com>
Co-authored-by: ChengHao Yang <17496418+tico88612@users.noreply.github.com>
2024-10-25 03:08:51 +01:00
Robert Volkmann
6c553d75d2 Bump containerd to 1.7.23 (#11643) 2024-10-19 15:15:03 +01:00
k8s-infra-cherrypick-robot
3f44293f69 Fix: cannot stop & remove all cri containers via remove_node.yml (#11637)
Before these changes, `ansible_facts.services["containerd.service"]` was not defined, so the check that triggers the container stop and delete behavior failed.
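For context, those facts come from the service_facts module; a hypothetical ad-hoc run (inventory path illustrative):

```shell
# Populates ansible_facts.services, including "containerd.service", on each host.
ansible -i inventory/mycluster/hosts.yaml all --become -m ansible.builtin.service_facts
```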

Signed-off-by: ChengHao Yang <17496418+tico88612@users.noreply.github.com>
Co-authored-by: ChengHao Yang <17496418+tico88612@users.noreply.github.com>
2024-10-16 03:31:03 +01:00
k8s-infra-cherrypick-robot
84324f50be Always copy cert generation scripts to first etcd (#11615)
If we don't, existing installations would not pick up fixes to that script,
such as dc33a1971d.

Co-authored-by: Max Gautier <mg@max.gautier.name>
2024-10-09 12:52:23 +01:00
k8s-infra-cherrypick-robot
4577ee4a5d fix elastx cleanup security groups (#11536)
Co-authored-by: Kay Yan <kay.yan@daocloud.io>
2024-09-16 08:43:14 +01:00
k8s-infra-cherrypick-robot
d80686acb0 [release-2.26] Refactor and expand download_hash.py (#11538)
* download_hash.py: generalized and data-driven

The script is currently limited to one hardcoded URL for Kubernetes-related
binaries, and a fixed set of architectures.

The solution is three-fold:
1. Use a URL template dictionary for each download -> this allows easily
   adding support for new downloads.
2. Source the architectures to search for from the existing data
3. Enumerate the existing versions in the data and start searching from
   the last one until no newer version is found (newer in the version
   order sense, irrespective of actual age)

* download_hash.py: support for 'multi-hash' file + runc

runc upstream does not provide one hash file per asset in their
releases, but one file with all the hashes.
To handle this (and/or any arbitrary format from upstreams), add a
dictionary mapping the name of the download to a lambda function which
transforms the file provided by upstream into a dictionary of hashes,
keyed by architecture.

* download_hash: argument handling with argparse

Allow the script to be called with a list of components, to only
download checksums of new versions for those.
By default, we get checksums of new versions for all components
supported by the script.

* download_hash: propagate new patch versions to all archs

* download_hash: add support for 'simple hash' components

* download_hash: support 'multi-hash' components

* download_hash: document missing support

* download_hash: use persistent session

This allows reusing the HTTP connection and is more efficient.
Rough measurements show it saves around 25-30% of execution time.

* download_hash: cache request for 'multi-hash' files

This avoids re-downloading the same file for different architectures
and re-parsing it.

* download_hash: document usage
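Hypothetical invocations, assuming the refactored script lives under scripts/ and takes component names as positional arguments:

```shell
python3 scripts/download_hash.py                  # refresh checksums for all supported components
python3 scripts/download_hash.py runc containerd  # refresh only these components
```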

---------

Co-authored-by: Max Gautier <mg@max.gautier.name>
2024-09-16 08:39:14 +01:00
350 changed files with 8058 additions and 8344 deletions


@@ -37,6 +37,3 @@ exclude_paths:
- tests/files/custom_cni/cilium.yaml - tests/files/custom_cni/cilium.yaml
- venv - venv
- .github - .github
- .ansible
mock_modules:
- gluster.gluster.gluster_volume


@@ -1,5 +1,4 @@
--- ---
blank_issues_enabled: false
contact_links: contact_links:
- name: Support Request - name: Support Request
url: https://kubernetes.slack.com/channels/kubespray url: https://kubernetes.slack.com/channels/kubespray


@@ -6,12 +6,15 @@ stages:
- deploy-extended - deploy-extended
variables: variables:
KUBESPRAY_VERSION: v2.26.0 KUBESPRAY_VERSION: v2.25.0
FAILFASTCI_NAMESPACE: 'kargo-ci' FAILFASTCI_NAMESPACE: 'kargo-ci'
GITLAB_REPOSITORY: 'kargo-ci/kubernetes-sigs-kubespray' GITLAB_REPOSITORY: 'kargo-ci/kubernetes-sigs-kubespray'
ANSIBLE_FORCE_COLOR: "true" ANSIBLE_FORCE_COLOR: "true"
ANSIBLE_STDOUT_CALLBACK: "debug"
MAGIC: "ci check this" MAGIC: "ci check this"
TEST_ID: "$CI_PIPELINE_ID-$CI_JOB_ID"
CI_TEST_VARS: "./tests/files/${CI_JOB_NAME}.yml"
CI_TEST_REGISTRY_MIRROR: "./tests/common/_docker_hub_registry_mirror.yml"
CI_TEST_SETTING: "./tests/common/_kubespray_test_settings.yml"
GS_ACCESS_KEY_ID: $GS_KEY GS_ACCESS_KEY_ID: $GS_KEY
GS_SECRET_ACCESS_KEY: $GS_SECRET GS_SECRET_ACCESS_KEY: $GS_SECRET
CONTAINER_ENGINE: docker CONTAINER_ENGINE: docker
@@ -19,11 +22,13 @@ variables:
GCE_PREEMPTIBLE: "false" GCE_PREEMPTIBLE: "false"
ANSIBLE_KEEP_REMOTE_FILES: "1" ANSIBLE_KEEP_REMOTE_FILES: "1"
ANSIBLE_CONFIG: ./tests/ansible.cfg ANSIBLE_CONFIG: ./tests/ansible.cfg
ANSIBLE_INVENTORY: ./inventory/sample/${CI_JOB_NAME}-${BUILD_NUMBER}.ini
IDEMPOT_CHECK: "false"
RESET_CHECK: "false" RESET_CHECK: "false"
REMOVE_NODE_CHECK: "false" REMOVE_NODE_CHECK: "false"
UPGRADE_TEST: "false" UPGRADE_TEST: "false"
MITOGEN_ENABLE: "false" MITOGEN_ENABLE: "false"
ANSIBLE_VERBOSITY: 2 ANSIBLE_LOG_LEVEL: "-vv"
RECOVER_CONTROL_PLANE_TEST: "false" RECOVER_CONTROL_PLANE_TEST: "false"
RECOVER_CONTROL_PLANE_TEST_GROUPS: "etcd[2:]:kube_control_plane[1:]" RECOVER_CONTROL_PLANE_TEST_GROUPS: "etcd[2:]:kube_control_plane[1:]"
TERRAFORM_VERSION: 1.3.7 TERRAFORM_VERSION: 1.3.7


@@ -3,17 +3,14 @@
tags: [ffci-vm-med] tags: [ffci-vm-med]
only: [/^pr-.*$/] only: [/^pr-.*$/]
except: ['triggers'] except: ['triggers']
image: quay.io/kubespray/vm-kubespray-ci:v13 image: quay.io/kubespray/vm-kubespray-ci:v6
services: [] services: []
stage: deploy-part1 stage: deploy-part1
needs: [] needs: []
# - ci-not-authorized # - ci-not-authorized
variables: variables:
VAGRANT_DEFAULT_PROVIDER: "libvirt" VAGRANT_DEFAULT_PROVIDER: "libvirt"
VAGRANT_HOME: "$CI_PROJECT_DIR/.vagrant.d"
PIP_CACHE_DIR: "$CI_PROJECT_DIR/.cache/pip"
before_script: before_script:
- mkdir -p $VAGRANT_HOME
- groups - groups
- python3 -m venv citest - python3 -m venv citest
- source citest/bin/activate - source citest/bin/activate
@@ -32,15 +29,10 @@
when: always when: always
paths: paths:
- molecule_logs/ - molecule_logs/
cache:
key: $CI_JOB_NAME_SLUG
paths:
- .vagrant.d/boxes
- .cache/pip
policy: pull-push # TODO: change to "pull" when not on main
# CI template for periodic CI jobs # CI template for periodic CI jobs
# Enabled when PERIODIC_CI_ENABLED var is set # Enabled when PERIODIC_CI_ENABLED var is set
.molecule_periodic: .molecule_periodic:
only: only:
variables: variables:


@@ -65,6 +65,14 @@
allow_failure: true allow_failure: true
extends: .packet extends: .packet
packet_cleanup_old:
stage: deploy-part1
extends: .packet_periodic
script:
- cd tests
- make cleanup-packet
after_script: []
# The ubuntu20-calico-all-in-one jobs are meant as early stages to prevent running the full CI if something is horribly broken # The ubuntu20-calico-all-in-one jobs are meant as early stages to prevent running the full CI if something is horribly broken
packet_ubuntu20-calico-all-in-one: packet_ubuntu20-calico-all-in-one:
stage: deploy-part1 stage: deploy-part1
@@ -75,7 +83,7 @@ packet_ubuntu20-calico-all-in-one:
# ### PR JOBS PART2 # ### PR JOBS PART2
packet_ubuntu20-crio: packet_ubuntu20-crio:
extends: .packet_pr extends: .packet_pr_manual
packet_ubuntu22-calico-all-in-one: packet_ubuntu22-calico-all-in-one:
extends: .packet_pr extends: .packet_pr
@@ -89,12 +97,12 @@ packet_ubuntu24-calico-etcd-datastore:
extends: .packet_pr extends: .packet_pr
packet_almalinux8-crio: packet_almalinux8-crio:
extends: .packet_pr_manual extends: .packet_pr
packet_almalinux8-kube-ovn: packet_almalinux8-kube-ovn:
extends: .packet_pr extends: .packet_pr
packet_debian11-calico-collection: packet_debian11-calico:
extends: .packet_pr extends: .packet_pr
packet_debian11-macvlan: packet_debian11-macvlan:
@@ -125,12 +133,6 @@ packet_opensuse-docker-cilium:
packet_ubuntu20-cilium-sep: packet_ubuntu20-cilium-sep:
extends: .packet_pr extends: .packet_pr
packet_openeuler24-calico:
extends: .packet_pr
packet_ubuntu20-calico-all-in-one-hardening:
extends: .packet_pr
## Extended ## Extended
packet_debian11-docker: packet_debian11-docker:
extends: .packet_pr_extended extends: .packet_pr_extended
@@ -156,6 +158,9 @@ packet_almalinux8-calico:
packet_almalinux8-docker: packet_almalinux8-docker:
extends: .packet_pr_extended extends: .packet_pr_extended
packet_ubuntu20-calico-all-in-one-hardening:
extends: .packet_pr_extended
packet_ubuntu24-calico-all-in-one: packet_ubuntu24-calico-all-in-one:
extends: .packet_pr_extended extends: .packet_pr_extended
@@ -169,7 +174,7 @@ packet_ubuntu22-all-in-one-docker:
extends: .packet_pr_extended extends: .packet_pr_extended
# ### MANUAL JOBS # ### MANUAL JOBS
packet_fedora39-crio: packet_fedora37-crio:
extends: .packet_pr_manual extends: .packet_pr_manual
packet_ubuntu20-flannel-ha: packet_ubuntu20-flannel-ha:
@@ -181,7 +186,7 @@ packet_ubuntu20-all-in-one-docker:
packet_ubuntu20-flannel-ha-once: packet_ubuntu20-flannel-ha-once:
extends: .packet_pr_manual extends: .packet_pr_manual
packet_fedora39-calico-swap-selinux: packet_fedora37-calico-swap-selinux:
extends: .packet_pr_manual extends: .packet_pr_manual
packet_almalinux8-calico-ha-ebpf: packet_almalinux8-calico-ha-ebpf:
@@ -203,16 +208,17 @@ packet_ubuntu20-calico-ha-wireguard:
extends: .packet_pr_manual extends: .packet_pr_manual
# PERIODIC # PERIODIC
packet_fedora40-docker-calico: packet_fedora38-docker-calico:
stage: deploy-extended stage: deploy-extended
extends: .packet_periodic extends: .packet_periodic
variables: variables:
RESET_CHECK: "true" RESET_CHECK: "true"
packet_fedora39-calico-selinux: packet_fedora37-calico-selinux:
stage: deploy-extended stage: deploy-extended
extends: .packet_periodic extends: .packet_periodic
packet_ubuntu20-calico-etcd-kubeadm-upgrade-ha: packet_ubuntu20-calico-etcd-kubeadm-upgrade-ha:
stage: deploy-extended stage: deploy-extended
extends: .packet_periodic extends: .packet_periodic


@@ -10,12 +10,10 @@
KUBESPRAY_VAGRANT_CONFIG: tests/files/${CI_JOB_NAME}.rb KUBESPRAY_VAGRANT_CONFIG: tests/files/${CI_JOB_NAME}.rb
DOCKER_NAME: vagrant DOCKER_NAME: vagrant
VAGRANT_ANSIBLE_TAGS: facts VAGRANT_ANSIBLE_TAGS: facts
VAGRANT_HOME: "$CI_PROJECT_DIR/.vagrant.d"
PIP_CACHE_DIR: "$CI_PROJECT_DIR/.cache/pip"
tags: [ffci-vm-large] tags: [ffci-vm-large]
# only: [/^pr-.*$/] # only: [/^pr-.*$/]
# except: ['triggers'] # except: ['triggers']
image: quay.io/kubespray/vm-kubespray-ci:v13 image: quay.io/kubespray/vm-kubespray-ci:v6
services: [] services: []
before_script: before_script:
- echo $USER - echo $USER
@@ -29,12 +27,6 @@
- ./tests/scripts/vagrant_clean.sh - ./tests/scripts/vagrant_clean.sh
script: script:
- ./tests/scripts/testcases_run.sh - ./tests/scripts/testcases_run.sh
cache:
key: $CI_JOB_NAME_SLUG
paths:
- .vagrant.d/boxes
- .cache/pip
policy: pull-push # TODO: change to "pull" when not on main
vagrant_ubuntu20-calico-dual-stack: vagrant_ubuntu20-calico-dual-stack:
stage: deploy-extended stage: deploy-extended
@@ -64,7 +56,7 @@ vagrant_ubuntu20-kube-router-svc-proxy:
extends: .vagrant extends: .vagrant
when: manual when: manual
vagrant_fedora39-kube-router: vagrant_fedora37-kube-router:
stage: deploy-extended stage: deploy-extended
extends: .vagrant extends: .vagrant
when: manual when: manual


@@ -1,7 +1,7 @@
--- ---
repos: repos:
- repo: https://github.com/pre-commit/pre-commit-hooks - repo: https://github.com/pre-commit/pre-commit-hooks
rev: v5.0.0 rev: v4.6.0
hooks: hooks:
- id: check-added-large-files - id: check-added-large-files
- id: check-case-conflict - id: check-case-conflict
@@ -35,10 +35,12 @@ repos:
files: "\\.sh$" files: "\\.sh$"
- repo: https://github.com/ansible/ansible-lint - repo: https://github.com/ansible/ansible-lint
rev: v24.12.2 rev: v24.5.0
hooks: hooks:
- id: ansible-lint - id: ansible-lint
additional_dependencies: additional_dependencies:
- ansible==9.8.0
- jsonschema==4.22.0
- jmespath==1.0.1 - jmespath==1.0.1
- netaddr==1.3.0 - netaddr==1.3.0
- distlib - distlib
@@ -51,6 +53,14 @@ repos:
- repo: local - repo: local
hooks: hooks:
- id: ansible-syntax-check
name: ansible-syntax-check
entry: env ANSIBLE_INVENTORY=inventory/local-tests.cfg ANSIBLE_REMOTE_USER=root ANSIBLE_BECOME="true" ANSIBLE_BECOME_USER=root ANSIBLE_VERBOSITY="3" ansible-playbook --syntax-check
language: python
files: "^cluster.yml|^upgrade-cluster.yml|^reset.yml|^extra_playbooks/upgrade-only-k8s.yml"
additional_dependencies:
- ansible==9.5.1
- id: check-readme-versions - id: check-readme-versions
name: check-readme-versions name: check-readme-versions
entry: tests/scripts/check_readme_versions.sh entry: tests/scripts/check_readme_versions.sh

Makefile (new file, 7 lines)

@@ -0,0 +1,7 @@
mitogen:
@echo Mitogen support is deprecated.
@echo Please run the following command manually:
@echo ansible-playbook -c local mitogen.yml -vv
clean:
rm -rf dist/
rm *.retry

README.md (100 lines changed)

@@ -5,7 +5,7 @@
If you have questions, check the documentation at [kubespray.io](https://kubespray.io) and join us on the [kubernetes slack](https://kubernetes.slack.com), channel **\#kubespray**. If you have questions, check the documentation at [kubespray.io](https://kubespray.io) and join us on the [kubernetes slack](https://kubernetes.slack.com), channel **\#kubespray**.
You can get your invite [here](http://slack.k8s.io/) You can get your invite [here](http://slack.k8s.io/)
- Can be deployed on **[AWS](docs/cloud_providers/aws.md), GCE, [Azure](docs/cloud_providers/azure.md), [OpenStack](docs/cloud_controllers/openstack.md), [vSphere](docs/cloud_controllers/vsphere.md), [Equinix Metal](docs/cloud_providers/equinix-metal.md) (bare metal), Oracle Cloud Infrastructure (Experimental), or Baremetal** - Can be deployed on **[AWS](docs/cloud_providers/aws.md), GCE, [Azure](docs/cloud_providers/azure.md), [OpenStack](docs/cloud_providers/openstack.md), [vSphere](docs/cloud_providers/vsphere.md), [Equinix Metal](docs/cloud_providers/equinix-metal.md) (bare metal), Oracle Cloud Infrastructure (Experimental), or Baremetal**
- **Highly available** cluster - **Highly available** cluster
- **Composable** (Choice of the network plugin for instance) - **Composable** (Choice of the network plugin for instance)
- Supports most popular **Linux distributions** - Supports most popular **Linux distributions**
@@ -19,7 +19,70 @@ Below are several ways to use Kubespray to deploy a Kubernetes cluster.
#### Usage #### Usage
See [Getting started](/docs/getting_started/getting-started.md) Install Ansible according to [Ansible installation guide](/docs/ansible/ansible.md#installing-ansible)
then run the following steps:
```ShellSession
# Copy ``inventory/sample`` as ``inventory/mycluster``
cp -rfp inventory/sample inventory/mycluster
# Update Ansible inventory file with inventory builder
declare -a IPS=(10.10.1.3 10.10.1.4 10.10.1.5)
CONFIG_FILE=inventory/mycluster/hosts.yaml python3 contrib/inventory_builder/inventory.py ${IPS[@]}
# Review and change parameters under ``inventory/mycluster/group_vars``
cat inventory/mycluster/group_vars/all/all.yml
cat inventory/mycluster/group_vars/k8s_cluster/k8s-cluster.yml
# Clean up old Kubernetes cluster with Ansible Playbook - run the playbook as root
# The option `--become` is required, as for example cleaning up SSL keys in /etc/,
# uninstalling old packages and interacting with various systemd daemons.
# Without --become the playbook will fail to run!
# And be mind it will remove the current kubernetes cluster (if it's running)!
ansible-playbook -i inventory/mycluster/hosts.yaml --become --become-user=root reset.yml
# Deploy Kubespray with Ansible Playbook - run the playbook as root
# The option `--become` is required, as for example writing SSL keys in /etc/,
# installing packages and interacting with various systemd daemons.
# Without --become the playbook will fail to run!
ansible-playbook -i inventory/mycluster/hosts.yaml --become --become-user=root cluster.yml
```
Note: When Ansible is already installed via system packages on the control node,
Python packages installed via `sudo pip install -r requirements.txt` will go to
a different directory tree (e.g. `/usr/local/lib/python2.7/dist-packages` on
Ubuntu) from Ansible's (e.g. `/usr/lib/python2.7/dist-packages/ansible` still on
Ubuntu). As a consequence, the `ansible-playbook` command will fail with:
```raw
ERROR! no action detected in task. This often indicates a misspelled module name, or incorrect module path.
```
This likely indicates that a task depends on a module present in ``requirements.txt``.
One way of addressing this is to uninstall the system Ansible package and then
reinstall Ansible via ``pip``, but this is not always possible, and one must
take care regarding package versions.
A workaround consists of setting the `ANSIBLE_LIBRARY`
and `ANSIBLE_MODULE_UTILS` environment variables respectively to
the `ansible/modules` and `ansible/module_utils` subdirectories of the ``pip``
installation location, which is the ``Location`` shown by running
`pip show [package]` before executing `ansible-playbook`.
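A sketch of that workaround, assuming a Bourne-like shell (the exact paths depend on where ``pip`` installed Ansible):

```ShellSession
# "Location" is taken from `pip show ansible`, as described above.
ANSIBLE_PIP_LOCATION="$(pip show ansible | awk '/^Location:/ {print $2}')"
export ANSIBLE_LIBRARY="${ANSIBLE_PIP_LOCATION}/ansible/modules"
export ANSIBLE_MODULE_UTILS="${ANSIBLE_PIP_LOCATION}/ansible/module_utils"
ansible-playbook -i inventory/mycluster/hosts.yaml --become cluster.yml
```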
A simple way to ensure you get the correct version of Ansible is to use
the [pre-built docker image from Quay](https://quay.io/repository/kubespray/kubespray?tab=tags).
You will then need to use [bind mounts](https://docs.docker.com/storage/bind-mounts/)
to access the inventory and SSH key in the container, like this:
```ShellSession
git checkout v2.25.0
docker pull quay.io/kubespray/kubespray:v2.25.0
docker run --rm -it --mount type=bind,source="$(pwd)"/inventory/sample,dst=/inventory \
--mount type=bind,source="${HOME}"/.ssh/id_rsa,dst=/root/.ssh/id_rsa \
quay.io/kubespray/kubespray:v2.25.0 bash
# Inside the container you may now run the kubespray playbooks:
ansible-playbook -i /inventory/inventory.ini --private-key /root/.ssh/id_rsa cluster.yml
```
#### Collection #### Collection
@@ -60,9 +123,12 @@ vagrant up
- [Fedora CoreOS bootstrap](docs/operating_systems/fcos.md) - [Fedora CoreOS bootstrap](docs/operating_systems/fcos.md)
- [openSUSE setup](docs/operating_systems/opensuse.md) - [openSUSE setup](docs/operating_systems/opensuse.md)
- [Downloaded artifacts](docs/advanced/downloads.md) - [Downloaded artifacts](docs/advanced/downloads.md)
- [Cloud providers](docs/cloud_providers/cloud.md)
- [OpenStack](docs/cloud_providers/openstack.md)
- [AWS](docs/cloud_providers/aws.md)
- [Azure](docs/cloud_providers/azure.md)
- [vSphere](docs/cloud_providers/vsphere.md)
- [Equinix Metal](docs/cloud_providers/equinix-metal.md) - [Equinix Metal](docs/cloud_providers/equinix-metal.md)
- [OpenStack](docs/cloud_controllers/openstack.md)
- [vSphere](docs/cloud_controllers/vsphere.md)
- [Large deployments](docs/operations/large-deployments.md) - [Large deployments](docs/operations/large-deployments.md)
- [Adding/replacing a node](docs/operations/nodes.md) - [Adding/replacing a node](docs/operations/nodes.md)
- [Upgrades basics](docs/operations/upgrades.md) - [Upgrades basics](docs/operations/upgrades.md)
@@ -78,7 +144,7 @@ vagrant up
- **Debian** Bookworm, Bullseye - **Debian** Bookworm, Bullseye
- **Ubuntu** 20.04, 22.04, 24.04 - **Ubuntu** 20.04, 22.04, 24.04
- **CentOS/RHEL** [8, 9](docs/operating_systems/centos.md#centos-8) - **CentOS/RHEL** [8, 9](docs/operating_systems/centos.md#centos-8)
- **Fedora** 39, 40 - **Fedora** 37, 38
- **Fedora CoreOS** (see [fcos Note](docs/operating_systems/fcos.md)) - **Fedora CoreOS** (see [fcos Note](docs/operating_systems/fcos.md))
- **openSUSE** Leap 15.x/Tumbleweed - **openSUSE** Leap 15.x/Tumbleweed
- **Oracle Linux** [8, 9](docs/operating_systems/centos.md#centos-8) - **Oracle Linux** [8, 9](docs/operating_systems/centos.md#centos-8)
@@ -94,15 +160,15 @@ Note: Upstart/SysV init based OS types are not supported.
## Supported Components ## Supported Components
- Core - Core
- [kubernetes](https://github.com/kubernetes/kubernetes) v1.31.9 - [kubernetes](https://github.com/kubernetes/kubernetes) v1.30.6
- [etcd](https://github.com/etcd-io/etcd) v3.5.21 - [etcd](https://github.com/etcd-io/etcd) v3.5.16
- [docker](https://www.docker.com/) v26.1 - [docker](https://www.docker.com/) v26.1
- [containerd](https://containerd.io/) v1.7.27 - [containerd](https://containerd.io/) v1.7.23
- [cri-o](http://cri-o.io/) v1.31.6 (experimental: see [CRI-O Note](docs/CRI/cri-o.md). Only on fedora, ubuntu and centos based OS) - [cri-o](http://cri-o.io/) v1.30.3 (experimental: see [CRI-O Note](docs/CRI/cri-o.md). Only on fedora, ubuntu and centos based OS)
- Network Plugin - Network Plugin
- [cni-plugins](https://github.com/containernetworking/plugins) v1.4.1 - [cni-plugins](https://github.com/containernetworking/plugins) v1.2.0
- [calico](https://github.com/projectcalico/calico) v3.29.4 - [calico](https://github.com/projectcalico/calico) v3.28.1
- [cilium](https://github.com/cilium/cilium) v1.15.9 - [cilium](https://github.com/cilium/cilium) v1.15.4
- [flannel](https://github.com/flannel-io/flannel) v0.22.0 - [flannel](https://github.com/flannel-io/flannel) v0.22.0
- [kube-ovn](https://github.com/alauda/kube-ovn) v1.12.21 - [kube-ovn](https://github.com/alauda/kube-ovn) v1.12.21
- [kube-router](https://github.com/cloudnativelabs/kube-router) v2.0.0 - [kube-router](https://github.com/cloudnativelabs/kube-router) v2.0.0
@@ -110,12 +176,12 @@ Note: Upstart/SysV init based OS types are not supported.
- [weave](https://github.com/rajch/weave) v2.8.7 - [weave](https://github.com/rajch/weave) v2.8.7
- [kube-vip](https://github.com/kube-vip/kube-vip) v0.8.0 - [kube-vip](https://github.com/kube-vip/kube-vip) v0.8.0
- Application - Application
- [cert-manager](https://github.com/jetstack/cert-manager) v1.15.3 - [cert-manager](https://github.com/jetstack/cert-manager) v1.14.7
- [coredns](https://github.com/coredns/coredns) v1.11.3 - [coredns](https://github.com/coredns/coredns) v1.11.1
- [ingress-nginx](https://github.com/kubernetes/ingress-nginx) v1.12.1 - [ingress-nginx](https://github.com/kubernetes/ingress-nginx) v1.11.5
- [krew](https://github.com/kubernetes-sigs/krew) v0.4.4 - [krew](https://github.com/kubernetes-sigs/krew) v0.4.4
- [argocd](https://argoproj.github.io/) v2.11.0 - [argocd](https://argoproj.github.io/) v2.11.0
- [helm](https://helm.sh/) v3.16.4 - [helm](https://helm.sh/) v3.15.4
- [metallb](https://metallb.universe.tf/) v0.13.9 - [metallb](https://metallb.universe.tf/) v0.13.9
- [registry](https://github.com/distribution/distribution) v2.8.1 - [registry](https://github.com/distribution/distribution) v2.8.1
- Storage Plugin - Storage Plugin
@@ -135,7 +201,7 @@ Note: Upstart/SysV init based OS types are not supported.
## Requirements ## Requirements
- **Minimum required version of Kubernetes is v1.29** - **Minimum required version of Kubernetes is v1.28**
- **Ansible v2.14+, Jinja 2.11+ and python-netaddr is installed on the machine that will run Ansible commands** - **Ansible v2.14+, Jinja 2.11+ and python-netaddr is installed on the machine that will run Ansible commands**
- The target servers must have **access to the Internet** in order to pull docker images. Otherwise, additional configuration is required (See [Offline Environment](docs/operations/offline-environment.md)) - The target servers must have **access to the Internet** in order to pull docker images. Otherwise, additional configuration is required (See [Offline Environment](docs/operations/offline-environment.md))
- The target servers are configured to allow **IPv4 forwarding**. - The target servers are configured to allow **IPv4 forwarding**.


@@ -9,7 +9,7 @@
# #
# DO NOT REPORT SECURITY VULNERABILITIES DIRECTLY TO THESE NAMES, FOLLOW THE # DO NOT REPORT SECURITY VULNERABILITIES DIRECTLY TO THESE NAMES, FOLLOW THE
# INSTRUCTIONS AT https://kubernetes.io/security/ # INSTRUCTIONS AT https://kubernetes.io/security/
mattymo
floryut floryut
ant31 oomichi
VannTen cristicalin
yankay

Vagrantfile (vendored, 41 lines changed)

@@ -28,10 +28,8 @@ SUPPORTED_OS = {
"almalinux8-bento" => {box: "bento/almalinux-8", user: "vagrant"}, "almalinux8-bento" => {box: "bento/almalinux-8", user: "vagrant"},
"rockylinux8" => {box: "rockylinux/8", user: "vagrant"}, "rockylinux8" => {box: "rockylinux/8", user: "vagrant"},
"rockylinux9" => {box: "rockylinux/9", user: "vagrant"}, "rockylinux9" => {box: "rockylinux/9", user: "vagrant"},
"fedora39" => {box: "fedora/39-cloud-base", user: "vagrant"}, "fedora37" => {box: "fedora/37-cloud-base", user: "vagrant"},
"fedora40" => {box: "fedora/40-cloud-base", user: "vagrant"}, "fedora38" => {box: "fedora/38-cloud-base", user: "vagrant"},
"fedora39-arm64" => {box: "bento/fedora-39-arm64", user: "vagrant"},
"fedora40-arm64" => {box: "bento/fedora-40", user: "vagrant"},
"opensuse" => {box: "opensuse/Leap-15.4.x86_64", user: "vagrant"}, "opensuse" => {box: "opensuse/Leap-15.4.x86_64", user: "vagrant"},
"opensuse-tumbleweed" => {box: "opensuse/Tumbleweed.x86_64", user: "vagrant"}, "opensuse-tumbleweed" => {box: "opensuse/Tumbleweed.x86_64", user: "vagrant"},
"oraclelinux" => {box: "generic/oracle7", user: "vagrant"}, "oraclelinux" => {box: "generic/oracle7", user: "vagrant"},
@@ -57,8 +55,6 @@ $subnet ||= "172.18.8"
$subnet_ipv6 ||= "fd3c:b398:0698:0756" $subnet_ipv6 ||= "fd3c:b398:0698:0756"
$os ||= "ubuntu2004" $os ||= "ubuntu2004"
$network_plugin ||= "flannel" $network_plugin ||= "flannel"
$inventory ||= "inventory/sample"
$inventories ||= [$inventory]
# Setting multi_networking to true will install Multus: https://github.com/k8snetworkplumbingwg/multus-cni # Setting multi_networking to true will install Multus: https://github.com/k8snetworkplumbingwg/multus-cni
$multi_networking ||= "False" $multi_networking ||= "False"
$download_run_once ||= "True" $download_run_once ||= "True"
@@ -97,6 +93,19 @@ if ! SUPPORTED_OS.key?($os)
end end
$box = SUPPORTED_OS[$os][:box] $box = SUPPORTED_OS[$os][:box]
# if $inventory is not set, try to use example
$inventory = "inventory/sample" if ! $inventory
$inventory = File.absolute_path($inventory, File.dirname(__FILE__))
# if $inventory has a hosts.ini file use it, otherwise copy over
# vars etc to where vagrant expects dynamic inventory to be
if ! File.exist?(File.join(File.dirname($inventory), "hosts.ini"))
$vagrant_ansible = File.join(File.absolute_path($vagrant_dir), "provisioners", "ansible")
FileUtils.mkdir_p($vagrant_ansible) if ! File.exist?($vagrant_ansible)
$vagrant_inventory = File.join($vagrant_ansible,"inventory")
FileUtils.rm_f($vagrant_inventory)
FileUtils.ln_s($inventory, $vagrant_inventory)
end
if Vagrant.has_plugin?("vagrant-proxyconf") if Vagrant.has_plugin?("vagrant-proxyconf")
$no_proxy = ENV['NO_PROXY'] || ENV['no_proxy'] || "127.0.0.1,localhost" $no_proxy = ENV['NO_PROXY'] || ENV['no_proxy'] || "127.0.0.1,localhost"
@@ -195,7 +204,7 @@ Vagrant.configure("2") do |config|
node.vm.network "forwarded_port", guest: guest, host: host, auto_correct: true node.vm.network "forwarded_port", guest: guest, host: host, auto_correct: true
end end
if ["rhel8"].include? $os if ["rhel7","rhel8"].include? $os
# Vagrant synced_folder rsync options cannot be used for RHEL boxes as Rsync package cannot # Vagrant synced_folder rsync options cannot be used for RHEL boxes as Rsync package cannot
# be installed until the host is registered with a valid Red Hat support subscription # be installed until the host is registered with a valid Red Hat support subscription
node.vm.synced_folder ".", "/vagrant", disabled: false node.vm.synced_folder ".", "/vagrant", disabled: false
@@ -226,16 +235,15 @@ Vagrant.configure("2") do |config|
node.vm.provision "shell", inline: "rm -f /etc/modprobe.d/local.conf" node.vm.provision "shell", inline: "rm -f /etc/modprobe.d/local.conf"
node.vm.provision "shell", inline: "sed -i '/net.ipv6.conf.all.disable_ipv6/d' /etc/sysctl.d/99-sysctl.conf /etc/sysctl.conf" node.vm.provision "shell", inline: "sed -i '/net.ipv6.conf.all.disable_ipv6/d' /etc/sysctl.d/99-sysctl.conf /etc/sysctl.conf"
end end
# Hack for fedora39/40 to get the IP address of the second interface # Hack for fedora37/38 to get the IP address of the second interface
if ["fedora39", "fedora40", "fedora39-arm64", "fedora40-arm64"].include? $os if ["fedora37", "fedora38"].include? $os
config.vm.provision "shell", inline: <<-SHELL config.vm.provision "shell", inline: <<-SHELL
nmcli conn modify 'Wired connection 2' ipv4.addresses $(cat /etc/sysconfig/network-scripts/ifcfg-eth1 | grep IPADDR | cut -d "=" -f2)/24 nmcli conn modify 'Wired connection 2' ipv4.addresses $(cat /etc/sysconfig/network-scripts/ifcfg-eth1 | grep IPADDR | cut -d "=" -f2)
nmcli conn modify 'Wired connection 2' ipv4.method manual nmcli conn modify 'Wired connection 2' ipv4.method manual
service NetworkManager restart service NetworkManager restart
SHELL SHELL
end end
# Rockylinux boxes needs UEFI # Rockylinux boxes needs UEFI
if ["rockylinux8", "rockylinux9"].include? $os if ["rockylinux8", "rockylinux9"].include? $os
config.vm.provider "libvirt" do |domain| config.vm.provider "libvirt" do |domain|
@@ -244,7 +252,7 @@ Vagrant.configure("2") do |config|
end end
# Disable firewalld on oraclelinux/redhat vms # Disable firewalld on oraclelinux/redhat vms
if ["oraclelinux","oraclelinux8", "rhel8","rockylinux8"].include? $os if ["oraclelinux","oraclelinux8","rhel7","rhel8","rockylinux8"].include? $os
node.vm.provision "shell", inline: "systemctl stop firewalld; systemctl disable firewalld" node.vm.provision "shell", inline: "systemctl stop firewalld; systemctl disable firewalld"
end end
@@ -278,13 +286,14 @@ Vagrant.configure("2") do |config|
ansible.playbook = $playbook ansible.playbook = $playbook
ansible.compatibility_mode = "2.0" ansible.compatibility_mode = "2.0"
ansible.verbose = $ansible_verbosity ansible.verbose = $ansible_verbosity
$ansible_inventory_path = File.join( $inventory, "hosts.ini")
if File.exist?($ansible_inventory_path)
ansible.inventory_path = $ansible_inventory_path
end
ansible.become = true ansible.become = true
ansible.limit = "all,localhost" ansible.limit = "all,localhost"
ansible.host_key_checking = false ansible.host_key_checking = false
ansible.raw_arguments = ["--forks=#{$num_instances}", ansible.raw_arguments = ["--forks=#{$num_instances}", "--flush-cache", "-e ansible_become_pass=vagrant"]
"--flush-cache",
"-e ansible_become_pass=vagrant"] +
$inventories.map {|inv| ["-i", inv]}.flatten
ansible.host_vars = host_vars ansible.host_vars = host_vars
ansible.extra_vars = $extra_vars ansible.extra_vars = $extra_vars
if $ansible_tags != "" if $ansible_tags != ""

contrib/dind/README.md (new file, 177 lines)

@@ -0,0 +1,177 @@
# Kubespray DIND experimental setup
This ansible playbook creates local docker containers
to serve as Kubernetes "nodes", which in turn will run
"normal" Kubernetes docker containers, a mode usually
called DIND (Docker-IN-Docker).
The playbook has two roles:
- dind-host: creates the "nodes" as containers in localhost, with
appropriate settings for DIND (privileged, volume mapping for dind
storage, etc).
- dind-cluster: customizes each node container to have required
system packages installed, and some utils (swapoff, lsattr)
symlinked to /bin/true to ease mimicking a real node.
This playbook has been tested with Ubuntu 16.04 as the host and ubuntu:16.04
as the docker image (note that dind-cluster has specific customizations
for these images).
The playbook also creates a `/tmp/kubespray.dind.inventory_builder.sh`
helper (wraps up running `contrib/inventory_builder/inventory.py` with
node containers IPs and prefix).
## Deploying
See below for a complete successful run:
1. Create the node containers
```shell
# From the kubespray root dir
cd contrib/dind
pip install -r requirements.txt
ansible-playbook -i hosts dind-cluster.yaml
# Back to kubespray root
cd ../..
```
NOTE: if the playbook run fails with an error message like the one below,
you may need to explicitly set `ansible_python_interpreter`;
see the `./hosts` file for an example expanded localhost entry.
```shell
failed: [localhost] (item=kube-node1) => {"changed": false, "item": "kube-node1", "msg": "Failed to import docker or docker-py - No module named requests.exceptions. Try `pip install docker` or `pip install docker-py` (Python 2.6)"}
```
2. Customize kubespray-dind.yaml
Note that there's coupling between the node containers created above
and the `kubespray-dind.yaml` settings, in particular the selected `node_distro`
(as set in `group_vars/all/all.yaml`) and the docker settings.
```shell
$EDITOR contrib/dind/kubespray-dind.yaml
```
3. Prepare the inventory and run the playbook
```shell
INVENTORY_DIR=inventory/local-dind
mkdir -p ${INVENTORY_DIR}
rm -f ${INVENTORY_DIR}/hosts.ini
CONFIG_FILE=${INVENTORY_DIR}/hosts.ini /tmp/kubespray.dind.inventory_builder.sh
ansible-playbook --become -e ansible_ssh_user=debian -i ${INVENTORY_DIR}/hosts.ini cluster.yml --extra-vars @contrib/dind/kubespray-dind.yaml
```
NOTE: You can also test other distros without editing files by
passing `--extra-vars` as in the command line below,
replacing `DISTRO` with one of `debian`, `ubuntu`, `centos`, `fedora`:
```shell
cd contrib/dind
ansible-playbook -i hosts dind-cluster.yaml --extra-vars node_distro=DISTRO
cd ../..
CONFIG_FILE=inventory/local-dind/hosts.ini /tmp/kubespray.dind.inventory_builder.sh
ansible-playbook --become -e ansible_ssh_user=DISTRO -i inventory/local-dind/hosts.ini cluster.yml --extra-vars @contrib/dind/kubespray-dind.yaml --extra-vars bootstrap_os=DISTRO
```
## Resulting deployment
See below for an idea of what a completed deployment looks like,
from the host where you ran the kubespray playbooks.
### node_distro: debian
Running from an Ubuntu Xenial host:
```shell
$ uname -a
Linux ip-xx-xx-xx-xx 4.4.0-1069-aws #79-Ubuntu SMP Mon Sep 24
15:01:41 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
1835dd183b75 debian:9.5 "sh -c 'apt-get -qy …" 43 minutes ago Up 43 minutes kube-node5
30b0af8d2924 debian:9.5 "sh -c 'apt-get -qy …" 43 minutes ago Up 43 minutes kube-node4
3e0d1510c62f debian:9.5 "sh -c 'apt-get -qy …" 43 minutes ago Up 43 minutes kube-node3
738993566f94 debian:9.5 "sh -c 'apt-get -qy …" 44 minutes ago Up 44 minutes kube-node2
c581ef662ed2 debian:9.5 "sh -c 'apt-get -qy …" 44 minutes ago Up 44 minutes kube-node1
$ docker exec kube-node1 kubectl get node
NAME STATUS ROLES AGE VERSION
kube-node1 Ready master,node 18m v1.12.1
kube-node2 Ready master,node 17m v1.12.1
kube-node3 Ready node 17m v1.12.1
kube-node4 Ready node 17m v1.12.1
kube-node5 Ready node 17m v1.12.1
$ docker exec kube-node1 kubectl get pod --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
default netchecker-agent-67489 1/1 Running 0 2m51s
default netchecker-agent-6qq6s 1/1 Running 0 2m51s
default netchecker-agent-fsw92 1/1 Running 0 2m51s
default netchecker-agent-fw6tl 1/1 Running 0 2m51s
default netchecker-agent-hostnet-8f2zb 1/1 Running 0 3m
default netchecker-agent-hostnet-gq7ml 1/1 Running 0 3m
default netchecker-agent-hostnet-jfkgv 1/1 Running 0 3m
default netchecker-agent-hostnet-kwfwx 1/1 Running 0 3m
default netchecker-agent-hostnet-r46nm 1/1 Running 0 3m
default netchecker-agent-lxdrn 1/1 Running 0 2m51s
default netchecker-server-864bd4c897-9vstl 1/1 Running 0 2m40s
default sh-68fcc6db45-qf55h 1/1 Running 1 12m
kube-system coredns-7598f59475-6vknq 1/1 Running 0 14m
kube-system coredns-7598f59475-l5q5x 1/1 Running 0 14m
kube-system kube-apiserver-kube-node1 1/1 Running 0 17m
kube-system kube-apiserver-kube-node2 1/1 Running 0 18m
kube-system kube-controller-manager-kube-node1 1/1 Running 0 18m
kube-system kube-controller-manager-kube-node2 1/1 Running 0 18m
kube-system kube-proxy-5xx9d 1/1 Running 0 17m
kube-system kube-proxy-cdqq4 1/1 Running 0 17m
kube-system kube-proxy-n64ls 1/1 Running 0 17m
kube-system kube-proxy-pswmj 1/1 Running 0 18m
kube-system kube-proxy-x89qw 1/1 Running 0 18m
kube-system kube-scheduler-kube-node1 1/1 Running 4 17m
kube-system kube-scheduler-kube-node2 1/1 Running 4 18m
kube-system kubernetes-dashboard-5db4d9f45f-548rl 1/1 Running 0 14m
kube-system nginx-proxy-kube-node3 1/1 Running 4 17m
kube-system nginx-proxy-kube-node4 1/1 Running 4 17m
kube-system nginx-proxy-kube-node5 1/1 Running 4 17m
kube-system weave-net-42bfr 2/2 Running 0 16m
kube-system weave-net-6gt8m 2/2 Running 0 16m
kube-system weave-net-88nnc 2/2 Running 0 16m
kube-system weave-net-shckr 2/2 Running 0 16m
kube-system weave-net-xr46t 2/2 Running 0 16m
$ docker exec kube-node1 curl -s http://localhost:31081/api/v1/connectivity_check
{"Message":"All 10 pods successfully reported back to the server","Absent":null,"Outdated":null}
```
## Using ./run-test-distros.sh
You can use `./run-test-distros.sh` to run a set of tests via DIND;
an excerpt from this script gives an idea:
```shell
# The SPEC file(s) must have two arrays as e.g.
# DISTROS=(debian centos)
# EXTRAS=(
# 'kube_network_plugin=calico'
# 'kube_network_plugin=flannel'
# 'kube_network_plugin=weave'
# )
# that will be tested in a "combinatory" way (e.g. from above there'll be
# be 6 test runs), creating a sequenced <spec_filename>-nn.out with each output.
#
# Each $EXTRAS element will be whitespace split, and passed as --extra-vars
# to main kubespray ansible-playbook run.
```
See e.g. `test-some_distros-most_CNIs.env` and
`test-some_distros-kube_router_combo.env` in particular for a richer
set of CNI-specific `--extra-vars` combos.


@@ -0,0 +1,11 @@
---
- name: Create nodes as docker containers
hosts: localhost
gather_facts: false
roles:
- { role: dind-host }
- name: Customize each node containers
hosts: containers
roles:
- { role: dind-cluster }


@@ -0,0 +1,3 @@
---
# See distro.yaml for supported node_distro images
node_distro: debian


@@ -0,0 +1,41 @@
---
distro_settings:
debian: &DEBIAN
image: "debian:9.5"
user: "debian"
pid1_exe: /lib/systemd/systemd
init: |
sh -c "apt-get -qy update && apt-get -qy install systemd-sysv dbus && exec /sbin/init"
raw_setup: apt-get -qy update && apt-get -qy install dbus python sudo iproute2
raw_setup_done: test -x /usr/bin/sudo
agetty_svc: getty@*
ssh_service: ssh
extra_packages: []
ubuntu:
<<: *DEBIAN
image: "ubuntu:16.04"
user: "ubuntu"
init: |
/sbin/init
centos: &CENTOS
image: "centos:8"
user: "centos"
pid1_exe: /usr/lib/systemd/systemd
init: |
/sbin/init
raw_setup: yum -qy install policycoreutils dbus python sudo iproute iptables
raw_setup_done: test -x /usr/bin/sudo
agetty_svc: getty@* serial-getty@*
ssh_service: sshd
extra_packages: []
fedora:
<<: *CENTOS
image: "fedora:latest"
user: "fedora"
raw_setup: yum -qy install policycoreutils dbus python sudo iproute iptables; mkdir -p /etc/modules-load.d
extra_packages:
- hostname
- procps
- findutils
- kmod
- iputils

contrib/dind/hosts (new file, 15 lines)

@@ -0,0 +1,15 @@
[local]
# If you created a virtualenv for ansible, you may need to specify running the
# python binary from there instead:
#localhost ansible_connection=local ansible_python_interpreter=/home/user/kubespray/.venv/bin/python
localhost ansible_connection=local
[containers]
kube-node1
kube-node2
kube-node3
kube-node4
kube-node5
[containers:vars]
ansible_connection=docker


@@ -0,0 +1,22 @@
---
# kubespray-dind.yaml: minimal kubespray ansible playbook usable for DIND
# See contrib/dind/README.md
kube_api_anonymous_auth: true
kubelet_fail_swap_on: false
# Docker nodes need to have been created with same "node_distro: debian"
# at contrib/dind/group_vars/all/all.yaml
bootstrap_os: debian
docker_version: latest
docker_storage_options: -s overlay2 --storage-opt overlay2.override_kernel_check=true -g /dind/docker
dns_mode: coredns
deploy_netchecker: true
netcheck_agent_image_repo: quay.io/l23network/k8s-netchecker-agent
netcheck_server_image_repo: quay.io/l23network/k8s-netchecker-server
netcheck_agent_image_tag: v1.0
netcheck_server_image_tag: v1.0


@@ -0,0 +1 @@
docker


@@ -0,0 +1,73 @@
---
- name: Set_fact distro_setup
set_fact:
distro_setup: "{{ distro_settings[node_distro] }}"
- name: Set_fact other distro settings
set_fact:
distro_user: "{{ distro_setup['user'] }}"
distro_ssh_service: "{{ distro_setup['ssh_service'] }}"
distro_extra_packages: "{{ distro_setup['extra_packages'] }}"
- name: Null-ify some linux tools to ease DIND
file:
src: "/bin/true"
dest: "{{ item }}"
state: link
force: true
with_items:
# DIND box may have swap enable, don't bother
- /sbin/swapoff
# /etc/hosts handling would fail on trying to copy file attributes on edit,
# void it by successfully returning nil output
- /usr/bin/lsattr
# disable selinux-isms, sp needed if running on non-Selinux host
- /usr/sbin/semodule
- name: Void installing dpkg docs and man pages on Debian based distros
copy:
content: |
# Delete locales
path-exclude=/usr/share/locale/*
# Delete man pages
path-exclude=/usr/share/man/*
# Delete docs
path-exclude=/usr/share/doc/*
path-include=/usr/share/doc/*/copyright
dest: /etc/dpkg/dpkg.cfg.d/01_nodoc
mode: "0644"
when:
- ansible_os_family == 'Debian'
- name: Install system packages to better match a full-fledge node
package:
name: "{{ item }}"
state: present
with_items: "{{ distro_extra_packages + ['rsyslog', 'openssh-server'] }}"
- name: Start needed services
service:
name: "{{ item }}"
state: started
with_items:
- rsyslog
- "{{ distro_ssh_service }}"
- name: Create distro user "{{ distro_user }}"
user:
name: "{{ distro_user }}"
uid: 1000
# groups: sudo
append: true
- name: Allow password-less sudo to "{{ distro_user }}"
copy:
content: "{{ distro_user }} ALL=(ALL) NOPASSWD:ALL"
dest: "/etc/sudoers.d/{{ distro_user }}"
mode: "0640"
- name: "Add my pubkey to {{ distro_user }} user authorized keys"
ansible.posix.authorized_key:
user: "{{ distro_user }}"
state: present
key: "{{ lookup('file', lookup('env', 'HOME') + '/.ssh/id_rsa.pub') }}"


@@ -0,0 +1,87 @@
---
- name: Set_fact distro_setup
set_fact:
distro_setup: "{{ distro_settings[node_distro] }}"
- name: Set_fact other distro settings
set_fact:
distro_image: "{{ distro_setup['image'] }}"
distro_init: "{{ distro_setup['init'] }}"
distro_pid1_exe: "{{ distro_setup['pid1_exe'] }}"
distro_raw_setup: "{{ distro_setup['raw_setup'] }}"
distro_raw_setup_done: "{{ distro_setup['raw_setup_done'] }}"
distro_agetty_svc: "{{ distro_setup['agetty_svc'] }}"
- name: Create dind node containers from "containers" inventory section
community.docker.docker_container:
image: "{{ distro_image }}"
name: "{{ item }}"
state: started
hostname: "{{ item }}"
command: "{{ distro_init }}"
# recreate: true
privileged: true
tmpfs:
- /sys/module/nf_conntrack/parameters
volumes:
- /boot:/boot
- /lib/modules:/lib/modules
- "{{ item }}:/dind/docker"
register: containers
with_items: "{{ groups.containers }}"
tags:
- addresses
- name: Gather list of containers IPs
set_fact:
addresses: "{{ containers.results | map(attribute='ansible_facts') | map(attribute='docker_container') | map(attribute='NetworkSettings') | map(attribute='IPAddress') | list }}"
tags:
- addresses
- name: Create inventory_builder helper already set with the list of node containers' IPs
template:
src: inventory_builder.sh.j2
dest: /tmp/kubespray.dind.inventory_builder.sh
mode: "0755"
tags:
- addresses
- name: Install needed packages into node containers via raw, need to wait for possible systemd packages to finish installing
raw: |
# agetty processes churn a lot of cpu time failing on inexistent ttys, early STOP them, to rip them in below task
pkill -STOP agetty || true
{{ distro_raw_setup_done }} && echo SKIPPED && exit 0
until [ "$(readlink /proc/1/exe)" = "{{ distro_pid1_exe }}" ] ; do sleep 1; done
{{ distro_raw_setup }}
delegate_to: "{{ item._ansible_item_label | default(item.item) }}"
with_items: "{{ containers.results }}"
register: result
changed_when: result.stdout.find("SKIPPED") < 0
- name: Remove gettys from node containers
raw: |
until test -S /var/run/dbus/system_bus_socket; do sleep 1; done
systemctl disable {{ distro_agetty_svc }}
systemctl stop {{ distro_agetty_svc }}
delegate_to: "{{ item._ansible_item_label | default(item.item) }}"
with_items: "{{ containers.results }}"
changed_when: false
# Running systemd-machine-id-setup doesn't create a unique id for each node container on Debian,
# handle manually
- name: Re-create unique machine-id (as we may just get what comes in the docker image), needed by some CNIs for mac address seeding (notably weave)
raw: |
echo {{ item | hash('sha1') }} > /etc/machine-id.new
mv -b /etc/machine-id.new /etc/machine-id
cmp /etc/machine-id /etc/machine-id~ || true
systemctl daemon-reload
delegate_to: "{{ item._ansible_item_label | default(item.item) }}"
with_items: "{{ containers.results }}"
- name: Early hack image install to adapt for DIND
raw: |
rm -fv /usr/bin/udevadm /usr/sbin/udevadm
delegate_to: "{{ item._ansible_item_label | default(item.item) }}"
with_items: "{{ containers.results }}"
register: result
changed_when: result.stdout.find("removed") >= 0


@@ -0,0 +1,3 @@
#!/bin/bash
# NOTE: if you change HOST_PREFIX, you also need to edit ./hosts [containers] section
HOST_PREFIX=kube-node python3 contrib/inventory_builder/inventory.py {% for ip in addresses %} {{ ip }} {% endfor %}


@@ -0,0 +1,93 @@
#!/bin/bash
# Q&D test'em all: creates full DIND kubespray deploys
# for each distro, verifying it via netchecker.
info() {
local msg="$*"
local date="$(date -Isec)"
echo "INFO: [$date] $msg"
}
pass_or_fail() {
local rc="$?"
local msg="$*"
local date="$(date -Isec)"
[ $rc -eq 0 ] && echo "PASS: [$date] $msg" || echo "FAIL: [$date] $msg"
return $rc
}
test_distro() {
local distro=${1:?};shift
local extra="${*:-}"
local prefix="${distro[${extra}]}"
ansible-playbook -i hosts dind-cluster.yaml -e node_distro=$distro
pass_or_fail "$prefix: dind-nodes" || return 1
(cd ../..
INVENTORY_DIR=inventory/local-dind
mkdir -p ${INVENTORY_DIR}
rm -f ${INVENTORY_DIR}/hosts.ini
CONFIG_FILE=${INVENTORY_DIR}/hosts.ini /tmp/kubespray.dind.inventory_builder.sh
# expand $extra with -e in front of each word
extra_args=""; for extra_arg in $extra; do extra_args="$extra_args -e $extra_arg"; done
ansible-playbook --become -e ansible_ssh_user=$distro -i \
${INVENTORY_DIR}/hosts.ini cluster.yml \
-e @contrib/dind/kubespray-dind.yaml -e bootstrap_os=$distro ${extra_args}
pass_or_fail "$prefix: kubespray"
) || return 1
local node0=${NODES[0]}
docker exec ${node0} kubectl get pod --all-namespaces
pass_or_fail "$prefix: kube-api" || return 1
let retries=60
while ((retries--)); do
# Some CNI may set NodePort on "main" node interface address (thus no localhost NodePort)
# e.g. kube-router: https://github.com/cloudnativelabs/kube-router/pull/217
docker exec ${node0} curl -m2 -s http://${NETCHECKER_HOST:?}:31081/api/v1/connectivity_check | grep successfully && break
sleep 2
done
[ $retries -ge 0 ]
pass_or_fail "$prefix: netcheck" || return 1
}
NODES=($(egrep ^kube_node hosts))
NETCHECKER_HOST=localhost
: ${OUTPUT_DIR:=./out}
mkdir -p ${OUTPUT_DIR}
# The SPEC file(s) must have two arrays as e.g.
# DISTROS=(debian centos)
# EXTRAS=(
# 'kube_network_plugin=calico'
# 'kube_network_plugin=flannel'
# 'kube_network_plugin=weave'
# )
# that will be tested in a "combinatory" way (e.g. from above there'll be
# be 6 test runs), creating a sequenced <spec_filename>-nn.out with each output.
#
# Each $EXTRAS element will be whitespace split, and passed as --extra-vars
# to main kubespray ansible-playbook run.
SPECS=${*:?Missing SPEC files, e.g. test-most_distros-some_CNIs.env}
for spec in ${SPECS}; do
unset DISTROS EXTRAS
echo "Loading file=${spec} ..."
. ${spec} || continue
: ${DISTROS:?} || continue
echo "DISTROS:" "${DISTROS[@]}"
echo "EXTRAS->"
printf " %s\n" "${EXTRAS[@]}"
let n=1
for distro in "${DISTROS[@]}"; do
for extra in "${EXTRAS[@]:-NULL}"; do
# Magic value to let this for run once:
[[ ${extra} == NULL ]] && unset extra
docker rm -f "${NODES[@]}"
printf -v file_out "%s/%s-%02d.out" ${OUTPUT_DIR} ${spec} $((n++))
{
info "${distro}[${extra}] START: file_out=${file_out}"
time test_distro ${distro} ${extra}
} |& tee ${file_out}
# sleeping for the sake of the human to verify if they want
sleep 2m
done
done
done
egrep -H '^(....:|real)' $(ls -tr ${OUTPUT_DIR}/*.out)


@@ -0,0 +1,11 @@
# Test spec file: used from ./run-test-distros.sh, will run
# each distro in $DISTROS overloading main kubespray ansible-playbook run
# Get all DISTROS from distro.yaml (shame no yaml parsing, but nuff anyway)
# DISTROS="${*:-$(egrep -o '^ \w+' group_vars/all/distro.yaml|paste -s)}"
DISTROS=(debian ubuntu centos fedora)
# Each line below will be added as --extra-vars to main playbook run
EXTRAS=(
'kube_network_plugin=calico'
'kube_network_plugin=weave'
)


@@ -0,0 +1,6 @@
DISTROS=(debian centos)
NETCHECKER_HOST=${NODES[0]}
EXTRAS=(
'kube_network_plugin=kube-router {"kube_router_run_service_proxy":false}'
'kube_network_plugin=kube-router {"kube_router_run_service_proxy":true}'
)


@@ -0,0 +1,8 @@
DISTROS=(debian centos)
EXTRAS=(
'kube_network_plugin=calico {}'
'kube_network_plugin=canal {}'
'kube_network_plugin=cilium {}'
'kube_network_plugin=flannel {}'
'kube_network_plugin=weave {}'
)


@@ -0,0 +1,480 @@
#!/usr/bin/env python3
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# Usage: inventory.py ip1 [ip2 ...]
# Examples: inventory.py 10.10.1.3 10.10.1.4 10.10.1.5
#
# Advanced usage:
# Add another host after initial creation: inventory.py 10.10.1.5
# Add range of hosts: inventory.py 10.10.1.3-10.10.1.5
# Add hosts with different ip and access ip:
# inventory.py 10.0.0.1,192.168.10.1 10.0.0.2,192.168.10.2 10.0.0.3,192.168.1.3
# Add hosts with a specific hostname, ip, and optional access ip:
# inventory.py first,10.0.0.1,192.168.10.1 second,10.0.0.2 last,10.0.0.3
# Delete a host: inventory.py -10.10.1.3
# Delete a host by id: inventory.py -node1
#
# Load a YAML or JSON file with inventory data: inventory.py load hosts.yaml
# YAML file should be in the following format:
# group1:
# host1:
# ip: X.X.X.X
# var: val
# group2:
# host2:
# ip: X.X.X.X
from collections import OrderedDict
from ipaddress import ip_address
from ruamel.yaml import YAML
import os
import re
import subprocess
import sys
ROLES = ['all', 'kube_control_plane', 'kube_node', 'etcd', 'k8s_cluster',
'calico_rr']
PROTECTED_NAMES = ROLES
AVAILABLE_COMMANDS = ['help', 'print_cfg', 'print_ips', 'print_hostnames',
'load', 'add']
_boolean_states = {'1': True, 'yes': True, 'true': True, 'on': True,
'0': False, 'no': False, 'false': False, 'off': False}
yaml = YAML()
yaml.Representer.add_representer(OrderedDict, yaml.Representer.represent_dict)
def get_var_as_bool(name, default):
value = os.environ.get(name, '')
return _boolean_states.get(value.lower(), default)
# Configurable as shell vars start
CONFIG_FILE = os.environ.get("CONFIG_FILE", "./inventory/sample/hosts.yaml")
# Remove the reference of KUBE_MASTERS after some deprecation cycles.
KUBE_CONTROL_HOSTS = int(os.environ.get("KUBE_CONTROL_HOSTS",
os.environ.get("KUBE_MASTERS", 2)))
# Reconfigures cluster distribution at scale
SCALE_THRESHOLD = int(os.environ.get("SCALE_THRESHOLD", 50))
MASSIVE_SCALE_THRESHOLD = int(os.environ.get("MASSIVE_SCALE_THRESHOLD", 200))
DEBUG = get_var_as_bool("DEBUG", True)
HOST_PREFIX = os.environ.get("HOST_PREFIX", "node")
USE_REAL_HOSTNAME = get_var_as_bool("USE_REAL_HOSTNAME", False)
# Configurable as shell vars end
class KubesprayInventory(object):
def __init__(self, changed_hosts=None, config_file=None):
self.config_file = config_file
self.yaml_config = {}
loadPreviousConfig = False
printHostnames = False
# See whether there are any commands to process
if changed_hosts and changed_hosts[0] in AVAILABLE_COMMANDS:
if changed_hosts[0] == "add":
loadPreviousConfig = True
changed_hosts = changed_hosts[1:]
elif changed_hosts[0] == "print_hostnames":
loadPreviousConfig = True
printHostnames = True
else:
self.parse_command(changed_hosts[0], changed_hosts[1:])
sys.exit(0)
# If the user wants to remove a node, we need to load the config anyway
if changed_hosts and changed_hosts[0][0] == "-":
loadPreviousConfig = True
if self.config_file and loadPreviousConfig: # Load previous YAML file
try:
self.hosts_file = open(config_file, 'r')
self.yaml_config = yaml.load(self.hosts_file)
except OSError as e:
# I am assuming we are catching "cannot open file" exceptions
print(e)
sys.exit(1)
if printHostnames:
self.print_hostnames()
sys.exit(0)
self.ensure_required_groups(ROLES)
if changed_hosts:
changed_hosts = self.range2ips(changed_hosts)
self.hosts = self.build_hostnames(changed_hosts,
loadPreviousConfig)
self.purge_invalid_hosts(self.hosts.keys(), PROTECTED_NAMES)
self.set_all(self.hosts)
self.set_k8s_cluster()
etcd_hosts_count = 3 if len(self.hosts.keys()) >= 3 else 1
self.set_etcd(list(self.hosts.keys())[:etcd_hosts_count])
if len(self.hosts) >= SCALE_THRESHOLD:
self.set_kube_control_plane(list(self.hosts.keys())[
etcd_hosts_count:(etcd_hosts_count + KUBE_CONTROL_HOSTS)])
else:
self.set_kube_control_plane(
list(self.hosts.keys())[:KUBE_CONTROL_HOSTS])
self.set_kube_node(self.hosts.keys())
if len(self.hosts) >= SCALE_THRESHOLD:
self.set_calico_rr(list(self.hosts.keys())[:etcd_hosts_count])
else: # Show help if no options
self.show_help()
sys.exit(0)
self.write_config(self.config_file)
def write_config(self, config_file):
if config_file:
with open(self.config_file, 'w') as f:
yaml.dump(self.yaml_config, f)
else:
print("WARNING: Unable to save config. Make sure you set "
"CONFIG_FILE env var.")
def debug(self, msg):
if DEBUG:
print("DEBUG: {0}".format(msg))
def get_ip_from_opts(self, optstring):
if 'ip' in optstring:
return optstring['ip']
else:
raise ValueError("IP parameter not found in options")
def ensure_required_groups(self, groups):
for group in groups:
if group == 'all':
self.debug("Adding group {0}".format(group))
if group not in self.yaml_config:
all_dict = OrderedDict([('hosts', OrderedDict({})),
('children', OrderedDict({}))])
self.yaml_config = {'all': all_dict}
else:
self.debug("Adding group {0}".format(group))
if group not in self.yaml_config['all']['children']:
self.yaml_config['all']['children'][group] = {'hosts': {}}
def get_host_id(self, host):
'''Returns integer host ID (without padding) from a given hostname.'''
try:
short_hostname = host.split('.')[0]
return int(re.findall("\\d+$", short_hostname)[-1])
except IndexError:
raise ValueError("Host name must end in an integer")
# Keeps already specified hosts,
# and adds or removes the hosts provided as an argument
def build_hostnames(self, changed_hosts, loadPreviousConfig=False):
existing_hosts = OrderedDict()
highest_host_id = 0
# Load already existing hosts from the YAML
if loadPreviousConfig:
try:
for host in self.yaml_config['all']['hosts']:
# Read configuration of an existing host
hostConfig = self.yaml_config['all']['hosts'][host]
existing_hosts[host] = hostConfig
# If the existing host seems
# to have been created automatically, detect its ID
if host.startswith(HOST_PREFIX):
host_id = self.get_host_id(host)
if host_id > highest_host_id:
highest_host_id = host_id
except Exception as e:
# I am assuming we are catching automatically
# created hosts without IDs
print(e)
sys.exit(1)
# FIXME(mattymo): Fix condition where delete then add reuses highest id
next_host_id = highest_host_id + 1
next_host = ""
all_hosts = existing_hosts.copy()
for host in changed_hosts:
# Delete the host from the config if the hostname/IP has a "-" prefix
if host[0] == "-":
realhost = host[1:]
if self.exists_hostname(all_hosts, realhost):
self.debug("Marked {0} for deletion.".format(realhost))
all_hosts.pop(realhost)
elif self.exists_ip(all_hosts, realhost):
self.debug("Marked {0} for deletion.".format(realhost))
self.delete_host_by_ip(all_hosts, realhost)
# Host/Argument starts with a digit,
# then we assume it's an IP address
elif host[0].isdigit():
if ',' in host:
ip, access_ip = host.split(',')
else:
ip = host
access_ip = host
if self.exists_hostname(all_hosts, host):
self.debug("Skipping existing host {0}.".format(host))
continue
elif self.exists_ip(all_hosts, ip):
self.debug("Skipping existing host {0}.".format(ip))
continue
if USE_REAL_HOSTNAME:
cmd = ("ssh -oStrictHostKeyChecking=no "
+ access_ip + " 'hostname -s'")
next_host = subprocess.check_output(cmd, shell=True)
next_host = next_host.strip().decode('ascii')
else:
# Generates a hostname because we have only an IP address
next_host = "{0}{1}".format(HOST_PREFIX, next_host_id)
next_host_id += 1
# Uses automatically generated node name
# in case we don't provide it.
all_hosts[next_host] = {'ansible_host': access_ip,
'ip': ip,
'access_ip': access_ip}
# Host/Argument starts with a letter, then we assume it's a hostname
elif host[0].isalpha():
if ',' in host:
try:
hostname, ip, access_ip = host.split(',')
except Exception:
hostname, ip = host.split(',')
access_ip = ip
if self.exists_hostname(all_hosts, host):
self.debug("Skipping existing host {0}.".format(host))
continue
elif self.exists_ip(all_hosts, ip):
self.debug("Skipping existing host {0}.".format(ip))
continue
all_hosts[hostname] = {'ansible_host': access_ip,
'ip': ip,
'access_ip': access_ip}
return all_hosts
# Expand IP ranges into individual addresses
def range2ips(self, hosts):
reworked_hosts = []
def ips(start_address, end_address):
try:
# Python 3.x
start = int(ip_address(start_address))
end = int(ip_address(end_address))
except Exception:
# Python 2.7
start = int(ip_address(str(start_address)))
end = int(ip_address(str(end_address)))
return [ip_address(ip).exploded for ip in range(start, end + 1)]
for host in hosts:
if '-' in host and not (host.startswith('-') or host[0].isalpha()):
start, end = host.strip().split('-')
try:
reworked_hosts.extend(ips(start, end))
except ValueError:
raise Exception("Range of ip_addresses isn't valid")
else:
reworked_hosts.append(host)
return reworked_hosts
def exists_hostname(self, existing_hosts, hostname):
return hostname in existing_hosts.keys()
def exists_ip(self, existing_hosts, ip):
for host_opts in existing_hosts.values():
if ip == self.get_ip_from_opts(host_opts):
return True
return False
def delete_host_by_ip(self, existing_hosts, ip):
for hostname, host_opts in existing_hosts.items():
if ip == self.get_ip_from_opts(host_opts):
del existing_hosts[hostname]
return
raise ValueError("Unable to find host by IP: {0}".format(ip))
def purge_invalid_hosts(self, hostnames, protected_names=[]):
for role in self.yaml_config['all']['children']:
if role != 'k8s_cluster' and self.yaml_config['all']['children'][role]['hosts']: # noqa
all_hosts = self.yaml_config['all']['children'][role]['hosts'].copy() # noqa
for host in all_hosts.keys():
if host not in hostnames and host not in protected_names:
self.debug(
"Host {0} removed from role {1}".format(host, role)) # noqa
del self.yaml_config['all']['children'][role]['hosts'][host] # noqa
# purge from all
if self.yaml_config['all']['hosts']:
all_hosts = self.yaml_config['all']['hosts'].copy()
for host in all_hosts.keys():
if host not in hostnames and host not in protected_names:
self.debug("Host {0} removed from role all".format(host))
del self.yaml_config['all']['hosts'][host]
def add_host_to_group(self, group, host, opts=""):
self.debug("adding host {0} to group {1}".format(host, group))
if group == 'all':
if self.yaml_config['all']['hosts'] is None:
self.yaml_config['all']['hosts'] = {host: None}
self.yaml_config['all']['hosts'][host] = opts
elif group != 'k8s_cluster:children':
if self.yaml_config['all']['children'][group]['hosts'] is None:
self.yaml_config['all']['children'][group]['hosts'] = {
host: None}
else:
self.yaml_config['all']['children'][group]['hosts'][host] = None # noqa
def set_kube_control_plane(self, hosts):
for host in hosts:
self.add_host_to_group('kube_control_plane', host)
def set_all(self, hosts):
for host, opts in hosts.items():
self.add_host_to_group('all', host, opts)
def set_k8s_cluster(self):
k8s_cluster = {'children': {'kube_control_plane': None,
'kube_node': None}}
self.yaml_config['all']['children']['k8s_cluster'] = k8s_cluster
def set_calico_rr(self, hosts):
for host in hosts:
if host in self.yaml_config['all']['children']['kube_control_plane']: # noqa
self.debug("Not adding {0} to calico_rr group because it "
"conflicts with kube_control_plane "
"group".format(host))
continue
if host in self.yaml_config['all']['children']['kube_node']:
self.debug("Not adding {0} to calico_rr group because it "
"conflicts with kube_node group".format(host))
continue
self.add_host_to_group('calico_rr', host)
def set_kube_node(self, hosts):
for host in hosts:
if len(self.yaml_config['all']['hosts']) >= SCALE_THRESHOLD:
if host in self.yaml_config['all']['children']['etcd']['hosts']: # noqa
self.debug("Not adding {0} to kube_node group because of "
"scale deployment and host is in etcd "
"group.".format(host))
continue
if len(self.yaml_config['all']['hosts']) >= MASSIVE_SCALE_THRESHOLD: # noqa
if host in self.yaml_config['all']['children']['kube_control_plane']['hosts']: # noqa
self.debug("Not adding {0} to kube_node group because of "
"scale deployment and host is in "
"kube_control_plane group.".format(host))
continue
self.add_host_to_group('kube_node', host)
def set_etcd(self, hosts):
for host in hosts:
self.add_host_to_group('etcd', host)
def load_file(self, files=None):
'''Directly loads JSON to inventory.'''
if not files:
raise Exception("No input file specified.")
import json
for filename in list(files):
# Try JSON
try:
with open(filename, 'r') as f:
data = json.load(f)
except ValueError:
raise Exception("Cannot read %s as JSON, or CSV", filename)
self.ensure_required_groups(ROLES)
self.set_k8s_cluster()
for group, hosts in data.items():
self.ensure_required_groups([group])
for host, opts in hosts.items():
optstring = {'ansible_host': opts['ip'],
'ip': opts['ip'],
'access_ip': opts['ip']}
self.add_host_to_group('all', host, optstring)
self.add_host_to_group(group, host)
self.write_config(self.config_file)
def parse_command(self, command, args=None):
if command == 'help':
self.show_help()
elif command == 'print_cfg':
self.print_config()
elif command == 'print_ips':
self.print_ips()
elif command == 'print_hostnames':
self.print_hostnames()
elif command == 'load':
self.load_file(args)
else:
raise Exception("Invalid command specified.")
def show_help(self):
help_text = '''Usage: inventory.py ip1 [ip2 ...]
Examples: inventory.py 10.10.1.3 10.10.1.4 10.10.1.5
Available commands:
help - Display this message
print_cfg - Write inventory file to stdout
print_ips - Write a space-delimited list of IPs from "all" group
print_hostnames - Write a space-delimited list of Hostnames from "all" group
add - Adds specified hosts into an already existing inventory
Advanced usage:
Create new or overwrite old inventory file: inventory.py 10.10.1.5
Add another host after initial creation: inventory.py add 10.10.1.6
Add range of hosts: inventory.py 10.10.1.3-10.10.1.5
Add hosts with different ip and access ip: inventory.py 10.0.0.1,192.168.10.1 10.0.0.2,192.168.10.2 10.0.0.3,192.168.10.3
Add hosts with a specific hostname, ip, and optional access ip: first,10.0.0.1,192.168.10.1 second,10.0.0.2 last,10.0.0.3
Delete a host: inventory.py -10.10.1.3
Delete a host by id: inventory.py -node1
Configurable env vars:
DEBUG Enable debug printing. Default: True
CONFIG_FILE File to write config to. Default: ./inventory/sample/hosts.yaml
HOST_PREFIX Host prefix for generated hosts. Default: node
KUBE_CONTROL_HOSTS Set the number of control plane nodes. Default: 2
SCALE_THRESHOLD Separate ETCD role if # of nodes >= 50
MASSIVE_SCALE_THRESHOLD Separate K8s control-plane and ETCD if # of nodes >= 200
''' # noqa
print(help_text)
def print_config(self):
yaml.dump(self.yaml_config, sys.stdout)
def print_hostnames(self):
print(' '.join(self.yaml_config['all']['hosts'].keys()))
def print_ips(self):
ips = []
for host, opts in self.yaml_config['all']['hosts'].items():
ips.append(self.get_ip_from_opts(opts))
print(' '.join(ips))
def main(argv=None):
if not argv:
argv = sys.argv[1:]
KubesprayInventory(argv, CONFIG_FILE)
return 0
if __name__ == "__main__":
sys.exit(main())
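As a usage sketch (paths are assumptions; the script itself only documents the commands and environment variables), the builder is normally driven with positional host arguments plus a few optional variables:

```sh
# Install the builder's dependencies (the requirements file shown below).
pip3 install -r contrib/inventory_builder/requirements.txt   # assumed path

# Create an inventory for three hosts (the target directory must already exist).
CONFIG_FILE=inventory/mycluster/hosts.yaml \
  python3 contrib/inventory_builder/inventory.py 10.10.1.3 10.10.1.4 10.10.1.5

# Add one more host to the existing inventory, then list the resulting hostnames.
CONFIG_FILE=inventory/mycluster/hosts.yaml \
  python3 contrib/inventory_builder/inventory.py add 10.10.1.6
CONFIG_FILE=inventory/mycluster/hosts.yaml \
  python3 contrib/inventory_builder/inventory.py print_hostnames
```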

View File

@@ -0,0 +1,3 @@
configparser>=3.3.0
ipaddress
ruamel.yaml>=0.15.88

View File

@@ -0,0 +1,3 @@
[metadata]
name = kubespray-inventory-builder
version = 0.1

View File

@@ -0,0 +1,29 @@
# Copyright (c) 2013 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# THIS FILE IS MANAGED BY THE GLOBAL REQUIREMENTS REPO - DO NOT EDIT
import setuptools
# In python < 2.7.4, a lazy loading of package `pbr` will break
# setuptools if some other modules registered functions in `atexit`.
# solution from: http://bugs.python.org/issue15881#msg170215
try:
import multiprocessing # noqa
except ImportError:
pass
setuptools.setup(
setup_requires=[],
pbr=False)

View File

@@ -0,0 +1,3 @@
hacking>=0.10.2
mock>=1.3.0
pytest>=2.8.0

View File

@@ -0,0 +1,595 @@
# Copyright 2016 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from io import StringIO
import unittest
from unittest import mock
from collections import OrderedDict
import sys
path = "./contrib/inventory_builder/"
if path not in sys.path:
sys.path.append(path)
import inventory # noqa
class TestInventoryPrintHostnames(unittest.TestCase):
@mock.patch('ruamel.yaml.YAML.load')
def test_print_hostnames(self, load_mock):
mock_io = mock.mock_open(read_data='')
load_mock.return_value = OrderedDict({'all': {'hosts': {
'node1': {'ansible_host': '10.90.0.2',
'ip': '10.90.0.2',
'access_ip': '10.90.0.2'},
'node2': {'ansible_host': '10.90.0.3',
'ip': '10.90.0.3',
'access_ip': '10.90.0.3'}}}})
with mock.patch('builtins.open', mock_io):
with self.assertRaises(SystemExit) as cm:
with mock.patch('sys.stdout', new_callable=StringIO) as stdout:
inventory.KubesprayInventory(
changed_hosts=["print_hostnames"],
config_file="file")
self.assertEqual("node1 node2\n", stdout.getvalue())
self.assertEqual(cm.exception.code, 0)
class TestInventory(unittest.TestCase):
@mock.patch('inventory.sys')
def setUp(self, sys_mock):
sys_mock.exit = mock.Mock()
super(TestInventory, self).setUp()
self.data = ['10.90.3.2', '10.90.3.3', '10.90.3.4']
self.inv = inventory.KubesprayInventory()
def test_get_ip_from_opts(self):
optstring = {'ansible_host': '10.90.3.2',
'ip': '10.90.3.2',
'access_ip': '10.90.3.2'}
expected = "10.90.3.2"
result = self.inv.get_ip_from_opts(optstring)
self.assertEqual(expected, result)
def test_get_ip_from_opts_invalid(self):
optstring = "notanaddr=value something random!chars:D"
self.assertRaisesRegex(ValueError, "IP parameter not found",
self.inv.get_ip_from_opts, optstring)
def test_ensure_required_groups(self):
groups = ['group1', 'group2']
self.inv.ensure_required_groups(groups)
for group in groups:
self.assertIn(group, self.inv.yaml_config['all']['children'])
def test_get_host_id(self):
hostnames = ['node99', 'no99de01', '01node01', 'node1.domain',
'node3.xyz123.aaa']
expected = [99, 1, 1, 1, 3]
for hostname, expected in zip(hostnames, expected):
result = self.inv.get_host_id(hostname)
self.assertEqual(expected, result)
def test_get_host_id_invalid(self):
bad_hostnames = ['node', 'no99de', '01node', 'node.111111']
for hostname in bad_hostnames:
self.assertRaisesRegex(ValueError, "Host name must end in an",
self.inv.get_host_id, hostname)
def test_build_hostnames_add_duplicate(self):
changed_hosts = ['10.90.0.2']
expected = OrderedDict([('node3',
{'ansible_host': '10.90.0.2',
'ip': '10.90.0.2',
'access_ip': '10.90.0.2'})])
self.inv.yaml_config['all']['hosts'] = expected
result = self.inv.build_hostnames(changed_hosts, True)
self.assertEqual(expected, result)
def test_build_hostnames_add_two(self):
changed_hosts = ['10.90.0.2', '10.90.0.3']
expected = OrderedDict([
('node1', {'ansible_host': '10.90.0.2',
'ip': '10.90.0.2',
'access_ip': '10.90.0.2'}),
('node2', {'ansible_host': '10.90.0.3',
'ip': '10.90.0.3',
'access_ip': '10.90.0.3'})])
self.inv.yaml_config['all']['hosts'] = OrderedDict()
result = self.inv.build_hostnames(changed_hosts)
self.assertEqual(expected, result)
def test_build_hostnames_add_three(self):
changed_hosts = ['10.90.0.2', '10.90.0.3', '10.90.0.4']
expected = OrderedDict([
('node1', {'ansible_host': '10.90.0.2',
'ip': '10.90.0.2',
'access_ip': '10.90.0.2'}),
('node2', {'ansible_host': '10.90.0.3',
'ip': '10.90.0.3',
'access_ip': '10.90.0.3'}),
('node3', {'ansible_host': '10.90.0.4',
'ip': '10.90.0.4',
'access_ip': '10.90.0.4'})])
result = self.inv.build_hostnames(changed_hosts)
self.assertEqual(expected, result)
def test_build_hostnames_add_one(self):
changed_hosts = ['10.90.0.2']
expected = OrderedDict([('node1',
{'ansible_host': '10.90.0.2',
'ip': '10.90.0.2',
'access_ip': '10.90.0.2'})])
result = self.inv.build_hostnames(changed_hosts)
self.assertEqual(expected, result)
def test_build_hostnames_delete_first(self):
changed_hosts = ['-10.90.0.2']
existing_hosts = OrderedDict([
('node1', {'ansible_host': '10.90.0.2',
'ip': '10.90.0.2',
'access_ip': '10.90.0.2'}),
('node2', {'ansible_host': '10.90.0.3',
'ip': '10.90.0.3',
'access_ip': '10.90.0.3'})])
self.inv.yaml_config['all']['hosts'] = existing_hosts
expected = OrderedDict([
('node2', {'ansible_host': '10.90.0.3',
'ip': '10.90.0.3',
'access_ip': '10.90.0.3'})])
result = self.inv.build_hostnames(changed_hosts, True)
self.assertEqual(expected, result)
def test_build_hostnames_delete_by_hostname(self):
changed_hosts = ['-node1']
existing_hosts = OrderedDict([
('node1', {'ansible_host': '10.90.0.2',
'ip': '10.90.0.2',
'access_ip': '10.90.0.2'}),
('node2', {'ansible_host': '10.90.0.3',
'ip': '10.90.0.3',
'access_ip': '10.90.0.3'})])
self.inv.yaml_config['all']['hosts'] = existing_hosts
expected = OrderedDict([
('node2', {'ansible_host': '10.90.0.3',
'ip': '10.90.0.3',
'access_ip': '10.90.0.3'})])
result = self.inv.build_hostnames(changed_hosts, True)
self.assertEqual(expected, result)
def test_exists_hostname_positive(self):
hostname = 'node1'
expected = True
existing_hosts = OrderedDict([
('node1', {'ansible_host': '10.90.0.2',
'ip': '10.90.0.2',
'access_ip': '10.90.0.2'}),
('node2', {'ansible_host': '10.90.0.3',
'ip': '10.90.0.3',
'access_ip': '10.90.0.3'})])
result = self.inv.exists_hostname(existing_hosts, hostname)
self.assertEqual(expected, result)
def test_exists_hostname_negative(self):
hostname = 'node99'
expected = False
existing_hosts = OrderedDict([
('node1', {'ansible_host': '10.90.0.2',
'ip': '10.90.0.2',
'access_ip': '10.90.0.2'}),
('node2', {'ansible_host': '10.90.0.3',
'ip': '10.90.0.3',
'access_ip': '10.90.0.3'})])
result = self.inv.exists_hostname(existing_hosts, hostname)
self.assertEqual(expected, result)
def test_exists_ip_positive(self):
ip = '10.90.0.2'
expected = True
existing_hosts = OrderedDict([
('node1', {'ansible_host': '10.90.0.2',
'ip': '10.90.0.2',
'access_ip': '10.90.0.2'}),
('node2', {'ansible_host': '10.90.0.3',
'ip': '10.90.0.3',
'access_ip': '10.90.0.3'})])
result = self.inv.exists_ip(existing_hosts, ip)
self.assertEqual(expected, result)
def test_exists_ip_negative(self):
ip = '10.90.0.200'
expected = False
existing_hosts = OrderedDict([
('node1', {'ansible_host': '10.90.0.2',
'ip': '10.90.0.2',
'access_ip': '10.90.0.2'}),
('node2', {'ansible_host': '10.90.0.3',
'ip': '10.90.0.3',
'access_ip': '10.90.0.3'})])
result = self.inv.exists_ip(existing_hosts, ip)
self.assertEqual(expected, result)
def test_delete_host_by_ip_positive(self):
ip = '10.90.0.2'
expected = OrderedDict([
('node2', {'ansible_host': '10.90.0.3',
'ip': '10.90.0.3',
'access_ip': '10.90.0.3'})])
existing_hosts = OrderedDict([
('node1', {'ansible_host': '10.90.0.2',
'ip': '10.90.0.2',
'access_ip': '10.90.0.2'}),
('node2', {'ansible_host': '10.90.0.3',
'ip': '10.90.0.3',
'access_ip': '10.90.0.3'})])
self.inv.delete_host_by_ip(existing_hosts, ip)
self.assertEqual(expected, existing_hosts)
def test_delete_host_by_ip_negative(self):
ip = '10.90.0.200'
existing_hosts = OrderedDict([
('node1', {'ansible_host': '10.90.0.2',
'ip': '10.90.0.2',
'access_ip': '10.90.0.2'}),
('node2', {'ansible_host': '10.90.0.3',
'ip': '10.90.0.3',
'access_ip': '10.90.0.3'})])
self.assertRaisesRegex(ValueError, "Unable to find host",
self.inv.delete_host_by_ip, existing_hosts, ip)
def test_purge_invalid_hosts(self):
proper_hostnames = ['node1', 'node2']
bad_host = 'doesnotbelong2'
existing_hosts = OrderedDict([
('node1', {'ansible_host': '10.90.0.2',
'ip': '10.90.0.2',
'access_ip': '10.90.0.2'}),
('node2', {'ansible_host': '10.90.0.3',
'ip': '10.90.0.3',
'access_ip': '10.90.0.3'}),
('doesnotbelong2', {'whateveropts=ilike'})])
self.inv.yaml_config['all']['hosts'] = existing_hosts
self.inv.purge_invalid_hosts(proper_hostnames)
self.assertNotIn(
bad_host, self.inv.yaml_config['all']['hosts'].keys())
def test_add_host_to_group(self):
group = 'etcd'
host = 'node1'
opts = {'ip': '10.90.0.2'}
self.inv.add_host_to_group(group, host, opts)
self.assertEqual(
self.inv.yaml_config['all']['children'][group]['hosts'].get(host),
None)
def test_set_kube_control_plane(self):
group = 'kube_control_plane'
host = 'node1'
self.inv.set_kube_control_plane([host])
self.assertIn(
host, self.inv.yaml_config['all']['children'][group]['hosts'])
def test_set_all(self):
hosts = OrderedDict([
('node1', 'opt1'),
('node2', 'opt2')])
self.inv.set_all(hosts)
for host, opt in hosts.items():
self.assertEqual(
self.inv.yaml_config['all']['hosts'].get(host), opt)
def test_set_k8s_cluster(self):
group = 'k8s_cluster'
expected_hosts = ['kube_node', 'kube_control_plane']
self.inv.set_k8s_cluster()
for host in expected_hosts:
self.assertIn(
host,
self.inv.yaml_config['all']['children'][group]['children'])
def test_set_kube_node(self):
group = 'kube_node'
host = 'node1'
self.inv.set_kube_node([host])
self.assertIn(
host, self.inv.yaml_config['all']['children'][group]['hosts'])
def test_set_etcd(self):
group = 'etcd'
host = 'node1'
self.inv.set_etcd([host])
self.assertIn(
host, self.inv.yaml_config['all']['children'][group]['hosts'])
def test_scale_scenario_one(self):
num_nodes = 50
hosts = OrderedDict()
for hostid in range(1, num_nodes+1):
hosts["node" + str(hostid)] = ""
self.inv.set_all(hosts)
self.inv.set_etcd(list(hosts.keys())[0:3])
self.inv.set_kube_control_plane(list(hosts.keys())[0:2])
self.inv.set_kube_node(hosts.keys())
for h in range(3):
self.assertFalse(
list(hosts.keys())[h] in
self.inv.yaml_config['all']['children']['kube_node']['hosts'])
def test_scale_scenario_two(self):
num_nodes = 500
hosts = OrderedDict()
for hostid in range(1, num_nodes+1):
hosts["node" + str(hostid)] = ""
self.inv.set_all(hosts)
self.inv.set_etcd(list(hosts.keys())[0:3])
self.inv.set_kube_control_plane(list(hosts.keys())[3:5])
self.inv.set_kube_node(hosts.keys())
for h in range(5):
self.assertFalse(
list(hosts.keys())[h] in
self.inv.yaml_config['all']['children']['kube_node']['hosts'])
def test_range2ips_range(self):
changed_hosts = ['10.90.0.2', '10.90.0.4-10.90.0.6', '10.90.0.8']
expected = ['10.90.0.2',
'10.90.0.4',
'10.90.0.5',
'10.90.0.6',
'10.90.0.8']
result = self.inv.range2ips(changed_hosts)
self.assertEqual(expected, result)
def test_range2ips_incorrect_range(self):
host_range = ['10.90.0.4-a.9b.c.e']
self.assertRaisesRegex(Exception, "Range of ip_addresses isn't valid",
self.inv.range2ips, host_range)
def test_build_hostnames_create_with_one_different_ips(self):
changed_hosts = ['10.90.0.2,192.168.0.2']
expected = OrderedDict([('node1',
{'ansible_host': '192.168.0.2',
'ip': '10.90.0.2',
'access_ip': '192.168.0.2'})])
result = self.inv.build_hostnames(changed_hosts)
self.assertEqual(expected, result)
def test_build_hostnames_create_with_two_different_ips(self):
changed_hosts = ['10.90.0.2,192.168.0.2', '10.90.0.3,192.168.0.3']
expected = OrderedDict([
('node1', {'ansible_host': '192.168.0.2',
'ip': '10.90.0.2',
'access_ip': '192.168.0.2'}),
('node2', {'ansible_host': '192.168.0.3',
'ip': '10.90.0.3',
'access_ip': '192.168.0.3'})])
result = self.inv.build_hostnames(changed_hosts)
self.assertEqual(expected, result)
def test_build_hostnames_create_with_three_different_ips(self):
changed_hosts = ['10.90.0.2,192.168.0.2',
'10.90.0.3,192.168.0.3',
'10.90.0.4,192.168.0.4']
expected = OrderedDict([
('node1', {'ansible_host': '192.168.0.2',
'ip': '10.90.0.2',
'access_ip': '192.168.0.2'}),
('node2', {'ansible_host': '192.168.0.3',
'ip': '10.90.0.3',
'access_ip': '192.168.0.3'}),
('node3', {'ansible_host': '192.168.0.4',
'ip': '10.90.0.4',
'access_ip': '192.168.0.4'})])
result = self.inv.build_hostnames(changed_hosts)
self.assertEqual(expected, result)
def test_build_hostnames_overwrite_one_with_different_ips(self):
changed_hosts = ['10.90.0.2,192.168.0.2']
expected = OrderedDict([('node1',
{'ansible_host': '192.168.0.2',
'ip': '10.90.0.2',
'access_ip': '192.168.0.2'})])
existing = OrderedDict([('node5',
{'ansible_host': '192.168.0.5',
'ip': '10.90.0.5',
'access_ip': '192.168.0.5'})])
self.inv.yaml_config['all']['hosts'] = existing
result = self.inv.build_hostnames(changed_hosts)
self.assertEqual(expected, result)
def test_build_hostnames_overwrite_three_with_different_ips(self):
changed_hosts = ['10.90.0.2,192.168.0.2']
expected = OrderedDict([('node1',
{'ansible_host': '192.168.0.2',
'ip': '10.90.0.2',
'access_ip': '192.168.0.2'})])
existing = OrderedDict([
('node3', {'ansible_host': '192.168.0.3',
'ip': '10.90.0.3',
'access_ip': '192.168.0.3'}),
('node4', {'ansible_host': '192.168.0.4',
'ip': '10.90.0.4',
'access_ip': '192.168.0.4'}),
('node5', {'ansible_host': '192.168.0.5',
'ip': '10.90.0.5',
'access_ip': '192.168.0.5'})])
self.inv.yaml_config['all']['hosts'] = existing
result = self.inv.build_hostnames(changed_hosts)
self.assertEqual(expected, result)
def test_build_hostnames_different_ips_add_duplicate(self):
changed_hosts = ['10.90.0.2,192.168.0.2']
expected = OrderedDict([('node3',
{'ansible_host': '192.168.0.2',
'ip': '10.90.0.2',
'access_ip': '192.168.0.2'})])
existing = expected
self.inv.yaml_config['all']['hosts'] = existing
result = self.inv.build_hostnames(changed_hosts, True)
self.assertEqual(expected, result)
def test_build_hostnames_add_two_different_ips_into_one_existing(self):
changed_hosts = ['10.90.0.3,192.168.0.3', '10.90.0.4,192.168.0.4']
expected = OrderedDict([
('node2', {'ansible_host': '192.168.0.2',
'ip': '10.90.0.2',
'access_ip': '192.168.0.2'}),
('node3', {'ansible_host': '192.168.0.3',
'ip': '10.90.0.3',
'access_ip': '192.168.0.3'}),
('node4', {'ansible_host': '192.168.0.4',
'ip': '10.90.0.4',
'access_ip': '192.168.0.4'})])
existing = OrderedDict([
('node2', {'ansible_host': '192.168.0.2',
'ip': '10.90.0.2',
'access_ip': '192.168.0.2'})])
self.inv.yaml_config['all']['hosts'] = existing
result = self.inv.build_hostnames(changed_hosts, True)
self.assertEqual(expected, result)
def test_build_hostnames_add_two_different_ips_into_two_existing(self):
changed_hosts = ['10.90.0.4,192.168.0.4', '10.90.0.5,192.168.0.5']
expected = OrderedDict([
('node2', {'ansible_host': '192.168.0.2',
'ip': '10.90.0.2',
'access_ip': '192.168.0.2'}),
('node3', {'ansible_host': '192.168.0.3',
'ip': '10.90.0.3',
'access_ip': '192.168.0.3'}),
('node4', {'ansible_host': '192.168.0.4',
'ip': '10.90.0.4',
'access_ip': '192.168.0.4'}),
('node5', {'ansible_host': '192.168.0.5',
'ip': '10.90.0.5',
'access_ip': '192.168.0.5'})])
existing = OrderedDict([
('node2', {'ansible_host': '192.168.0.2',
'ip': '10.90.0.2',
'access_ip': '192.168.0.2'}),
('node3', {'ansible_host': '192.168.0.3',
'ip': '10.90.0.3',
'access_ip': '192.168.0.3'})])
self.inv.yaml_config['all']['hosts'] = existing
result = self.inv.build_hostnames(changed_hosts, True)
self.assertEqual(expected, result)
def test_build_hostnames_add_two_different_ips_into_three_existing(self):
changed_hosts = ['10.90.0.5,192.168.0.5', '10.90.0.6,192.168.0.6']
expected = OrderedDict([
('node2', {'ansible_host': '192.168.0.2',
'ip': '10.90.0.2',
'access_ip': '192.168.0.2'}),
('node3', {'ansible_host': '192.168.0.3',
'ip': '10.90.0.3',
'access_ip': '192.168.0.3'}),
('node4', {'ansible_host': '192.168.0.4',
'ip': '10.90.0.4',
'access_ip': '192.168.0.4'}),
('node5', {'ansible_host': '192.168.0.5',
'ip': '10.90.0.5',
'access_ip': '192.168.0.5'}),
('node6', {'ansible_host': '192.168.0.6',
'ip': '10.90.0.6',
'access_ip': '192.168.0.6'})])
existing = OrderedDict([
('node2', {'ansible_host': '192.168.0.2',
'ip': '10.90.0.2',
'access_ip': '192.168.0.2'}),
('node3', {'ansible_host': '192.168.0.3',
'ip': '10.90.0.3',
'access_ip': '192.168.0.3'}),
('node4', {'ansible_host': '192.168.0.4',
'ip': '10.90.0.4',
'access_ip': '192.168.0.4'})])
self.inv.yaml_config['all']['hosts'] = existing
result = self.inv.build_hostnames(changed_hosts, True)
self.assertEqual(expected, result)
# Add two IP addresses into a config that has
# three already defined IP addresses. One of the IP addresses
# is a duplicate.
def test_build_hostnames_add_two_duplicate_one_overlap(self):
changed_hosts = ['10.90.0.4,192.168.0.4', '10.90.0.5,192.168.0.5']
expected = OrderedDict([
('node2', {'ansible_host': '192.168.0.2',
'ip': '10.90.0.2',
'access_ip': '192.168.0.2'}),
('node3', {'ansible_host': '192.168.0.3',
'ip': '10.90.0.3',
'access_ip': '192.168.0.3'}),
('node4', {'ansible_host': '192.168.0.4',
'ip': '10.90.0.4',
'access_ip': '192.168.0.4'}),
('node5', {'ansible_host': '192.168.0.5',
'ip': '10.90.0.5',
'access_ip': '192.168.0.5'})])
existing = OrderedDict([
('node2', {'ansible_host': '192.168.0.2',
'ip': '10.90.0.2',
'access_ip': '192.168.0.2'}),
('node3', {'ansible_host': '192.168.0.3',
'ip': '10.90.0.3',
'access_ip': '192.168.0.3'}),
('node4', {'ansible_host': '192.168.0.4',
'ip': '10.90.0.4',
'access_ip': '192.168.0.4'})])
self.inv.yaml_config['all']['hosts'] = existing
result = self.inv.build_hostnames(changed_hosts, True)
self.assertEqual(expected, result)
# Add two duplicate IP addresses into a config that has
# three already defined IP addresses
def test_build_hostnames_add_two_duplicate_two_overlap(self):
changed_hosts = ['10.90.0.3,192.168.0.3', '10.90.0.4,192.168.0.4']
expected = OrderedDict([
('node2', {'ansible_host': '192.168.0.2',
'ip': '10.90.0.2',
'access_ip': '192.168.0.2'}),
('node3', {'ansible_host': '192.168.0.3',
'ip': '10.90.0.3',
'access_ip': '192.168.0.3'}),
('node4', {'ansible_host': '192.168.0.4',
'ip': '10.90.0.4',
'access_ip': '192.168.0.4'})])
existing = OrderedDict([
('node2', {'ansible_host': '192.168.0.2',
'ip': '10.90.0.2',
'access_ip': '192.168.0.2'}),
('node3', {'ansible_host': '192.168.0.3',
'ip': '10.90.0.3',
'access_ip': '192.168.0.3'}),
('node4', {'ansible_host': '192.168.0.4',
'ip': '10.90.0.4',
'access_ip': '192.168.0.4'})])
self.inv.yaml_config['all']['hosts'] = existing
result = self.inv.build_hostnames(changed_hosts, True)
self.assertEqual(expected, result)

View File

@@ -0,0 +1,34 @@
[tox]
minversion = 1.6
skipsdist = True
envlist = pep8
[testenv]
allowlist_externals = py.test
usedevelop = True
deps =
-r{toxinidir}/requirements.txt
-r{toxinidir}/test-requirements.txt
setenv = VIRTUAL_ENV={envdir}
passenv =
http_proxy
HTTP_PROXY
https_proxy
HTTPS_PROXY
no_proxy
NO_PROXY
commands = pytest -vv #{posargs:./tests}
[testenv:pep8]
usedevelop = False
allowlist_externals = bash
commands =
bash -c "find {toxinidir}/* -type f -name '*.py' -print0 | xargs -0 flake8"
[testenv:venv]
commands = {posargs}
[flake8]
show-source = true
builtins = _
exclude=.venv,.git,.tox,dist,doc,*lib/python*,*egg
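As a quick sketch of how these environments are exercised (the working directory is an assumption, since the file's location is not shown in this view):

```sh
cd contrib/inventory_builder   # assumed location of this tox.ini and the tests
tox -e pep8                    # the default envlist: flake8 over every *.py file found
tox -e py3                     # generic interpreter env: installs the requirements and runs pytest
```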

View File

@@ -0,0 +1,62 @@
%global srcname kubespray
%{!?upstream_version: %global upstream_version %{version}%{?milestone}}
Name: kubespray
Version: master
Release: %(git describe | sed -r 's/v(\S+-?)-(\S+)-(\S+)/\1.dev\2+\3/')
Summary: Ansible modules for installing Kubernetes
Group: System Environment/Libraries
License: ASL 2.0
Url: https://github.com/kubernetes-sigs/kubespray
Source0: https://github.com/kubernetes-sigs/kubespray/archive/%{upstream_version}.tar.gz#/%{name}-%{release}.tar.gz
BuildArch: noarch
BuildRequires: git
BuildRequires: python2
BuildRequires: python2-devel
BuildRequires: python2-setuptools
BuildRequires: python-d2to1
BuildRequires: python2-pbr
Requires: ansible >= 2.5.0
Requires: python-jinja2 >= 2.10
Requires: python-netaddr
Requires: python-pbr
%description
Ansible-kubespray is a set of Ansible modules and playbooks for
installing a Kubernetes cluster. If you have questions, join us
on the https://slack.k8s.io, channel '#kubespray'.
%prep
%autosetup -n %{name}-%{upstream_version} -S git
%build
export PBR_VERSION=%{release}
%{__python2} setup.py build bdist_rpm
%install
export PBR_VERSION=%{release}
export SKIP_PIP_INSTALL=1
%{__python2} setup.py install --skip-build --root %{buildroot} bdist_rpm
%files
%doc %{_docdir}/%{name}/README.md
%doc %{_docdir}/%{name}/inventory/sample/hosts.ini
%config %{_sysconfdir}/%{name}/ansible.cfg
%config %{_sysconfdir}/%{name}/inventory/sample/group_vars/all.yml
%config %{_sysconfdir}/%{name}/inventory/sample/group_vars/k8s_cluster.yml
%license %{_docdir}/%{name}/LICENSE
%{python2_sitelib}/%{srcname}-%{release}-py%{python2_version}.egg-info
%{_datarootdir}/%{name}/roles/
%{_datarootdir}/%{name}/playbooks/
%defattr(-,root,root)
%changelog

View File

@@ -60,17 +60,17 @@ You can create many different kubernetes topologies by setting the number of
different classes of hosts. For each class there are options for allocating different classes of hosts. For each class there are options for allocating
floating IP addresses or not. floating IP addresses or not.
- Control plane nodes with etcd - Master nodes with etcd
- Control plane nodes without etcd - Master nodes without etcd
- Standalone etcd hosts - Standalone etcd hosts
- Kubernetes worker nodes - Kubernetes worker nodes
Note that the Ansible script will report an invalid configuration if you wind up Note that the Ansible script will report an invalid configuration if you wind up
with an even number of etcd instances since that is not a valid configuration. This with an even number of etcd instances since that is not a valid configuration. This
restriction includes standalone etcd nodes that are deployed in a cluster along with restriction includes standalone etcd nodes that are deployed in a cluster along with
control plane nodes with etcd replicas. As an example, if you have three control plane master nodes with etcd replicas. As an example, if you have three master nodes with
nodes with etcd replicas and three standalone etcd nodes, the script will fail since etcd replicas and three standalone etcd nodes, the script will fail since there are
there are now six total etcd replicas. now six total etcd replicas.
### GlusterFS shared file system ### GlusterFS shared file system
@@ -258,8 +258,7 @@ For your cluster, edit `inventory/$CLUSTER/cluster.tfvars`.
|`bastion_fips` | A list of floating IPs that you have already pre-allocated; they will be attached to bastion node instead of creating new random floating IPs. | |`bastion_fips` | A list of floating IPs that you have already pre-allocated; they will be attached to bastion node instead of creating new random floating IPs. |
|`external_net` | UUID of the external network that will be routed to | |`external_net` | UUID of the external network that will be routed to |
|`flavor_k8s_master`,`flavor_k8s_node`,`flavor_etcd`, `flavor_bastion`,`flavor_gfs_node` | Flavor depends on your openstack installation, you can get available flavor IDs through `openstack flavor list` | |`flavor_k8s_master`,`flavor_k8s_node`,`flavor_etcd`, `flavor_bastion`,`flavor_gfs_node` | Flavor depends on your openstack installation, you can get available flavor IDs through `openstack flavor list` |
|`image`,`image_gfs`, `image_master` | Name of the image to use in provisioning the compute resources. Should already be loaded into glance. | |`image`,`image_gfs` | Name of the image to use in provisioning the compute resources. Should already be loaded into glance. |
|`image_uuid`,`image_gfs_uuid`, `image_master_uuid` | UUID of the image to use in provisioning the compute resources. Should already be loaded into glance. |
|`ssh_user`,`ssh_user_gfs` | The username to ssh into the image with. This usually depends on the image you have selected | |`ssh_user`,`ssh_user_gfs` | The username to ssh into the image with. This usually depends on the image you have selected |
|`public_key_path` | Path on your local workstation to the public key file you wish to use in creating the key pairs | |`public_key_path` | Path on your local workstation to the public key file you wish to use in creating the key pairs |
|`number_of_k8s_masters`, `number_of_k8s_masters_no_floating_ip` | Number of nodes that serve as both master and etcd. These can be provisioned with or without floating IP addresses| |`number_of_k8s_masters`, `number_of_k8s_masters_no_floating_ip` | Number of nodes that serve as both master and etcd. These can be provisioned with or without floating IP addresses|
@@ -300,10 +299,10 @@ For your cluster, edit `inventory/$CLUSTER/cluster.tfvars`.
|`force_null_port_security` | Set `null` instead of `true` or `false` for `port_security`. `false` by default | |`force_null_port_security` | Set `null` instead of `true` or `false` for `port_security`. `false` by default |
|`k8s_nodes` | Map containing worker node definition, see explanation below | |`k8s_nodes` | Map containing worker node definition, see explanation below |
|`k8s_masters` | Map containing master node definition, see explanation for k8s_nodes and `sample-inventory/cluster.tfvars` | |`k8s_masters` | Map containing master node definition, see explanation for k8s_nodes and `sample-inventory/cluster.tfvars` |
|`k8s_master_loadbalancer_enabled` | Enable and use an Octavia load balancer for the K8s master nodes | | `k8s_master_loadbalancer_enabled`| Enable and use an Octavia load balancer for the K8s master nodes |
|`k8s_master_loadbalancer_listener_port` | Define via which port the K8s Api should be exposed. `6443` by default | | `k8s_master_loadbalancer_listener_port` | Define via which port the K8s Api should be exposed. `6443` by default |
|`k8s_master_loadbalancer_server_port` | Define via which port the K8S api is available on the master nodes. `6443` by default | | `k8s_master_loadbalancer_server_port` | Define via which port the K8S api is available on the mas. `6443` by default |
|`k8s_master_loadbalancer_public_ip` | Specify if an existing floating IP should be used for the load balancer. A new floating IP is assigned by default | | `k8s_master_loadbalancer_public_ip` | Specify if an existing floating IP should be used for the load balancer. A new floating IP is assigned by default |
##### k8s_nodes ##### k8s_nodes
@@ -318,8 +317,7 @@ k8s_nodes:
node-name: node-name:
az: string # Name of the AZ az: string # Name of the AZ
flavor: string # Flavor ID to use flavor: string # Flavor ID to use
floating_ip: bool # If floating IPs should be used or not floating_ip: bool # If floating IPs should be created or not
reserved_floating_ip: string # If floating_ip is true use existing floating IP, if reserved_floating_ip is an empty string and floating_ip is true, a new floating IP will be created
extra_groups: string # (optional) Additional groups to add for kubespray, defaults to no groups extra_groups: string # (optional) Additional groups to add for kubespray, defaults to no groups
image_id: string # (optional) Image ID to use, defaults to var.image_id or var.image image_id: string # (optional) Image ID to use, defaults to var.image_id or var.image
root_volume_size_in_gb: number # (optional) Size of the block storage to use as root disk, defaults to var.node_root_volume_size_in_gb or to use volume from flavor otherwise root_volume_size_in_gb: number # (optional) Size of the block storage to use as root disk, defaults to var.node_root_volume_size_in_gb or to use volume from flavor otherwise
@@ -621,7 +619,7 @@ Edit `inventory/$CLUSTER/group_vars/k8s_cluster/k8s_cluster.yml`:
- Set variable **kube_network_plugin** to your desired networking plugin. - Set variable **kube_network_plugin** to your desired networking plugin.
- **flannel** works out-of-the-box - **flannel** works out-of-the-box
- **calico** requires [configuring OpenStack Neutron ports](/docs/cloud_controllers/openstack.md) to allow service and pod subnets - **calico** requires [configuring OpenStack Neutron ports](/docs/cloud_providers/openstack.md) to allow service and pod subnets
```yml ```yml
# Choose network plugin (calico, weave or flannel) # Choose network plugin (calico, weave or flannel)

View File

@@ -89,15 +89,11 @@ variable "k8s_node_fips" {
} }
variable "k8s_masters_fips" { variable "k8s_masters_fips" {
type = map(object({ type = map
address = string
}))
} }
variable "k8s_nodes_fips" { variable "k8s_nodes_fips" {
type = map(object({ type = map
address = string
}))
} }
variable "bastion_fips" { variable "bastion_fips" {
@@ -140,9 +136,8 @@ variable "k8s_masters" {
type = map(object({ type = map(object({
az = string az = string
flavor = string flavor = string
etcd = bool
floating_ip = bool floating_ip = bool
reserved_floating_ip = optional(string) etcd = bool
image_id = optional(string) image_id = optional(string)
root_volume_size_in_gb = optional(number) root_volume_size_in_gb = optional(number)
volume_type = optional(string) volume_type = optional(string)
@@ -155,7 +150,6 @@ variable "k8s_nodes" {
az = string az = string
flavor = string flavor = string
floating_ip = bool floating_ip = bool
reserved_floating_ip = optional(string)
extra_groups = optional(string) extra_groups = optional(string)
image_id = optional(string) image_id = optional(string)
root_volume_size_in_gb = optional(number) root_volume_size_in_gb = optional(number)

View File

@@ -15,7 +15,7 @@ resource "openstack_networking_floatingip_v2" "k8s_master" {
} }
resource "openstack_networking_floatingip_v2" "k8s_masters" { resource "openstack_networking_floatingip_v2" "k8s_masters" {
for_each = var.number_of_k8s_masters == 0 && var.number_of_k8s_masters_no_etcd == 0 ? { for key, value in var.k8s_masters : key => value if value.floating_ip && (lookup(value, "reserved_floating_ip", "") == "") } : {} for_each = var.number_of_k8s_masters == 0 && var.number_of_k8s_masters_no_etcd == 0 ? { for key, value in var.k8s_masters : key => value if value.floating_ip } : {}
pool = var.floatingip_pool pool = var.floatingip_pool
depends_on = [null_resource.dummy_dependency] depends_on = [null_resource.dummy_dependency]
} }
@@ -40,7 +40,7 @@ resource "openstack_networking_floatingip_v2" "bastion" {
} }
resource "openstack_networking_floatingip_v2" "k8s_nodes" { resource "openstack_networking_floatingip_v2" "k8s_nodes" {
for_each = var.number_of_k8s_nodes == 0 ? { for key, value in var.k8s_nodes : key => value if value.floating_ip && (lookup(value, "reserved_floating_ip", "") == "") } : {} for_each = var.number_of_k8s_nodes == 0 ? { for key, value in var.k8s_nodes : key => value if value.floating_ip } : {}
pool = var.floatingip_pool pool = var.floatingip_pool
depends_on = [null_resource.dummy_dependency] depends_on = [null_resource.dummy_dependency]
} }

View File

@@ -1,33 +1,10 @@
locals {
k8s_masters_reserved_fips = {
for key, value in var.k8s_masters : key => {
address = value.reserved_floating_ip
} if value.floating_ip && (lookup(value, "reserved_floating_ip", "") != "")
}
k8s_masters_create_fips = {
for key, value in openstack_networking_floatingip_v2.k8s_masters : key => {
address = value.address
}
}
k8s_nodes_reserved_fips = {
for key, value in var.k8s_nodes : key => {
address = value.reserved_floating_ip
} if value.floating_ip && (lookup(value, "reserved_floating_ip", "") != "")
}
k8s_nodes_create_fips = {
for key, value in openstack_networking_floatingip_v2.k8s_nodes : key => {
address = value.address
}
}
}
# If k8s_master_fips is already defined as input, keep the same value since new FIPs have not been created. # If k8s_master_fips is already defined as input, keep the same value since new FIPs have not been created.
output "k8s_master_fips" { output "k8s_master_fips" {
value = length(var.k8s_master_fips) > 0 ? var.k8s_master_fips : openstack_networking_floatingip_v2.k8s_master[*].address value = length(var.k8s_master_fips) > 0 ? var.k8s_master_fips : openstack_networking_floatingip_v2.k8s_master[*].address
} }
output "k8s_masters_fips" { output "k8s_masters_fips" {
value = merge(local.k8s_masters_create_fips, local.k8s_masters_reserved_fips) value = openstack_networking_floatingip_v2.k8s_masters
} }
# If k8s_master_fips is already defined as input, keep the same value since new FIPs have not been created. # If k8s_master_fips is already defined as input, keep the same value since new FIPs have not been created.
@@ -40,7 +17,7 @@ output "k8s_node_fips" {
} }
output "k8s_nodes_fips" { output "k8s_nodes_fips" {
value = merge(local.k8s_nodes_create_fips, local.k8s_nodes_reserved_fips) value = openstack_networking_floatingip_v2.k8s_nodes
} }
output "bastion_fips" { output "bastion_fips" {

View File

@@ -98,7 +98,7 @@ PARSERS = {}
def _clean_dc(dcname): def _clean_dc(dcname):
# Consul DCs are strictly alphanumeric with underscores and hyphens - # Consul DCs are strictly alphanumeric with underscores and hyphens -
# ensure that the consul_dc attribute meets these requirements. # ensure that the consul_dc attribute meets these requirements.
return re.sub(r'[^\w_\-]', '-', dcname) return re.sub('[^\w_\-]', '-', dcname)
def iterhosts(resources): def iterhosts(resources):
@@ -304,7 +304,7 @@ def openstack_host(resource, module_name):
try: try:
if 'metadata.prefer_ipv6' in raw_attrs and raw_attrs['metadata.prefer_ipv6'] == "1": if 'metadata.prefer_ipv6' in raw_attrs and raw_attrs['metadata.prefer_ipv6'] == "1":
attrs.update({ attrs.update({
'ansible_host': re.sub(r"[\[\]]", "", raw_attrs['access_ip_v6']), 'ansible_host': re.sub("[\[\]]", "", raw_attrs['access_ip_v6']),
'publicly_routable': True, 'publicly_routable': True,
}) })
else: else:

View File

@@ -89,12 +89,9 @@ node1 ansible_ssh_host=95.54.0.12 local_as=xxxxxx
Peers can be defined using the `peers` variable (see docs/calico_peer_example examples). Peers can be defined using the `peers` variable (see docs/calico_peer_example examples).
In order to define global peers, the `peers` variable can be defined in group_vars with the "scope" attribute of each global peer set to "global". In order to define global peers, the `peers` variable can be defined in group_vars with the "scope" attribute of each global peer set to "global".
In order to define peers on a per node basis, the `peers` variable must be defined in hostvars or group_vars with the "scope" attribute unset or set to "node". In order to define peers on a per node basis, the `peers` variable must be defined in hostvars.
NB: Ansible's `hash_behaviour` is by default set to "replace", thus defining both global and per node peers would end up with having only per node peers. If having both global and per node peers defined was meant to happen, global peers would have to be defined in hostvars for each host (as well as per node peers) NB: Ansible's `hash_behaviour` is by default set to "replace", thus defining both global and per node peers would end up with having only per node peers. If having both global and per node peers defined was meant to happen, global peers would have to be defined in hostvars for each host (as well as per node peers)
NB²: Peers definition at node scope can be customized with additional fields `filters`, `sourceAddress` and `numAllowedLocalASNumbers` (see <https://docs.tigera.io/calico/latest/reference/resources/bgppeer> for details)
Since calico 3.4, Calico supports advertising Kubernetes service cluster IPs over BGP, just as it advertises pod IPs. Since calico 3.4, Calico supports advertising Kubernetes service cluster IPs over BGP, just as it advertises pod IPs.
This can be enabled by setting the following variable as follow in group_vars (k8s_cluster/k8s-net-calico.yml) This can be enabled by setting the following variable as follow in group_vars (k8s_cluster/k8s-net-calico.yml)
@@ -130,7 +127,8 @@ recommended here:
You need to edit your inventory and add: You need to edit your inventory and add:
* `calico_rr` group with nodes in it. `calico_rr` can be combined with * `calico_rr` group with nodes in it. `calico_rr` can be combined with
`kube_node` and/or `kube_control_plane`. `kube_node` and/or `kube_control_plane`. `calico_rr` group also must be a child
group of `k8s_cluster` group.
* `cluster_id` by route reflector node/group (see details [here](https://hub.docker.com/r/calico/routereflector/)) * `cluster_id` by route reflector node/group (see details [here](https://hub.docker.com/r/calico/routereflector/))
Here's an example of Kubespray inventory with standalone route reflectors: Here's an example of Kubespray inventory with standalone route reflectors:
@@ -159,6 +157,11 @@ node3
node4 node4
node5 node5
[k8s_cluster:children]
kube_node
kube_control_plane
calico_rr
[calico_rr] [calico_rr]
rr0 rr0
rr1 rr1

View File

@@ -45,144 +45,10 @@ cilium_pool_mask_size Specifies the size allocated to node.ipam.podCIDRs from cl
cilium_pool_mask_size_ipv6: "120" cilium_pool_mask_size_ipv6: "120"
``` ```
### IP Load Balancer Pools
Cilium's IP Load Balancer Pools can be configured with the `cilium_loadbalancer_ip_pools` variable:
```yml
cilium_loadbalancer_ip_pools:
- name: "blue-pool"
cidrs:
- "10.0.10.0/24"
```
For further information, check [LB IPAM documentation](https://docs.cilium.io/en/stable/network/lb-ipam/)
### BGP Control Plane
Cilium's BGP Control Plane can be enabled by setting `cilium_enable_bgp_control_plane` to `true`:
```yml
cilium_enable_bgp_control_plane: true
```
For further information, check [BGP Peering Policy documentation](https://docs.cilium.io/en/latest/network/bgp-control-plane/bgp-control-plane-v1/)
### BGP Control Plane Resources (New bgpv2 API v1.16+)
Cilium BGP control plane is managed by a set of custom resources which provide a flexible way to configure BGP peers, policies, and advertisements.
Cilium's BGP Instances can be configured with the `cilium_bgp_cluster_configs` variable:
```yml
cilium_bgp_cluster_configs:
- name: "cilium-bgp"
spec:
bgpInstances:
- name: "instance-64512"
localASN: 64512
peers:
- name: "peer-64512-tor1"
peerASN: 64512
peerAddress: '10.47.1.1'
peerConfigRef:
name: "cilium-peer"
nodeSelector:
matchExpressions:
- {key: somekey, operator: NotIn, values: ['never-used-value']}
```
Cilium's BGP Peers can be configured with the `cilium_bgp_peer_configs` variable:
```yml
cilium_bgp_peer_configs:
- name: cilium-peer
spec:
# authSecretRef: bgp-auth-secret
gracefulRestart:
enabled: true
restartTimeSeconds: 15
families:
- afi: ipv4
safi: unicast
advertisements:
matchLabels:
advertise: "bgp"
- afi: ipv6
safi: unicast
advertisements:
matchLabels:
advertise: "bgp"
```
Cilium's BGP Advertisements can be configured with the `cilium_bgp_advertisements` variable:
```yml
cilium_bgp_advertisements:
- name: bgp-advertisements
labels:
advertise: bgp
spec:
advertisements:
- advertisementType: "PodCIDR"
attributes:
communities:
standard: [ "64512:99" ]
- advertisementType: "Service"
service:
addresses:
- ClusterIP
- ExternalIP
- LoadBalancerIP
selector:
matchExpressions:
- {key: somekey, operator: NotIn, values: ['never-used-value']}
```
Cilium's BGP Node Config Overrides can be configured with the `cilium_bgp_node_config_overrides` variable:
```yml
cilium_bgp_node_config_overrides:
- name: bgpv2-cplane-dev-multi-homing-worker
spec:
bgpInstances:
- name: "instance-65000"
routerID: "192.168.10.1"
localPort: 1790
peers:
- name: "peer-65000-tor1"
localAddress: fd00:10:0:2::2
- name: "peer-65000-tor2"
localAddress: fd00:11:0:2::2
```
For further information, check [BGP Control Plane Resources documentation](https://docs.cilium.io/en/latest/network/bgp-control-plane/bgp-control-plane-v2/)
### BGP Peering Policies (Legacy < v1.16)
Cilium's BGP Peering Policies can be configured with the `cilium_bgp_peering_policies` variable:
```yml
cilium_bgp_peering_policies:
- name: "01-bgp-peering-policy"
spec:
virtualRouters:
- localASN: 64512
exportPodCIDR: false
neighbors:
- peerAddress: '10.47.1.1/24'
peerASN: 64512
serviceSelector:
matchExpressions:
- {key: somekey, operator: NotIn, values: ['never-used-value']}
```
For further information, check [BGP Peering Policy documentation](https://docs.cilium.io/en/latest/network/bgp-control-plane/bgp-control-plane-v1/#bgp-peering-policy-legacy)
## Kube-proxy replacement with Cilium ## Kube-proxy replacement with Cilium
Cilium can run without kube-proxy by setting `cilium_kube_proxy_replacement` Cilium can run without kube-proxy by setting `cilium_kube_proxy_replacement`
to `strict` (< v1.16) or `true` (Cilium v1.16+ no longer accepts `strict`, however this is converted to `true` by kubespray when running v1.16+). to `strict`.
Without kube-proxy, cilium needs to know the address of the kube-apiserver Without kube-proxy, cilium needs to know the address of the kube-apiserver
and this must be set globally for all Cilium components (agents and operators). and this must be set globally for all Cilium components (agents and operators).
@@ -304,14 +170,14 @@ Kubespray currently supports Linux distributions with Wireguard Kernel mode on L
## Bandwidth Manager ## Bandwidth Manager
Cilium's bandwidth manager supports the kubernetes.io/egress-bandwidth Pod annotation. Ciliums bandwidth manager supports the kubernetes.io/egress-bandwidth Pod annotation.
Bandwidth enforcement currently does not work in combination with L7 Cilium Network Policies. Bandwidth enforcement currently does not work in combination with L7 Cilium Network Policies.
In case they select the Pod at egress, then the bandwidth enforcement will be disabled for those Pods. In case they select the Pod at egress, then the bandwidth enforcement will be disabled for those Pods.
Bandwidth Manager requires a v5.1.x or more recent Linux kernel. Bandwidth Manager requires a v5.1.x or more recent Linux kernel.
For further information, make sure to check the official [Cilium documentation](https://docs.cilium.io/en/latest/network/kubernetes/bandwidth-manager/) For further information, make sure to check the official [Cilium documentation.](https://docs.cilium.io/en/v1.12/gettingstarted/bandwidth-manager/)
To use this function, set the following parameters To use this function, set the following parameters
@@ -319,26 +185,6 @@ To use this function, set the following parameters
cilium_enable_bandwidth_manager: true
```
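For illustration, a Pod limited to 10 Mbit/s egress would carry the annotation named above (a minimal sketch; the Pod name, image and rate are arbitrary):
```yml
apiVersion: v1
kind: Pod
metadata:
  name: egress-limited
  annotations:
    kubernetes.io/egress-bandwidth: "10M"
spec:
  containers:
    - name: app
      image: nginx
```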
## Host Firewall
Host Firewall enforces security policies for Kubernetes nodes. It is disabled by default, since it can break cluster connectivity.
```yaml
cilium_enable_host_firewall: true
```
For further information, check [host firewall documentation](https://docs.cilium.io/en/latest/security/host-firewall/)
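Once the host firewall is enabled, node-level rules are expressed as `CiliumClusterwideNetworkPolicy` objects with a `nodeSelector`. A rough sketch (the node labels and allowed port are hypothetical, adapt before use):
```yml
apiVersion: cilium.io/v2
kind: CiliumClusterwideNetworkPolicy
metadata:
  name: host-fw-control-plane-sketch
spec:
  description: Hypothetical example allowing in-cluster traffic and SSH to control plane nodes
  nodeSelector:
    matchLabels:
      node-role.kubernetes.io/control-plane: ""
  ingress:
    - fromEntities:
        - cluster
    - toPorts:
        - ports:
            - port: "22"
              protocol: TCP
```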
## Policy Audit Mode
When _Policy Audit Mode_ is enabled, no network policy is enforced. This feature helps to validate the impact of host policies before enforcing them.
```yaml
cilium_policy_audit_mode: true
```
It is disabled by default and should not be enabled in production.
## Install Cilium Hubble
k8s-net-cilium.yml: k8s-net-cilium.yml:
@@ -17,22 +17,6 @@ kube_network_plugin_multus: true
will install Multus and Calico and configure Multus to use Calico as the primary network plugin.
Namespace isolation enables a mode where Multus only allows pods to access custom resources (the `NetworkAttachmentDefinitions`) within the namespace where that pod resides. To enable namespace isolation:
```yml
multus_namespace_isolation: true
```
### Cilium compatibility
If you are using `cilium` as the primary CNI, you'll have to set `cilium_cni_exclusive` to `false` to avoid Cilium reverting the Multus config.
```yml
kube_network_plugin: cilium
kube_network_plugin_multus: true
cilium_cni_exclusive: false
```
## Using Multus
Once Multus is installed, you can create CNI configurations (as CRD objects) for additional networks; in this case a macvlan CNI configuration is defined, as shown below. You may replace the config field with any valid CNI configuration where the CNI binary is available on the nodes.
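The macvlan example itself is not reproduced in this hunk; a minimal sketch of such a `NetworkAttachmentDefinition` (the interface name and IP range are placeholders) looks like:
```yml
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: macvlan-conf
spec:
  config: '{
      "cniVersion": "0.4.0",
      "type": "macvlan",
      "master": "eth0",
      "mode": "bridge",
      "ipam": {
        "type": "host-local",
        "subnet": "192.168.1.0/24",
        "rangeStart": "192.168.1.200",
        "rangeEnd": "192.168.1.216"
      }
    }'
```
A Pod then attaches to the extra network through the `k8s.v1.cni.cncf.io/networks: macvlan-conf` annotation.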
@@ -23,20 +23,20 @@ You need to source the vSphere credentials you use to deploy your machines that
| external_vsphere_password | TRUE | string | | | Password for vCenter (Can also be specified with the `VSPHERE_PASSWORD` environment variable) | | external_vsphere_password | TRUE | string | | | Password for vCenter (Can also be specified with the `VSPHERE_PASSWORD` environment variable) |
| external_vsphere_datacenter | TRUE | string | | | Datacenter name to use | | external_vsphere_datacenter | TRUE | string | | | Datacenter name to use |
| external_vsphere_kubernetes_cluster_id | TRUE | string | | "kubernetes-cluster-id" | Kubernetes cluster ID to use | | external_vsphere_kubernetes_cluster_id | TRUE | string | | "kubernetes-cluster-id" | Kubernetes cluster ID to use |
| external_vsphere_version | TRUE | string | | "7.0u1" | Vmware Vsphere version where located all VMs | | external_vsphere_version | TRUE | string | | "6.7u3" | Vmware Vsphere version where located all VMs |
| external_vsphere_cloud_controller_image_tag | TRUE | string | | "v1.31.0" | CPI manager image tag to use | | external_vsphere_cloud_controller_image_tag | TRUE | string | | "latest" | Kubernetes cluster ID to use |
| vsphere_syncer_image_tag | TRUE | string | | "v3.3.1" | Syncer image tag to use | | vsphere_syncer_image_tag | TRUE | string | | "v2.2.1" | Syncer image tag to use |
| vsphere_csi_attacher_image_tag | TRUE | string | | "v4.3.0" | CSI attacher image tag to use | | vsphere_csi_attacher_image_tag | TRUE | string | | "v3.1.0" | CSI attacher image tag to use |
| vsphere_csi_controller | TRUE | string | | "v3.3.1" | CSI controller image tag to use | | vsphere_csi_controller | TRUE | string | | "v2.2.1" | CSI controller image tag to use |
| vsphere_csi_controller_replicas | TRUE | integer | | 1 | Number of pods Kubernetes should deploy for the CSI controller | | vsphere_csi_controller_replicas | TRUE | integer | | 1 | Number of pods Kubernetes should deploy for the CSI controller |
| vsphere_csi_liveness_probe_image_tag | TRUE | string | | "v2.10.0" | CSI liveness probe image tag to use | | vsphere_csi_liveness_probe_image_tag | TRUE | string | | "v2.2.0" | CSI liveness probe image tag to use |
| vsphere_csi_provisioner_image_tag | TRUE | string | | "v2.1.0" | CSI provisioner image tag to use | | vsphere_csi_provisioner_image_tag | TRUE | string | | "v2.1.0" | CSI provisioner image tag to use |
| vsphere_csi_node_driver_registrar_image_tag | TRUE | string | | "v3.5.0" | CSI node driver registrar image tag to use | | vsphere_csi_node_driver_registrar_image_tag | TRUE | string | | "v1.1.0" | CSI node driver registrar image tag to use |
| vsphere_csi_driver_image_tag | TRUE | string | | "v3.3.1" | CSI driver image tag to use | | vsphere_csi_driver_image_tag | TRUE | string | | "v1.0.2" | CSI driver image tag to use |
| vsphere_csi_resizer_tag | TRUE | string | | "v1.8.0" | CSI resizer image tag to use | | vsphere_csi_resizer_tag | TRUE | string | | "v1.1.0" | CSI resizer image tag to use |
| vsphere_csi_aggressive_node_drain | FALSE | boolean | | false | Enable aggressive node drain strategy | | vsphere_csi_aggressive_node_drain | FALSE | boolean | | false | Enable aggressive node drain strategy |
| vsphere_csi_aggressive_node_unreachable_timeout | FALSE | int | | 300 | Timeout till node will be drained when it in an unreachable state | | vsphere_csi_aggressive_node_unreachable_timeout | FALSE | int | 300 | | Timeout till node will be drained when it in an unreachable state |
| vsphere_csi_aggressive_node_not_ready_timeout | FALSE | int | | 300 | Timeout till node will be drained when it in not-ready state | | vsphere_csi_aggressive_node_not_ready_timeout | FALSE | int | 300 | | Timeout till node will be drained when it in not-ready state |
| vsphere_csi_namespace | TRUE | string | | "kube-system" | vSphere CSI namespace to use; kube-system for backward compatibility, should be change to vmware-system-csi on the long run | | vsphere_csi_namespace | TRUE | string | | "kube-system" | vSphere CSI namespace to use; kube-system for backward compatibility, should be change to vmware-system-csi on the long run |
## Usage example
docs/_sidebar.md generated
@@ -14,16 +14,14 @@
* Ansible
  * [Ansible](/docs/ansible/ansible.md)
  * [Ansible Collection](/docs/ansible/ansible_collection.md)
  * [Inventory](/docs/ansible/inventory.md)
  * [Vars](/docs/ansible/vars.md)
* Cloud Controllers
  * [Openstack](/docs/cloud_controllers/openstack.md)
  * [Vsphere](/docs/cloud_controllers/vsphere.md)
* Cloud Providers
  * [Aws](/docs/cloud_providers/aws.md)
  * [Azure](/docs/cloud_providers/azure.md)
  * [Cloud](/docs/cloud_providers/cloud.md)
  * [Equinix-metal](/docs/cloud_providers/equinix-metal.md)
  * [Openstack](/docs/cloud_providers/openstack.md)
  * [Vsphere](/docs/cloud_providers/vsphere.md)
* CNI
  * [Calico](/docs/CNI/calico.md)
  * [Cilium](/docs/CNI/cilium.md)
@@ -64,10 +64,6 @@ Custom options to be added to the kubernetes coredns plugin.
Extra domains to be forwarded to the kubernetes coredns plugin.
### coredns_additional_configs
Extra configuration to be added to the CoreDNS configuration.
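Assuming the variable is a multi-line string appended verbatim to the generated Corefile (a sketch, not the authoritative format), extra plugin blocks could be added like this:
```yml
# Hypothetical example: append extra CoreDNS plugins
coredns_additional_configs: |
  whoami
  log
```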
### coredns_rewrite_block
[Rewrite](https://coredns.io/plugins/rewrite/) plugin block to perform internal message rewriting.
@@ -294,10 +290,6 @@ nodelocaldns_external_zones:
See [dns_etchosts](#dns_etchosts-coredns) above.
### nodelocaldns_additional_configs
Extra configuration to be added to CoreDNS configuration
### Nodelocal DNS HA
Under some circumstances the single POD nodelocaldns implementation may not be able to be replaced soon enough and a cluster upgrade or a nodelocaldns upgrade can cause DNS requests to time out for short intervals. If for any reason your applications cannot tolerate this behavior, you can enable a redundant nodelocal DNS pod on each node:
@@ -1,7 +1,5 @@
# GCP Load Balancers for type=LoadBalancer of Kubernetes Services
> **Removed**: Since v1.31 (the Kubespray counterpart is v2.27), Kubernetes no longer supports `cloud_provider` (except the external cloud provider).
Google Cloud Platform can be used for creation of Kubernetes Service Load Balancer.
This feature is delivered by adding parameters to `kube-controller-manager` and `kubelet`. You need to specify:
@@ -48,9 +48,3 @@ Force sync time immediately by NTP after the ntp installed, which is useful in n
```ShellSession
ntp_force_sync_immediately: true
```
When using Ubuntu 24.04 or a distribution that already has `systemd-timesyncd` installed, use the `ntpsec` package.
```ShellSession
ntp_package: ntpsec
```
@@ -34,22 +34,100 @@ Based on the table below and the available python version for your ansible host
|-----------------|----------------|
| >= 2.16.4 | 3.10-3.12 |
## Inventory
The inventory is composed of 3 groups:
* **kube_node** : list of kubernetes nodes where the pods will run.
* **kube_control_plane** : list of servers where kubernetes control plane components (apiserver, scheduler, controller) will run.
* **etcd**: list of servers to compose the etcd server. You should have at least 3 servers for failover purposes.
Note: do not modify the children of _k8s_cluster_, like putting
the _etcd_ group into _k8s_cluster_, unless you are certain you want
to do that and have it fully contained in the latter:
```ShellSession
etcd ⊂ k8s_cluster => kube_node ∩ etcd = etcd
```
When _kube_node_ contains _etcd_, you define your etcd cluster to also be schedulable for Kubernetes workloads.
If you want it to be standalone, make sure those groups do not intersect.
If you want the server to act both as control-plane and node, the server must be defined
on both groups _kube_control_plane_ and _kube_node_. If you want a standalone and
unschedulable control plane, the server must be defined only in the _kube_control_plane_ and
not _kube_node_.
There are also two special groups:
* **calico_rr** : explained for [advanced Calico networking cases](/docs/CNI/calico.md)
* **bastion** : configure a bastion host if your nodes are not directly reachable
Below is a complete inventory example:
```ini
## Configure 'ip' variable to bind kubernetes services on a
## different ip than the default iface
node1 ansible_host=95.54.0.12 ip=10.3.0.1
node2 ansible_host=95.54.0.13 ip=10.3.0.2
node3 ansible_host=95.54.0.14 ip=10.3.0.3
node4 ansible_host=95.54.0.15 ip=10.3.0.4
node5 ansible_host=95.54.0.16 ip=10.3.0.5
node6 ansible_host=95.54.0.17 ip=10.3.0.6
[kube_control_plane]
node1
node2
[etcd]
node1
node2
node3
[kube_node]
node2
node3
node4
node5
node6
[k8s_cluster:children]
kube_node
kube_control_plane
```
## Group vars and overriding variables precedence
The group variables to control main deployment options are located in the directory ``inventory/sample/group_vars``.
Optional variables are located in the `inventory/sample/group_vars/all.yml`.
Mandatory variables that are common for at least one role (or a node group) can be found in the
`inventory/sample/group_vars/k8s_cluster.yml`.
There are also role vars for docker, kubernetes preinstall and control plane roles.
According to the [ansible docs](https://docs.ansible.com/ansible/latest/playbooks_variables.html#variable-precedence-where-should-i-put-a-variable),
those cannot be overridden from the group vars. In order to override, one should use
the `-e` runtime flags (most simple way) or other layers described in the docs.
Kubespray uses only a few layers to override things (or expects them to
be overridden for roles):
| Layer | Comment |
|----------------------------------------|------------------------------------------------------------------------------|
| **role defaults** | provides best UX to override things for Kubespray deployments |
| inventory vars | Unused |
| **inventory group_vars** | Expects users to use ``all.yml``, ``k8s_cluster.yml`` etc. to override things |
| inventory host_vars | Unused |
| playbook group_vars | Unused |
| playbook host_vars | Unused |
| **host facts** | Kubespray overrides for internal roles' logic, like state flags |
| play vars | Unused |
| play vars_prompt | Unused |
| play vars_files | Unused |
| registered vars | Unused |
| set_facts | Kubespray overrides those, for some places |
| **role and include vars** | Provides bad UX to override things! Use extra vars to enforce |
| block vars (only for tasks in block) | Kubespray overrides for internal roles' logic |
| task vars (only for the task) | Unused for roles, but only for helper scripts |
| **extra vars** (always win precedence) | override with ``ansible-playbook -e @foo.yml`` |
> [!IMPORTANT]
> Extra vars are best used to override Kubespray internal variables, for instance those in roles' `vars/` directories.
> Those vars are usually **not expected** (by Kubespray developers) to be modified by end users, and are not part of the Kubespray
> interface. Thus they can change, disappear, or break stuff unexpectedly.
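For example, a small override file passed with `-e` (the file name and the variable chosen here are just for illustration) wins over any group_vars setting:
```ShellSession
cat > override.yml <<EOF
upstream_dns_servers:
  - 1.1.1.1
EOF
ansible-playbook -i inventory/mycluster/inventory.ini -e @override.yml cluster.yml
```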
## Ansible tags
The following tags are defined in playbooks:
@@ -77,7 +155,6 @@ The following tags are defined in playbooks:
| container_engine_accelerator | Enable nvidia accelerator for runtimes |
| container-engine | Configuring container engines |
| container-runtimes | Configuring container runtimes |
| control-plane | Configuring K8s control plane node role |
| coredns | Configuring coredns deployment |
| crio | Configuring crio container engine for hosts |
| crun | Configuring crun runtime |
@@ -105,6 +182,8 @@ The following tags are defined in playbooks:
| init | Windows kubernetes init nodes |
| iptables | Flush and clear iptable when resetting |
| k8s-pre-upgrade | Upgrading K8s cluster |
| k8s-secrets | Configuring K8s certs/keys |
| k8s-gen-tokens | Configuring K8s tokens |
| kata-containers | Configuring kata-containers runtime |
| krew | Install and manage krew |
| kubeadm | Roles linked to kubeadm tasks |
@@ -120,7 +199,7 @@ The following tags are defined in playbooks:
| local-path-provisioner | Configure External provisioner: local-path |
| local-volume-provisioner | Configure External provisioner: local-volume |
| macvlan | Network plugin macvlan |
| master (DEPRECATED) | Deprecated - see `control-plane` |
| metallb | Installing and configuring metallb |
| metrics_server | Configuring metrics_server |
| netchecker | Installing netchecker K8s app |
@@ -131,7 +210,7 @@ The following tags are defined in playbooks:
| node | Configuring K8s minion (compute) node role |
| nodelocaldns | Configuring nodelocaldns daemonset |
| node-label | Tasks linked to labeling of nodes |
| node-webhook | Tasks linked to webhook (granting access to resources) |
| nvidia_gpu | Enable nvidia accelerator for runtimes |
| oci | Cloud provider: oci |
| persistent_volumes | Configure csi volumes |
@@ -188,32 +267,42 @@ ansible-playbook -i inventory/sample/hosts.ini cluster.yml \
--tags download --skip-tags upload,upgrade
```
Note: use `--tags` and `--skip-tags` wisely and only if you're 100% sure what you're doing.
## Bastion host
If you prefer to not make your nodes publicly accessible (nodes with private IPs only),
you can use a so-called _bastion_ host to connect to your nodes. To specify and use a bastion,
simply add a line to your inventory, where you have to replace x.x.x.x with the public IP of the
bastion host.
```ShellSession
[bastion]
bastion ansible_host=x.x.x.x
```
For more information about Ansible and bastion hosts, read
[Running Ansible Through an SSH Bastion Host](https://blog.scottlowe.org/2015/12/24/running-ansible-through-ssh-bastion-host/)
## Mitogen
Mitogen support is deprecated, please see [mitogen related docs](/docs/advanced/mitogen.md) for usage and reasons for deprecation.
## Troubleshooting Ansible issues
Having the wrong version of ansible, ansible collections or python dependencies can cause issues.
In particular, Kubespray ships custom modules which Ansible needs to find, for which you should specify [ANSIBLE_LIBRARY](https://docs.ansible.com/ansible/latest/dev_guide/developing_locally.html#adding-a-module-or-plugin-outside-of-a-collection):
```ShellSession
export ANSIBLE_LIBRARY=<kubespray_dir>/library
```
## Beyond ansible 2.9
Ansible project has decided, in order to ease their maintenance burden, to split between
two projects which are now joined under the Ansible umbrella.
Ansible-base (2.10.x branch) will contain just the ansible language implementation while
ansible modules that were previously bundled into a single repository will be part of the
ansible 3.x package. Please see [this blog post](https://blog.while-true-do.io/ansible-release-3-0-0/)
that explains in detail the need and the evolution plan.
**Note:** this change means that ansible virtual envs cannot be upgraded with `pip install -U`.
You first need to uninstall your old ansible (pre 2.10) version and install the new one.
```ShellSession
pip uninstall ansible ansible-base ansible-core
cd kubespray/
pip install -U .
```
A simple way to ensure you get all the correct version of Ansible is to use
the [pre-built docker image from Quay](https://quay.io/repository/kubespray/kubespray?tab=tags).
You will then need to use [bind mounts](https://docs.docker.com/storage/bind-mounts/)
to access the inventory and SSH key in the container, like this:
```ShellSession
git checkout v2.26.0
docker pull quay.io/kubespray/kubespray:v2.26.0
docker run --rm -it --mount type=bind,source="$(pwd)"/inventory/sample,dst=/inventory \
--mount type=bind,source="${HOME}"/.ssh/id_rsa,dst=/root/.ssh/id_rsa \
quay.io/kubespray/kubespray:v2.26.0 bash
# Inside the container you may now run the kubespray playbooks:
ansible-playbook -i /inventory/inventory.ini --private-key /root/.ssh/id_rsa cluster.yml
```
@@ -1,71 +0,0 @@
# Inventory
The inventory is composed of 3 groups:
* **kube_node** : list of kubernetes nodes where the pods will run.
* **kube_control_plane** : list of servers where kubernetes control plane components (apiserver, scheduler, controller) will run.
* **etcd**: list of servers to compose the etcd server. You should have at least 3 servers for failover purposes.
When _kube_node_ contains _etcd_, you define your etcd cluster to also be schedulable for Kubernetes workloads.
If you want it to be standalone, make sure those groups do not intersect.
If you want the server to act both as control-plane and node, the server must be defined
on both groups _kube_control_plane_ and _kube_node_. If you want a standalone and
unschedulable control plane, the server must be defined only in the _kube_control_plane_ and
not _kube_node_.
There are also two special groups:
* **calico_rr** : explained for [advanced Calico networking cases](/docs/CNI/calico.md)
* **bastion** : configure a bastion host if your nodes are not directly reachable
Lastly, the **k8s_cluster** is dynamically defined as the union of **kube_node**, **kube_control_plane** and **calico_rr**.
This is used internally and for the purpose of defining whole cluster variables (`<inventory>/group_vars/k8s_cluster/*.yml`)
Below is a complete inventory example:
```ini
## Configure 'ip' variable to bind kubernetes services on a
## different ip than the default iface
node1 ansible_host=95.54.0.12 ip=10.3.0.1
node2 ansible_host=95.54.0.13 ip=10.3.0.2
node3 ansible_host=95.54.0.14 ip=10.3.0.3
node4 ansible_host=95.54.0.15 ip=10.3.0.4
node5 ansible_host=95.54.0.16 ip=10.3.0.5
node6 ansible_host=95.54.0.17 ip=10.3.0.6
[kube_control_plane]
node1
node2
[etcd]
node1
node2
node3
[kube_node]
node2
node3
node4
node5
node6
```
## Inventory customization
See [Customize Ansible vars](/docs/ansible/ansible.md#customize-ansible-vars)
and [Ansible documentation on group_vars](https://docs.ansible.com/ansible/latest/inventory_guide/intro_inventory.html#assigning-a-variable-to-many-machines-group-variables)
## Bastion host
If you prefer to not make your nodes publicly accessible (nodes with private IPs only),
you can use a so-called _bastion_ host to connect to your nodes. To specify and use a bastion,
simply add a line to your inventory, where you have to replace x.x.x.x with the public IP of the
bastion host.
```ShellSession
[bastion]
bastion ansible_host=x.x.x.x
```
For more information about Ansible and bastion hosts, read
[Running Ansible Through an SSH Bastion Host](https://blog.scottlowe.org/2015/12/24/running-ansible-through-ssh-bastion-host/)
@@ -35,9 +35,9 @@ Some variables of note include:
## Addressing variables
* *ip* - IP to use for binding services (host var). This would **usually** be the public ip.
* *access_ip* - IP to use from other hosts to connect to this host. Often required when deploying
from a cloud, such as OpenStack or GCE and you have separate public/floating and private IPs.
This would **usually** be the private ip.
* *ansible_default_ipv4.address* - Not Kubespray-specific, but it is used if ip
and access_ip are undefined
* *ip6* - IPv6 address to use for binding services. (host var)
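A hypothetical inventory host line combining several of these host vars (all addresses are examples only):
```ini
node1 ansible_host=95.54.0.12 ip=10.3.0.1 access_ip=10.3.0.1 ip6=fd00::1
```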
@@ -104,7 +104,8 @@ following default cluster parameters:
* *enable_coredns_k8s_endpoint_pod_names* - If enabled, it configures the endpoint_pod_names option for the kubernetes plugin
on the CoreDNS service.
* *cloud_provider* - The provider for cloud services. (default is unset; set to `external` to run with an external cloud provider)
* *kube_feature_gates* - A list of key=value pairs that describe feature gates for
alpha/experimental Kubernetes features. (default is `[]`).
@@ -171,7 +172,7 @@ variables to match your requirements.
* *dns_upstream_forward_extra_opts* - Options to add in the forward section of coredns/nodelocaldns related to upstream DNS servers
For more information, see [DNS
Stack](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/advanced/dns-stack.md).
## Other service variables
@@ -295,8 +296,8 @@ node_taints:
For all kube components, custom flags can be passed in. This allows for edge cases where users need changes to the default deployment that may not be applicable to all deployments.
Extra flags for the kubelet can be specified using these variables,
in the form of dicts of key-value pairs of configuration parameters that will be inserted into the kubelet YAML config file. The `kubelet_node_config_extra_args` applies kubelet settings only to nodes and not control planes. Example:
```yml
kubelet_config_extra_args:
@@ -311,10 +312,14 @@ kubelet_config_extra_args:
The possible vars are:
* *kubelet_config_extra_args*
* *kubelet_node_config_extra_args*
Previously, the same parameters could be passed as flags to the kubelet binary with the following vars:
* *kubelet_custom_flags*
* *kubelet_node_custom_flags*
The `kubelet_node_custom_flags` applies kubelet settings only to nodes and not control planes. Example:
```yml
kubelet_custom_flags:
@@ -332,13 +337,6 @@ in the form of dicts of key-value pairs of configuration parameters that will be
* *kube_kubeadm_controller_extra_args*
* *kube_kubeadm_scheduler_extra_args*
### Kubeadm patches
When extra flags are not sufficient and there is a need to further customize kubernetes components,
[kubeadm patches](https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/control-plane-flags/#patches)
can be used.
You should use the [`kubeadm_patches` variable](../../roles/kubernetes/kubeadm_common/defaults/main.yml) for that purpose.
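As an illustration of what a patch contains, kubeadm picks up files named after the target component (`target[suffix][+patchtype].extension`); a strategic-merge patch raising the kube-apiserver CPU request could look like the sketch below (the file name and values are hypothetical):
```yml
# kube-apiserver0+strategic.yaml (hypothetical patch file)
spec:
  containers:
    - name: kube-apiserver
      resources:
        requests:
          cpu: 500m
```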
## App variables
* *helm_version* - Only supports v3.x. Existing v2 installs (with Tiller) will not be modified and need to be removed manually.
@@ -2,13 +2,9 @@
# peers:
# - router_id: "10.99.0.34"
# as: "65xxx"
# filters: []
# numallowedlocalasnumbers: 0
# sourceaddress: "None"
# - router_id: "10.99.0.35"
# as: "65xxx"
# filters: []
# numallowedlocalasnumbers: 0
# sourceaddress: "None"
# loadbalancer_apiserver:
@@ -2,13 +2,9 @@
# peers:
# - router_id: "10.99.0.2"
# as: "65xxx"
# filters: []
# numallowedlocalasnumbers: 0
# sourceaddress: "None"
# - router_id: "10.99.0.3"
# as: "65xxx"
# filters: []
# numallowedlocalasnumbers: 0
# sourceaddress: "None"
# loadbalancer_apiserver:
@@ -1,7 +1,5 @@
# AWS
> **Removed**: Since v1.31 (the Kubespray counterpart is v2.27), Kubernetes no longer supports `cloud_provider` (except the external cloud provider).
To deploy kubespray on [AWS](https://aws.amazon.com/) uncomment the `cloud_provider` option in `group_vars/all.yml` and set it to `'aws'`. Refer to the [Kubespray Configuration](#kubespray-configuration) for customizing the provider.
Prior to creating your instances, you **must** ensure that you have created IAM roles and policies for both "kubernetes-master" and "kubernetes-node". You can find the IAM policies [here](https://github.com/kubernetes-sigs/kubespray/tree/master/contrib/aws_iam/). See the [IAM Documentation](https://aws.amazon.com/documentation/iam/) if guidance is needed on how to set these up. When you bring your instances online, associate them with the respective IAM role. Nodes that are only to be used for Etcd do not need a role.
@@ -1,7 +1,5 @@
# Azure
> **Removed**: Since v1.31 (the Kubespray counterpart is v2.27), Kubernetes no longer supports `cloud_provider` (except the external cloud provider).
To deploy Kubernetes on [Azure](https://azure.microsoft.com) uncomment the `cloud_provider` option in `group_vars/all/all.yml` and set it to `'azure'`.
All your instances are required to run in a resource group and a routing table has to be attached to the subnet your instances are in.
@@ -1,7 +1,5 @@
# Cloud providers
> **Removed**: Since v1.31 (the Kubespray counterpart is v2.27), Kubernetes no longer supports `cloud_provider` (except the external cloud provider).
## Provisioning
You can deploy instances in your cloud environment in several ways. Examples include Terraform, Ansible (ec2 and gce modules), and manual creation.
@@ -76,9 +76,9 @@ The cloud provider is configured to have Octavia by default in Kubespray.
external_openstack_lbaas_network_id: "Neutron network ID to create LBaaS VIP"
external_openstack_lbaas_manage_security_groups: false
external_openstack_lbaas_create_monitor: false
external_openstack_lbaas_monitor_delay: 5s
external_openstack_lbaas_monitor_max_retries: 1
external_openstack_lbaas_monitor_timeout: 3s
external_openstack_lbaas_internal_lb: false
```
@@ -90,7 +90,7 @@ containerd_registries_mirrors:
capabilities: ["pull", "resolve"]
skip_verify: false
containerd_max_container_log_line_size: 16384
crio_registries_mirrors:
- prefix: docker.io
@@ -4,54 +4,54 @@ To generate this Matrix run `./tests/scripts/md-table/main.py`
## containerd ## containerd
| OS / CNI | calico | cilium | custom_cni | flannel | kube-ovn | kube-router | macvlan | | OS / CNI | calico | cilium | custom_cni | flannel | kube-ovn | kube-router | macvlan | weave |
|---| --- | --- | --- | --- | --- | --- | --- | |---| --- | --- | --- | --- | --- | --- | --- | --- |
almalinux8 | :white_check_mark: | :x: | :x: | :x: | :white_check_mark: | :x: | :x: | almalinux8 | :white_check_mark: | :x: | :x: | :x: | :white_check_mark: | :x: | :x: | :x: |
amazon | :white_check_mark: | :x: | :x: | :x: | :x: | :x: | :x: | amazon | :white_check_mark: | :x: | :x: | :x: | :x: | :x: | :x: | :x: |
debian11 | :white_check_mark: | :x: | :white_check_mark: | :x: | :x: | :x: | :white_check_mark: | centos8 | :white_check_mark: | :x: | :x: | :white_check_mark: | :x: | :white_check_mark: | :x: | :white_check_mark: |
debian12 | :white_check_mark: | :white_check_mark: | :white_check_mark: | :x: | :x: | :x: | :x: | debian11 | :white_check_mark: | :x: | :white_check_mark: | :x: | :x: | :x: | :white_check_mark: | :x: |
fedora39 | :white_check_mark: | :x: | :x: | :x: | :x: | :white_check_mark: | :x: | debian12 | :white_check_mark: | :white_check_mark: | :white_check_mark: | :x: | :x: | :x: | :x: | :x: |
fedora40 | :x: | :x: | :x: | :x: | :x: | :x: | :x: | fedora37 | :white_check_mark: | :x: | :x: | :x: | :x: | :white_check_mark: | :x: | :x: |
openeuler24 | :white_check_mark: | :x: | :x: | :x: | :x: | :x: | :x: | fedora38 | :x: | :x: | :x: | :x: | :white_check_mark: | :x: | :x: | :x: |
opensuse | :x: | :x: | :x: | :x: | :x: | :x: | :x: | opensuse | :x: | :x: | :x: | :x: | :x: | :x: | :x: | :x: |
rockylinux8 | :white_check_mark: | :x: | :x: | :x: | :x: | :x: | :x: | rockylinux8 | :white_check_mark: | :x: | :x: | :x: | :x: | :x: | :x: | :x: |
rockylinux9 | :white_check_mark: | :white_check_mark: | :x: | :x: | :x: | :x: | :x: | rockylinux9 | :white_check_mark: | :white_check_mark: | :x: | :x: | :x: | :x: | :x: | :x: |
ubuntu20 | :white_check_mark: | :white_check_mark: | :x: | :white_check_mark: | :x: | :white_check_mark: | :x: | ubuntu20 | :white_check_mark: | :white_check_mark: | :x: | :white_check_mark: | :x: | :white_check_mark: | :x: | :white_check_mark: |
ubuntu22 | :white_check_mark: | :x: | :x: | :x: | :x: | :x: | :x: | ubuntu22 | :white_check_mark: | :x: | :x: | :x: | :x: | :x: | :x: | :x: |
ubuntu24 | :white_check_mark: | :x: | :x: | :x: | :x: | :x: | :x: | ubuntu24 | :white_check_mark: | :x: | :x: | :x: | :x: | :x: | :x: | :x: |
## crio ## crio
| OS / CNI | calico | cilium | custom_cni | flannel | kube-ovn | kube-router | macvlan | | OS / CNI | calico | cilium | custom_cni | flannel | kube-ovn | kube-router | macvlan | weave |
|---| --- | --- | --- | --- | --- | --- | --- | |---| --- | --- | --- | --- | --- | --- | --- | --- |
almalinux8 | :white_check_mark: | :x: | :x: | :x: | :x: | :x: | :x: | almalinux8 | :white_check_mark: | :x: | :x: | :x: | :x: | :x: | :x: | :x: |
amazon | :x: | :x: | :x: | :x: | :x: | :x: | :x: | amazon | :x: | :x: | :x: | :x: | :x: | :x: | :x: | :x: |
debian11 | :x: | :x: | :x: | :x: | :x: | :x: | :x: | centos8 | :x: | :x: | :x: | :x: | :x: | :x: | :x: | :x: |
debian12 | :x: | :x: | :x: | :x: | :x: | :x: | :x: | debian11 | :x: | :x: | :x: | :x: | :x: | :x: | :x: | :x: |
fedora39 | :white_check_mark: | :x: | :x: | :x: | :x: | :x: | :x: | debian12 | :x: | :x: | :x: | :x: | :x: | :x: | :x: | :x: |
fedora40 | :x: | :x: | :x: | :x: | :x: | :x: | :x: | fedora37 | :white_check_mark: | :x: | :x: | :x: | :x: | :x: | :x: | :x: |
openeuler24 | :x: | :x: | :x: | :x: | :x: | :x: | :x: | fedora38 | :x: | :x: | :x: | :x: | :x: | :x: | :x: | :x: |
opensuse | :x: | :x: | :x: | :x: | :x: | :x: | :x: | opensuse | :x: | :x: | :x: | :x: | :x: | :x: | :x: | :x: |
rockylinux8 | :x: | :x: | :x: | :x: | :x: | :x: | :x: | rockylinux8 | :x: | :x: | :x: | :x: | :x: | :x: | :x: | :x: |
rockylinux9 | :x: | :x: | :x: | :x: | :x: | :x: | :x: | rockylinux9 | :x: | :x: | :x: | :x: | :x: | :x: | :x: | :x: |
ubuntu20 | :white_check_mark: | :x: | :x: | :x: | :x: | :x: | :x: | ubuntu20 | :white_check_mark: | :x: | :x: | :x: | :x: | :x: | :x: | :x: |
ubuntu22 | :x: | :x: | :x: | :x: | :x: | :x: | :x: | ubuntu22 | :x: | :x: | :x: | :x: | :x: | :x: | :x: | :x: |
ubuntu24 | :x: | :x: | :x: | :x: | :x: | :x: | :x: | ubuntu24 | :x: | :x: | :x: | :x: | :x: | :x: | :x: | :x: |
## docker ## docker
| OS / CNI | calico | cilium | custom_cni | flannel | kube-ovn | kube-router | macvlan | | OS / CNI | calico | cilium | custom_cni | flannel | kube-ovn | kube-router | macvlan | weave |
|---| --- | --- | --- | --- | --- | --- | --- | |---| --- | --- | --- | --- | --- | --- | --- | --- |
almalinux8 | :white_check_mark: | :x: | :x: | :x: | :x: | :x: | :x: | almalinux8 | :white_check_mark: | :x: | :x: | :x: | :x: | :x: | :x: | :x: |
amazon | :x: | :x: | :x: | :x: | :x: | :x: | :x: | amazon | :x: | :x: | :x: | :x: | :x: | :x: | :x: | :x: |
debian11 | :white_check_mark: | :x: | :x: | :x: | :x: | :x: | :x: | centos8 | :white_check_mark: | :x: | :x: | :x: | :x: | :x: | :x: | :x: |
debian12 | :white_check_mark: | :x: | :x: | :x: | :x: | :x: | :x: | debian11 | :white_check_mark: | :x: | :x: | :x: | :x: | :x: | :x: | :x: |
fedora39 | :x: | :x: | :x: | :x: | :x: | :x: | :x: | debian12 | :white_check_mark: | :x: | :x: | :x: | :x: | :x: | :x: | :x: |
fedora40 | :white_check_mark: | :x: | :x: | :x: | :x: | :x: | :x: | fedora37 | :x: | :x: | :x: | :x: | :x: | :x: | :x: | :x: |
openeuler24 | :x: | :x: | :x: | :x: | :x: | :x: | :x: | fedora38 | :white_check_mark: | :x: | :x: | :x: | :x: | :x: | :x: | :white_check_mark: |
opensuse | :x: | :white_check_mark: | :x: | :x: | :x: | :x: | :x: | opensuse | :x: | :white_check_mark: | :x: | :x: | :x: | :x: | :x: | :x: |
rockylinux8 | :x: | :x: | :x: | :x: | :x: | :x: | :x: | rockylinux8 | :x: | :x: | :x: | :x: | :x: | :x: | :x: | :x: |
rockylinux9 | :x: | :x: | :x: | :x: | :x: | :x: | :x: | rockylinux9 | :x: | :x: | :x: | :x: | :x: | :x: | :x: | :x: |
ubuntu20 | :white_check_mark: | :x: | :x: | :x: | :x: | :x: | :x: | ubuntu20 | :white_check_mark: | :x: | :x: | :x: | :x: | :x: | :x: | :white_check_mark: |
ubuntu22 | :white_check_mark: | :x: | :x: | :x: | :x: | :x: | :x: | ubuntu22 | :white_check_mark: | :x: | :x: | :x: | :x: | :x: | :x: | :x: |
ubuntu24 | :white_check_mark: | :x: | :x: | :x: | :x: | :x: | :x: | ubuntu24 | :white_check_mark: | :x: | :x: | :x: | :x: | :x: | :x: | :x: |
@@ -1,6 +1,6 @@
# Node Layouts
There are six node layout types: `default`, `separate`, `ha`, `scale`, `all-in-one`, and `node-etcd-client`.
`default` is a non-HA two nodes setup with one separate `kube_node`
and the `etcd` group merged with the `kube_control_plane`.
@@ -11,6 +11,11 @@ and the `etcd` group merged with the `kube_control_plane`.
`ha` layout consists of two etcd nodes, two control planes and a single worker node,
with role intersection.
`scale` layout can be combined with above layouts (`ha-scale`, `separate-scale`). It includes 200 fake hosts
in the Ansible inventory. This helps test TLS certificate generation at scale
to prevent regressions and profile certain long-running tasks. These nodes are
never actually deployed, but certificates are generated for them.
`all-in-one` layout uses a single node with `kube_control_plane`, `etcd` and `kube_node` merged.
`node-etcd-client` layout consists of a 4-node cluster, all of them in `kube_node`, the first 3 in `etcd` and only one in `kube_control_plane`.
@@ -29,26 +29,7 @@ You can override the default settings in the `Vagrantfile` either by
directly modifying the `Vagrantfile` or through an override file.
In the same directory as the `Vagrantfile`, create a folder called
`vagrant` and create a `config.rb` file in it.
An example of how to configure this file is given below:
```ruby
# vagrant/config.rb
$instance_name_prefix = "kub"
$vm_cpus = 1
$num_instances = 3
$os = "centos8-bento"
$subnet = "10.0.20"
$network_plugin = "flannel"
$extra_vars = {
dns_domain: my.custom.domain
}
# or
$extra_vars = "path/to/extra/vars/file.yml"
```
For all available options look at the Vagrantfile (search for "CONFIG")
## Use alternative OS for Vagrant
@@ -76,33 +57,73 @@ see [download documentation](/docs/advanced/downloads.md).
## Example use of Vagrant
The following is an example of setting up and running kubespray using `vagrant`.
For repeated runs, you could save the script to a file in the root of the
kubespray and run it by executing `source <name_of_the_file>`.
```ShellSession
# use virtualenv to install all python requirements
VENVDIR=venv
virtualenv --python=/usr/bin/python3.7 $VENVDIR
source $VENVDIR/bin/activate
pip install -r requirements.txt
# prepare an inventory to test with
INV=inventory/my_lab
rm -rf ${INV}.bak &> /dev/null
mv ${INV} ${INV}.bak &> /dev/null
cp -a inventory/sample ${INV}
rm -f ${INV}/hosts.ini
# customize the vagrant environment
mkdir vagrant
cat << EOF > vagrant/config.rb
\$instance_name_prefix = "kub"
\$vm_cpus = 1
\$num_instances = 3
\$os = "centos8-bento"
\$subnet = "10.0.20"
\$network_plugin = "flannel"
\$inventory = "$INV"
\$shared_folders = { 'temp/docker_rpms' => "/var/cache/yum/x86_64/7/docker-ce/packages" }
\$extra_vars = {
dns_domain: my.custom.domain
}
# or
\$extra_vars = "path/to/extra/vars/file.yml"
EOF
# make the rpm cache
mkdir -p temp/docker_rpms
vagrant up
# make a copy of the downloaded docker rpm, to speed up the next provisioning run
scp kub-1:/var/cache/yum/x86_64/7/docker-ce/packages/* temp/docker_rpms/
# copy kubectl access configuration in place
mkdir $HOME/.kube/ &> /dev/null
ln -s $PWD/$INV/artifacts/admin.conf $HOME/.kube/config
# make the kubectl binary available
sudo ln -s $PWD/$INV/artifacts/kubectl /usr/local/bin/kubectl
#or
export PATH=$PATH:$PWD/$INV/artifacts
```
If a vagrant run failed and you've made some changes to fix the issue causing
the failure, here is how you would re-run ansible:
```ShellSession
ansible-playbook -vvv -i .vagrant/provisioners/ansible/inventory/vagrant_ansible_inventory cluster.yml
```
If all went well, you can check that it's all working as expected:
```ShellSession
kubectl get nodes
```
The output should look like this:
```ShellSession
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
@@ -113,6 +134,12 @@ kub-3 Ready <none> 3m7s v1.22.5
Another nice test is the following:
```ShellSession
kubectl get pods --all-namespaces -o wide
```
Which should yield something like the following:
```ShellSession
$ kubectl get pods --all-namespaces -o wide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
@@ -1,33 +1,43 @@
# Getting started
## Install ansible
Install Ansible according to [Ansible installation guide](/docs/ansible/ansible.md#installing-ansible).
## Building your own inventory
Ansible inventory can be stored in 3 formats: YAML, JSON, or INI-like. See the
[example inventory](/inventory/sample/inventory.ini),
the [Ansible documentation on building your inventory](https://docs.ansible.com/ansible/latest/inventory_guide/intro_inventory.html),
and [details on the inventory structure expected by Kubespray](/docs/ansible/inventory.md).
You can use an
[inventory generator](https://github.com/kubernetes-sigs/kubespray/blob/master/contrib/inventory_builder/inventory.py)
to create or modify an Ansible inventory. Currently, it is limited in
functionality and is only used for configuring a basic Kubespray cluster inventory, but it does
support creating inventory files for large clusters as well. It now supports
separated ETCD and Kubernetes control plane roles from the node role if the size exceeds a
certain threshold. Run `python3 contrib/inventory_builder/inventory.py help` for more information.
```ShellSession
cp -r inventory/sample inventory/mycluster
declare -a IPS=(10.10.1.3 10.10.1.4 10.10.1.5)
CONFIG_FILE=inventory/mycluster/hosts.yml python3 contrib/inventory_builder/inventory.py ${IPS[@]}
```
Then use `inventory/mycluster/hosts.yml` as inventory file.
Alternatively, edit the inventory and its group vars by hand:
```ShellSession
<your-favorite-editor> inventory/mycluster/inventory.ini
# Review and change parameters under ``inventory/mycluster/group_vars``
<your-favorite-editor> inventory/mycluster/group_vars/all.yml # for every node, including etcd
<your-favorite-editor> inventory/mycluster/group_vars/k8s_cluster.yml # for every node in the cluster (not etcd when it's separate)
<your-favorite-editor> inventory/mycluster/group_vars/kube_control_plane.yml # for the control plane
<your-favorite-editor> inventory/mycluster/group_vars/kube_node.yml # for worker nodes
```
## Starting custom deployment
Once you have an inventory, you may want to customize deployment data vars
and start the deployment:
**IMPORTANT**: Edit my\_inventory/groups\_vars/\*.yaml to override data vars:
```ShellSession
ansible-playbook -i inventory/mycluster/hosts.yml cluster.yml -b -v \
  --private-key=~/.ssh/private_key
```
See more details in the [ansible guide](/docs/ansible/ansible.md).
### Adding nodes
You may want to add worker, control plane or etcd nodes to your existing cluster. This can be done by re-running the `cluster.yml` playbook, or you can target the bare minimum needed to get kubelet installed on the worker and talking to your control planes. This is especially helpful when doing something like autoscaling your clusters.
@@ -212,15 +212,17 @@ Copy ``inventory/sample`` as ``inventory/mycluster``:
cp -rfp inventory/sample inventory/mycluster
```
Update Ansible inventory file with inventory builder:
```ShellSession
declare -a IPS=($(gcloud compute instances list --filter="tags.items=kubernetes-the-kubespray-way" --format="value(EXTERNAL_IP)" | tr '\n' ' '))
CONFIG_FILE=inventory/mycluster/hosts.yaml python3 contrib/inventory_builder/inventory.py ${IPS[@]}
```
Open the generated `inventory/mycluster/hosts.yaml` file and adjust it so
that controller-0, controller-1 and controller-2 are control plane nodes and
worker-0, worker-1 and worker-2 are worker nodes. Also update the `ip` to the respective local VPC IP and
remove the `access_ip`.
The main configuration for the cluster is stored in
`inventory/mycluster/group_vars/k8s_cluster/k8s_cluster.yml`. In this file we
@@ -240,7 +242,7 @@ the kubernetes cluster, just change the 'false' to 'true' for
Now we will deploy the configuration:
```ShellSession
ansible-playbook -i inventory/mycluster/hosts.yaml -u $USERNAME -b -v --private-key=~/.ssh/id_rsa cluster.yml
```
Ansible will now execute the playbook, this can take up to 20 minutes.
@@ -594,7 +596,7 @@ If you want to keep the VMs and just remove the cluster state, you can simply
run another Ansible playbook:
```ShellSession
ansible-playbook -i inventory/mycluster/hosts.yaml -u $USERNAME -b -v --private-key=~/.ssh/id_rsa reset.yml
```
Resetting the cluster to the VMs original state usually takes about a couple
@@ -35,7 +35,7 @@ kubectl create clusterrolebinding cluster-admin-binding \
The following **Mandatory Command** is required for all deployments except for AWS. See below for the AWS version.
```console
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.12.0/deploy/static/provider/cloud/deploy.yaml
```
### Provider Specific Steps
@@ -40,6 +40,10 @@ Variables are listed with their default values, if applicable.
* `centos_fastestmirror_enabled: false`
Whether the [fastestmirror](https://wiki.centos.org/PackageManagement/Yum/FastestMirror) yum plugin should be enabled.
## Dependencies
The `kubespray-defaults` role is expected to be run before this role.
## Example Playbook
Remember to disable fact gathering since Python might not be present on hosts.
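A minimal sketch of such a playbook (the host pattern is arbitrary):
```yaml
- hosts: all
  become: true
  gather_facts: false  # Python may not be installed yet, so skip fact gathering
  roles:
    - kubespray-defaults  # expected to run before bootstrap-os (see Dependencies above)
    - bootstrap-os
```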
@@ -14,7 +14,7 @@ Installs docker in etcd group members and runs etcd on docker containers. Only u
### Kubeadm
This deployment method is experimental and is only available for new deployments. This deploys etcd as a static pod on control plane hosts.
## Metrics
@@ -71,6 +71,9 @@
[kube_node:children]
kubenode
[k8s_cluster:children]
kubernetes
[etcd:children]
kubemaster
kubemaster-ha
@@ -78,6 +81,9 @@
[kube_control_plane:children]
kubemaster
kubemaster-ha
[kubespray:children]
kubernetes
```
* Last entry here needed to apply kubespray.yml config file, renamed from all.yml of kubespray project.
@@ -7,7 +7,6 @@ The public mirror is useful to make the public resources download quickly in som
You can follow the [offline](offline-environment.md) guide to configure the image/file download settings to use the public mirror site. If you want to download quickly in China, the configuration can look like:
```shell
# this should be in <your_inventory>/group_vars/k8s_cluster.yml
gcr_image_repo: "gcr.m.daocloud.io"
kube_image_repo: "k8s.m.daocloud.io"
docker_image_repo: "docker.m.daocloud.io"
@@ -20,6 +19,45 @@ files_repo: "https://files.m.daocloud.io"
Use mirror sites only if you trust the provider. The Kubespray team cannot verify their reliability or security.
You can replace the `m.daocloud.io` with any site you want.
## Example Usage Full Steps
You can follow the full steps below to use Kubespray with a mirror. For example:
Install Ansible according to the Ansible installation guide, then run the following steps:
```shell
# Copy ``inventory/sample`` as ``inventory/mycluster``
cp -rfp inventory/sample inventory/mycluster
# Update Ansible inventory file with inventory builder
declare -a IPS=(10.10.1.3 10.10.1.4 10.10.1.5)
CONFIG_FILE=inventory/mycluster/hosts.yaml python3 contrib/inventory_builder/inventory.py ${IPS[@]}
# Use the download mirror
cp inventory/mycluster/group_vars/all/offline.yml inventory/mycluster/group_vars/all/mirror.yml
sed -i -E '/# .*\{\{ files_repo/s/^# //g' inventory/mycluster/group_vars/all/mirror.yml
tee -a inventory/mycluster/group_vars/all/mirror.yml <<EOF
gcr_image_repo: "gcr.m.daocloud.io"
kube_image_repo: "k8s.m.daocloud.io"
docker_image_repo: "docker.m.daocloud.io"
quay_image_repo: "quay.m.daocloud.io"
github_image_repo: "ghcr.m.daocloud.io"
files_repo: "https://files.m.daocloud.io"
EOF
# Review and change parameters under ``inventory/mycluster/group_vars``
cat inventory/mycluster/group_vars/all/all.yml
cat inventory/mycluster/group_vars/k8s_cluster/k8s-cluster.yml
# Deploy Kubespray with Ansible Playbook - run the playbook as root
# The option `--become` is required, as for example writing SSL keys in /etc/,
# installing packages and interacting with various systemd daemons.
# Without --become the playbook will fail to run!
ansible-playbook -i inventory/mycluster/hosts.yaml --become --become-user=root cluster.yml
```
These are the steps from the [README.md](../README.md) with the "Use the download mirror" step added.
## Community-run mirror sites
DaoCloud(China) DaoCloud(China)

View File

@@ -392,7 +392,7 @@ ansible-playbook -b -i inventory/sample/hosts.ini cluster.yml --tags=etcd --limi
Upgrade kubelet:
```ShellSession
-ansible-playbook -b -i inventory/sample/hosts.ini cluster.yml --tags=node --skip-tags=k8s-gen-certs
+ansible-playbook -b -i inventory/sample/hosts.ini cluster.yml --tags=node --skip-tags=k8s-gen-certs,k8s-gen-tokens
```
Upgrade Kubernetes master components:
@@ -425,7 +425,7 @@ Please note that **migrating container engines is not officially supported by Ku
As of Kubespray 2.18.0, containerd is already the default container engine. If you have the chance, it is advisable and safer to reset and redeploy the entire cluster with a new container engine.
-* [Migrating from Docker to Containerd](/docs/upgrades/migrate_docker2containerd.md)
+* [Migrating from Docker to Containerd](upgrades/migrate_docker2containerd.md)
## System upgrade

View File

@@ -13,7 +13,7 @@
tasks:
- name: Include kubespray-default variables
include_vars: ../roles/kubespray-defaults/defaults/main/main.yml
-- name: Copy get_cinder_pvs.sh to first control plane node
+- name: Copy get_cinder_pvs.sh to master
copy:
src: get_cinder_pvs.sh
dest: /tmp

View File

@@ -36,7 +36,7 @@
- { role: kubespray-defaults}
- { role: kubernetes/preinstall, tags: preinstall }
-- name: Handle upgrades to control plane components first to maintain backwards compat.
+- name: Handle upgrades to master components first to maintain backwards compat.
hosts: kube_control_plane
any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
serial: 1

View File

@@ -2,7 +2,7 @@
namespace: kubernetes_sigs
description: Deploy a production ready Kubernetes cluster
name: kubespray
-version: 2.27.1
+version: 2.26.0
readme: README.md
authors:
- The Kubespray maintainers (https://kubernetes.slack.com/channels/kubespray)
@@ -14,7 +14,6 @@ documentation: https://kubespray.io
license_file: LICENSE
dependencies:
ansible.utils: '>=2.5.0'
community.crypto: '>=2.22.3'
community.general: '>=7.0.0'
ansible.netcommon: '>=5.3.0'
ansible.posix: '>=1.5.4'

View File

@@ -6,7 +6,6 @@
<meta name="description" content="Deploy a Production Ready Kubernetes Cluster"> <meta name="description" content="Deploy a Production Ready Kubernetes Cluster">
<meta name="viewport" content="width=device-width, user-scalable=no, initial-scale=1.0, maximum-scale=1.0, minimum-scale=1.0"> <meta name="viewport" content="width=device-width, user-scalable=no, initial-scale=1.0, maximum-scale=1.0, minimum-scale=1.0">
<link rel="stylesheet" href="//unpkg.com/docsify-themeable/dist/css/theme-simple.css"> <link rel="stylesheet" href="//unpkg.com/docsify-themeable/dist/css/theme-simple.css">
<link rel="icon" href="/logo/logo-clear.png" type="image/png" />
<style> <style>
:root { :root {
--base-font-size: 16px; --base-font-size: 16px;

View File

@@ -8,3 +8,7 @@ node1
[kube_node]
node1
[k8s_cluster:children]
kube_node
kube_control_plane

View File

@@ -42,14 +42,14 @@ loadbalancer_apiserver_healthcheck_port: 8081
## There are some changes specific to the cloud providers
## for instance we need to encapsulate packets with some network plugins
-## If set the possible values only 'external' after K8s v1.31.
+## If set the possible values are either 'gce', 'aws', 'azure', 'openstack', 'vsphere', 'oci', or 'external'
## When openstack is used make sure to source in the openstack credentials
## like you would do when using openstack-client before starting the playbook.
# cloud_provider:
-# External Cloud Controller Manager (Formerly known as cloud provider)
+## When cloud_provider is set to 'external', you can set the cloud controller to deploy
-# cloud_provider must be "external", otherwise this setting is invalid.
+## Supported cloud controllers are: 'openstack', 'vsphere', 'huaweicloud' and 'hcloud'
-# Supported external cloud controllers are: 'openstack', 'vsphere', 'oci', 'huaweicloud', 'hcloud' and 'manual'
+## When openstack or vsphere are used make sure to source in the required fields
# 'manual' does not install the cloud controller manager used by Kubespray.
# If you fill in a value other than the above, the check will fail.
# external_cloud_provider:
## Set these proxy values in order to update package manager and docker daemon to use proxies and custom CA for https_proxy if needed
@@ -75,8 +75,8 @@ loadbalancer_apiserver_healthcheck_port: 8081
# skip_http_proxy_on_os_packages: false
## Since workers are included in the no_proxy variable by default, docker engine will be restarted on all nodes (all
-## pods will restart) when adding or removing workers. To override this behaviour by only including control plane nodes
-## in the no_proxy variable, set below to true:
+## pods will restart) when adding or removing workers. To override this behaviour by only including master nodes in the
+## no_proxy variable, set below to true:
no_proxy_exclude_workers: false
## Certificate Management
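As an illustration of the cloud provider settings in the hunk above, a minimal sketch of selecting the external OpenStack cloud controller could look like the following; both variable names come from that hunk and 'openstack' is listed as supported in both versions, but treat the snippet as an example rather than a complete configuration:

```yaml
# group_vars/all/all.yml (illustrative)
cloud_provider: external            # use the external cloud controller manager
external_cloud_provider: openstack  # which external cloud controller to deploy
```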

View File

@@ -51,7 +51,7 @@
# capabilities: ["pull", "resolve"]
# skip_verify: false
-# containerd_max_container_log_line_size: 16384
+# containerd_max_container_log_line_size: -1
# containerd_registry_auth:
# - registry: 10.0.0.2:5000
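For reference, a hedged sketch of how a `containerd_registry_auth` entry is usually completed; the registry address is the placeholder from the comment above, and the credentials are made-up values:

```yaml
containerd_registry_auth:
  - registry: 10.0.0.2:5000   # placeholder registry from the example above
    username: user            # illustrative credential
    password: pass            # illustrative credential
```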

View File

@@ -1,30 +1,3 @@
## When External Oracle Cloud Infrastructure is used, set these variables
## External OCI Cloud Controller Manager
## https://github.com/oracle/oci-cloud-controller-manager/blob/v1.29.0/manifests/provider-config-example.yaml
# external_oracle_auth_region: ""
# external_oracle_auth_tenancy: ""
# external_oracle_auth_user: ""
# external_oracle_auth_key: ""
# external_oracle_auth_passphrase: ""
# external_oracle_auth_fingerprint: ""
# external_oracle_auth_use_instance_principals: false
# external_oracle_compartment: ""
# external_oracle_vcn: ""
# external_oracle_load_balancer_subnet1: ""
# external_oracle_load_balancer_subnet2: ""
# external_oracle_load_balancer_security_list_management_mode: All
# external_oracle_load_balancer_security_lists: {}
# external_oracle_ratelimiter_qps_read: 20.0
# external_oracle_ratelimiter_bucket_read: 5
# external_oracle_ratelimiter_qps_write: 20.0
# external_oracle_ratelimiter_bucket_write: 5
# external_oracle_cloud_controller_image_repo: ghcr.io/oracle/cloud-provider-oci
# external_oracle_cloud_controller_image_tag: "v1.29.0"
## When Oracle Cloud Infrastructure is used, set these variables
# oci_private_key:
# oci_region_id:

View File

@@ -41,6 +41,9 @@
# external_openstack_application_credential_id:
# external_openstack_application_credential_secret:
## The tag of the external OpenStack Cloud Controller image
# external_openstack_cloud_controller_image_tag: "v1.28.2"
## Tags for the Cinder CSI images
## registry.k8s.io/sig-storage/csi-attacher
# cinder_csi_attacher_image_tag: "v4.4.2"

View File

@@ -11,14 +11,14 @@
# external_vsphere_version: "6.7u3"
## Tags for the external vSphere Cloud Provider images
-## registry.k8s.io/cloud-pv-vsphere/cloud-provider-vsphere
+## gcr.io/cloud-provider-vsphere/cpi/release/manager
-# external_vsphere_cloud_controller_image_tag: "v1.31.0"
+# external_vsphere_cloud_controller_image_tag: "latest"
-## registry.k8s.io/csi-vsphere/syncer
+## gcr.io/cloud-provider-vsphere/csi/release/syncer
-# vsphere_syncer_image_tag: "v3.3.1"
+# vsphere_syncer_image_tag: "v2.5.1"
## registry.k8s.io/sig-storage/csi-attacher
# vsphere_csi_attacher_image_tag: "v3.4.0"
-## registry.k8s.io/csi-vsphere/driver
+## gcr.io/cloud-provider-vsphere/csi/release/driver
-# vsphere_csi_controller: "v3.3.1"
+# vsphere_csi_controller: "v2.5.1"
## registry.k8s.io/sig-storage/livenessprobe
# vsphere_csi_liveness_probe_image_tag: "v2.6.0"
## registry.k8s.io/sig-storage/csi-provisioner

View File

@@ -104,8 +104,6 @@ gateway_api_enabled: false
ingress_nginx_enabled: false
# ingress_nginx_host_network: false
# ingress_nginx_service_type: LoadBalancer
# ingress_nginx_service_annotations:
# example.io/loadbalancerIPs: 1.2.3.4
# ingress_nginx_service_nodeport_http: 30080
# ingress_nginx_service_nodeport_https: 30081
ingress_publish_status_address: ""
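Drawing only on the variables listed in this hunk, a sketch of enabling the ingress-nginx addon with a LoadBalancer service might look like this; the annotation key and IP are the placeholder values from the commented example above, not recommendations:

```yaml
ingress_nginx_enabled: true
ingress_nginx_host_network: false
ingress_nginx_service_type: LoadBalancer
ingress_nginx_service_annotations:
  example.io/loadbalancerIPs: 1.2.3.4   # placeholder from the commented example
```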

View File

@@ -17,7 +17,7 @@ kube_token_dir: "{{ kube_config_dir }}/tokens"
kube_api_anonymous_auth: true
## Change this to use another Kubernetes version, e.g. a current beta release
-kube_version: v1.31.9
+kube_version: v1.30.4
# Where the binaries will be downloaded.
# Note: ensure that you've enough disk space (about 1G)
@@ -140,7 +140,11 @@ kube_proxy_nodeport_addresses: >-
{%- endif -%}
# If non-empty, will use this string as identification instead of the actual hostname
-# kube_override_hostname: {{ inventory_hostname }}
+# kube_override_hostname: >-
# {%- if cloud_provider is defined and cloud_provider in ['aws'] -%}
# {%- else -%}
# {{ inventory_hostname }}
# {%- endif -%}
## Encrypting Secret Data at Rest
kube_encrypt_secret_data: false
@@ -268,6 +272,11 @@ default_kubelet_config_dir: "{{ kube_config_dir }}/dynamic_kubelet_dir"
# kube_cpu_reserved: 100m
# kube_ephemeral_storage_reserved: 2Gi
# kube_pid_reserved: "1000"
# Reservation for master hosts
# kube_master_memory_reserved: 512Mi
# kube_master_cpu_reserved: 200m
# kube_master_ephemeral_storage_reserved: 2Gi
# kube_master_pid_reserved: "1000"
## Optionally reserve resources for OS system daemons.
# system_reserved: true
@@ -278,6 +287,10 @@ default_kubelet_config_dir: "{{ kube_config_dir }}/dynamic_kubelet_dir"
# system_memory_reserved: 512Mi
# system_cpu_reserved: 500m
# system_ephemeral_storage_reserved: 2Gi
## Reservation for master hosts
# system_master_memory_reserved: 256Mi
# system_master_cpu_reserved: 250m
# system_master_ephemeral_storage_reserved: 2Gi
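Putting the reservation knobs from this hunk together, a sketch of uncommenting them could look like the following; the numbers are simply the sample values from the comments, and depending on the Kubespray version an additional toggle (not shown in this hunk) may be required to activate the kube reservations:

```yaml
system_reserved: true
system_memory_reserved: 512Mi
system_cpu_reserved: 500m
system_ephemeral_storage_reserved: 2Gi
kube_cpu_reserved: 100m
kube_ephemeral_storage_reserved: 2Gi
kube_pid_reserved: "1000"
```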
## Eviction Thresholds to avoid system OOMs
# https://kubernetes.io/docs/tasks/administer-cluster/reserve-compute-resources/#eviction-thresholds
@@ -353,25 +366,11 @@ auto_renew_certificates: false
# First Monday of each month
# auto_renew_certificates_systemd_calendar: "Mon *-*-1,2,3,4,5,6,7 03:{{ groups['kube_control_plane'].index(inventory_hostname) }}0:00"
-kubeadm_patches_dir: "{{ kube_config_dir }}/patches"
+# kubeadm patches path
-kubeadm_patches: []
+kubeadm_patches:
-# See https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/control-plane-flags/#patches
+enabled: false
-# Correspondance with this link
+source_dir: "{{ inventory_dir }}/patches"
-# patchtype = type
+dest_dir: "{{ kube_config_dir }}/patches"
# target = target
# suffix -> managed automatically
# extension -> always "yaml"
# kubeadm_patches:
# - target: kube-apiserver|kube-controller-manager|kube-scheduler|etcd|kubeletconfiguration
# type: strategic(default)|json|merge
# patch:
# metadata:
# annotations:
# example.com/test: "true"
# labels:
# example.com/prod_level: "{{ prod_level }}"
# - ...
# Patches are applied in the order they are specified.
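For the directory-based form of `kubeadm_patches` shown above, a hedged sketch of enabling it; the patch file name is only an example following kubeadm's `target[suffix][+patchtype].yaml` naming convention, and the Prometheus-annotation patch bodies themselves appear as new files later in this diff:

```yaml
kubeadm_patches:
  enabled: true
  source_dir: "{{ inventory_dir }}/patches"   # e.g. containing kube-scheduler+merge.yaml
  dest_dir: "{{ kube_config_dir }}/patches"
```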
# Set to true to remove the role binding to anonymous users created by kubeadm
remove_anonymous_access: false

View File

@@ -1,5 +1,5 @@
---
-# cilium_version: "v1.15.9"
+# cilium_version: "v1.15.4"
# Log-level
# cilium_debug: false
@@ -145,16 +145,9 @@ cilium_l2announcements: false
### A time interval at which the agent attempts to reload config from disk
# cilium_ip_masq_resync_interval: 60s
### Host Firewall and Policy Audit Mode
# cilium_enable_host_firewall: false
# cilium_policy_audit_mode: false
# Hubble
### Enable Hubble without install
# cilium_enable_hubble: false
### Enable Hubble-ui
### Installed by default when hubble is enabled. To disable set to false
# cilium_enable_hubble_ui: "{{ cilium_enable_hubble }}"
### Enable Hubble Metrics
# cilium_enable_hubble_metrics: false
### if cilium_enable_hubble_metrics: true
@@ -247,101 +240,6 @@ cilium_l2announcements: false
# -- Enable native IP masquerade support in eBPF
# cilium_enable_bpf_masquerade: false
# -- Enable BGP Control Plane
# cilium_enable_bgp_control_plane: false
# -- Configure Loadbalancer IP Pools
# cilium_loadbalancer_ip_pools:
# - name: "blue-pool"
# cidrs:
# - "10.0.10.0/24"
# -- Configure BGP Instances (New bgpv2 API v1.16+)
# cilium_bgp_cluster_configs:
# - name: "cilium-bgp"
# spec:
# bgpInstances:
# - name: "instance-64512"
# localASN: 64512
# peers:
# - name: "peer-64512-tor1"
# peerASN: 64512
# peerAddress: '10.47.1.1'
# peerConfigRef:
# name: "cilium-peer"
# nodeSelector:
# matchExpressions:
# - {key: somekey, operator: NotIn, values: ['never-used-value']}
# -- Configure BGP Peers (New bgpv2 API v1.16+)
# cilium_bgp_peer_configs:
# - name: cilium-peer
# spec:
# # authSecretRef: bgp-auth-secret
# gracefulRestart:
# enabled: true
# restartTimeSeconds: 15
# families:
# - afi: ipv4
# safi: unicast
# advertisements:
# matchLabels:
# advertise: "bgp"
# - afi: ipv6
# safi: unicast
# advertisements:
# matchLabels:
# advertise: "bgp"
# -- Configure BGP Advertisements (New bgpv2 API v1.16+)
# cilium_bgp_advertisements:
# - name: bgp-advertisements
# labels:
# advertise: bgp
# spec:
# advertisements:
# # - advertisementType: "PodCIDR"
# # attributes:
# # communities:
# # standard: [ "64512:99" ]
# - advertisementType: "Service"
# service:
# addresses:
# - ClusterIP
# - ExternalIP
# - LoadBalancerIP
# selector:
# matchExpressions:
# - {key: somekey, operator: NotIn, values: ['never-used-value']}
# -- Configure BGP Node Config Overrides (New bgpv2 API v1.16+)
# cilium_bgp_node_config_overrides:
# - name: bgp-node-config-override
# spec:
# bgpInstances:
# - name: "instance-65000"
# routerID: "192.168.10.1"
# localPort: 1790
# peers:
# - name: "peer-65000-tor1"
# localAddress: fd00:10:0:2::2
# - name: "peer-65000-tor2"
# localAddress: fd00:11:0:2::2
# -- Configure BGP Peers (Legacy v1.16+)
# cilium_bgp_peering_policies:
# - name: "01-bgp-peering-policy"
# spec:
# virtualRouters:
# - localASN: 64512
# exportPodCIDR: false
# neighbors:
# - peerAddress: '10.47.1.1/24'
# peerASN: 64512
# serviceSelector:
# matchExpressions:
# - {key: somekey, operator: NotIn, values: ['never-used-value']}
# -- Configure whether direct routing mode should route traffic via
# host stack (true) or directly and more efficiently out of BPF (false) if
# the kernel supports it. The latter has the implication that it will also

View File

@@ -1,11 +0,0 @@
# Reservation for control plane kubernetes components
# kube_memory_reserved: 512Mi
# kube_cpu_reserved: 200m
# kube_ephemeral_storage_reserved: 2Gi
# kube_pid_reserved: "1000"
# Reservation for control plane host system
# system_memory_reserved: 256Mi
# system_cpu_reserved: 250m
# system_ephemeral_storage_reserved: 2Gi
# system_pid_reserved: "1000"

View File

@@ -1,20 +1,38 @@
-# This inventory describe a HA typology with stacked etcd (== same nodes as control plane)
+# ## Configure 'ip' variable to bind kubernetes services on a
-# and 3 worker nodes
+# ## different ip than the default iface
-# See https://docs.ansible.com/ansible/latest/inventory_guide/intro_inventory.html
+# ## We should set etcd_member_name for etcd cluster. The node that is not a etcd member do not need to set the value, or can set the empty string value.
-# for tips on building your # inventory
+[all]
# Configure 'ip' variable to bind kubernetes services on a different ip than the default iface
# We should set etcd_member_name for etcd cluster. The node that are not etcd members do not need to set the value,
# or can set the empty string value.
[kube_control_plane]
# node1 ansible_host=95.54.0.12 # ip=10.3.0.1 etcd_member_name=etcd1
# node2 ansible_host=95.54.0.13 # ip=10.3.0.2 etcd_member_name=etcd2
# node3 ansible_host=95.54.0.14 # ip=10.3.0.3 etcd_member_name=etcd3
# node4 ansible_host=95.54.0.15 # ip=10.3.0.4 etcd_member_name=etcd4
# node5 ansible_host=95.54.0.16 # ip=10.3.0.5 etcd_member_name=etcd5
# node6 ansible_host=95.54.0.17 # ip=10.3.0.6 etcd_member_name=etcd6
-[etcd:children]
+# ## configure a bastion host if your nodes are not directly reachable
-kube_control_plane
+# [bastion]
# bastion ansible_host=x.x.x.x ansible_user=some_user
[kube_control_plane]
# node1
# node2
# node3
[etcd]
# node1
# node2
# node3
[kube_node]
-# node4 ansible_host=95.54.0.15 # ip=10.3.0.4
+# node2
-# node5 ansible_host=95.54.0.16 # ip=10.3.0.5
+# node3
-# node6 ansible_host=95.54.0.17 # ip=10.3.0.6
+# node4
# node5
# node6
[calico_rr]
[k8s_cluster:children]
kube_control_plane
kube_node
calico_rr

View File

@@ -0,0 +1,8 @@
---
apiVersion: v1
kind: Pod
metadata:
name: kube-controller-manager
annotations:
prometheus.io/scrape: 'true'
prometheus.io/port: '10257'

View File

@@ -0,0 +1,8 @@
---
apiVersion: v1
kind: Pod
metadata:
name: kube-scheduler
annotations:
prometheus.io/scrape: 'true'
prometheus.io/port: '10259'

View File

@@ -2,33 +2,52 @@
- name: Check ansible version
import_playbook: ansible_version.yml
-# These are inventory compatibility tasks with two purposes:
+# These are inventory compatibility tasks to ensure we keep compatibility with old style group names
# - to ensure we keep compatibility with old style group names
# - to reduce inventory boilerplate (defining parent groups / empty groups)
-- name: Define groups for legacy less structured inventories
+- name: Add kube-master nodes to kube_control_plane
-hosts: all
+hosts: kube-master
gather_facts: false
tags: always
tasks:
-- name: Match needed groups by their old names or definition
+- name: Add nodes to kube_control_plane group
vars:
group_mappings:
kube_control_plane:
- kube-master
kube_node:
- kube-node
calico_rr:
- calico-rr
no_floating:
- no-floating
k8s_cluster:
- kube_node
- kube_control_plane
- calico_rr
group_by:
-key: "{{ (group_names | intersect(item.value) | length > 0) | ternary(item.key, '_all') }}"
+key: 'kube_control_plane'
loop: "{{ group_mappings | dict2items }}"
- name: Add kube-node nodes to kube_node
hosts: kube-node
gather_facts: false
tags: always
tasks:
- name: Add nodes to kube_node group
group_by:
key: 'kube_node'
- name: Add k8s-cluster nodes to k8s_cluster
hosts: k8s-cluster
gather_facts: false
tags: always
tasks:
- name: Add nodes to k8s_cluster group
group_by:
key: 'k8s_cluster'
- name: Add calico-rr nodes to calico_rr
hosts: calico-rr
gather_facts: false
tags: always
tasks:
- name: Add nodes to calico_rr group
group_by:
key: 'calico_rr'
- name: Add no-floating nodes to no_floating
hosts: no-floating
gather_facts: false
tags: always
tasks:
- name: Add nodes to no-floating group
group_by:
key: 'no_floating'
- name: Install bastion ssh config
hosts: bastion[0]

View File

@@ -10,8 +10,8 @@
# fail. bootstrap-os fixes this on these systems, so in later plays it can be enabled.
ansible_ssh_pipelining: false
roles:
- { role: bootstrap-os, tags: bootstrap-os}
- { role: kubespray-defaults }
- { role: bootstrap-os, tags: bootstrap-os}
- name: Gather facts
hosts: k8s_cluster:etcd:calico_rr

View File

@@ -34,11 +34,10 @@
roles:
- { role: kubespray-defaults, when: reset_nodes | default(True) | bool }
- { role: remove-node/pre-remove, tags: pre-remove }
-- role: remove-node/remove-etcd-node
+- { role: remove-node/remove-etcd-node }
when: "'etcd' in group_names"
- { role: reset, tags: reset, when: reset_nodes | default(True) | bool }
-# Currently cannot remove first control plane node or first etcd node
+# Currently cannot remove first master or etcd
- name: Post node removal
hosts: "{{ node | default('kube_control_plane[1:]:etcd[1:]') }}"
gather_facts: false

View File

@@ -38,7 +38,7 @@
- name: Install etcd
import_playbook: install_etcd.yml
-- name: Handle upgrades to control plane components first to maintain backwards compat.
+- name: Handle upgrades to master components first to maintain backwards compat.
gather_facts: false
hosts: kube_control_plane
any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
@@ -60,7 +60,7 @@
- { role: kubernetes-apps, tags: csi-driver }
- { role: upgrade/post-upgrade, tags: post-upgrade }
-- name: Upgrade calico and external cloud provider on all control plane nodes, calico-rrs, and nodes
+- name: Upgrade calico and external cloud provider on all masters, calico-rrs, and nodes
hosts: kube_control_plane:calico_rr:kube_node
gather_facts: false
any_errors_fatal: "{{ any_errors_fatal | default(true) }}"

View File

@@ -1,7 +1,7 @@
-ansible==9.13.0
+ansible==9.8.0
# Needed for community.crypto module
cryptography==44.0.2
# Needed for jinja2 json_query templating
jmespath==1.0.1
# Needed for ansible.utils.validate module
jsonschema==4.23.0
# Needed for ansible.utils.ipaddr
netaddr==1.3.0

View File

@@ -21,7 +21,6 @@ fedora_coreos_packages:
- ethtool # required in kubeadm preflight phase for verifying the environment
- ipset # required in kubeadm preflight phase for verifying the environment
- conntrack-tools # required by kube-proxy
- containernetworking-plugins # required by crio
## General
# Set the hostname to inventory_hostname

View File

@@ -14,14 +14,3 @@
enabled: true
repo_gpgcheck: false
when: epel_enabled
# iproute is required for networking related facts gathering
# See https://docs.ansible.com/ansible/latest/playbook_guide/playbooks_vars_facts.html#package-requirements-for-fact-gathering
# Note: It is not recommended way, but since the tasks execution order, put it here is the simplest way so far. We can move it to a proper place later.
# TODO: move this to roles/kubernetes/preinstall/vars/main.yml -> pkgs variables
# Currently not possible because the collect the network facts before that step, needs reordering of the exec flow.
- name: Ensure iproute is installed
package:
name: iproute
state: present
become: true

View File

@@ -116,14 +116,3 @@
name: "{{ ((ansible_distribution_major_version | int) < 8) | ternary('libselinux-python', 'python3-libselinux') }}" name: "{{ ((ansible_distribution_major_version | int) < 8) | ternary('libselinux-python', 'python3-libselinux') }}"
state: present state: present
become: true become: true
# iproute is required for networking related facts gathering
# See https://docs.ansible.com/ansible/latest/playbook_guide/playbooks_vars_facts.html#package-requirements-for-fact-gathering
# Note: It is not recommended way, but since the tasks execution order, put it here is the simplest way so far. We can move it to a proper place later.
# TODO: move this to roles/kubernetes/preinstall/vars/main.yml -> pkgs variables
# Currently not possible because the collect the network facts before that step, needs reordering of the exec flow.
- name: Ensure iproute is installed
package:
name: iproute
state: present
become: true

View File

@@ -14,14 +14,3 @@
daemon_reload: true
state: started
become: true
# iproute2 is required for networking related facts gathering
# See https://docs.ansible.com/ansible/latest/playbook_guide/playbooks_vars_facts.html#package-requirements-for-fact-gathering
# Note: It is not recommended way, but since the tasks execution order, put it here is the simplest way so far. We can move it to a proper place later.
# TODO: move this to roles/kubernetes/preinstall/vars/main.yml -> pkgs variables
# Currently not possible because the collect the network facts before that step, needs reordering of the exec flow.
- name: Ensure iproute2 is installed
package:
name: iproute2
state: present
become: true

View File

@@ -62,14 +62,3 @@
- '"changed its" in bootstrap_update_apt_result.stdout' - '"changed its" in bootstrap_update_apt_result.stdout'
- '"value from" in bootstrap_update_apt_result.stdout' - '"value from" in bootstrap_update_apt_result.stdout'
ignore_errors: true ignore_errors: true
# iproute2 is required for networking related facts gathering
# See https://docs.ansible.com/ansible/latest/playbook_guide/playbooks_vars_facts.html#package-requirements-for-fact-gathering
# Note: It is not recommended way, but since the tasks execution order, put it here is the simplest way so far. We can move it to a proper place later.
# TODO: move this to roles/kubernetes/preinstall/vars/main.yml -> pkgs variables
# Currently not possible because the collect the network facts before that step, needs reordering of the exec flow.
- name: Ensure iproute2 is installed
package:
name: iproute2
state: present
become: true

View File

@@ -28,14 +28,3 @@
become: true
when:
- need_bootstrap.rc != 0
# iproute is required for networking related facts gathering
# See https://docs.ansible.com/ansible/latest/playbook_guide/playbooks_vars_facts.html#package-requirements-for-fact-gathering
# Note: It is not recommended way, but since the tasks execution order, put it here is the simplest way so far. We can move it to a proper place later.
# TODO: move this to roles/kubernetes/preinstall/vars/main.yml -> pkgs variables
# Currently not possible because the collect the network facts before that step, needs reordering of the exec flow.
- name: Ensure iproute is installed
package:
name: iproute
state: present
become: true

View File

@@ -23,7 +23,7 @@
- name: Make interpreter discovery works on Flatcar
set_fact:
-ansible_interpreter_python_fallback: "{{ (ansible_interpreter_python_fallback | default([])) + [ '/opt/bin/python' ] }}"
+ansible_interpreter_python_fallback: "{{ ansible_interpreter_python_fallback + [ '/opt/bin/python' ] }}"
- name: Disable auto-upgrade
systemd_service:

View File

@@ -83,15 +83,3 @@
- apparmor-parser
state: present
become: true
# iproute2 is required for networking related facts gathering
# See https://docs.ansible.com/ansible/latest/playbook_guide/playbooks_vars_facts.html#package-requirements-for-fact-gathering
# Note: It is not recommended way, but since the tasks execution order, put it here is the simplest way so far. We can move it to a proper place later.
# TODO: move this to roles/kubernetes/preinstall/vars/main.yml -> pkgs variables
# Currently not possible because the collect the network facts before that step, needs reordering of the exec flow.
- name: Ensure iproute2 is installed
community.general.zypper:
name: iproute2
state: present
update_cache: true
become: true

View File

@@ -100,14 +100,3 @@
name: "{{ ((ansible_distribution_major_version | int) < 8) | ternary('libselinux-python', 'python3-libselinux') }}" name: "{{ ((ansible_distribution_major_version | int) < 8) | ternary('libselinux-python', 'python3-libselinux') }}"
state: present state: present
become: true become: true
# iproute is required for networking related facts gathering
# See https://docs.ansible.com/ansible/latest/playbook_guide/playbooks_vars_facts.html#package-requirements-for-fact-gathering
# Note: It is not recommended way, but since the tasks execution order, put it here is the simplest way so far. We can move it to a proper place later.
# TODO: move this to roles/kubernetes/preinstall/vars/main.yml -> pkgs variables
# Currently not possible because the collect the network facts before that step, needs reordering of the exec flow.
- name: Ensure iproute is installed
package:
name: iproute
state: present
become: true

View File

@@ -63,7 +63,7 @@ containerd_registries_mirrors:
capabilities: ["pull", "resolve"] capabilities: ["pull", "resolve"]
skip_verify: false skip_verify: false
containerd_max_container_log_line_size: 16384 containerd_max_container_log_line_size: -1
# If enabled it will allow non root users to use port numbers <1024 # If enabled it will allow non root users to use port numbers <1024
containerd_enable_unprivileged_ports: false containerd_enable_unprivileged_ports: false
@@ -93,6 +93,9 @@ containerd_limit_core: "infinity"
containerd_limit_open_file_num: "infinity"
containerd_limit_mem_lock: "infinity"
# If enabled it will use config_path and config to be put in {{ containerd_cfg_dir }}/certs.d/
containerd_use_config_path: false
# OS distributions that already support containerd
containerd_supported_distributions:
- "CentOS"

Some files were not shown because too many files have changed in this diff.