Compare commits


51 Commits

Author SHA1 Message Date
github-actions[bot]
d8153cfa3b Patch versions updates 2026-01-31 03:10:22 +00:00
k8s-infra-cherrypick-robot
46a1b887cd wait for control plane node to become ready after joining (#12923)
When joining a control plane node and "upgrading" the cluster setup (for
example, to update etcd addresses after adding a new etcd) in the same
playbook run, the node can take a bit of time to become ready after
joining.
This trips a kubeadm preflight check (ControlPlaneNodesReady) in
kubeadm upgrade, which runs directly after the join tasks.

Add a configurable wait for the control plane node to become Ready to
fix this race condition.
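A sketch of what such a wait could look like (task and variable names here are illustrative, not necessarily the ones the PR introduces):

```yml
- name: Wait for joined control plane node to become Ready
  command: >-
    {{ bin_dir }}/kubectl --kubeconfig /etc/kubernetes/admin.conf
    wait --for=condition=Ready node/{{ kube_override_hostname | default(inventory_hostname) }}
    --timeout={{ control_plane_ready_timeout | default('300s') }}
  changed_when: false
```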

Co-authored-by: Max Gautier <mg@max.gautier.name>
2026-01-29 15:49:50 +05:30
k8s-infra-cherrypick-robot
5ee67297bc etcd-certs: only change necessary permissions (#12915)
We currently **recursively** set the permissions of /etc/ssl/etcd/ssl
(default path) to 700. But this removes group permissions from the files
under it, and certain components (like Calico with the etcd datastore) rely
on them; thus, upgrading a cluster can fail because
calico-kube-controllers can't access the certs, and therefore etcd.

This works in other cases because, as far as I can tell, the apiserver,
which does access etcd, runs as root (the owner of the files, not just
the "group owner").

We also, for some reason, do this twice.

Only create the etcd cert directory with the correct permissions once,
not recursively.
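A minimal sketch of the change, using the Ansible `file` module (paths and task names are illustrative; the real role takes them from its own variables):

```yml
# Before (problematic): recursive chmod strips group permissions
# from the cert files under the directory
- name: Set etcd cert dir permissions
  file:
    path: /etc/ssl/etcd/ssl
    state: directory
    mode: "0700"
    recurse: true

# After: only create the directory itself with the right mode, once
- name: Create etcd cert dir
  file:
    path: /etc/ssl/etcd/ssl
    state: directory
    mode: "0700"
```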

Co-authored-by: Max Gautier <mg@max.gautier.name>
2026-01-27 20:31:52 +05:30
Max Gautier
41d4ae9894 Patch versions updates (#12896)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2026-01-23 21:35:31 +05:30
k8s-infra-cherrypick-robot
5979110c2c cri-o: fix duplicate top-level "auths" keys in registry config template (#12885)
The config.json.j2 template was generating invalid JSON when multiple
crio_registry_auth entries were defined, resulting in multiple top-level
"auths" objects being rendered, e.g.:

{
  "auths": { "registry1": { "auth": "xxxx" } },
  "auths": { "registry2": { "auth": "yyyy" } }
}

This change moves the loop inside the "auths" object so that all registries
are rendered as siblings under a single "auths" key, producing valid JSON:

{
  "auths": {
    "registry1": { "auth": "xxxx" },
    "registry2": { "auth": "yyyy" }
  }
}
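The fix amounts to moving the `for` loop inside the `"auths"` object, roughly like this (a sketch; the real `config.json.j2` and the fields of `crio_registry_auth` entries may differ):

```jinja
{
  "auths": {
{% for registry in crio_registry_auth %}
    "{{ registry.registry }}": {
      "auth": "{{ (registry.username ~ ':' ~ registry.password) | b64encode }}"
    }{{ "," if not loop.last }}
{% endfor %}
  }
}
```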

Co-authored-by: Martin Cahill <martin.cahill@gmail.com>
2026-01-20 19:58:51 +05:30
Max Gautier
183330f93d Patch versions updates (#12856)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2026-01-14 13:05:36 +05:30
k8s-infra-cherrypick-robot
7bc37640d7 Fix ansible-lint config error (#12864)
Co-authored-by: Max Gautier <mg@max.gautier.name>
2026-01-13 20:27:38 +05:30
Max Gautier
9ccd606ac1 Patch versions updates (#12802)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-12-20 21:08:33 -08:00
k8s-infra-cherrypick-robot
84dbb84386 Let containerd create storage / state dir (#12807)
Containerd manages these directories by itself, so there is no need to
override them and change permissions.

Co-authored-by: Max Gautier <mg@max.gautier.name>
2025-12-20 06:44:30 -08:00
Max Gautier
360ea8238e Patch versions updates (#12745)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-11-25 01:08:37 -08:00
k8s-infra-cherrypick-robot
c6050f4b10 [release-2.28] CI: enable unsafe_show_logs == true by default (#12727)
* CI: enable unsafe_show_logs == true by default

* Deduplicate defaults vars (unsafe_show_logs)

---------

Co-authored-by: Max Gautier <mg@max.gautier.name>
2025-11-19 23:50:00 -08:00
Max Gautier
0140fda589 Patch versions updates (#12720)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-11-18 02:13:39 -08:00
Max Gautier
ebf9fedf9b Remove etcd member by peerURLs (#12690)
The way to obtain the IP of a particular member is convoluted and depends
on multiple variables. The match is also textual, and it's not clear
what we're matching against.

It's also broken for etcd members which are not also Kubernetes nodes,
because the "Lookup node IP in kubernetes" task will fail and abort the
play.

Instead, match against 'peerURLs', which does not need a new variable, and
use JSON output.

- Add testcase for etcd removal on external etcd
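An illustrative shape of the new lookup (names are hypothetical, and endpoint/TLS flags for etcdctl are omitted; note that `etcdctl member list --write-out=json` prints member IDs as decimal integers, while `member remove` expects the hex form):

```yml
- name: List etcd members as JSON
  command: "{{ bin_dir }}/etcdctl member list --write-out=json"
  register: etcd_member_list

- name: Remove the member whose peerURLs match this host
  vars:
    member_id: >-
      {{ (etcd_member_list.stdout | from_json).members
         | selectattr('peerURLs', 'contains', etcd_peer_url)
         | map(attribute='ID') | first }}
  command: "{{ bin_dir }}/etcdctl member remove {{ '%x' | format(member_id | int) }}"
```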
2025-11-17 02:57:40 -08:00
Max Gautier
f15f58b5b9 Update pre-commit hooks (#12699) 2025-11-15 21:49:36 -08:00
k8s-infra-cherrypick-robot
5505943ddc [release-2.28] CI: Try a full ssh connection on hosts instead of only checking the port (#12710)
* CI: Try a full ssh connection on hosts instead of only checking the port

If we only check the port, the playbook which is executed next can try
to connect even though the managed node has not yet completed its
boot-up sequence ("System is booting up. Unprivileged users are not
permitted to log in yet. Please come back later. For technical details,
see pam_nologin(8).")

This does not account for python-less hosts, but we don't use those in
CI anyway (for now, at least).

* CI: Remove connection method override when creating VMs

This prevented wait_for_connection from working correctly by hijacking the
connection to localhost, thus bypassing the connection check.
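In Ansible terms, this is the difference between probing the port with `wait_for` and making a real login with `wait_for_connection` (a sketch):

```yml
# Only checks that sshd answers on the port; PAM may still refuse logins
- name: Wait for SSH port
  wait_for:
    port: 22
    host: "{{ ansible_host }}"
  delegate_to: localhost

# Performs a full connection (and module execution) on the managed node
- name: Wait for the host to actually accept connections
  wait_for_connection:
    timeout: 300
```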

---------

Co-authored-by: Max Gautier <mg@max.gautier.name>
2025-11-15 12:39:36 -08:00
k8s-infra-cherrypick-robot
f9e82eb6f5 fix(cilium):correct loadBalancer.mode rendering in values.yaml (#12704)
Co-authored-by: Anurag Ojha <aojharaj2004@gmail.com>
2025-11-14 07:03:41 -08:00
Ali Afsharzadeh
3320c329e5 Upgrade cilium from 1.17.8 to 1.17.9 (#12648) 2025-10-29 19:54:01 -07:00
Ali Afsharzadeh
8f5e599cac [release-2.28] Upgrade cilium from 1.17.7 to 1.17.8 (#12570)
Signed-off-by: Ali Afsharzadeh <afsharzadeh8@gmail.com>
2025-10-23 23:07:35 -07:00
k8s-infra-cherrypick-robot
bb91baf1cb Fix breakage when ignoring all kubeadm preflight errors (#12617)
kubeadm errors out if 'all' is specified together with specific checks, so
handle that case when we add hardcoded checks.

Add a test to catch regression.
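A hedged sketch of the guard (variable names are illustrative, not the ones in the role):

```yml
# kubeadm rejects 'all' combined with named checks,
# so pass 'all' through alone when the user asked for it
kubeadm_effective_ignore_preflight_errors: >-
  {{ ['all'] if 'all' in user_ignore_preflight_errors
     else (user_ignore_preflight_errors + hardcoded_preflight_checks) | unique }}
```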

Co-authored-by: Max Gautier <mg@max.gautier.name>
2025-10-13 21:45:32 -07:00
k8s-infra-cherrypick-robot
40e015f98f Fix cilium_policy_audit_mode variable (#12612)
Signed-off-by: Ali Afsharzadeh <afsharzadeh8@gmail.com>
Co-authored-by: Ali Afsharzadeh <afsharzadeh8@gmail.com>
2025-10-12 03:42:52 -07:00
Max Gautier
0a975f6447 Patch versions updates (#12607)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-10-09 09:57:01 -07:00
k8s-infra-cherrypick-robot
4471512232 Make main_ip cacheable in facts (#12601)
Co-authored-by: fox0430 <shuu.n@protonmail.com>
2025-10-09 02:23:03 -07:00
k8s-infra-cherrypick-robot
2336d54088 Fix calico etcd mode networkpolicy RBAC (#12587)
Co-authored-by: Chad Swenson <chadswen@gmail.com>
2025-09-29 10:36:17 -07:00
ChengHao Yang
e52f788942 Add proxy_env to cilium install task for proxy (#12530)
Signed-off-by: ChengHao Yang <17496418+tico88612@users.noreply.github.com>
2025-09-09 03:53:30 -07:00
Chad Swenson
ad7ab2cd37 [release-2.28] Fix automated cherrypick failure in UpgradeConfiguration.node (#12524) 2025-09-03 08:25:16 -07:00
Max Gautier
4fd9e6d7a0 Patch versions updates (#12519)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-09-03 01:07:15 -07:00
ChengHao Yang
ba4044b90d Bump: Galaxy version to 2.28.2 (#12525)
Signed-off-by: ChengHao Yang <17496418+tico88612@users.noreply.github.com>
2025-09-02 23:15:14 -07:00
k8s-infra-cherrypick-robot
a20891ab67 Fix SAN address collection from ansible_default_ipv{4,6} (#12505)
Signed-off-by: Hyeonki Hong <hhk7734@gmail.com>
Co-authored-by: Hyeonki Hong <hhk7734@gmail.com>
2025-08-26 03:02:11 -07:00
Max Gautier
0858e46dc6 Patch versions updates (#12504)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-08-26 02:36:11 -07:00
ChengHao Yang
d695115061 Fix: constant etcd_supported_version to dynamic (#12499)
Signed-off-by: ChengHao Yang <17496418+tico88612@users.noreply.github.com>
2025-08-25 07:15:10 -07:00
k8s-infra-cherrypick-robot
9a09ac5a40 Fix: Change "empty" definition for PodSecurity Admission configuration (#12478)
Fixes a bug where `kube-apiserver` fails to start if the PodSecurity
configuration file doesn't have the `apiVersion` and `kind` keys.
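For reference, a valid non-empty PodSecurity admission configuration carries `apiVersion` and `kind` at both levels, e.g.:

```yml
apiVersion: apiserver.config.k8s.io/v1
kind: AdmissionConfiguration
plugins:
  - name: PodSecurity
    configuration:
      apiVersion: pod-security.admission.config.k8s.io/v1
      kind: PodSecurityConfiguration
      defaults:
        enforce: baseline
        enforce-version: latest
```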

Signed-off-by: Alejandro Macedo <alex.macedopereira@gmail.com>
Co-authored-by: Alejandro Macedo <alex.macedopereira@gmail.com>
2025-08-19 04:51:36 -07:00
k8s-infra-cherrypick-robot
3a1a2bd4f4 Argo CD : checksum support for the install url (#12467)
Fixes https://github.com/kubernetes-sigs/kubespray/issues/12223
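The Ansible `get_url` module supports checksum verification natively; a sketch of the idea (variable names are hypothetical):

```yml
- name: Download Argo CD install manifest with checksum verification
  get_url:
    url: "{{ argocd_install_url }}"
    dest: "{{ local_release_dir }}/argocd-install.yaml"
    checksum: "sha256:{{ argocd_install_url_checksum }}"
```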

Co-authored-by: Romain Lalaut <rlalaut@proton.me>
2025-08-17 20:09:07 -07:00
Ali Afsharzadeh
842e352767 [release-2.28] Upgrade cilium from 1.17.3 to 1.17.7 (#12471)
Signed-off-by: Ali Afsharzadeh <afsharzadeh8@gmail.com>
2025-08-17 19:11:07 -07:00
ChengHao Yang
4bb24b253c Patch versions updates (#12462)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-08-15 13:23:07 -07:00
k8s-infra-cherrypick-robot
f815c80139 bug: fix missing cilium_enable_bgp_control_plane config (#12432)
Co-authored-by: XuhuiSun95 <ericsun1995@gmail.com>
2025-07-26 22:10:27 -07:00
Max Gautier
86fcc2ba59 Patch versions updates (#12431)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-07-25 01:18:28 -07:00
k8s-infra-cherrypick-robot
e7cf546f37 Fix#12385 cilium typo (#12404)
Signed-off-by: wangsifei99 <wangsifei@kylinos.cn>
Co-authored-by: wangsifei99 <wangsifei@kylinos.cn>
2025-07-15 08:14:24 -07:00
k8s-infra-cherrypick-robot
a59585b6dc Fixed syntax error in _bgp_config dict (#12394)
Co-authored-by: mathgaming <mathgaming@outlook.com>
2025-07-11 00:31:28 -07:00
k8s-infra-cherrypick-robot
2d0cc2b4ea Fix kubeadm upgrade node skipPhases with multiple CP nodes (#12384)
Add 1.32 conditional defaults

Restore support for kubeadm upgrade node --skip-phases < 1.32, apply still needs to be restricted

Co-authored-by: Chad Swenson <chadswen@gmail.com>
2025-07-07 23:35:26 -07:00
k8s-infra-cherrypick-robot
9a08afa3f9 [release-2.28] Cilium: Pass cluster DNS to hubble.peerService in values.yaml.j2 (#12374)
* cilium: pass cluster DNS to hubble.peerService in values.yaml.j2

* Add dedicated Hubble variable defaulting to inventory cluster domain

---------

Co-authored-by: Mustafa Mertcan CAM <mertcancam@gmail.com>
2025-07-04 01:23:25 -07:00
k8s-infra-cherrypick-robot
22e9335484 fix(kubeadm): Conditionally add --skip-phases flag for v1.32.0+ (#12354)
Signed-off-by: bo.jiang <bo.jiang@daocloud.io>
Co-authored-by: bo.jiang <bo.jiang@daocloud.io>
2025-06-28 05:44:29 -07:00
k8s-infra-cherrypick-robot
4789e9dd89 fix ETCD_INITIAL_CLUSTER config in etcd.env and etcd-events.env (#12352)
Co-authored-by: liuxu <liuxu623@gmail.com>
2025-06-27 23:54:29 -07:00
k8s-infra-cherrypick-robot
9a86253beb fix: add cilium extraConfig values (#12338)
Co-authored-by: atobaum <atobaum@gmail.com>
2025-06-24 00:00:31 -07:00
ChengHao Yang
1e76d9113b Patch versions updates (#12330)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-06-23 19:22:28 -07:00
k8s-infra-cherrypick-robot
d7c00ce698 [release-2.28] Fix: the cluster is upgraded from 2.27 to 2.28 cilium will break (#12324)
* Fix: add cilium remove old resources option

Give users two options: besides skipping Cilium entirely, add
`cilium_remove_old_resources`, defaulting to `false`. When set to `true`,
it removes the resources of the old version, but this causes downtime,
so it needs to be used with care.
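Illustratively, the upgrade-time choice looks like:

```yml
# Opt-in cleanup of the old Cilium version's resources during upgrade.
# Removing them causes downtime for Cilium-managed networking.
cilium_remove_old_resources: true   # default: false
```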

Signed-off-by: ChengHao Yang <17496418+tico88612@users.noreply.github.com>

* Fix: if a Cilium release exists, the action will be set to upgrade

`cilium install` is equivalent to `helm install`, so it will fail if a
Cilium release already exists. `cilium version` can detect an existing
release without the helm binary.

Signed-off-by: ChengHao Yang <17496418+tico88612@users.noreply.github.com>

---------

Signed-off-by: ChengHao Yang <17496418+tico88612@users.noreply.github.com>
Co-authored-by: ChengHao Yang <17496418+tico88612@users.noreply.github.com>
2025-06-19 20:16:51 -07:00
k8s-infra-cherrypick-robot
cf6e96deb0 Add version pinning for AWS tf provider to fix CI (#12327)
Co-authored-by: Chad Swenson <chadswen@gmail.com>
2025-06-19 19:52:52 -07:00
k8s-infra-cherrypick-robot
0e5c532c9e Skip kube-proxy addon phase during kubeadm upgrade if disabled (#12320)
Co-authored-by: Roman Davydchenko <xatteg@gmail.com>
2025-06-18 04:42:51 -07:00
k8s-infra-cherrypick-robot
a8f5277628 fix manage-offline-container-images.sh get image_id (#12316)
Co-authored-by: DearJay <zhongtianjieyi143@gmail.com>
2025-06-15 07:42:58 -07:00
k8s-infra-cherrypick-robot
1290466c53 Add tico88612 as approver (#12292)
Signed-off-by: ChengHao Yang <17496418+tico88612@users.noreply.github.com>
Co-authored-by: ChengHao Yang <17496418+tico88612@users.noreply.github.com>
2025-06-10 10:06:24 -07:00
k8s-infra-cherrypick-robot
6ec991e772 Fix indentation issue in Cilium values file and ensure booleans are lowercase (#12283)
This patch fixes the indentation in the `encryption` section.
Previously configuration like this:

```yml
cilium_encryption_enabled: true
cilium_encryption_type: wireguard
```

Would template to a `values.yaml` file with indentation that looks like this:

```yml
encryption:
  enabled: True
    type: wireguard
    nodeEncryption: False
```

instead of this:

```yml
encryption:
  enabled: true
  type: wireguard
  nodeEncryption: false
```

This syntax issue causes an error during Cilium installation.

This patch also makes all boolean values in this template file go through the `to_json` filter.
Since values like `True` and `False` are not compliant with the YAML v1.2 spec,
avoiding them is preferable.

`to_json` may be used for all other values in this template to ensure we end up with
a valid YAML document in all cases (even when various strings include special characters),
but this was left for another (future) patch.
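The corrected template region plausibly looks like this (a sketch with illustrative variable names; the point is sibling keys at the same indent and booleans passed through `to_json`):

```jinja
encryption:
  enabled: {{ cilium_encryption_enabled | to_json }}
  type: {{ cilium_encryption_type }}
  nodeEncryption: {{ cilium_encryption_node | default(false) | to_json }}
```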

Co-authored-by: Slavi Pantaleev <slavi@devture.com>
2025-06-10 07:50:24 -07:00
ChengHao Yang
a12e53e845 Bump galaxy.yml version (#12290)
Signed-off-by: ChengHao Yang <17496418+tico88612@users.noreply.github.com>
2025-06-10 06:44:25 -07:00
275 changed files with 3163 additions and 3345 deletions

View File

@@ -11,12 +11,10 @@ skip_list:
# (Disabled in June 2021) # (Disabled in June 2021)
- 'role-name' - 'role-name'
# [var-naming] # [var-naming] "defaults/main.yml" File defines variable 'apiVersion' that violates variable naming standards
# In Kubespray we use variables that use camelCase to match their k8s counterparts # In Kubespray we use variables that use camelCase to match their k8s counterparts
# (Disabled in June 2021) # (Disabled in June 2021)
- 'var-naming[pattern]' - 'var-naming'
# Variables names from within roles in kubespray don't need role name as a prefix
- 'var-naming[no-role-prefix]'
# [fqcn-builtins] # [fqcn-builtins]
# Roles in kubespray don't need fully qualified collection names # Roles in kubespray don't need fully qualified collection names

View File

@@ -108,6 +108,7 @@ body:
- meta - meta
- multus - multus
- ovn4nfv - ovn4nfv
- weave
validations: validations:
required: true required: true

View File

@@ -13,7 +13,7 @@ jobs:
issues: write issues: write
steps: steps:
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 - uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
- name: Parse issue form - name: Parse issue form
uses: stefanbuck/github-issue-parser@2ea9b35a8c584529ed00891a8f7e41dc46d0441e uses: stefanbuck/github-issue-parser@2ea9b35a8c584529ed00891a8f7e41dc46d0441e
@@ -22,7 +22,7 @@ jobs:
template-path: .github/ISSUE_TEMPLATE/bug-report.yaml template-path: .github/ISSUE_TEMPLATE/bug-report.yaml
- name: Set labels based on OS field - name: Set labels based on OS field
uses: redhat-plumbers-in-action/advanced-issue-labeler@e38e6809c5420d038eed380d49ee9a6ca7c92dbf uses: redhat-plumbers-in-action/advanced-issue-labeler@39087a4b30cb98d57f25f34d617a6af8163c17d9
with: with:
issue-form: ${{ steps.issue-parser.outputs.jsonString }} issue-form: ${{ steps.issue-parser.outputs.jsonString }}
section: os section: os

View File

@@ -20,7 +20,7 @@ jobs:
query get_release_branches($owner:String!, $name:String!) { query get_release_branches($owner:String!, $name:String!) {
repository(owner:$owner, name:$name) { repository(owner:$owner, name:$name) {
refs(refPrefix: "refs/heads/", refs(refPrefix: "refs/heads/",
first: 1, # TODO increment once we have release branch with the new checksums format first: 0, # TODO increment once we have release branch with the new checksums format
query: "release-", query: "release-",
orderBy: { orderBy: {
field: ALPHABETICAL, field: ALPHABETICAL,

View File

@@ -11,10 +11,10 @@ jobs:
update-patch-versions: update-patch-versions:
runs-on: ubuntu-latest runs-on: ubuntu-latest
steps: steps:
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 - uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
with: with:
ref: ${{ inputs.branch }} ref: ${{ inputs.branch }}
- uses: actions/setup-python@v6 - uses: actions/setup-python@v5
with: with:
python-version: '3.13' python-version: '3.13'
cache: 'pip' cache: 'pip'

View File

@@ -1,9 +1,9 @@
--- ---
stages: stages:
- build # build docker image used in most other jobs - build
- test # unit tests - test
- deploy-part1 # kubespray runs - common setup - deploy-part1
- deploy-extended # kubespray runs - rarer or costlier (to test) setups - deploy-extended
variables: variables:
FAILFASTCI_NAMESPACE: 'kargo-ci' FAILFASTCI_NAMESPACE: 'kargo-ci'
@@ -24,7 +24,6 @@ variables:
ANSIBLE_REMOTE_USER: kubespray ANSIBLE_REMOTE_USER: kubespray
ANSIBLE_PRIVATE_KEY_FILE: /tmp/id_rsa ANSIBLE_PRIVATE_KEY_FILE: /tmp/id_rsa
ANSIBLE_INVENTORY: /tmp/inventory ANSIBLE_INVENTORY: /tmp/inventory
ANSIBLE_STDOUT_CALLBACK: "default"
RESET_CHECK: "false" RESET_CHECK: "false"
REMOVE_NODE_CHECK: "false" REMOVE_NODE_CHECK: "false"
UPGRADE_TEST: "false" UPGRADE_TEST: "false"
@@ -32,7 +31,7 @@ variables:
ANSIBLE_VERBOSITY: 2 ANSIBLE_VERBOSITY: 2
RECOVER_CONTROL_PLANE_TEST: "false" RECOVER_CONTROL_PLANE_TEST: "false"
RECOVER_CONTROL_PLANE_TEST_GROUPS: "etcd[2:]:kube_control_plane[1:]" RECOVER_CONTROL_PLANE_TEST_GROUPS: "etcd[2:]:kube_control_plane[1:]"
OPENTOFU_VERSION: v1.9.1 TF_VERSION: 1.3.7
PIPELINE_IMAGE: "$CI_REGISTRY_IMAGE/pipeline:${CI_PIPELINE_ID}-${CI_COMMIT_SHORT_SHA}" PIPELINE_IMAGE: "$CI_REGISTRY_IMAGE/pipeline:${CI_PIPELINE_ID}-${CI_COMMIT_SHORT_SHA}"
before_script: before_script:
@@ -49,6 +48,8 @@ before_script:
- cluster-dump/ - cluster-dump/
needs: needs:
- pipeline-image - pipeline-image
variables:
ANSIBLE_STDOUT_CALLBACK: "debug"
.job-moderated: .job-moderated:
extends: .job extends: .job

View File

@@ -7,24 +7,23 @@ pipeline-image:
tags: tags:
- ffci - ffci
stage: build stage: build
image: moby/buildkit:rootless image:
name: gcr.io/kaniko-project/executor:debug
entrypoint: ['']
variables: variables:
BUILDKITD_FLAGS: --oci-worker-no-process-sandbox GODEBUG: "http2client=0"
CACHE_IMAGE: $CI_REGISTRY_IMAGE/pipeline:cache
# TODO: remove the override # TODO: remove the override
# currently rebase.sh depends on bash (not available in the kaniko image) # currently rebase.sh depends on bash (not available in the kaniko image)
# once we have a simpler rebase (which should be easy if the target branch ref is available as variable # once we have a simpler rebase (which should be easy if the target branch ref is available as variable
# we'll be able to rebase here as well hopefully # we'll be able to rebase here as well hopefully
before_script: before_script: []
- mkdir -p ~/.docker
- echo "{\"auths\":{\"$CI_REGISTRY\":{\"auth\":\"$(echo -n ${CI_REGISTRY_USER}:${CI_REGISTRY_PASSWORD} | base64)\"}}}" > ~/.docker/config.json
script: script:
- | - echo "{\"auths\":{\"$CI_REGISTRY\":{\"auth\":\"$(echo -n ${CI_REGISTRY_USER}:${CI_REGISTRY_PASSWORD} | base64)\"}}}" > /kaniko/.docker/config.json
buildctl-daemonless.sh build \ - /kaniko/executor --cache=true
--frontend dockerfile.v0 \ --cache-dir=image-cache
--local context=$CI_PROJECT_DIR \ --context $CI_PROJECT_DIR
--local dockerfile=$CI_PROJECT_DIR \ --dockerfile $CI_PROJECT_DIR/pipeline.Dockerfile
--opt filename=pipeline.Dockerfile \ --label 'git-branch'=$CI_COMMIT_REF_SLUG
--export-cache type=registry,ref=$CACHE_IMAGE \ --label 'git-tag=$CI_COMMIT_TAG'
--import-cache type=registry,ref=$CACHE_IMAGE \ --destination $PIPELINE_IMAGE
--output type=image,name=$PIPELINE_IMAGE,push=true --log-timestamp=true

View File

@@ -34,33 +34,34 @@ pr:
parallel: parallel:
matrix: matrix:
- TESTCASE: - TESTCASE:
- almalinux8-calico
- almalinux9-crio - almalinux9-crio
- almalinux9-kube-ovn - almalinux9-kube-ovn
- debian11-calico-collection - debian11-calico-collection
- debian11-macvlan - debian11-macvlan
- debian12-cilium - debian12-cilium
- debian13-cilium
- fedora39-kube-router - fedora39-kube-router
# FIXME: this test if broken (perma-failing)
- openeuler24-calico - openeuler24-calico
- opensuse15-6-calico
- rockylinux8-calico
- rockylinux9-cilium - rockylinux9-cilium
- ubuntu20-calico-all-in-one-hardening
- ubuntu20-cilium-sep
- ubuntu20-flannel-collection
- ubuntu20-kube-router-sep
- ubuntu20-kube-router-svc-proxy
- ubuntu22-calico-all-in-one - ubuntu22-calico-all-in-one
- ubuntu22-calico-all-in-one-upgrade - ubuntu22-calico-all-in-one-upgrade
- ubuntu24-calico-etcd-datastore - ubuntu24-calico-etcd-datastore
- ubuntu24-calico-all-in-one-hardening
- ubuntu24-cilium-sep
- ubuntu24-flannel-collection
- ubuntu24-kube-router-sep
- ubuntu24-kube-router-svc-proxy
- ubuntu24-ha-separate-etcd - ubuntu24-ha-separate-etcd
- flatcar4081-calico
- fedora40-flannel-crio-collection-scale
# The ubuntu24-calico-all-in-one jobs are meant as early stages to prevent running the full CI if something is horribly broken # The ubuntu20-calico-all-in-one jobs are meant as early stages to prevent running the full CI if something is horribly broken
ubuntu24-calico-all-in-one: ubuntu20-calico-all-in-one:
stage: deploy-part1 stage: deploy-part1
extends: .kubevirt extends: .kubevirt
variables: variables:
TESTCASE: ubuntu24-calico-all-in-one TESTCASE: ubuntu20-calico-all-in-one
rules: rules:
- if: $CI_COMMIT_BRANCH =~ /^pr-.*$/ - if: $CI_COMMIT_BRANCH =~ /^pr-.*$/
when: on_success when: on_success
@@ -90,9 +91,10 @@ pr_full:
- debian12-custom-cni-helm - debian12-custom-cni-helm
- fedora39-calico-swap-selinux - fedora39-calico-swap-selinux
- fedora39-crio - fedora39-crio
- ubuntu24-calico-ha-wireguard - ubuntu20-all-in-one-docker
- ubuntu24-flannel-ha - ubuntu20-calico-ha-wireguard
- ubuntu24-flannel-ha-once - ubuntu20-flannel-ha
- ubuntu20-flannel-ha-once
# Need an update of the container image to use schema v2 # Need an update of the container image to use schema v2
# update: quay.io/kubespray/vm-amazon-linux-2:latest # update: quay.io/kubespray/vm-amazon-linux-2:latest
@@ -125,13 +127,13 @@ pr_extended:
- debian11-docker - debian11-docker
- debian12-calico - debian12-calico
- debian12-docker - debian12-docker
- debian13-calico - opensuse15-6-docker-cilium
- rockylinux9-calico - rockylinux9-calico
- ubuntu20-calico-etcd-kubeadm
- ubuntu20-flannel
- ubuntu22-all-in-one-docker - ubuntu22-all-in-one-docker
- ubuntu24-all-in-one-docker - ubuntu24-all-in-one-docker
- ubuntu24-calico-all-in-one - ubuntu24-calico-all-in-one
- ubuntu24-calico-etcd-kubeadm
- ubuntu24-flannel
# TODO: migrate to pr-full, fix the broken ones # TODO: migrate to pr-full, fix the broken ones
periodic: periodic:
@@ -148,6 +150,6 @@ periodic:
- debian12-cilium-svc-proxy - debian12-cilium-svc-proxy
- fedora39-calico-selinux - fedora39-calico-selinux
- fedora40-docker-calico - fedora40-docker-calico
- ubuntu24-calico-etcd-kubeadm-upgrade-ha - ubuntu20-calico-etcd-kubeadm-upgrade-ha
- ubuntu24-calico-ha-recover - ubuntu20-calico-ha-recover
- ubuntu24-calico-ha-recover-noquorum - ubuntu20-calico-ha-recover-noquorum

View File

@@ -6,7 +6,6 @@ pre-commit:
image: 'ghcr.io/pre-commit-ci/runner-image@sha256:fe01a6ec51b298412990b88627c3973b1146c7304f930f469bafa29ba60bcde9' image: 'ghcr.io/pre-commit-ci/runner-image@sha256:fe01a6ec51b298412990b88627c3973b1146c7304f930f469bafa29ba60bcde9'
variables: variables:
PRE_COMMIT_HOME: ${CI_PROJECT_DIR}/.cache/pre-commit PRE_COMMIT_HOME: ${CI_PROJECT_DIR}/.cache/pre-commit
ANSIBLE_STDOUT_CALLBACK: default
script: script:
- pre-commit run --all-files --show-diff-on-failure - pre-commit run --all-files --show-diff-on-failure
cache: cache:
@@ -24,3 +23,4 @@ vagrant-validate:
VAGRANT_VERSION: 2.3.7 VAGRANT_VERSION: 2.3.7
script: script:
- ./tests/scripts/vagrant-validate.sh - ./tests/scripts/vagrant-validate.sh
except: ['triggers', 'master']

View File

@@ -34,8 +34,6 @@ molecule:
- container-engine/cri-dockerd - container-engine/cri-dockerd
- container-engine/containerd - container-engine/containerd
- container-engine/cri-o - container-engine/cri-o
- container-engine/gvisor
- container-engine/youki
- adduser - adduser
- bastion-ssh-config - bastion-ssh-config
- bootstrap_os - bootstrap_os
@@ -53,3 +51,5 @@ molecule_full:
- ROLE: - ROLE:
# FIXME : tests below are perma-failing # FIXME : tests below are perma-failing
- container-engine/kata-containers - container-engine/kata-containers
- container-engine/gvisor
- container-engine/youki

View File

@@ -13,18 +13,18 @@
before_script: before_script:
- ./tests/scripts/rebase.sh - ./tests/scripts/rebase.sh
- mkdir -p cluster-dump $ANSIBLE_INVENTORY - mkdir -p cluster-dump $ANSIBLE_INVENTORY
- ./tests/scripts/opentofu_install.sh - ./tests/scripts/terraform_install.sh
- cp contrib/terraform/$PROVIDER/sample-inventory/cluster.tfvars . - cp contrib/terraform/$PROVIDER/sample-inventory/cluster.tfvars .
- ln -rs -t $ANSIBLE_INVENTORY contrib/terraform/$PROVIDER/hosts - ln -rs -t $ANSIBLE_INVENTORY contrib/terraform/$PROVIDER/hosts
- tofu -chdir="contrib/terraform/$PROVIDER" init - terraform -chdir="contrib/terraform/$PROVIDER" init
terraform_validate: terraform_validate:
extends: .terraform_install extends: .terraform_install
tags: [ffci] tags: [ffci]
only: ['master', /^pr-.*$/] only: ['master', /^pr-.*$/]
script: script:
- tofu -chdir="contrib/terraform/$PROVIDER" validate - terraform -chdir="contrib/terraform/$PROVIDER" validate
- tofu -chdir="contrib/terraform/$PROVIDER" fmt -check -diff - terraform -chdir="contrib/terraform/$PROVIDER" fmt -check -diff
stage: test stage: test
needs: needs:
- pipeline-image - pipeline-image
@@ -58,11 +58,11 @@ terraform_validate:
- mkdir -p contrib/terraform/$PROVIDER/group_vars - mkdir -p contrib/terraform/$PROVIDER/group_vars
# Random subnet to avoid routing conflicts # Random subnet to avoid routing conflicts
- export TF_VAR_subnet_cidr="10.$(( $RANDOM % 256 )).$(( $RANDOM % 256 )).0/24" - export TF_VAR_subnet_cidr="10.$(( $RANDOM % 256 )).$(( $RANDOM % 256 )).0/24"
- tofu -chdir="contrib/terraform/$PROVIDER" apply -auto-approve -parallelism=1 - terraform -chdir="contrib/terraform/$PROVIDER" apply -auto-approve -parallelism=1
- tests/scripts/testcases_run.sh - tests/scripts/testcases_run.sh
after_script: after_script:
# Cleanup regardless of exit code # Cleanup regardless of exit code
- tofu -chdir="contrib/terraform/$PROVIDER" destroy -auto-approve - terraform -chdir="contrib/terraform/$PROVIDER" destroy -auto-approve
# Elastx is generously donating resources for Kubespray on Openstack CI # Elastx is generously donating resources for Kubespray on Openstack CI
# Contacts: @gix @bl0m1 # Contacts: @gix @bl0m1

View File

@@ -40,8 +40,6 @@ vagrant:
when: on_success when: on_success
- if: $CI_PIPELINE_SOURCE == "schedule" && $CI_PIPELINE_SCHEDULE_DESCRIPTION == "daily-ci" - if: $CI_PIPELINE_SOURCE == "schedule" && $CI_PIPELINE_SCHEDULE_DESCRIPTION == "daily-ci"
when: on_success when: on_success
- when: manual
allow_failure: true
parallel: parallel:
matrix: matrix:
- TESTCASE: - TESTCASE:

View File

@@ -40,7 +40,7 @@ Vagrant with VirtualBox or libvirt driver helps you to quickly spin test cluster
2. The [repo owners](OWNERS) will respond to your issue promptly. 2. The [repo owners](OWNERS) will respond to your issue promptly.
3. Fork the desired repo, develop and test your code changes. 3. Fork the desired repo, develop and test your code changes.
4. Install [pre-commit](https://pre-commit.com) and install it in your development repo. 4. Install [pre-commit](https://pre-commit.com) and install it in your development repo.
5. Address any pre-commit validation failures. 5. Addess any pre-commit validation failures.
6. Sign the CNCF CLA (<https://git.k8s.io/community/CLA.md#the-contributor-license-agreement>) 6. Sign the CNCF CLA (<https://git.k8s.io/community/CLA.md#the-contributor-license-agreement>)
7. Submit a pull request. 7. Submit a pull request.
8. Work with the reviewers on their suggestions. 8. Work with the reviewers on their suggestions.

View File

@@ -1,6 +1,6 @@
# syntax=docker/dockerfile:1 # syntax=docker/dockerfile:1
# Use immutable image tags rather than mutable tags (like ubuntu:22.04) # Use imutable image tags rather than mutable tags (like ubuntu:22.04)
FROM ubuntu:22.04@sha256:149d67e29f765f4db62aa52161009e99e389544e25a8f43c8c89d4a445a7ca37 FROM ubuntu:22.04@sha256:149d67e29f765f4db62aa52161009e99e389544e25a8f43c8c89d4a445a7ca37
# Some tools like yamllint need this # Some tools like yamllint need this
@@ -35,8 +35,8 @@ RUN --mount=type=bind,source=requirements.txt,target=requirements.txt \
SHELL ["/bin/bash", "-o", "pipefail", "-c"] SHELL ["/bin/bash", "-o", "pipefail", "-c"]
RUN OS_ARCHITECTURE=$(dpkg --print-architecture) \ RUN OS_ARCHITECTURE=$(dpkg --print-architecture) \
&& curl -L "https://dl.k8s.io/release/v1.33.7/bin/linux/${OS_ARCHITECTURE}/kubectl" -o /usr/local/bin/kubectl \ && curl -L "https://dl.k8s.io/release/v1.32.11/bin/linux/${OS_ARCHITECTURE}/kubectl" -o /usr/local/bin/kubectl \
&& echo "$(curl -L "https://dl.k8s.io/release/v1.33.7/bin/linux/${OS_ARCHITECTURE}/kubectl.sha256")" /usr/local/bin/kubectl | sha256sum --check \ && echo "$(curl -L "https://dl.k8s.io/release/v1.32.11/bin/linux/${OS_ARCHITECTURE}/kubectl.sha256")" /usr/local/bin/kubectl | sha256sum --check \
&& chmod a+x /usr/local/bin/kubectl && chmod a+x /usr/local/bin/kubectl
COPY *.yml ./ COPY *.yml ./

View File

@@ -8,6 +8,7 @@ aliases:
kubespray-reviewers: kubespray-reviewers:
- cyclinder - cyclinder
- erikjiang - erikjiang
- mrfreezeex
- mzaian - mzaian
- tico88612 - tico88612
- vannten - vannten

View File

@@ -22,7 +22,7 @@ Ensure you have installed Docker then
```ShellSession ```ShellSession
docker run --rm -it --mount type=bind,source="$(pwd)"/inventory/sample,dst=/inventory \ docker run --rm -it --mount type=bind,source="$(pwd)"/inventory/sample,dst=/inventory \
--mount type=bind,source="${HOME}"/.ssh/id_rsa,dst=/root/.ssh/id_rsa \ --mount type=bind,source="${HOME}"/.ssh/id_rsa,dst=/root/.ssh/id_rsa \
quay.io/kubespray/kubespray:v2.29.0 bash quay.io/kubespray/kubespray:v2.27.0 bash
# Inside the container you may now run the kubespray playbooks: # Inside the container you may now run the kubespray playbooks:
ansible-playbook -i /inventory/inventory.ini --private-key /root/.ssh/id_rsa cluster.yml ansible-playbook -i /inventory/inventory.ini --private-key /root/.ssh/id_rsa cluster.yml
``` ```
@@ -87,8 +87,8 @@ vagrant up
## Supported Linux Distributions ## Supported Linux Distributions
- **Flatcar Container Linux by Kinvolk** - **Flatcar Container Linux by Kinvolk**
- **Debian** Bookworm, Bullseye, Trixie - **Debian** Bookworm, Bullseye
- **Ubuntu** 22.04, 24.04 - **Ubuntu** 20.04, 22.04, 24.04
- **CentOS/RHEL** [8, 9](docs/operating_systems/rhel.md#rhel-8) - **CentOS/RHEL** [8, 9](docs/operating_systems/rhel.md#rhel-8)
- **Fedora** 39, 40 - **Fedora** 39, 40
- **Fedora CoreOS** (see [fcos Note](docs/operating_systems/fcos.md)) - **Fedora CoreOS** (see [fcos Note](docs/operating_systems/fcos.md))
@@ -111,26 +111,27 @@ Note:
<!-- BEGIN ANSIBLE MANAGED BLOCK --> <!-- BEGIN ANSIBLE MANAGED BLOCK -->
- Core - Core
- [kubernetes](https://github.com/kubernetes/kubernetes) 1.33.7 - [kubernetes](https://github.com/kubernetes/kubernetes) 1.32.11
- [etcd](https://github.com/etcd-io/etcd) 3.5.26 - [etcd](https://github.com/etcd-io/etcd) 3.5.26
- [docker](https://www.docker.com/) 28.3 - [docker](https://www.docker.com/) 28.0
- [containerd](https://containerd.io/) 2.1.6 - [containerd](https://containerd.io/) 2.0.7
- [cri-o](http://cri-o.io/) 1.33.8 (experimental: see [CRI-O Note](docs/CRI/cri-o.md). Only on fedora, ubuntu and centos based OS) - [cri-o](http://cri-o.io/) 1.32.0 (experimental: see [CRI-O Note](docs/CRI/cri-o.md). Only on fedora, ubuntu and centos based OS)
- Network Plugin - Network Plugin
- [cni-plugins](https://github.com/containernetworking/plugins) 1.8.0 - [cni-plugins](https://github.com/containernetworking/plugins) 1.4.1
- [calico](https://github.com/projectcalico/calico) 3.30.6 - [calico](https://github.com/projectcalico/calico) 3.29.7
- [cilium](https://github.com/cilium/cilium) 1.18.5 - [cilium](https://github.com/cilium/cilium) 1.17.9
- [flannel](https://github.com/flannel-io/flannel) 0.27.3 - [flannel](https://github.com/flannel-io/flannel) 0.22.0
- [kube-ovn](https://github.com/alauda/kube-ovn) 1.12.21 - [kube-ovn](https://github.com/alauda/kube-ovn) 1.12.21
- [kube-router](https://github.com/cloudnativelabs/kube-router) 2.1.1 - [kube-router](https://github.com/cloudnativelabs/kube-router) 2.1.1
- [multus](https://github.com/k8snetworkplumbingwg/multus-cni) 4.2.2 - [multus](https://github.com/k8snetworkplumbingwg/multus-cni) 4.1.0
- [weave](https://github.com/rajch/weave) 2.8.7
- [kube-vip](https://github.com/kube-vip/kube-vip) 0.8.0 - [kube-vip](https://github.com/kube-vip/kube-vip) 0.8.0
- Application - Application
- [cert-manager](https://github.com/jetstack/cert-manager) 1.15.3 - [cert-manager](https://github.com/jetstack/cert-manager) 1.15.3
- [coredns](https://github.com/coredns/coredns) 1.12.0 - [coredns](https://github.com/coredns/coredns) 1.11.3
- [ingress-nginx](https://github.com/kubernetes/ingress-nginx) 1.13.3 - [ingress-nginx](https://github.com/kubernetes/ingress-nginx) 1.12.1
- [argocd](https://argoproj.github.io/) 2.14.5 - [argocd](https://argoproj.github.io/) 2.14.5
- [helm](https://helm.sh/) 3.18.4 - [helm](https://helm.sh/) 3.16.4
- [metallb](https://metallb.universe.tf/) 0.13.9 - [metallb](https://metallb.universe.tf/) 0.13.9
- [registry](https://github.com/distribution/distribution) 2.8.1 - [registry](https://github.com/distribution/distribution) 2.8.1
- Storage Plugin - Storage Plugin
@@ -138,7 +139,7 @@ Note:
- [azure-csi-plugin](https://github.com/kubernetes-sigs/azuredisk-csi-driver) 1.10.0 - [azure-csi-plugin](https://github.com/kubernetes-sigs/azuredisk-csi-driver) 1.10.0
- [cinder-csi-plugin](https://github.com/kubernetes/cloud-provider-openstack/blob/master/docs/cinder-csi-plugin/using-cinder-csi-plugin.md) 1.30.0 - [cinder-csi-plugin](https://github.com/kubernetes/cloud-provider-openstack/blob/master/docs/cinder-csi-plugin/using-cinder-csi-plugin.md) 1.30.0
- [gcp-pd-csi-plugin](https://github.com/kubernetes-sigs/gcp-compute-persistent-disk-csi-driver) 1.9.2 - [gcp-pd-csi-plugin](https://github.com/kubernetes-sigs/gcp-compute-persistent-disk-csi-driver) 1.9.2
- [local-path-provisioner](https://github.com/rancher/local-path-provisioner) 0.0.32 - [local-path-provisioner](https://github.com/rancher/local-path-provisioner) 0.0.24
- [local-volume-provisioner](https://github.com/kubernetes-sigs/sig-storage-local-static-provisioner) 2.5.0 - [local-volume-provisioner](https://github.com/kubernetes-sigs/sig-storage-local-static-provisioner) 2.5.0
- [node-feature-discovery](https://github.com/kubernetes-sigs/node-feature-discovery) 0.16.4 - [node-feature-discovery](https://github.com/kubernetes-sigs/node-feature-discovery) 0.16.4
@@ -182,6 +183,9 @@ You can choose among ten network plugins. (default: `calico`, except Vagrant use
- [cilium](http://docs.cilium.io/en/latest/): layer 3/4 networking (as well as layer 7 to protect and secure application protocols), supports dynamic insertion of BPF bytecode into the Linux kernel to implement security services, networking and visibility logic. - [cilium](http://docs.cilium.io/en/latest/): layer 3/4 networking (as well as layer 7 to protect and secure application protocols), supports dynamic insertion of BPF bytecode into the Linux kernel to implement security services, networking and visibility logic.
- [weave](docs/CNI/weave.md): Weave is a lightweight container overlay network that doesn't require an external K/V database cluster.
(Please refer to `weave` [troubleshooting documentation](https://www.weave.works/docs/net/latest/troubleshooting/)).
- [kube-ovn](docs/CNI/kube-ovn.md): Kube-OVN integrates the OVN-based Network Virtualization with Kubernetes. It offers an advanced Container Network Fabric for Enterprises. - [kube-ovn](docs/CNI/kube-ovn.md): Kube-OVN integrates the OVN-based Network Virtualization with Kubernetes. It offers an advanced Container Network Fabric for Enterprises.
- [kube-router](docs/CNI/kube-router.md): Kube-router is a L3 CNI for Kubernetes networking aiming to provide operational - [kube-router](docs/CNI/kube-router.md): Kube-router is a L3 CNI for Kubernetes networking aiming to provide operational

Vagrantfile

@@ -4,8 +4,6 @@
# For help on using kubespray with vagrant, check out docs/developers/vagrant.md # For help on using kubespray with vagrant, check out docs/developers/vagrant.md
require 'fileutils' require 'fileutils'
require 'ipaddr'
require 'socket'
Vagrant.require_version ">= 2.0.0" Vagrant.require_version ">= 2.0.0"
@@ -101,33 +99,6 @@ $extra_vars ||= {}
host_vars = {} host_vars = {}
def collect_networks(subnet, subnet_ipv6)
Socket.getifaddrs.filter_map do |iface|
next unless iface&.netmask&.ip_address && iface.addr
is_ipv6 = iface.addr.ipv6?
ip = IPAddr.new(iface.addr.ip_address.split('%').first)
ip_test = is_ipv6 ? IPAddr.new("#{subnet_ipv6}::0") : IPAddr.new("#{subnet}.0")
prefix = IPAddr.new(iface.netmask.ip_address).to_i.to_s(2).count('1')
network = ip.mask(prefix)
[IPAddr.new("#{network}/#{prefix}"), ip_test]
end
end
def subnet_in_use?(network_ips)
network_ips.any? { |net, test_ip| net.include?(test_ip) && test_ip != net }
end
network_ips = collect_networks($subnet, $subnet_ipv6)
if subnet_in_use?(network_ips)
puts "Invalid subnet provided, subnet is already in use: #{$subnet}.0"
puts "Subnets in use: #{network_ips.inspect}"
exit 1
end
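The removed Ruby subnet check above can be sketched in Python with the stdlib `ipaddress` module (interface enumeration is replaced by a static network list for illustration; the addresses are hypothetical):

```python
import ipaddress

def subnet_in_use(host_networks, candidate_ip):
    # True when the candidate address already falls inside one of the
    # host's existing networks, i.e. the Vagrant subnet would collide
    ip = ipaddress.ip_address(candidate_ip)
    return any(ip in net for net in host_networks)

nets = [ipaddress.ip_network("192.168.121.0/24"), ipaddress.ip_network("10.0.2.0/24")]
print(subnet_in_use(nets, "192.168.121.5"))  # True: subnet already in use
print(subnet_in_use(nets, "172.30.0.1"))     # False: free to use
```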
# throw error if os is not supported # throw error if os is not supported
if ! SUPPORTED_OS.key?($os) if ! SUPPORTED_OS.key?($os)
puts "Unsupported OS: #{$os}" puts "Unsupported OS: #{$os}"


@@ -15,7 +15,7 @@ timeout = 300
stdout_callback = default stdout_callback = default
display_skipped_hosts = no display_skipped_hosts = no
library = ./library library = ./library
callbacks_enabled = profile_tasks callbacks_enabled = profile_tasks,ara_default
roles_path = roles:$VIRTUAL_ENV/usr/local/share/kubespray/roles:$VIRTUAL_ENV/usr/local/share/ansible/roles:/usr/share/kubespray/roles roles_path = roles:$VIRTUAL_ENV/usr/local/share/kubespray/roles:$VIRTUAL_ENV/usr/local/share/ansible/roles:/usr/share/kubespray/roles
deprecation_warnings=False deprecation_warnings=False
inventory_ignore_extensions = ~, .orig, .bak, .ini, .cfg, .retry, .pyc, .pyo, .creds, .gpg inventory_ignore_extensions = ~, .orig, .bak, .ini, .cfg, .retry, .pyc, .pyo, .creds, .gpg


@@ -36,7 +36,7 @@ function create_container_image_tar() {
mkdir ${IMAGE_DIR} mkdir ${IMAGE_DIR}
cd ${IMAGE_DIR} cd ${IMAGE_DIR}
sudo --preserve-env=http_proxy,https_proxy,no_proxy ${runtime} pull registry:latest sudo ${runtime} pull registry:latest
sudo ${runtime} save -o registry-latest.tar registry:latest sudo ${runtime} save -o registry-latest.tar registry:latest
while read -r image while read -r image
@@ -45,7 +45,7 @@ function create_container_image_tar() {
set +e set +e
for step in $(seq 1 ${RETRY_COUNT}) for step in $(seq 1 ${RETRY_COUNT})
do do
sudo --preserve-env=http_proxy,https_proxy,no_proxy ${runtime} pull ${image} sudo ${runtime} pull ${image}
if [ $? -eq 0 ]; then if [ $? -eq 0 ]; then
break break
fi fi


@@ -41,7 +41,7 @@ fi
sudo "${runtime}" container inspect nginx >/dev/null 2>&1 sudo "${runtime}" container inspect nginx >/dev/null 2>&1
if [ $? -ne 0 ]; then if [ $? -ne 0 ]; then
sudo --preserve-env=http_proxy,https_proxy,no_proxy "${runtime}" run \ sudo "${runtime}" run \
--restart=always -d -p ${NGINX_PORT}:80 \ --restart=always -d -p ${NGINX_PORT}:80 \
--volume "${OFFLINE_FILES_DIR}":/usr/share/nginx/html/download \ --volume "${OFFLINE_FILES_DIR}":/usr/share/nginx/html/download \
--volume "${CURRENT_DIR}"/nginx.conf:/etc/nginx/nginx.conf \ --volume "${CURRENT_DIR}"/nginx.conf:/etc/nginx/nginx.conf \


@@ -624,7 +624,7 @@ Edit `inventory/$CLUSTER/group_vars/k8s_cluster/k8s_cluster.yml`:
- **calico** requires [configuring OpenStack Neutron ports](/docs/cloud_controllers/openstack.md) to allow service and pod subnets - **calico** requires [configuring OpenStack Neutron ports](/docs/cloud_controllers/openstack.md) to allow service and pod subnets
```yml ```yml
# Choose network plugin (calico or flannel) # Choose network plugin (calico, weave or flannel)
# Can also be set to 'cloud', which lets the cloud provider setup appropriate routing # Can also be set to 'cloud', which lets the cloud provider setup appropriate routing
kube_network_plugin: flannel kube_network_plugin: flannel
``` ```


@@ -180,7 +180,7 @@ calico_group_id=rr1
The inventory above will deploy the following topology assuming that calico's The inventory above will deploy the following topology assuming that calico's
`global_as_num` is set to `65400`: `global_as_num` is set to `65400`:
![Image](../figures/kubespray-calico-rr.png?raw=true) ![Image](figures/kubespray-calico-rr.png?raw=true)
### Optional : Define default endpoint to host action ### Optional : Define default endpoint to host action


@@ -237,7 +237,7 @@ cilium_operator_extra_volume_mounts:
## Choose Cilium version ## Choose Cilium version
```yml ```yml
cilium_version: "1.18.5" cilium_version: "1.17.9"
``` ```
## Add variable to config ## Add variable to config

docs/CNI/weave.md

@@ -0,0 +1,79 @@
# Weave
Weave 2.0.1 is supported by kubespray.
Weave uses [**consensus**](https://www.weave.works/docs/net/latest/ipam/#consensus) mode (the default) and [**seed**](https://www.weave.works/docs/net/latest/ipam/#seed) mode.
`Consensus` mode is best suited to clusters of static size, while `seed` mode is best suited to clusters of dynamic size.
Weave encryption is supported for all communication
* To use Weave encryption, specify a strong password (if no password is set, encryption is disabled)
```ShellSession
# In file ./inventory/sample/group_vars/k8s_cluster.yml
weave_password: EnterPasswordHere
```
This password is used to set an environment variable inside the weave container.
Weave is deployed by kubespray as a DaemonSet
* Check the status of Weave containers
```ShellSession
# From client
kubectl -n kube-system get pods | grep weave
# output
weave-net-50wd2 2/2 Running 0 2m
weave-net-js9rb 2/2 Running 0 2m
```
There must be as many pods as nodes (here the cluster has 2 nodes, so there are 2 weave pods).
* Check the status of weave (connections, encryption, ...) for each node
```ShellSession
# On nodes
curl http://127.0.0.1:6784/status
# output on node1
Version: 2.0.1 (up to date; next check at 2017/08/01 13:51:34)
Service: router
Protocol: weave 1..2
Name: fa:16:3e:b3:d6:b2(node1)
Encryption: enabled
PeerDiscovery: enabled
Targets: 2
Connections: 2 (1 established, 1 failed)
Peers: 2 (with 2 established connections)
TrustedSubnets: none
Service: ipam
Status: ready
Range: 10.233.64.0/18
DefaultSubnet: 10.233.64.0/18
```
* Check parameters of weave for each node
```ShellSession
# On nodes
ps -aux | grep weaver
# output on node1 (seed mode is in use here)
root 8559 0.2 3.0 365280 62700 ? Sl 08:25 0:00 /home/weave/weaver --name=fa:16:3e:b3:d6:b2 --port=6783 --datapath=datapath --host-root=/host --http-addr=127.0.0.1:6784 --status-addr=0.0.0.0:6782 --docker-api= --no-dns --db-prefix=/weavedb/weave-net --ipalloc-range=10.233.64.0/18 --nickname=node1 --ipalloc-init seed=fa:16:3e:b3:d6:b2,fa:16:3e:f0:50:53 --conn-limit=30 --expect-npc 192.168.208.28 192.168.208.19
```
## Consensus mode (default mode)
This mode is best suited to clusters of static size
### Seed mode
This mode is best suited to clusters of dynamic size
Seed mode also allows multi-cloud and hybrid on-premise/cloud cluster deployments.
* Switch from consensus mode to seed/observation mode
See the [weave ipam documentation](https://www.weave.works/docs/net/latest/tasks/ipam/ipam/) and use `weave_extra_args` to enable it.
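As a sketch, seed mode can be passed through `weave_extra_args`; the peer list below is illustrative only, taken from the example `ps` output earlier in this page, and must be replaced with the weave peer names of your own nodes:

```ShellSession
# In file ./inventory/sample/group_vars/k8s_cluster.yml
# Hypothetical peer names — use the weave names of your own nodes
weave_extra_args: "--ipalloc-init seed=fa:16:3e:b3:d6:b2,fa:16:3e:f0:50:53"
```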


@@ -149,11 +149,3 @@ following configuration:
```yaml ```yaml
nri_enabled: true nri_enabled: true
``` ```
### Optional : Static Binary
To ensure compatibility with older distributions (such as Debian 11), you can use a static containerd binary. By default, the static binary is used if the system's glibc version is less than 2.34; otherwise, the default (dynamically linked) binary is used.
```yaml
containerd_static_binary: true
```
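The glibc cutoff described above amounts to a simple version comparison — a sketch of the decision rule, not Kubespray's actual detection code:

```python
def needs_static_containerd(glibc_version: str) -> bool:
    # The static binary is needed when glibc is older than 2.34
    major, minor = (int(part) for part in glibc_version.split(".")[:2])
    return (major, minor) < (2, 34)

print(needs_static_containerd("2.31"))  # True  (e.g. Debian 11)
print(needs_static_containerd("2.36"))  # False (e.g. Debian 12)
```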

docs/_sidebar.md

@@ -32,6 +32,7 @@
* [Kube-router](/docs/CNI/kube-router.md) * [Kube-router](/docs/CNI/kube-router.md)
* [Macvlan](/docs/CNI/macvlan.md) * [Macvlan](/docs/CNI/macvlan.md)
* [Multus](/docs/CNI/multus.md) * [Multus](/docs/CNI/multus.md)
* [Weave](/docs/CNI/weave.md)
* CRI * CRI
* [Containerd](/docs/CRI/containerd.md) * [Containerd](/docs/CRI/containerd.md)
* [Cri-o](/docs/CRI/cri-o.md) * [Cri-o](/docs/CRI/cri-o.md)


@@ -9,6 +9,7 @@ The following table shows the impact of the CPU architecture on compatible featu
| kube_network_plugin | amd64 | arm64 | amd64 + arm64 | | kube_network_plugin | amd64 | arm64 | amd64 + arm64 |
|---------------------|-------|-------|---------------| |---------------------|-------|-------|---------------|
| Calico | Y | Y | Y | | Calico | Y | Y | Y |
| Weave | Y | Y | Y |
| Flannel | Y | N | N | | Flannel | Y | N | N |
| Canal | Y | N | N | | Canal | Y | N | N |
| Cilium | Y | Y | N | | Cilium | Y | Y | N |


@@ -13,7 +13,7 @@ KUBESPRAYDIR=kubespray
python3 -m venv $VENVDIR python3 -m venv $VENVDIR
source $VENVDIR/bin/activate source $VENVDIR/bin/activate
cd $KUBESPRAYDIR cd $KUBESPRAYDIR
pip install -r requirements.txt pip install -U -r requirements.txt
``` ```
In case you have a similar message when installing the requirements: In case you have a similar message when installing the requirements:
@@ -32,7 +32,7 @@ Based on the table below and the available python version for your ansible host
| Ansible Version | Python Version | | Ansible Version | Python Version |
|-----------------|----------------| |-----------------|----------------|
| >= 2.17.3 | 3.10-3.12 | | >= 2.16.4 | 3.10-3.12 |
## Customize Ansible vars ## Customize Ansible vars
@@ -118,6 +118,7 @@ The following tags are defined in playbooks:
| local-path-provisioner | Configure External provisioner: local-path | | local-path-provisioner | Configure External provisioner: local-path |
| local-volume-provisioner | Configure External provisioner: local-volume | | local-volume-provisioner | Configure External provisioner: local-volume |
| macvlan | Network plugin macvlan | | macvlan | Network plugin macvlan |
| master (DEPRECATED) | Deprecated - see `control-plane` |
| metallb | Installing and configuring metallb | | metallb | Installing and configuring metallb |
| metrics_server | Configuring metrics_server | | metrics_server | Configuring metrics_server |
| netchecker | Installing netchecker K8s app | | netchecker | Installing netchecker K8s app |
@@ -152,9 +153,14 @@ The following tags are defined in playbooks:
| upgrade | Upgrading, f.e. container images/binaries | | upgrade | Upgrading, f.e. container images/binaries |
| upload | Distributing images/binaries across hosts | | upload | Distributing images/binaries across hosts |
| vsphere-csi-driver | Configuring csi driver: vsphere | | vsphere-csi-driver | Configuring csi driver: vsphere |
| weave | Network plugin Weave |
| win_nodes | Running windows specific tasks | | win_nodes | Running windows specific tasks |
| youki | Configuring youki runtime | | youki | Configuring youki runtime |
Note: Use the ``bash scripts/gen_tags.sh`` command to generate a list of all
tags found in the codebase. New tags will be listed with an empty "Used for"
field.
## Example commands ## Example commands
Example command to filter and apply only DNS configuration tasks and skip Example command to filter and apply only DNS configuration tasks and skip
@@ -200,11 +206,11 @@ You will then need to use [bind mounts](https://docs.docker.com/storage/bind-mou
to access the inventory and SSH key in the container, like this: to access the inventory and SSH key in the container, like this:
```ShellSession ```ShellSession
git checkout v2.29.0 git checkout v2.27.0
docker pull quay.io/kubespray/kubespray:v2.29.0 docker pull quay.io/kubespray/kubespray:v2.27.0
docker run --rm -it --mount type=bind,source="$(pwd)"/inventory/sample,dst=/inventory \ docker run --rm -it --mount type=bind,source="$(pwd)"/inventory/sample,dst=/inventory \
--mount type=bind,source="${HOME}"/.ssh/id_rsa,dst=/root/.ssh/id_rsa \ --mount type=bind,source="${HOME}"/.ssh/id_rsa,dst=/root/.ssh/id_rsa \
quay.io/kubespray/kubespray:v2.29.0 bash quay.io/kubespray/kubespray:v2.27.0 bash
# Inside the container you may now run the kubespray playbooks: # Inside the container you may now run the kubespray playbooks:
ansible-playbook -i /inventory/inventory.ini --private-key /root/.ssh/id_rsa cluster.yml ansible-playbook -i /inventory/inventory.ini --private-key /root/.ssh/id_rsa cluster.yml
``` ```


@@ -2,13 +2,14 @@
Kubespray can be installed as an [Ansible collection](https://docs.ansible.com/ansible/latest/user_guide/collections_using.html). Kubespray can be installed as an [Ansible collection](https://docs.ansible.com/ansible/latest/user_guide/collections_using.html).
## Requirements
- An inventory file with the appropriate host groups. See the [README](../README.md#usage).
- A `group_vars` directory. These group variables **need** to match the appropriate variable names under `inventory/local/group_vars`. See the [README](../README.md#usage).
## Usage ## Usage
1. Set up an inventory with the appropriate host groups and required group vars. 1. Add Kubespray to your requirements.yml file
See also the documentation on [kubespray inventories](./inventory.md) and the
general ["Getting started" documentation](../getting_started/getting-started.md#building-your-own-inventory).
2. Add Kubespray to your requirements.yml file
```yaml ```yaml
collections: collections:
@@ -17,20 +18,20 @@ Kubespray can be installed as an [Ansible collection](https://docs.ansible.com/a
version: master # use the appropriate tag or branch for the version you need version: master # use the appropriate tag or branch for the version you need
``` ```
3. Install your collection 2. Install your collection
```ShellSession ```ShellSession
ansible-galaxy install -r requirements.yml ansible-galaxy install -r requirements.yml
``` ```
4. Create a playbook to install your Kubernetes cluster 3. Create a playbook to install your Kubernetes cluster
```yaml ```yaml
- name: Install Kubernetes - name: Install Kubernetes
ansible.builtin.import_playbook: kubernetes_sigs.kubespray.cluster ansible.builtin.import_playbook: kubernetes_sigs.kubespray.cluster
``` ```
5. Update INVENTORY and PLAYBOOK so that they point to your inventory file and the playbook you created above, and then install Kubespray 4. Update INVENTORY and PLAYBOOK so that they point to your inventory file and the playbook you created above, and then install Kubespray
```ShellSession ```ShellSession
ansible-playbook -i INVENTORY --become --become-user=root PLAYBOOK ansible-playbook -i INVENTORY --become --become-user=root PLAYBOOK


@@ -103,13 +103,13 @@ following default cluster parameters:
* *kube_service_addresses_ipv6* - Subnet for cluster IPv6 IPs (default is ``fd85:ee78:d8a6:8607::1000/116``). Must not overlap with ``kube_pods_subnet_ipv6``. * *kube_service_addresses_ipv6* - Subnet for cluster IPv6 IPs (default is ``fd85:ee78:d8a6:8607::1000/116``). Must not overlap with ``kube_pods_subnet_ipv6``.
* *kube_service_subnets* - All service subnets separated by commas (default is a mix of ``kube_service_addresses`` and ``kube_service_addresses_ipv6`` depending on ``ipv4_stack`` and ``ipv6_stack`` options), * *kube_service_subnets* - All service subnets separated by commas (default is a mix of ``kube_service_addresses`` and ``kube_service_addresses_ipv6`` depending on ``ipv4_stack`` and ``ipv6_stacke`` options),
for example ``10.233.0.0/18,fd85:ee78:d8a6:8607::1000/116`` for dual stack(ipv4_stack/ipv6_stack set to `true`). for example ``10.233.0.0/18,fd85:ee78:d8a6:8607::1000/116`` for dual stack(ipv4_stack/ipv6_stack set to `true`).
It is not recommended to change this variable directly. It is not recommended to change this variable directly.
* *kube_pods_subnet_ipv6* - Subnet for Pod IPv6 IPs (default is ``fd85:ee78:d8a6:8607::1:0000/112``). Must not overlap with ``kube_service_addresses_ipv6``. * *kube_pods_subnet_ipv6* - Subnet for Pod IPv6 IPs (default is ``fd85:ee78:d8a6:8607::1:0000/112``). Must not overlap with ``kube_service_addresses_ipv6``.
* *kube_pods_subnets* - All pods subnets separated by commas (default is a mix of ``kube_pods_subnet`` and ``kube_pod_subnet_ipv6`` depending on ``ipv4_stack`` and ``ipv6_stack`` options), * *kube_pods_subnets* - All pods subnets separated by commas (default is a mix of ``kube_pods_subnet`` and ``kube_pod_subnet_ipv6`` depending on ``ipv4_stack`` and ``ipv6_stacke`` options),
for example ``10.233.64.0/18,fd85:ee78:d8a6:8607::1:0000/112`` for dual stack(ipv4_stack/ipv6_stack set to `true`). for example ``10.233.64.0/18,fd85:ee78:d8a6:8607::1:0000/112`` for dual stack(ipv4_stack/ipv6_stack set to `true`).
It is not recommended to change this variable directly. It is not recommended to change this variable directly.


@@ -73,7 +73,6 @@ The cloud provider is configured to have Octavia by default in Kubespray.
external_openstack_lbaas_method: ROUND_ROBIN external_openstack_lbaas_method: ROUND_ROBIN
external_openstack_lbaas_provider: amphora external_openstack_lbaas_provider: amphora
external_openstack_lbaas_subnet_id: "Neutron subnet ID to create LBaaS VIP" external_openstack_lbaas_subnet_id: "Neutron subnet ID to create LBaaS VIP"
external_openstack_lbaas_member_subnet_id: "Neutron subnet ID on which to create the members of the load balancer"
external_openstack_lbaas_network_id: "Neutron network ID to create LBaaS VIP" external_openstack_lbaas_network_id: "Neutron network ID to create LBaaS VIP"
external_openstack_lbaas_manage_security_groups: false external_openstack_lbaas_manage_security_groups: false
external_openstack_lbaas_create_monitor: false external_openstack_lbaas_create_monitor: false


@@ -2,14 +2,19 @@
## Pipeline ## Pipeline
See [.gitlab-ci.yml](/.gitlab-ci.yml) and the included files for an overview. 1. build: build a docker image to be used in the pipeline
2. unit-tests: fast jobs for fast feedback (linting, etc...)
3. deploy-part1: small number of jobs to test if the PR works with default settings
4. deploy-extended: slow jobs testing different platforms, OS, settings, CNI, etc...
5. deploy-extended: very slow jobs (upgrades, etc...)
## Runners ## Runners
Kubespray has 2 types of GitLab runners, both deployed on the Kubespray CI cluster (hosted on Oracle Cloud Infrastructure): Kubespray has 3 types of GitLab runners:
- pods: use the [gitlab-ci kubernetes executor](https://docs.gitlab.com/runner/executors/kubernetes/) - packet runners: used for E2E jobs (usually long), running on Equinix Metal (ex-packet), on kubevirt managed VMs
- vagrant: custom executor running in pods with access to the libvirt socket on the nodes - light runners: used for short lived jobs, running on Equinix Metal (ex-packet), as managed pods
- auto scaling runners (managed via docker-machine on Equinix Metal): used for on-demand resources, see [GitLab docs](https://docs.gitlab.com/runner/configuration/autoscale.html) for more info
## Vagrant ## Vagrant
@@ -17,17 +22,18 @@ Vagrant jobs are using the [quay.io/kubespray/vagrant](/test-infra/vagrant-docke
## CI Variables ## CI Variables
In CI we have a [set of extra vars](/test/common_vars.yml) we use to ensure greater success of our CI jobs and avoid throttling by various APIs we depend on. In CI we have a set of overrides we use to ensure greater success of our CI jobs and avoid throttling by various APIs we depend on. See:
## CI clusters - [Docker mirrors](/tests/common/_docker_hub_registry_mirror.yml)
- [Test settings](/tests/common/_kubespray_test_settings.yml)
DISCLAIMER: The following information is not fully up to date; in particular, the CI cluster is now on Oracle Cloud Infrastructure, not Equinix.
The cluster is deployed with kubespray itself and maintained by the kubespray maintainers. The CI packet and light runners are deployed on a kubernetes cluster on Equinix Metal. The cluster is deployed with kubespray itself and maintained by the kubespray maintainers.
The following files are used for that inventory: The following files are used for that inventory:
### cluster.tfvars (OBSOLETE: this section is no longer accurate) ### cluster.tfvars
```ini ```ini
# your Kubernetes cluster name here # your Kubernetes cluster name here
@@ -160,6 +166,18 @@ kube_feature_gates:
This section documents additional files used to complete a deployment of the kubespray CI, these files sit on the control-plane node and assume a working kubernetes cluster. This section documents additional files used to complete a deployment of the kubespray CI, these files sit on the control-plane node and assume a working kubernetes cluster.
### /root/nscleanup.sh
```bash
#!/bin/bash
kubectl=/usr/local/bin/kubectl
$kubectl get ns | grep -P "(\d.+-\d.+)" | awk 'match($3,/[0-9]+d/) {print $1}' | xargs -r $kubectl delete ns
$kubectl get ns | grep -P "(\d.+-\d.+)" | awk 'match($3,/[3-9]+h/) {print $1}' | xargs -r $kubectl delete ns
$kubectl get ns | grep Terminating | awk '{print $1}' | xargs -i $kubectl delete vmi/instance-1 vmi/instance-0 vmi/instance-2 -n {} --force --grace-period=0 &
```
### /root/path-calico.sh ### /root/path-calico.sh
```bash ```bash


@@ -6,52 +6,55 @@ To generate this Matrix run `./tests/scripts/md-table/main.py`
| OS / CNI | calico | cilium | custom_cni | flannel | kube-ovn | kube-router | macvlan | | OS / CNI | calico | cilium | custom_cni | flannel | kube-ovn | kube-router | macvlan |
|---| --- | --- | --- | --- | --- | --- | --- | |---| --- | --- | --- | --- | --- | --- | --- |
almalinux8 | :white_check_mark: | :x: | :x: | :x: | :x: | :x: | :x: |
almalinux9 | :white_check_mark: | :x: | :x: | :x: | :white_check_mark: | :x: | :x: | almalinux9 | :white_check_mark: | :x: | :x: | :x: | :white_check_mark: | :x: | :x: |
amazon | :white_check_mark: | :x: | :x: | :x: | :x: | :x: | :x: | amazon | :white_check_mark: | :x: | :x: | :x: | :x: | :x: | :x: |
debian11 | :white_check_mark: | :x: | :white_check_mark: | :x: | :x: | :x: | :white_check_mark: | debian11 | :white_check_mark: | :x: | :white_check_mark: | :x: | :x: | :x: | :white_check_mark: |
debian12 | :white_check_mark: | :white_check_mark: | :white_check_mark: | :x: | :x: | :x: | :x: | debian12 | :white_check_mark: | :white_check_mark: | :white_check_mark: | :x: | :x: | :x: | :x: |
debian13 | :white_check_mark: | :white_check_mark: | :x: | :x: | :x: | :x: | :x: |
fedora39 | :white_check_mark: | :x: | :x: | :x: | :x: | :white_check_mark: | :x: | fedora39 | :white_check_mark: | :x: | :x: | :x: | :x: | :white_check_mark: | :x: |
fedora40 | :x: | :x: | :x: | :x: | :x: | :x: | :x: | fedora40 | :x: | :x: | :x: | :x: | :x: | :x: | :x: |
flatcar4081 | :white_check_mark: | :x: | :x: | :x: | :x: | :x: | :x: |
openeuler24 | :white_check_mark: | :x: | :x: | :x: | :x: | :x: | :x: | openeuler24 | :white_check_mark: | :x: | :x: | :x: | :x: | :x: | :x: |
opensuse15 | :white_check_mark: | :x: | :x: | :x: | :x: | :x: | :x: |
rockylinux8 | :white_check_mark: | :x: | :x: | :x: | :x: | :x: | :x: |
rockylinux9 | :white_check_mark: | :white_check_mark: | :x: | :x: | :x: | :x: | :x: | rockylinux9 | :white_check_mark: | :white_check_mark: | :x: | :x: | :x: | :x: | :x: |
ubuntu20 | :white_check_mark: | :x: | :x: | :x: | :x: | :x: | :x: | ubuntu20 | :white_check_mark: | :white_check_mark: | :x: | :white_check_mark: | :x: | :white_check_mark: | :x: |
ubuntu22 | :white_check_mark: | :x: | :x: | :x: | :x: | :x: | :x: | ubuntu22 | :white_check_mark: | :x: | :x: | :x: | :x: | :x: | :x: |
ubuntu24 | :white_check_mark: | :white_check_mark: | :x: | :white_check_mark: | :x: | :white_check_mark: | :x: | ubuntu24 | :white_check_mark: | :x: | :x: | :x: | :x: | :x: | :x: |
## crio ## crio
| OS / CNI | calico | cilium | custom_cni | flannel | kube-ovn | kube-router | macvlan | | OS / CNI | calico | cilium | custom_cni | flannel | kube-ovn | kube-router | macvlan |
|---| --- | --- | --- | --- | --- | --- | --- | |---| --- | --- | --- | --- | --- | --- | --- |
almalinux8 | :x: | :x: | :x: | :x: | :x: | :x: | :x: |
almalinux9 | :white_check_mark: | :x: | :x: | :x: | :x: | :x: | :x: | almalinux9 | :white_check_mark: | :x: | :x: | :x: | :x: | :x: | :x: |
amazon | :x: | :x: | :x: | :x: | :x: | :x: | :x: | amazon | :x: | :x: | :x: | :x: | :x: | :x: | :x: |
debian11 | :x: | :x: | :x: | :x: | :x: | :x: | :x: | debian11 | :x: | :x: | :x: | :x: | :x: | :x: | :x: |
debian12 | :x: | :x: | :x: | :x: | :x: | :x: | :x: | debian12 | :x: | :x: | :x: | :x: | :x: | :x: | :x: |
debian13 | :x: | :x: | :x: | :x: | :x: | :x: | :x: |
fedora39 | :white_check_mark: | :x: | :x: | :x: | :x: | :x: | :x: |
-fedora40 | :white_check_mark: | :x: | :x: | :x: | :x: | :x: | :x: |
+fedora40 | :x: | :x: | :x: | :x: | :x: | :x: | :x: |
-flatcar4081 | :x: | :x: | :x: | :x: | :x: | :x: | :x: |
openeuler24 | :x: | :x: | :x: | :x: | :x: | :x: | :x: |
+opensuse15 | :x: | :x: | :x: | :x: | :x: | :x: | :x: |
+rockylinux8 | :x: | :x: | :x: | :x: | :x: | :x: | :x: |
rockylinux9 | :x: | :x: | :x: | :x: | :x: | :x: | :x: |
-ubuntu20 | :x: | :x: | :x: | :x: | :x: | :x: | :x: |
+ubuntu20 | :white_check_mark: | :x: | :x: | :x: | :x: | :x: | :x: |
-ubuntu22 | :white_check_mark: | :x: | :x: | :x: | :x: | :x: | :x: |
+ubuntu22 | :x: | :x: | :x: | :x: | :x: | :x: | :x: |
ubuntu24 | :x: | :x: | :x: | :x: | :x: | :x: | :x: |
## docker
| OS / CNI | calico | cilium | custom_cni | flannel | kube-ovn | kube-router | macvlan |
|---| --- | --- | --- | --- | --- | --- | --- |
+almalinux8 | :x: | :x: | :x: | :x: | :x: | :x: | :x: |
almalinux9 | :white_check_mark: | :x: | :x: | :x: | :x: | :x: | :x: |
amazon | :x: | :x: | :x: | :x: | :x: | :x: | :x: |
debian11 | :white_check_mark: | :x: | :x: | :x: | :x: | :x: | :x: |
debian12 | :white_check_mark: | :x: | :x: | :x: | :x: | :x: | :x: |
-debian13 | :x: | :x: | :x: | :x: | :x: | :x: | :x: |
fedora39 | :x: | :x: | :x: | :x: | :x: | :x: | :x: |
fedora40 | :white_check_mark: | :x: | :x: | :x: | :x: | :x: | :x: |
-flatcar4081 | :x: | :x: | :x: | :x: | :x: | :x: | :x: |
openeuler24 | :x: | :x: | :x: | :x: | :x: | :x: | :x: |
+opensuse15 | :x: | :white_check_mark: | :x: | :x: | :x: | :x: | :x: |
+rockylinux8 | :x: | :x: | :x: | :x: | :x: | :x: | :x: |
rockylinux9 | :x: | :x: | :x: | :x: | :x: | :x: | :x: |
-ubuntu20 | :x: | :x: | :x: | :x: | :x: | :x: | :x: |
+ubuntu20 | :white_check_mark: | :x: | :x: | :x: | :x: | :x: | :x: |
ubuntu22 | :white_check_mark: | :x: | :x: | :x: | :x: | :x: | :x: |
ubuntu24 | :white_check_mark: | :x: | :x: | :x: | :x: | :x: | :x: |


@@ -7,7 +7,7 @@ The kube-scheduler binary includes a list of plugins:
- [CapacityScheduling](https://github.com/kubernetes-sigs/scheduler-plugins/tree/master/pkg/capacityscheduling) [Beta]
- [CoScheduling](https://github.com/kubernetes-sigs/scheduler-plugins/tree/master/pkg/coscheduling) [Beta]
- [NodeResources](https://github.com/kubernetes-sigs/scheduler-plugins/tree/master/pkg/noderesources) [Beta]
-- [NodeResourceTopology](https://github.com/kubernetes-sigs/scheduler-plugins/blob/master/pkg/noderesourcetopology/README.md) [Beta]
+- [NodeResouceTopology](https://github.com/kubernetes-sigs/scheduler-plugins/blob/master/pkg/noderesourcetopology/README.md) [Beta]
- [PreemptionToleration](https://github.com/kubernetes-sigs/scheduler-plugins/blob/master/pkg/preemptiontoleration/README.md) [Alpha]
- [Trimaran](https://github.com/kubernetes-sigs/scheduler-plugins/blob/master/pkg/trimaran/README.md) [Alpha]
- [NetworkAware](https://github.com/kubernetes-sigs/scheduler-plugins/blob/master/pkg/networkaware/README.md) [Sample]


@@ -61,12 +61,12 @@ gcloud compute networks subnets create kubernetes \
#### Firewall Rules
Create a firewall rule that allows internal communication across all protocols.
-It is important to note that the vxlan (udp) protocol has to be allowed in order for
+It is important to note that the vxlan protocol has to be allowed in order for
the calico (see later) networking plugin to work.
```ShellSession
gcloud compute firewall-rules create kubernetes-the-kubespray-way-allow-internal \
-  --allow tcp,udp,icmp \
+  --allow tcp,udp,icmp,vxlan \
  --network kubernetes-the-kubespray-way \
  --source-ranges 10.240.0.0/24
```
@@ -88,7 +88,7 @@ cluster.
### Compute Instances
-The compute instances in this lab will be provisioned using [Ubuntu Server](https://www.ubuntu.com/server) 24.04.
+The compute instances in this lab will be provisioned using [Ubuntu Server](https://www.ubuntu.com/server) 18.04.
Each compute instance will be provisioned with a fixed private IP address and
a public IP address (that can be fixed - see [guide](https://cloud.google.com/compute/docs/ip-addresses/reserve-static-external-ip-address)).
Using fixed public IP addresses has the advantage that our cluster node
@@ -103,7 +103,7 @@ for i in 0 1 2; do
  --async \
  --boot-disk-size 200GB \
  --can-ip-forward \
-  --image-family ubuntu-2404-lts-amd64 \
+  --image-family ubuntu-1804-lts \
  --image-project ubuntu-os-cloud \
  --machine-type e2-standard-2 \
  --private-network-ip 10.240.0.1${i} \
@@ -124,7 +124,7 @@ for i in 0 1 2; do
  --async \
  --boot-disk-size 200GB \
  --can-ip-forward \
-  --image-family ubuntu-2404-lts-amd64 \
+  --image-family ubuntu-1804-lts \
  --image-project ubuntu-os-cloud \
  --machine-type e2-standard-2 \
  --private-network-ip 10.240.0.2${i} \


@@ -35,7 +35,7 @@ kubectl create clusterrolebinding cluster-admin-binding \
The following **Mandatory Command** is required for all deployments except for AWS. See below for the AWS version.
```console
-kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.13.3/deploy/static/provider/cloud/deploy.yaml
+kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.12.0/deploy/static/provider/cloud/deploy.yaml
```
### Provider Specific Steps


@@ -30,6 +30,11 @@ kube_memory_reserved: 256Mi
kube_cpu_reserved: 100m
# kube_ephemeral_storage_reserved: 2Gi
# kube_pid_reserved: "1000"
+# Reservation for master hosts
+kube_master_memory_reserved: 512Mi
+kube_master_cpu_reserved: 200m
+# kube_master_ephemeral_storage_reserved: 2Gi
+# kube_master_pid_reserved: "1000"
# Set to true to reserve resources for system daemons
system_reserved: true
@@ -39,6 +44,11 @@ system_memory_reserved: 512Mi
system_cpu_reserved: 500m
# system_ephemeral_storage_reserved: 2Gi
# system_pid_reserved: "1000"
+# Reservation for master hosts
+system_master_memory_reserved: 256Mi
+system_master_cpu_reserved: 250m
+# system_master_ephemeral_storage_reserved: 2Gi
+# system_master_pid_reserved: "1000"
```
After the setup, the cgroups hierarchy is as follows:


@@ -18,6 +18,8 @@ The **kubernetes** version should be at least `v1.23.6` to have all the most rec
## kube-apiserver
authorization_modes: ['Node', 'RBAC']
+# AppArmor-based OS
+# kube_apiserver_feature_gates: ['AppArmor=true']
kube_apiserver_request_timeout: 120s
kube_apiserver_service_account_lookup: true
@@ -75,17 +77,17 @@ remove_anonymous_access: true
## kube-controller-manager
kube_controller_manager_bind_address: 127.0.0.1
kube_controller_terminated_pod_gc_threshold: 50
+# AppArmor-based OS
+# kube_controller_feature_gates: ["RotateKubeletServerCertificate=true", "AppArmor=true"]
kube_controller_feature_gates: ["RotateKubeletServerCertificate=true"]
## kube-scheduler
kube_scheduler_bind_address: 127.0.0.1
+# AppArmor-based OS
+# kube_scheduler_feature_gates: ["AppArmor=true"]
## etcd
-# Running etcd (on dedicated hosts) outside the Kubernetes cluster is the most secure deployment option,
-# as it isolates etcd from the cluster's CNI network and removes direct pod-level attack vectors.
-# This approach prevents RBAC misconfigurations that potentially compromise etcd,
-# creating an additional security boundary that protects the cluster's critical state store.
-etcd_deployment_type: host
+etcd_deployment_type: kubeadm
## kubelet
kubelet_authorization_mode_webhook: true
@@ -100,8 +102,6 @@ kubelet_make_iptables_util_chains: true
kubelet_feature_gates: ["RotateKubeletServerCertificate=true"]
kubelet_seccomp_default: true
kubelet_systemd_hardening: true
-# To disable kubelet's staticPodPath (for nodes that don't use static pods like worker nodes)
-kubelet_static_pod_path: ""
# In case you have multiple interfaces in your
# control plane nodes and you want to specify the right
# IP addresses, kubelet_secure_addresses allows you
@@ -126,8 +126,9 @@ Let's take a deep look to the resultant **kubernetes** configuration:
* The `encryption-provider-config` provide encryption at rest. This means that the `kube-apiserver` encrypt data that is going to be stored before they reach `etcd`. So the data is completely unreadable from `etcd` (in case an attacker is able to exploit this).
* The `rotateCertificates` in `KubeletConfiguration` is set to `true` along with `serverTLSBootstrap`. This could be used in alternative to `tlsCertFile` and `tlsPrivateKeyFile` parameters. Additionally it automatically generates certificates by itself. By default the CSRs are approved automatically via [kubelet-csr-approver](https://github.com/postfinance/kubelet-csr-approver). You can customize approval configuration by modifying Helm values via `kubelet_csr_approver_values`.
See <https://kubernetes.io/docs/reference/access-authn-authz/kubelet-tls-bootstrapping/> for more information on the subject.
+* If you are installing **kubernetes** in an AppArmor-based OS (eg. Debian/Ubuntu) you can enable the `AppArmor` feature gate uncommenting the lines with the comment `# AppArmor-based OS` on top.
* The `kubelet_systemd_hardening`, both with `kubelet_secure_addresses` setup a minimal firewall on the system. To better understand how these variables work, here's an explanatory image:
-![kubelet hardening](../img/kubelet-hardening.png)
+![kubelet hardening](img/kubelet-hardening.png)
Once you have the file properly filled, you can run the **Ansible** command to start the installation:


@@ -2,6 +2,58 @@
Modified from [comments in #3471](https://github.com/kubernetes-sigs/kubespray/issues/3471#issuecomment-530036084)
+## Limitation: Removal of first kube_control_plane and etcd-master
+Currently you can't remove the first node in your kube_control_plane and etcd-master list. If you still want to remove this node you have to:
+### 1) Change order of current control planes
+Modify the order of your control plane list by pushing your first entry to any other position. E.g. if you want to remove `node-1` of the following example:
+```yaml
+  children:
+    kube_control_plane:
+      hosts:
+        node-1:
+        node-2:
+        node-3:
+    kube_node:
+      hosts:
+        node-1:
+        node-2:
+        node-3:
+    etcd:
+      hosts:
+        node-1:
+        node-2:
+        node-3:
+```
+change your inventory to:
+```yaml
+  children:
+    kube_control_plane:
+      hosts:
+        node-2:
+        node-3:
+        node-1:
+    kube_node:
+      hosts:
+        node-2:
+        node-3:
+        node-1:
+    etcd:
+      hosts:
+        node-2:
+        node-3:
+        node-1:
+```
+## 2) Upgrade the cluster
+run `upgrade-cluster.yml` or `cluster.yml`. Now you are good to go on with the removal.
## Adding/replacing a worker node
This should be the easiest.
@@ -48,74 +100,40 @@ crictl ps | grep nginx-proxy | awk '{print $1}' | xargs crictl stop
With the old node still in the inventory, run `remove-node.yml`. You need to pass `-e node=NODE_NAME` to the playbook to limit the execution to the node being removed.
If the node you want to remove is not online, you should add `reset_nodes=false` and `allow_ungraceful_removal=true` to your extra-vars.
-## Adding/Removal of first `kube_control_plane` and etcd-master
-Currently you can't remove the first node in your `kube_control_plane` and etcd-master list. If you still want to remove this node you have to:
-### 1) Change order of current control planes
-Modify the order of your control plane list by pushing your first entry to any other position. E.g. if you want to remove `node-1` of the following example:
-```yaml
-all:
-  hosts:
-  children:
-    kube_control_plane:
-      hosts:
-        node-1:
-        node-2:
-        node-3:
-    kube_node:
-      hosts:
-        node-1:
-        node-2:
-        node-3:
-    etcd:
-      hosts:
-        node-1:
-        node-2:
-        node-3:
+## Replacing a first control plane node
+### 1) Change control plane nodes order in inventory
+from
+```ini
+[kube_control_plane]
+node-1
+node-2
+node-3
```
-change your inventory to:
-```yaml
-all:
-  hosts:
-  children:
-    kube_control_plane:
-      hosts:
-        node-2:
-        node-3:
-        node-1:
-    kube_node:
-      hosts:
-        node-2:
-        node-3:
-        node-1:
-    etcd:
-      hosts:
-        node-2:
-        node-3:
-        node-1:
+to
+```ini
+[kube_control_plane]
+node-2
+node-3
+node-1
```
-### 2) Upgrade the cluster
-run `upgrade-cluster.yml` or `cluster.yml`. Now you are good to go on with the removal.
-### 3) Remove old first control plane node from cluster
+### 2) Remove old first control plane node from cluster
With the old node still in the inventory, run `remove-node.yml`. You need to pass `-e node=node-1` to the playbook to limit the execution to the node being removed.
If the node you want to remove is not online, you should add `reset_nodes=false` and `allow_ungraceful_removal=true` to your extra-vars.
-### 4) Edit cluster-info configmap in kube-public namespace
+### 3) Edit cluster-info configmap in kube-public namespace
`kubectl edit cm -n kube-public cluster-info`
Change ip of old kube_control_plane node with ip of live kube_control_plane node (`server` field). Also, update `certificate-authority-data` field if you changed certs.
-### 5) Add new control plane node
+### 4) Add new control plane node
Update inventory (if needed)
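The reordering in step 1 of both variants above is just a rotation of the control plane list, so that a different host becomes the "first" entry. A minimal illustrative sketch (the helper name is ours, not part of Kubespray):

```python
def rotate_first_control_plane(nodes):
    """Move the first control plane entry to the end of the list so that
    another node becomes the 'first' control plane."""
    if len(nodes) < 2:
        raise ValueError("need at least two control plane nodes")
    return nodes[1:] + nodes[:1]

print(rotate_first_control_plane(["node-1", "node-2", "node-3"]))
# → ['node-2', 'node-3', 'node-1']
```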


@@ -13,6 +13,7 @@ versions. Here are all version vars for each component:
* etcd_version
* calico_version
* calico_cni_version
+* weave_version
* flannel_version
> **Warning**
@@ -355,7 +356,7 @@ follows:
* Containerd
* etcd
* kubelet and kube-proxy
-* network_plugin (such as Calico)
+* network_plugin (such as Calico or Weave)
* kube-apiserver, kube-scheduler, and kube-controller-manager
* Add-ons (such as KubeDNS)


@@ -2,7 +2,7 @@
namespace: kubernetes_sigs
description: Deploy a production ready Kubernetes cluster
name: kubespray
-version: 2.29.2
+version: 2.28.2
readme: README.md
authors:
- The Kubespray maintainers (https://kubernetes.slack.com/channels/kubespray)


@@ -115,9 +115,6 @@ no_proxy_exclude_workers: false
# sysctl_file_path to add sysctl conf to
# sysctl_file_path: "/etc/sysctl.d/99-sysctl.conf"
-# ignore sysctl errors about unknown keys
-# sysctl_ignore_unknown_keys: false
## Variables for webhook token auth https://kubernetes.io/docs/reference/access-authn-authz/authentication/#webhook-token-authentication
kube_webhook_token_auth: false
kube_webhook_token_auth_url_skip_tls_verify: false


@@ -50,8 +50,6 @@
# - host: https://registry-1.docker.io
# capabilities: ["pull", "resolve"]
# skip_verify: false
-# header:
-# Authorization: "Basic XXX"
# containerd_max_container_log_line_size: 16384


@@ -0,0 +1,38 @@
+---
+## Etcd auto compaction retention for mvcc key value store in hour
+# etcd_compaction_retention: 0
+## Set level of detail for etcd exported metrics, specify 'extensive' to include histogram metrics.
+# etcd_metrics: basic
+## Etcd is restricted by default to 512M on systems under 4GB RAM, 512MB is not enough for much more than testing.
+## Set this if your etcd nodes have less than 4GB but you want more RAM for etcd. Set to 0 for unrestricted RAM.
+## This value is only relevant when deploying etcd with `etcd_deployment_type: docker`
+# etcd_memory_limit: "512M"
+## Etcd has a default of 2G for its space quota. If you put a value in etcd_memory_limit which is less than
+## etcd_quota_backend_bytes, you may encounter out of memory terminations of the etcd cluster. Please check
+## etcd documentation for more information.
+# 8G is a suggested maximum size for normal environments and etcd warns at startup if the configured value exceeds it.
+# etcd_quota_backend_bytes: "2147483648"
+# Maximum client request size in bytes the server will accept.
+# etcd is designed to handle small key value pairs typical for metadata.
+# Larger requests will work, but may increase the latency of other requests
+# etcd_max_request_bytes: "1572864"
+### ETCD: disable peer client cert authentication.
+# This affects ETCD_PEER_CLIENT_CERT_AUTH variable
+# etcd_peer_client_auth: true
+## Enable distributed tracing
+## To enable this experimental feature, set the etcd_experimental_enable_distributed_tracing: true, along with the
+## etcd_experimental_distributed_tracing_sample_rate to choose how many samples to collect per million spans,
+## the default sampling rate is 0 https://etcd.io/docs/v3.5/op-guide/monitoring/#distributed-tracing
+# etcd_experimental_enable_distributed_tracing: false
+# etcd_experimental_distributed_tracing_sample_rate: 100
+# etcd_experimental_distributed_tracing_address: "localhost:4317"
+# etcd_experimental_distributed_tracing_service_name: etcd
+## The interval for etcd watch progress notify events
+# etcd_experimental_watch_progress_notify_interval: 5s
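The comments in this file warn that an `etcd_memory_limit` smaller than `etcd_quota_backend_bytes` risks out-of-memory terminations of etcd. A quick sanity check of that relation (illustrative helper only; Kubespray performs no such check by this name):

```python
def memory_limit_accommodates_quota(memory_limit, quota_backend_bytes):
    """True if the etcd container memory limit is unrestricted, or at least as
    large as the backend space quota, per the comments in the defaults file."""
    units = {"M": 1024 ** 2, "G": 1024 ** 3}
    if memory_limit in (None, "", "0"):
        return True  # 0 means unrestricted RAM
    value, unit = memory_limit[:-1], memory_limit[-1]
    return int(value) * units[unit] >= quota_backend_bytes

# The commented defaults above: a 512M limit cannot hold the 2G default quota.
print(memory_limit_accommodates_quota("512M", 2147483648))  # → False
print(memory_limit_accommodates_quota("4G", 2147483648))    # → True
```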


@@ -29,6 +29,7 @@ local_path_provisioner_enabled: false
# local_path_provisioner_claim_root: /opt/local-path-provisioner/
# local_path_provisioner_debug: false
# local_path_provisioner_image_repo: "{{ docker_image_repo }}/rancher/local-path-provisioner"
+# local_path_provisioner_image_tag: "v0.0.24"
# local_path_provisioner_helper_image_repo: "busybox"
# local_path_provisioner_helper_image_tag: "latest"


@@ -62,7 +62,7 @@ credentials_dir: "{{ inventory_dir }}/credentials"
# kube_webhook_authorization_url: https://...
# kube_webhook_authorization_url_skip_tls_verify: false
-# Choose network plugin (cilium, calico, kube-ovn or flannel. Use cni for generic cni plugin)
+# Choose network plugin (cilium, calico, kube-ovn, weave or flannel. Use cni for generic cni plugin)
# Can also be set to 'cloud', which lets the cloud provider setup appropriate routing
kube_network_plugin: calico
@@ -346,7 +346,7 @@ event_ttl_duration: "1h0m0s"
## Automatically renew K8S control plane certificates on first Monday of each month
auto_renew_certificates: false
# First Monday of each month
-# auto_renew_certificates_systemd_calendar: "Mon *-*-1,2,3,4,5,6,7 03:00:00"
+# auto_renew_certificates_systemd_calendar: "Mon *-*-1,2,3,4,5,6,7 03:{{ groups['kube_control_plane'].index(inventory_hostname) }}0:00"
kubeadm_patches_dir: "{{ kube_config_dir }}/patches"
kubeadm_patches: []
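The staggered variant of `auto_renew_certificates_systemd_calendar` uses each host's index in the `kube_control_plane` group to give it a distinct ten-minute renewal slot, so the control planes never renew certificates at the same moment. The Jinja expression rendered in plain Python (illustrative only):

```python
def renew_calendar(control_plane_hosts, hostname):
    """Render the systemd OnCalendar value produced by the Jinja template:
    minute slot = 10 * index of the host in the control plane group."""
    index = control_plane_hosts.index(hostname)
    return "Mon *-*-1,2,3,4,5,6,7 03:{}0:00".format(index)

hosts = ["cp-0", "cp-1", "cp-2"]
for host in hosts:
    print(host, renew_calendar(hosts, host))
# cp-0 renews at 03:00, cp-1 at 03:10, cp-2 at 03:20.
```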


@@ -25,9 +25,15 @@ calico_pool_blocksize: 26
# add default ippool CIDR (must be inside kube_pods_subnet, defaults to kube_pods_subnet otherwise)
# calico_pool_cidr: 1.2.3.4/5
+# add default ippool CIDR to CNI config
+# calico_cni_pool: true
# Add default IPV6 IPPool CIDR. Must be inside kube_pods_subnet_ipv6. Defaults to kube_pods_subnet_ipv6 if not set.
# calico_pool_cidr_ipv6: fd85:ee78:d8a6:8607::1:0000/112
+# Add default IPV6 IPPool CIDR to CNI config
+# calico_cni_pool_ipv6: true
# Global as_num (/calico/bgp/v1/global/as_num)
# global_as_num: "64512"


@@ -56,8 +56,8 @@ cilium_l2announcements: false
#
# Only effective when monitor aggregation is set to "medium" or higher.
# cilium_monitor_aggregation_flags: "all"
-# Kube Proxy Replacement mode (true/false)
-# cilium_kube_proxy_replacement: false
+# Kube Proxy Replacement mode (strict/partial)
+# cilium_kube_proxy_replacement: partial
# If upgrading from Cilium < 1.5, you may want to override some of these options
# to prevent service disruptions. See also:
@@ -384,7 +384,3 @@ cilium_l2announcements: false
# resourceNames:
# - toto
# cilium_clusterrole_rules_operator_extra_vars: []
-# Cilium extra values, use any values from cilium Helm Chart
-# ref: https://docs.cilium.io/en/stable/helm-reference/
-# cilium_extra_values: {}


@@ -0,0 +1,64 @@
+# see roles/network_plugin/weave/defaults/main.yml
+# Weave's network password for encryption, if null then no network encryption.
+# weave_password: ~
+# If set to 1, disable checking for new Weave Net versions (default is blank,
+# i.e. check is enabled)
+# weave_checkpoint_disable: false
+# Soft limit on the number of connections between peers. Defaults to 100.
+# weave_conn_limit: 100
+# Weave Net defaults to enabling hairpin on the bridge side of the veth pair
+# for containers attached. If you need to disable hairpin, e.g. your kernel is
+# one of those that can panic if hairpin is enabled, then you can disable it by
+# setting `HAIRPIN_MODE=false`.
+# weave_hairpin_mode: true
+# The range of IP addresses used by Weave Net and the subnet they are placed in
+# (CIDR format; default 10.32.0.0/12)
+# weave_ipalloc_range: "{{ kube_pods_subnet }}"
+# Set to 0 to disable Network Policy Controller (default is on)
+# weave_expect_npc: "{{ enable_network_policy }}"
+# List of addresses of peers in the Kubernetes cluster (default is to fetch the
+# list from the api-server)
+# weave_kube_peers: ~
+# Set the initialization mode of the IP Address Manager (defaults to consensus
+# amongst the KUBE_PEERS)
+# weave_ipalloc_init: ~
+# Set the IP address used as a gateway from the Weave network to the host
+# network - this is useful if you are configuring the addon as a static pod.
+# weave_expose_ip: ~
+# Address and port that the Weave Net daemon will serve Prometheus-style
+# metrics on (defaults to 0.0.0.0:6782)
+# weave_metrics_addr: ~
+# Address and port that the Weave Net daemon will serve status requests on
+# (defaults to disabled)
+# weave_status_addr: ~
+# Weave Net defaults to 1376 bytes, but you can set a smaller size if your
+# underlying network has a tighter limit, or set a larger size for better
+# performance if your network supports jumbo frames (e.g. 8916)
+# weave_mtu: 1376
+# Set to 1 to preserve the client source IP address when accessing Service
+# annotated with `service.spec.externalTrafficPolicy=Local`. The feature works
+# only with Weave IPAM (default).
+# weave_no_masq_local: true
+# set to nft to use nftables backend for iptables (default is iptables)
+# weave_iptables_backend: iptables
+# Extra variables that passing to launch.sh, useful for enabling seed mode, see
+# https://www.weave.works/docs/net/latest/tasks/ipam/ipam/
+# weave_extra_args: ~
+# Extra variables for weave_npc that passing to launch.sh, useful for change log level, ex --log-level=error
+# weave_npc_extra_args: ~


@@ -1,2 +1,2 @@
---
-requires_ansible: ">=2.17.3"
+requires_ansible: '>=2.16.4'


@@ -1,4 +1,4 @@
-# Use immutable image tags rather than mutable tags (like ubuntu:22.04)
+# Use imutable image tags rather than mutable tags (like ubuntu:22.04)
FROM ubuntu:jammy-20230308
# Some tools like yamllint need this
# Pip needs this as well at the moment to install ansible
@@ -47,8 +47,8 @@ RUN update-alternatives --install /usr/bin/python python /usr/bin/python3 1 \
  && pip install --no-compile --no-cache-dir pip -U \
  && pip install --no-compile --no-cache-dir -r tests/requirements.txt \
  && pip install --no-compile --no-cache-dir -r requirements.txt \
-  && curl -L https://dl.k8s.io/release/v1.33.7/bin/linux/$(dpkg --print-architecture)/kubectl -o /usr/local/bin/kubectl \
-  && echo $(curl -L https://dl.k8s.io/release/v1.33.7/bin/linux/$(dpkg --print-architecture)/kubectl.sha256) /usr/local/bin/kubectl | sha256sum --check \
+  && curl -L https://dl.k8s.io/release/v1.32.11/bin/linux/$(dpkg --print-architecture)/kubectl -o /usr/local/bin/kubectl \
+  && echo $(curl -L https://dl.k8s.io/release/v1.32.11/bin/linux/$(dpkg --print-architecture)/kubectl.sha256) /usr/local/bin/kubectl | sha256sum --check \
  && chmod a+x /usr/local/bin/kubectl \
  # Install Vagrant
  && curl -LO https://releases.hashicorp.com/vagrant/${VAGRANT_VERSION}/vagrant_${VAGRANT_VERSION}-1_$(dpkg --print-architecture).deb \
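The kubectl download in the Dockerfile follows the usual dl.k8s.io pattern: fetch the binary, fetch its published `.sha256`, and fail the build if the digests disagree. The same verification step in Python (illustrative; the digest here is computed over an in-memory stand-in, not a real kubectl binary):

```python
import hashlib

def checksum_matches(blob, expected_hex):
    """Return True if blob's SHA-256 digest equals the published .sha256 value."""
    return hashlib.sha256(blob).hexdigest() == expected_hex.strip()

blob = b"fake kubectl binary"
published = hashlib.sha256(blob).hexdigest()  # stand-in for the .sha256 file
print(checksum_matches(blob, published))        # → True
print(checksum_matches(b"tampered", published)) # → False
```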


@@ -5,8 +5,8 @@
  become: false
  run_once: true
  vars:
-    minimal_ansible_version: 2.17.3
-    maximal_ansible_version: 2.18.0
+    minimal_ansible_version: 2.16.4
+    maximal_ansible_version: 2.17.0
  tags: always
  tasks:
    - name: "Check {{ minimal_ansible_version }} <= Ansible version < {{ maximal_ansible_version }}"


@@ -6,12 +6,35 @@
# - to ensure we keep compatibility with old style group names
# - to reduce inventory boilerplate (defining parent groups / empty groups)
-- name: Inventory setup and validation
+- name: Define groups for legacy less structured inventories
+  hosts: all
+  gather_facts: false
+  tags: always
+  tasks:
+    - name: Match needed groups by their old names or definition
+      vars:
+        group_mappings:
+          kube_control_plane:
+            - kube-master
+          kube_node:
+            - kube-node
+          calico_rr:
+            - calico-rr
+          no_floating:
+            - no-floating
+          k8s_cluster:
+            - kube_node
+            - kube_control_plane
+            - calico_rr
+      group_by:
+        key: "{{ (group_names | intersect(item.value) | length > 0) | ternary(item.key, '_all') }}"
+      loop: "{{ group_mappings | dict2items }}"
+- name: Check inventory settings
  hosts: all
  gather_facts: false
  tags: always
  roles:
-    - dynamic_groups
    - validate_inventory
- name: Install bastion ssh config

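The `group_by` task above maps legacy hyphenated group names onto the new underscore-style groups, and composes `k8s_cluster` out of those new groups. A rough Python model of the mapping (assuming `group_names` is re-evaluated as groups are added, which is how the composed `k8s_cluster` entry can match; this is our reading, not code from the playbook):

```python
def map_legacy_groups(host_groups, group_mappings):
    """Return the new-style groups a host ends up in, given its original groups."""
    groups = set(host_groups)
    changed = True
    while changed:  # iterate to a fixpoint so composed groups like k8s_cluster resolve
        changed = False
        for new_group, members in group_mappings.items():
            if new_group not in groups and groups & set(members):
                groups.add(new_group)
                changed = True
    return groups - set(host_groups)

group_mappings = {
    "kube_control_plane": ["kube-master"],
    "kube_node": ["kube-node"],
    "calico_rr": ["calico-rr"],
    "no_floating": ["no-floating"],
    "k8s_cluster": ["kube_node", "kube_control_plane", "calico_rr"],
}

# A host from an old-style inventory that only appeared in "kube-master"
print(sorted(map_legacy_groups(["kube-master"], group_mappings)))
# → ['k8s_cluster', 'kube_control_plane']
```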

@@ -3,7 +3,7 @@
import_playbook: boilerplate.yml import_playbook: boilerplate.yml
- name: Gather facts - name: Gather facts
import_playbook: internal_facts.yml import_playbook: facts.yml
- name: Prepare for etcd install - name: Prepare for etcd install
hosts: k8s_cluster:etcd hosts: k8s_cluster:etcd
@@ -38,7 +38,7 @@
environment: "{{ proxy_disable_env }}" environment: "{{ proxy_disable_env }}"
roles: roles:
- { role: kubespray_defaults } - { role: kubespray_defaults }
- { role: kubernetes/control-plane, tags: control-plane } - { role: kubernetes/control-plane, tags: master }
- { role: kubernetes/client, tags: client } - { role: kubernetes/client, tags: client }
- { role: kubernetes-apps/cluster_roles, tags: cluster-roles } - { role: kubernetes-apps/cluster_roles, tags: cluster-roles }
@@ -52,7 +52,11 @@
     - { role: kubernetes/kubeadm, tags: kubeadm}
     - { role: kubernetes/node-label, tags: node-label }
     - { role: kubernetes/node-taint, tags: node-taint }
-    - { role: kubernetes-apps/common_crds }
+    - role: kubernetes-apps/gateway_api
+      when: gateway_api_enabled
+      tags: gateway_api
+      delegate_to: "{{ groups['kube_control_plane'][0] }}"
+      run_once: true
     - { role: network_plugin, tags: network }
 - name: Install Calico Route Reflector
@@ -71,7 +75,7 @@
   environment: "{{ proxy_disable_env }}"
   roles:
     - { role: kubespray_defaults }
-    - { role: win_nodes/kubernetes_patch, tags: ["control-plane", "win_nodes"] }
+    - { role: win_nodes/kubernetes_patch, tags: ["master", "win_nodes"] }
 - name: Install Kubernetes apps
   hosts: kube_control_plane
@@ -81,6 +85,7 @@
   roles:
     - { role: kubespray_defaults }
     - { role: kubernetes-apps/external_cloud_controller, tags: external-cloud-controller }
+    - { role: kubernetes-apps/network_plugin, tags: network }
     - { role: kubernetes-apps/policy_controller, tags: policy-controller }
     - { role: kubernetes-apps/ingress_controller, tags: ingress-controller }
     - { role: kubernetes-apps/external_provisioner, tags: external-provisioner }


@@ -1,6 +1,39 @@
 ---
-- name: Common tasks for every playbooks
-  import_playbook: boilerplate.yml
+- name: Bootstrap hosts for Ansible
+  hosts: k8s_cluster:etcd:calico_rr
+  strategy: linear
+  any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
+  gather_facts: false
+  environment: "{{ proxy_disable_env }}"
+  roles:
+    - { role: bootstrap_os, tags: bootstrap_os}
 - name: Gather facts
-  import_playbook: internal_facts.yml
+  hosts: k8s_cluster:etcd:calico_rr
+  gather_facts: false
+  tags: always
+  tasks:
+    - name: Gather and compute network facts
+      import_role:
+        name: network_facts
+    - name: Gather minimal facts
+      setup:
+        gather_subset: '!all'
+    # filter match the following variables:
+    # ansible_default_ipv4
+    # ansible_default_ipv6
+    # ansible_all_ipv4_addresses
+    # ansible_all_ipv6_addresses
+    - name: Gather necessary facts (network)
+      setup:
+        gather_subset: '!all,!min,network'
+        filter: "ansible_*_ipv[46]*"
+    # filter match the following variables:
+    # ansible_memtotal_mb
+    # ansible_swaptotal_mb
+    - name: Gather necessary facts (hardware)
+      setup:
+        gather_subset: '!all,!min,hardware'
+        filter: "ansible_*total_mb"


@@ -1,39 +0,0 @@
---
- name: Bootstrap hosts for Ansible
hosts: k8s_cluster:etcd:calico_rr
strategy: linear
any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
gather_facts: false
environment: "{{ proxy_disable_env }}"
roles:
- { role: bootstrap_os, tags: bootstrap_os}
- name: Gather facts
hosts: k8s_cluster:etcd:calico_rr
gather_facts: false
tags: always
tasks:
- name: Gather and compute network facts
import_role:
name: network_facts
- name: Gather minimal facts
setup:
gather_subset: '!all'
# filter match the following variables:
# ansible_default_ipv4
# ansible_default_ipv6
# ansible_all_ipv4_addresses
# ansible_all_ipv6_addresses
- name: Gather necessary facts (network)
setup:
gather_subset: '!all,!min,network'
filter: "ansible_*_ipv[46]*"
# filter match the following variables:
# ansible_memtotal_mb
# ansible_swaptotal_mb
- name: Gather necessary facts (hardware)
setup:
gather_subset: '!all,!min,hardware'
filter: "ansible_*total_mb"


@@ -1,8 +1,6 @@
 ---
 - name: Validate nodes for removal
   hosts: localhost
-  gather_facts: false
-  become: false
   tasks:
     - name: Assert that nodes are specified for removal
       assert:
@@ -32,7 +30,7 @@
       when: pause_result.user_input | default('yes') != 'yes'
 - name: Gather facts
-  import_playbook: internal_facts.yml
+  import_playbook: facts.yml
   when: reset_nodes | default(True) | bool
 - name: Reset node


@@ -3,7 +3,7 @@
   import_playbook: boilerplate.yml
 - name: Gather facts
-  import_playbook: internal_facts.yml
+  import_playbook: facts.yml
 - name: Reset cluster
   hosts: etcd:k8s_cluster:calico_rr


@@ -3,7 +3,7 @@
   import_playbook: boilerplate.yml
 - name: Gather facts
-  import_playbook: internal_facts.yml
+  import_playbook: facts.yml
 - name: Install etcd
   vars:


@@ -3,7 +3,7 @@
   import_playbook: boilerplate.yml
 - name: Gather facts
-  import_playbook: internal_facts.yml
+  import_playbook: facts.yml
 - name: Download images to ansible host cache via first kube_control_plane node
   hosts: kube_control_plane[0]
@@ -73,6 +73,7 @@
     - { role: kubespray_defaults }
     - { role: kubernetes-apps/external_cloud_controller, tags: external-cloud-controller }
     - { role: network_plugin, tags: network }
+    - { role: kubernetes-apps/network_plugin, tags: network }
     - { role: kubernetes-apps/policy_controller, tags: policy-controller }
 - name: Finally handle worker upgrades, based on given batch size


@@ -1,3 +0,0 @@
---
- name: Remove node
ansible.builtin.import_playbook: playbooks/remove_node.yml


@@ -1,6 +1,6 @@
-ansible==10.7.0
+ansible==9.13.0
 # Needed for community.crypto module
-cryptography==46.0.2
+cryptography==45.0.2
 # Needed for jinja2 json_query templating
 jmespath==1.0.1
 # Needed for ansible.utils.ipaddr


@@ -3,8 +3,8 @@ role_name_check: 1
 dependency:
   name: galaxy
 platforms:
-  - name: ubuntu22
-    cloud_image: ubuntu-2204
+  - name: ubuntu20
+    cloud_image: ubuntu-2004
     vm_cpu_cores: 1
     vm_memory: 512
 provisioner:


@@ -4,7 +4,7 @@ dependency:
   name: galaxy
 platforms:
   - name: bastion-01
-    cloud_image: ubuntu-2204
+    cloud_image: ubuntu-2004
     vm_cpu_cores: 1
     vm_memory: 512
 provisioner:


@@ -3,12 +3,12 @@ role_name_check: 1
 dependency:
   name: galaxy
 platforms:
-  - name: ubuntu22
-    cloud_image: ubuntu-2204
+  - name: ubuntu20
+    cloud_image: ubuntu-2004
     vm_cpu_cores: 1
     vm_memory: 512
-  - name: ubuntu24
-    cloud_image: ubuntu-2404
+  - name: ubuntu22
+    cloud_image: ubuntu-2204
     vm_cpu_cores: 1
     vm_memory: 512
   - name: almalinux9


@@ -1,3 +0,0 @@
---
- name: Import Centos boostrap for Alma Linux
import_tasks: centos.yml


@@ -17,6 +17,32 @@
   when: not skip_http_proxy_on_os_packages
+# For Oracle Linux install public repo
+- name: Download Oracle Linux public yum repo
+  get_url:
+    url: https://yum.oracle.com/public-yum-ol7.repo
+    dest: /etc/yum.repos.d/public-yum-ol7.repo
+    mode: "0644"
+  when:
+    - use_oracle_public_repo | default(true)
+    - '''ID="ol"'' in os_release.stdout_lines'
+    - (ansible_distribution_version | float) < 7.6
+  environment: "{{ proxy_env }}"
+- name: Enable Oracle Linux repo
+  community.general.ini_file:
+    dest: /etc/yum.repos.d/public-yum-ol7.repo
+    section: "{{ item }}"
+    option: enabled
+    value: "1"
+    mode: "0644"
+  with_items:
+    - ol7_latest
+    - ol7_addons
+    - ol7_developer_EPEL
+  when:
+    - use_oracle_public_repo | default(true)
+    - '''ID="ol"'' in os_release.stdout_lines'
+    - (ansible_distribution_version | float) < 7.6
 - name: Install EPEL for Oracle Linux repo package
   package:


@@ -0,0 +1,16 @@
---
# ClearLinux ships with Python installed
- name: Install basic package to run containers
package:
name: containers-basic
state: present
- name: Make sure docker service is enabled
systemd_service:
name: docker
masked: false
enabled: true
daemon_reload: true
state: started
become: true


@@ -50,3 +50,15 @@
   become: true
   when:
     - need_bootstrap.rc != 0
+- name: Update Apt cache
+  raw: apt-get update --allow-releaseinfo-change
+  become: true
+  when:
+    - os_release_dict['ID'] == 'debian'
+    - os_release_dict['VERSION_ID'] in ["10", "11"]
+  register: bootstrap_update_apt_result
+  changed_when:
+    - '"changed its" in bootstrap_update_apt_result.stdout'
+    - '"value from" in bootstrap_update_apt_result.stdout'
+  ignore_errors: true


@@ -16,7 +16,8 @@
   tags:
     - facts
   with_first_found:
-    - files: &search_files
+    - &search
+      files:
       - "{{ os_release_dict['ID'] }}-{{ os_release_dict['VARIANT_ID'] }}.yml"
       - "{{ os_release_dict['ID'] }}.yml"
       paths:
@@ -25,8 +26,8 @@
 - name: Include tasks
   include_tasks: "{{ included_tasks_file }}"
   with_first_found:
-    - files: *search_files
-      skip: true
+    - <<: *search
+      paths: []
   loop_control:
     loop_var: included_tasks_file
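The change from `- files: &search_files` to a whole-item anchor with `<<: *search` matters because a plain alias can only reuse one node verbatim, while a merge key copies all of the anchored mapping's keys and lets the second task override some of them (here, `paths: []`). A rough Python analogy of merge-key semantics, using dict unpacking as the stand-in (not PyYAML itself):

```python
# Rough analogy for YAML anchors and merge keys:
#   - &search         ~ search = {...}
#   - *search         ~ reuse the exact same node; no overrides possible
#   - <<: *search     ~ {**search, ...}: copy every key, then override some
search = {
    "files": ["{ID}-{VARIANT_ID}.yml", "{ID}.yml"],
    "paths": ["vars/"],
}

# "<<: *search" with an extra "paths" key behaves like this dict merge:
merged = {**search, "paths": []}

print(merged["files"])  # the shared file list survives the merge
print(merged["paths"])  # the local override wins: []
```

This is why the old `*search_files` alias could only share the file list itself, while the merge key shares the whole search spec and still allows a per-task `paths`.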


@@ -55,3 +55,31 @@
   become: true
   tags:
     - facts
+# Without this package, the get_url module fails when trying to handle https
+- name: Install python-cryptography
+  community.general.zypper:
+    name: python-cryptography
+    state: present
+    update_cache: true
+  become: true
+  when:
+    - ansible_distribution_version is version('15.4', '<')
+- name: Install python3-cryptography
+  community.general.zypper:
+    name: python3-cryptography
+    state: present
+    update_cache: true
+  become: true
+  when:
+    - ansible_distribution_version is version('15.4', '>=')
+# Nerdctl needs some basic packages to get an environment up
+- name: Install basic dependencies
+  community.general.zypper:
+    name:
+      - iptables
+      - apparmor-parser
+    state: present
+  become: true
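The `version('15.4', '<')` test above selects `python-cryptography` on older openSUSE Leap releases and `python3-cryptography` from 15.4 on. The point of a version-aware test is that it compares component-wise rather than lexicographically; a hand-rolled sketch (`version_lt` is a hypothetical helper, not Ansible's implementation):

```python
def version_lt(a, b):
    """Component-wise version comparison: '15.10' sorts after '15.4'."""
    return [int(x) for x in a.split(".")] < [int(x) for x in b.split(".")]

# Plain string comparison would get '15.10' < '15.4' wrong;
# numeric components compare 10 vs 4 and get it right.
assert version_lt("15.3", "15.4")
assert not version_lt("15.10", "15.4")
assert "15.10" < "15.4"  # the lexicographic trap
```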


@@ -1,3 +0,0 @@
---
- name: Import Centos boostrap for Rocky Linux
import_tasks: centos.yml


@@ -64,8 +64,7 @@ containerd_registries_mirrors:
       skip_verify: false
       # ca: ["/etc/certs/mirror.pem"]
       # client: [["/etc/certs/client.pem", ""],["/etc/certs/client.cert", "/etc/certs/client.key"]]
-      # header:
-      #   Authorization: "Basic XXX"
 containerd_max_container_log_line_size: 16384
 # If enabled it will allow non root users to use port numbers <1024
@@ -84,15 +83,6 @@ containerd_cfg_dir: /etc/containerd
 # Extra config to be put in {{ containerd_cfg_dir }}/config.toml literally
 containerd_extra_args: ''
-# Extra runtime configuration options to be injected into the containerd CRI runtime plugin section
-# [plugins."io.containerd.cri.v1.runtime"]. This is useful for adding containerd runtime
-# configuration options that aren't explicitly supported by Kubespray's default variables.
-# Example:
-# containerd_extra_runtime_args:
-#   device_ownership_from_security_context: true
-#   another_option: "value"
-containerd_extra_runtime_args: {}
 # Configure registry auth (if applicable to secure/insecure registries)
 containerd_registry_auth: []
 # - registry: 10.0.0.2:5000


@@ -1,16 +1,16 @@
 ---
 role_name_check: 1
 platforms:
-  - cloud_image: ubuntu-2404
-    name: ubuntu24
+  - cloud_image: ubuntu-2004
+    name: ubuntu20
     vm_cpu_cores: 1
     vm_memory: 1024
     node_groups:
       - kube_control_plane
       - kube_node
       - k8s_cluster
-  - cloud_image: debian-12
-    name: debian12
+  - cloud_image: debian-11
+    name: debian11
     vm_cpu_cores: 1
     vm_memory: 1024
     node_groups:
@@ -35,6 +35,5 @@ provisioner:
   timeout: 120
   playbooks:
     create: ../../../../../tests/cloud_playbooks/create-kubevirt.yml
-    prepare: ../../../molecule/prepare.yml
 verifier:
-  name: ansible
+  name: testinfra


@@ -0,0 +1,30 @@
---
- name: Prepare
hosts: all
gather_facts: false
become: true
vars:
ignore_assert_errors: true
roles:
- role: kubespray_defaults
- role: bootstrap_os
- role: network_facts
- role: kubernetes/preinstall
- role: adduser
user: "{{ addusers.kube }}"
tasks:
- name: Download CNI
include_tasks: "../../../../download/tasks/download_file.yml"
vars:
download: "{{ download_defaults | combine(downloads.cni) }}"
- name: Prepare CNI
hosts: all
gather_facts: false
become: true
vars:
ignore_assert_errors: true
kube_network_plugin: cni
roles:
- role: kubespray_defaults
- role: network_plugin/cni


@@ -0,0 +1,55 @@
import os
import pytest
import testinfra.utils.ansible_runner
testinfra_hosts = testinfra.utils.ansible_runner.AnsibleRunner(
os.environ['MOLECULE_INVENTORY_FILE']).get_hosts('all')
def test_service(host):
svc = host.service("containerd")
assert svc.is_running
assert svc.is_enabled
def test_version(host):
crictl = "/usr/local/bin/crictl"
path = "unix:///var/run/containerd/containerd.sock"
with host.sudo():
cmd = host.command(crictl + " --runtime-endpoint " + path + " version")
assert cmd.rc == 0
assert "RuntimeName: containerd" in cmd.stdout
@pytest.mark.parametrize('image, dest', [
('quay.io/kubespray/hello-world:latest', '/tmp/hello-world.tar')
])
def test_image_pull_save_load(host, image, dest):
nerdctl = "/usr/local/bin/nerdctl"
dest_file = host.file(dest)
with host.sudo():
pull_cmd = host.command(nerdctl + " pull " + image)
assert pull_cmd.rc ==0
with host.sudo():
save_cmd = host.command(nerdctl + " save -o " + dest + " " + image)
assert save_cmd.rc == 0
assert dest_file.exists
with host.sudo():
load_cmd = host.command(nerdctl + " load < " + dest)
assert load_cmd.rc == 0
@pytest.mark.parametrize('image', [
('quay.io/kubespray/hello-world:latest')
])
def test_run(host, image):
nerdctl = "/usr/local/bin/nerdctl"
with host.sudo():
cmd = host.command(nerdctl + " -n k8s.io run " + image)
assert cmd.rc == 0
assert "Hello from Docker" in cmd.stdout


@@ -1,39 +0,0 @@
---
- name: Test containerd CRI
import_playbook: ../../../molecule/test_cri.yml
vars:
container_manager: containerd
cri_socket: unix:///var/run/containerd/containerd.sock
cri_name: containerd
- name: Test nerdctl
hosts: all
gather_facts: false
become: true
tasks:
- name: Get kubespray defaults
import_role:
name: ../../../../../kubespray_defaults
- name: Test nerdctl commands
command: "{{ bin_dir }}/nerdctl {{ item | join(' ') }}"
vars:
image: quay.io/kubespray/hello-world:latest
loop:
- - pull
- "{{ image }}"
- - save
- -o
- /tmp/hello-world.tar
- "{{ image }}"
- - load
- -i
- /tmp/hello-world.tar
- - -n
- k8s.io
- run
- "{{ image }}"
register: nerdctl
- name: Check log from running a container
assert:
that:
- ('Hello from Docker' in nerdctl.results[3].stdout)


@@ -71,8 +71,6 @@
   notify: Restart containerd
 - name: Containerd | Configure containerd registries
-  # mirror configuration can contain sensitive information on headers configuration
-  no_log: "{{ not (unsafe_show_logs | bool) }}"
   block:
     - name: Containerd | Create registry directories
       file:


@@ -36,15 +36,6 @@ oom_score = {{ containerd_oom_score }}
 enable_cdi = true
 cdi_spec_dirs = ["/etc/cdi", "/var/run/cdi"]
 {% endif %}
-{% for key, value in containerd_extra_runtime_args.items() %}
-{% if value is string %}
-{{ key }} = "{{ value }}"
-{% elif value is boolean %}
-{{ key }} = {{ value | lower }}
-{% else %}
-{{ key }} = {{ value }}
-{% endif %}
-{% endfor %}
 [plugins."io.containerd.cri.v1.runtime".containerd]
   default_runtime_name = "{{ containerd_default_runtime }}"
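The removed loop rendered each `containerd_extra_runtime_args` entry with type-aware formatting: strings quoted, booleans lowercased, and anything else emitted verbatim, so the output is valid TOML. A Python sketch of the same rendering rule (`render_runtime_args` is a hypothetical helper, not part of Kubespray):

```python
def render_runtime_args(args):
    """Render key/value pairs the way the removed config.toml.j2 loop did:
    strings are quoted, booleans lowercased, other values passed through."""
    lines = []
    for key, value in args.items():
        if isinstance(value, bool):  # must check bool first: bool is an int subclass
            lines.append(f"{key} = {str(value).lower()}")
        elif isinstance(value, str):
            lines.append(f'{key} = "{value}"')
        else:
            lines.append(f"{key} = {value}")
    return lines

print(render_runtime_args({
    "device_ownership_from_security_context": True,
    "another_option": "value",
    "some_limit": 42,
}))
```

Checking `bool` before the generic branch mirrors Jinja's separate `is boolean` test; without it, `True` would render as `True` instead of TOML's `true`.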


@@ -10,10 +10,4 @@ server = "{{ item.server | default("https://" + item.prefix) }}"
 {% if mirror.client is defined %}
 client = [{% for pair in mirror.client %}["{{ pair[0] }}", "{{ pair[1] }}"]{% if not loop.last %},{% endif %}{% endfor %}]
 {% endif %}
-{% if mirror.header is defined %}
-[host."{{ mirror.host }}".header]
-{% for key, value in mirror.header.items() %}
-  {{ key }} = ["{{ ([ value ] | flatten ) | join('","') }}"]
-{% endfor %}
-{% endif %}
 {% endfor %}
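The removed header block accepts either a scalar or a list per header and normalizes both into a TOML string array via `([ value ] | flatten) | join('","')`. A Python sketch of that normalization (`render_header` is a hypothetical helper, not part of the template):

```python
def render_header(key, value):
    """Render one registry header the way the removed template block did:
    wrap the value in a list, flatten, and emit a TOML string array."""
    values = value if isinstance(value, list) else [value]
    return '{} = ["{}"]'.format(key, '","'.join(values))

# A scalar and a list both come out as TOML string arrays:
print(render_header("Authorization", "Basic XXX"))
print(render_header("X-Custom", ["a", "b"]))
```

The flatten step is what lets users write `Authorization: "Basic XXX"` without remembering to wrap it in a list.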


@@ -7,8 +7,8 @@ platforms:
     vm_memory: 1024
     node_groups:
       - kube_control_plane
-  - name: ubuntu22
-    cloud_image: ubuntu-2204
+  - name: ubuntu20
+    cloud_image: ubuntu-2004
     vm_cpu_cores: 1
     vm_memory: 1024
     node_groups:
@@ -27,6 +27,5 @@ provisioner:
   become: true
   playbooks:
     create: ../../../../../tests/cloud_playbooks/create-kubevirt.yml
-    prepare: ../../../molecule/prepare.yml
 verifier:
-  name: ansible
+  name: testinfra


@@ -0,0 +1,48 @@
---
- name: Prepare
hosts: all
become: true
roles:
- role: kubespray_defaults
- role: bootstrap_os
- role: adduser
user: "{{ addusers.kube }}"
tasks:
- name: Download CNI
include_tasks: "../../../../download/tasks/download_file.yml"
vars:
download: "{{ download_defaults | combine(downloads.cni) }}"
- name: Prepare container runtime
hosts: all
become: true
vars:
container_manager: containerd
kube_network_plugin: cni
roles:
- role: kubespray_defaults
- role: network_plugin/cni
tasks:
- name: Copy test container files
copy:
src: "{{ item }}"
dest: "/tmp/{{ item }}"
owner: root
mode: "0644"
with_items:
- container.json
- sandbox.json
- name: Create /etc/cni/net.d directory
file:
path: /etc/cni/net.d
state: directory
owner: "{{ kube_owner }}"
mode: "0755"
- name: Setup CNI
copy:
src: "{{ item }}"
dest: "/etc/cni/net.d/{{ item }}"
owner: root
mode: "0644"
with_items:
- 10-mynet.conf


@@ -0,0 +1,19 @@
import os
import testinfra.utils.ansible_runner
testinfra_hosts = testinfra.utils.ansible_runner.AnsibleRunner(
os.environ['MOLECULE_INVENTORY_FILE']).get_hosts('all')
def test_run_pod(host):
run_command = "/usr/local/bin/crictl run --with-pull /tmp/container.json /tmp/sandbox.json"
with host.sudo():
cmd = host.command(run_command)
assert cmd.rc == 0
with host.sudo():
log_f = host.file("/tmp/cri-dockerd1.0.log")
assert log_f.exists
assert b"Hello from Docker" in log_f.content


@@ -1,15 +0,0 @@
---
- name: Test cri-dockerd
import_playbook: ../../../molecule/test_cri.yml
vars:
container_manager: cri-dockerd
cri_socket: unix:///var/run/cri-dockerd.sock
cri_name: docker
- name: Test running a container with docker
import_playbook: ../../../molecule/test_runtime.yml
vars:
container_runtime: docker
# cri-dockerd does not support multiple runtime handler before 0.4.0
# https://github.com/Mirantis/cri-dockerd/pull/350
# TODO: check this when we upgrade cri-dockerd


@@ -4,7 +4,6 @@ crio_cgroup_manager: "{{ kubelet_cgroup_driver | default('systemd') }}"
 crio_conmon: "{{ bin_dir }}/conmon"
 crio_default_runtime: "crun"
 crio_libexec_dir: "/usr/libexec/crio"
-crio_runtime_switch: false
 crio_enable_metrics: false
 crio_log_level: "info"
 crio_metrics_port: "9090"
@@ -45,7 +44,7 @@ crio_root: "/var/lib/containers/storage"
 # The crio_runtimes variable defines a list of OCI compatible runtimes.
 crio_runtimes:
   - name: crun
-    path: "{{ crio_runtime_bin_dir }}/crun"  # Use crun in cri-o distributions, don't use 'crun' role
+    path: "{{ crio_runtime_bin_dir }}/crun"
     type: oci
     root: /run/crun
@@ -112,4 +111,3 @@ crio_default_capabilities:
   - SETPCAP
   - NET_BIND_SERVICE
   - KILL
-crio_additional_mounts: []


@@ -0,0 +1 @@
/usr/share/rhel/secrets:/run/secrets


@@ -1,4 +1,5 @@
 ---
 dependencies:
+  - role: container-engine/crun
   - role: container-engine/crictl
   - role: container-engine/skopeo


@@ -1,10 +1,10 @@
 {
   "metadata": {
-    "name": "{{ container_runtime }}1"
+    "name": "runc1"
   },
   "image": {
     "image": "quay.io/kubespray/hello-world:latest"
   },
-  "log_path": "{{ container_runtime }}1.0.log",
+  "log_path": "runc1.0.log",
   "linux": {}
 }


@@ -1,6 +1,6 @@
 {
   "metadata": {
-    "name": "{{ container_runtime }}1",
+    "name": "runc1",
     "namespace": "default",
     "attempt": 1,
     "uid": "hdishd83djaidwnduwk28bcsb"


@@ -1,8 +1,8 @@
 ---
 role_name_check: 1
 platforms:
-  - name: ubuntu22
-    cloud_image: ubuntu-2204
+  - name: ubuntu20
+    cloud_image: ubuntu-2004
     vm_cpu_cores: 2
     vm_memory: 1024
     node_groups:
@@ -43,6 +43,5 @@ provisioner:
   timeout: 120
   playbooks:
     create: ../../../../../tests/cloud_playbooks/create-kubevirt.yml
-    prepare: ../../../molecule/prepare.yml
 verifier:
-  name: ansible
+  name: testinfra


@@ -6,7 +6,7 @@
   vars:
     ignore_assert_errors: true
   roles:
-    - role: dynamic_groups
+    - role: kubespray_defaults
     - role: bootstrap_os
     - role: network_facts
     - role: kubernetes/preinstall
@@ -14,7 +14,7 @@
       user: "{{ addusers.kube }}"
   tasks:
     - name: Download CNI
-      include_tasks: "../../download/tasks/download_file.yml"
+      include_tasks: "../../../../download/tasks/download_file.yml"
       vars:
         download: "{{ download_defaults | combine(downloads.cni) }}"
@@ -29,15 +29,26 @@
     - role: kubespray_defaults
     - role: network_plugin/cni
   tasks:
+    - name: Copy test container files
+      copy:
+        src: "{{ item }}"
+        dest: "/tmp/{{ item }}"
+        owner: root
+        mode: "0644"
+      with_items:
+        - container.json
+        - sandbox.json
     - name: Create /etc/cni/net.d directory
       file:
         path: /etc/cni/net.d
         state: directory
-        owner: root
+        owner: "{{ kube_owner }}"
         mode: "0755"
-    - name: Config bridge host-local CNI
+    - name: Setup CNI
       copy:
-        src: "10-mynet.conf"
-        dest: "/etc/cni/net.d/"
+        src: "{{ item }}"
+        dest: "/etc/cni/net.d/{{ item }}"
         owner: root
         mode: "0644"
+      with_items:
+        - 10-mynet.conf


@@ -0,0 +1,35 @@
import os
import testinfra.utils.ansible_runner
testinfra_hosts = testinfra.utils.ansible_runner.AnsibleRunner(
os.environ['MOLECULE_INVENTORY_FILE']).get_hosts('all')
def test_service(host):
svc = host.service("crio")
assert svc.is_running
assert svc.is_enabled
def test_run(host):
crictl = "/usr/local/bin/crictl"
path = "unix:///var/run/crio/crio.sock"
with host.sudo():
cmd = host.command(crictl + " --runtime-endpoint " + path + " version")
assert cmd.rc == 0
assert "RuntimeName: cri-o" in cmd.stdout
def test_run_pod(host):
runtime = "crun"
run_command = "/usr/local/bin/crictl run --with-pull --runtime {} /tmp/container.json /tmp/sandbox.json".format(runtime)
with host.sudo():
cmd = host.command(run_command)
assert cmd.rc == 0
with host.sudo():
log_f = host.file("/tmp/runc1.0.log")
assert log_f.exists
assert b"Hello from Docker" in log_f.content


@@ -1,11 +0,0 @@
---
- name: Test CRI-O cri
import_playbook: ../../../molecule/test_cri.yml
vars:
container_manager: crio
cri_socket: unix:///var/run/crio/crio.sock
cri_name: cri-o
- name: Test running a container with crun
import_playbook: ../../../molecule/test_runtime.yml
vars:
container_runtime: crun


@@ -55,46 +55,6 @@
   when:
     - youki_enabled
-- name: Cri-o | Stop kubelet service if running
-  service:
-    name: kubelet
-    state: stopped
-  when:
-    - crio_runtime_switch
-    - ansible_facts.services['kubelet.service'] is defined and ansible_facts.services['kubelet.service'].state == 'running'
-- name: Cri-o | Get all pods
-  ansible.builtin.command: "{{ bin_dir }}/crictl pods -o json"
-  changed_when: false
-  register: crio_pods
-  when:
-    - crio_runtime_switch
-    - ansible_facts.services['crio.service'] is defined
-- name: Cri-o | Stop and remove pods not on host network
-  ansible.builtin.command: "{{ bin_dir }}/crictl rmp -f {{ item.id }}"
-  loop: "{{ (crio_pods.stdout | from_json).items | default([]) | selectattr('metadata.namespace', 'ne', 'NODE') }}"
-  changed_when: true
-  when:
-    - crio_runtime_switch
-    - ansible_facts.services['crio.service'] is defined
-    - crio_pods.stdout is defined
-- name: Cri-o | Stop and remove all remaining pods
-  ansible.builtin.command: "{{ bin_dir }}/crictl rmp -fa"
-  changed_when: true
-  when:
-    - crio_runtime_switch
-    - ansible_facts.services['crio.service'] is defined
-- name: Cri-o | stop crio service if running
-  service:
-    name: crio
-    state: stopped
-  when:
-    - crio_runtime_switch
-    - ansible_facts.services['crio.service'] is defined and ansible_facts.services['crio.service'].state == 'running'
 - name: Cri-o | make sure needed folders exist in the system
   with_items:
     - /etc/crio
@@ -184,8 +144,8 @@
   notify: Restart crio
 - name: Cri-o | copy mounts.conf
-  template:
-    src: mounts.conf.j2
+  copy:
+    src: mounts.conf
     dest: /etc/containers/mounts.conf
     mode: "0644"
   when:
@@ -290,16 +250,3 @@
   changed_when: false
   retries: 5
   delay: "{{ retry_stagger | random + 3 }}"
-# The kubelet service status can be 'not-found' if something depends on it.
-# This check prevents failures when the service is in this indeterminate state,
-# which can occur when adding new nodes to a cluster.
-# See: https://superuser.com/questions/1755211/cleaning-debugging-services/1755215#1755215
-- name: Cri-o | ensure kubelet service is started if present and stopped
-  service:
-    name: kubelet
-    state: started
-  when:
-    - crio_runtime_switch
-    - ansible_facts.services['kubelet.service'] is defined and ansible_facts.services['kubelet.service']['status'] != 'not-found'


@@ -204,7 +204,7 @@ default_mounts = [
 # limit is never exceeded.
 log_size_max = -1
-# Whether container output should be logged to journald in addition to the kubernetes log file
+# Whether container output should be logged to journald in addition to the kuberentes log file
 log_to_journald = false
 # Path to directory in which container exit files are written to by conmon.


@@ -1,4 +0,0 @@
/usr/share/rhel/secrets:/run/secrets
{% for mount in crio_additional_mounts %}
{{ mount }}
{% endfor %}


@@ -1,5 +1,5 @@
 ---
-docker_version: '28.3'
+docker_version: '28.0'
 docker_cli_version: "{{ docker_version }}"
 docker_package_info:


@@ -60,17 +60,16 @@
   environment: "{{ proxy_env }}"
   when: ansible_pkg_mgr == 'apt'
-# ref to https://github.com/kubernetes-sigs/kubespray/issues/11086 & 12424
-- name: Convert -backports sources to archive.debian.org for bullseye and older
-  replace:
-    path: "{{ item }}"
-    regexp: '^(deb(?:-src)?\s+)(?:https?://)?(?:[^ ]+debian\.org)?([^ ]*/debian)(\s+{{ ansible_distribution_release }}-backports\b.*)'
-    replace: '\1http://archive.debian.org/debian\3'
+# ref to https://github.com/kubernetes-sigs/kubespray/issues/11086
+- name: Remove the archived debian apt repository
+  lineinfile:
+    path: /etc/apt/sources.list
+    regexp: 'buster-backports'
+    state: absent
     backup: true
-  loop: "{{ query('fileglob', '/etc/apt/sources.list') }}"
   when:
     - ansible_os_family == 'Debian'
-    - ansible_distribution_release in ['bullseye', 'buster']
+    - ansible_distribution_release == "buster"
 - name: Ensure docker-ce repository is enabled
   apt_repository:


@@ -34,10 +34,8 @@ containerd_versioned_pkg:
'1.7.23': "{{ containerd_package }}=1.7.23-1" '1.7.23': "{{ containerd_package }}=1.7.23-1"
'1.7.24': "{{ containerd_package }}=1.7.24-1" '1.7.24': "{{ containerd_package }}=1.7.24-1"
'1.7.25': "{{ containerd_package }}=1.7.25-1" '1.7.25': "{{ containerd_package }}=1.7.25-1"
'1.7.26': "{{ containerd_package }}=1.7.26-1" 'stable': "{{ containerd_package }}=1.7.25-1"
'1.7.27': "{{ containerd_package }}=1.7.27-1" 'edge': "{{ containerd_package }}=1.7.25-1"
'stable': "{{ containerd_package }}=1.7.27-1"
'edge': "{{ containerd_package }}=1.7.27-1"
# https://download.docker.com/linux/debian/ # https://download.docker.com/linux/debian/
docker_versioned_pkg: docker_versioned_pkg:
@@ -55,13 +53,10 @@ docker_versioned_pkg:
   '27.2': docker-ce=5:27.2.1-1~debian.{{ ansible_distribution_major_version }}~{{ ansible_distribution_release | lower }}
   '27.3': docker-ce=5:27.3.1-1~debian.{{ ansible_distribution_major_version }}~{{ ansible_distribution_release | lower }}
   '27.4': docker-ce=5:27.4.1-1~debian.{{ ansible_distribution_major_version }}~{{ ansible_distribution_release | lower }}
-  '27.5': docker-ce=5:27.5.1-1~debian.{{ ansible_distribution_major_version }}~{{ ansible_distribution_release | lower }}
-  '28.0': docker-ce=5:28.0.4-1~debian.{{ ansible_distribution_major_version }}~{{ ansible_distribution_release | lower }}
-  '28.1': docker-ce=5:28.1.1-1~debian.{{ ansible_distribution_major_version }}~{{ ansible_distribution_release | lower }}
-  '28.2': docker-ce=5:28.2.2-1~debian.{{ ansible_distribution_major_version }}~{{ ansible_distribution_release | lower }}
-  '28.3': docker-ce=5:28.3.3-1~debian.{{ ansible_distribution_major_version }}~{{ ansible_distribution_release | lower }}
-  'stable': docker-ce=5:28.3.3-1~debian.{{ ansible_distribution_major_version }}~{{ ansible_distribution_release | lower }}
-  'edge': docker-ce=5:28.3.3-1~debian.{{ ansible_distribution_major_version }}~{{ ansible_distribution_release | lower }}
+  '27.5': docker-ce=5:27.5.4-1~debian.{{ ansible_distribution_major_version }}~{{ ansible_distribution_release | lower }}
+  '28.0': docker-ce=5:28.0.2-1~debian.{{ ansible_distribution_major_version }}~{{ ansible_distribution_release | lower }}
+  'stable': docker-ce=5:28.0.2-1~debian.{{ ansible_distribution_major_version }}~{{ ansible_distribution_release | lower }}
+  'edge': docker-ce=5:28.0.2-1~debian.{{ ansible_distribution_major_version }}~{{ ansible_distribution_release | lower }}

 docker_cli_versioned_pkg:
   'latest': docker-ce-cli
@@ -78,13 +73,10 @@ docker_cli_versioned_pkg:
   '27.2': docker-ce-cli=5:27.2.1-1~debian.{{ ansible_distribution_major_version }}~{{ ansible_distribution_release | lower }}
   '27.3': docker-ce-cli=5:27.3.1-1~debian.{{ ansible_distribution_major_version }}~{{ ansible_distribution_release | lower }}
   '27.4': docker-ce-cli=5:27.4.1-1~debian.{{ ansible_distribution_major_version }}~{{ ansible_distribution_release | lower }}
-  '27.5': docker-ce-cli=5:27.5.1-1~debian.{{ ansible_distribution_major_version }}~{{ ansible_distribution_release | lower }}
-  '28.0': docker-ce-cli=5:28.0.4-1~debian.{{ ansible_distribution_major_version }}~{{ ansible_distribution_release | lower }}
-  '28.1': docker-ce-cli=5:28.1.1-1~debian.{{ ansible_distribution_major_version }}~{{ ansible_distribution_release | lower }}
-  '28.2': docker-ce-cli=5:28.2.2-1~debian.{{ ansible_distribution_major_version }}~{{ ansible_distribution_release | lower }}
-  '28.3': docker-ce-cli=5:28.3.3-1~debian.{{ ansible_distribution_major_version }}~{{ ansible_distribution_release | lower }}
-  'stable': docker-ce-cli=5:28.3.3-1~debian.{{ ansible_distribution_major_version }}~{{ ansible_distribution_release | lower }}
-  'edge': docker-ce-cli=5:28.3.3-1~debian.{{ ansible_distribution_major_version }}~{{ ansible_distribution_release | lower }}
+  '27.5': docker-ce-cli=5:27.5.4-1~debian.{{ ansible_distribution_major_version }}~{{ ansible_distribution_release | lower }}
+  '28.0': docker-ce-cli=5:28.0.2-1~debian.{{ ansible_distribution_major_version }}~{{ ansible_distribution_release | lower }}
+  'stable': docker-ce-cli=5:28.0.2-1~debian.{{ ansible_distribution_major_version }}~{{ ansible_distribution_release | lower }}
+  'edge': docker-ce-cli=5:28.0.2-1~debian.{{ ansible_distribution_major_version }}~{{ ansible_distribution_release | lower }}

 docker_package_info:
   pkgs:
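The hunks above all follow the same pattern: a map from a short version key (or the aliases 'stable'/'edge') to a fully pinned apt package spec, with two Ansible facts interpolated into the Debian suffix. A minimal Python sketch of how such a map resolves (the trimmed map contents mirror the diff's new side; the `render_pkg` helper is illustrative, not Kubespray's actual code):

```python
# Sketch: resolve a short version key (or the 'stable' alias) to a pinned
# apt package spec, mimicking how docker_versioned_pkg entries are used.
# The map is a trimmed, illustrative copy of the diff's new (right-hand) side.

def render_pkg(template: str, major: str, release: str) -> str:
    """Substitute the two Ansible facts the package specs depend on."""
    return (template
            .replace("{{ ansible_distribution_major_version }}", major)
            .replace("{{ ansible_distribution_release | lower }}", release))

docker_versioned_pkg = {
    "latest": "docker-ce",
    "27.5": "docker-ce=5:27.5.4-1~debian.{{ ansible_distribution_major_version }}~{{ ansible_distribution_release | lower }}",
    "28.0": "docker-ce=5:28.0.2-1~debian.{{ ansible_distribution_major_version }}~{{ ansible_distribution_release | lower }}",
    "stable": "docker-ce=5:28.0.2-1~debian.{{ ansible_distribution_major_version }}~{{ ansible_distribution_release | lower }}",
}

# A Debian 12 (bookworm) host asking for the 'stable' alias:
print(render_pkg(docker_versioned_pkg["stable"], "12", "bookworm"))
# docker-ce=5:28.0.2-1~debian.12~bookworm
```

Pinning the full epoch:version~distro string this way is what lets a single version key install the same Docker release across every supported Debian codename.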

Some files were not shown because too many files have changed in this diff.