Compare commits

..

150 Commits

Author SHA1 Message Date
Meza
8bd5045ecf cleanup: Deprecate Ingress-Nginx from kubernetes-apps (#12767)
* [docs] Remove ingress-nginx references in docs and scripts jinja

Signed-off-by: Meza <meza-xyz@proton.me>

* Remove ingress-nginx doc and remove references in readme and sidebar

Signed-off-by: Meza <meza-xyz@proton.me>

* Delete ingress-nginx dir from kubernetes-apps

Signed-off-by: Meza <meza-xyz@proton.me>

* Delete ingress-nginx from inventory addons

Signed-off-by: Meza <meza-xyz@proton.me>

* Delete ingress_nginx_enabled from default main

Signed-off-by: Meza <meza-xyz@proton.me>

* Delete ingress_nginx from download

Signed-off-by: Meza <meza-xyz@proton.me>

* Delete ingress_nginx from dependencies

Signed-off-by: Meza <meza-xyz@proton.me>

* Remove ingress_nginx from registry task

Signed-off-by: Meza <meza-xyz@proton.me>

---------

Signed-off-by: Meza <meza-xyz@proton.me>
2026-02-10 20:22:04 +05:30
Micke Nordin
8f73dc9c2f Add services RBAC for calico-kube-controllers in KDD mode (#12928)
Commit 5fb85dc added service permissions for etcd datastore mode,
but the same permissions are needed for KDD (Kubernetes datastore) mode.

Signed-off-by: Micke Nordin <kano@sunet.se>
2026-02-10 19:52:02 +05:30
Ali Afsharzadeh
cc05dd4d14 Upgrade ansible from 10.7.0 to 11.13.0 (#12903)
Signed-off-by: Ali Afsharzadeh <afsharzadeh8@gmail.com>
2026-02-10 19:48:07 +05:30
Mark Tsai
9582ab3dcd image_updates: update openstack-cloud-controller to v1.35.0 (#12972) 2026-02-10 14:58:01 +05:30
Mohamed Omar Zaian
a77221d12b [kubernetes] Support Kubernetes v1.35.0 (#12812) 2026-02-10 14:54:02 +05:30
Max Gautier
57364f4085 Patch versions updates (#12973)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2026-02-09 21:13:25 +05:30
Max Gautier
34f43d21e3 Revert "kubelet: conditionalize staticPodPath location (#12433)" (#12970)
* Revert "kubelet: conditionalize staticPodPath location (#12433)"

This reverts commit 082507cff2.

* Add kubelet_static_pod_path to removed variables
2026-02-09 07:31:09 +05:30
Srishti Jaiswal
052846aa28 removed deprecated containerd_registries from test file (#12969) 2026-02-08 11:11:08 +05:30
neo
a563431c68 Remove Kubernetes Dashboard support (#12858) 2026-02-07 22:49:08 +05:30
Max Gautier
3aa0c0cc64 coredns: allow to customize service name (#12951) 2026-02-06 09:52:29 +05:30
chun
9bbef44e32 Bump: Prometheus Operator CRD to 0.88.1 (#12968)
Signed-off-by: hcc429 <dev.hcc29@gmail.com>
2026-02-06 08:36:30 +05:30
Srishti Jaiswal
03cfdbf2a9 add removed var validation to validate_inventory (#12942) 2026-02-05 15:34:31 +05:30
Jordan Liggitt
b5b599ecf8 Clean up unused nodes/proxy permission from node-feature-discovery-gc (#12955) 2026-02-05 15:30:34 +05:30
Max Gautier
4245ddcee8 Make etcd node removal idempotent (#12949) 2026-02-05 11:40:28 +05:30
Joshua N Haupt
422e7366ec Fix Gluster image_id and update openstack_blockstorage_volume_v3 (#12910)
This fixes the Terraform Gluster Compute image_id bug and updates the openstack_blockstorage_volume_v2 to
openstack_blockstorage_volume_v3.

Resolves:
[Bug] OpenStack Compute variable handling of image_id and image_name for Gluster nodes is broken

https://github.com/kubernetes-sigs/kubespray/issues/12902

Update openstack_blockstorage_volume_v2 to openstack_blockstorage_volume_v3

https://github.com/kubernetes-sigs/kubespray/issues/12901

Signed-off-by: Joshua Nathaniel Haupt <joshua@hauptj.com>
2026-02-04 11:08:26 +05:30
Tushar240503
bf69e67240 refactor/dynamic-role-loading-network (#12933)
Signed-off-by: Tushar Sharma <tusharkumargzb6@gmail.com>
2026-02-03 21:58:29 +05:30
Tushar240503
c5c2cf16a0 Move inline defaults to defaults/main.yml (#12926) 2026-02-03 14:14:29 +05:30
Ali Afsharzadeh
69e042bd9e Remove software-properties-common from pipeline.Dockerfile (#12945)
Signed-off-by: Ali Afsharzadeh <afsharzadeh8@gmail.com>
2026-02-02 20:04:32 +05:30
dependabot[bot]
20da3bb1b0 build(deps): bump cryptography from 46.0.3 to 46.0.4 (#12944)
Bumps [cryptography](https://github.com/pyca/cryptography) from 46.0.3 to 46.0.4.
- [Changelog](https://github.com/pyca/cryptography/blob/main/CHANGELOG.rst)
- [Commits](https://github.com/pyca/cryptography/compare/46.0.3...46.0.4)

---
updated-dependencies:
- dependency-name: cryptography
  dependency-version: 46.0.4
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-02-02 09:26:30 +05:30
Ieere Song
4d4058ee8e fix: typo in validate_inventory task name (missing backtick) (#12940)
Co-authored-by: Cursor <cursoragent@cursor.com>
2026-01-31 20:02:24 +05:30
Tushar240503
f071fccc33 updated prometheus-operator crd checksum autobump (#12939)
* updated prometheus-operator crd checksum autobump

Signed-off-by: Tushar Sharma <tusharkumargzb6@gmail.com>

* updated to Next-Gen format

Signed-off-by: Tushar Sharma <tusharkumargzb6@gmail.com>

---------

Signed-off-by: Tushar Sharma <tusharkumargzb6@gmail.com>
2026-01-31 19:44:24 +05:30
Eugene Shutov
70daea701a local_path_provisioner: add resources (#12548)
* local_path_provisioner: add resources

* Update roles/kubernetes-apps/external_provisioner/local_path_provisioner/templates/local-path-storage-deployment.yml.j2

Co-authored-by: ChengHao Yang <17496418+tico88612@users.noreply.github.com>

---------

Co-authored-by: ChengHao Yang <17496418+tico88612@users.noreply.github.com>
2026-01-31 00:08:25 +05:30
Ali Afsharzadeh
3e42b84e94 Upgrade Dockerfile base image from Ubuntu 22.04 to 24.04 (#12935)
* Upgrade Dockerfile base image from Ubuntu 22.04 to 24.04

* Add --break-system-packages flag to testcases_run.sh file
2026-01-30 19:57:44 +05:30
Max Gautier
868ff3cea9 Auto-bump checksums on last 3 branches (#12934)
We now have all supported release branches (last 3) using the new
checksums format, which means they all work with the auto-bump tooling.
2026-01-30 15:39:44 +05:30
Max Gautier
0b69a18e35 Remove nifcloud terraform provider support (it is no longer available) (#12936)
The nifcloud terraform provider has been deleted, so remove support and
CI.
2026-01-30 15:05:44 +05:30
ChengHao Yang
e30076016c Releng: Galaxy version upgrade to 2.31.0 (#12909)
Signed-off-by: ChengHao Yang <17496418+tico88612@users.noreply.github.com>
2026-01-30 13:35:43 +05:30
ChengHao Yang
f4ccdb5e72 Docs: update 2.29.0 to 2.30.0 (#12899)
Signed-off-by: ChengHao Yang <17496418+tico88612@users.noreply.github.com>
2026-01-29 23:45:50 +05:30
Max Gautier
fcecaf6943 wait for control plane node to become ready after joining (#12794)
When joining a control plane node and "upgrading" the cluster setup (for
example, to update etcd addresses after adding a new etcd) in the same
playbook run, the node can take a bit of time to become ready after
joining.
This triggers a kubeadm preflight check (ControlPlaneNodesReady) in
kubeadm upgrade, which is run directly after the join tasks.

Add a configurable wait for the control plane node to become Ready to
fix this race condition.
2026-01-28 22:15:51 +05:30
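
A minimal sketch of such a configurable wait, assuming a kubectl binary on the first control plane; the task wording and the timeout variable name are illustrative, not the exact kubespray task:

```yaml
# Poll the freshly joined node until its Ready condition is True.
# kubeadm_join_ready_timeout is an illustrative variable name.
- name: Wait for joined control plane node to become Ready
  command: >-
    kubectl get node {{ inventory_hostname }}
    -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
  register: node_ready
  until: node_ready.stdout == "True"
  retries: "{{ (kubeadm_join_ready_timeout | default(300) | int) // 10 }}"
  delay: 10
  changed_when: false
```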
Max Gautier
37f7a86014 etcd-certs: only change necessary permissions (#12908)
We currently **recursively** set the permissions of /etc/ssl/etcd/ssl
(default path) to 700. But this removes group permissions from the files
under it, and certain components (like calico with the etcd datastore) rely
on them; thus, the upgrade of a cluster can fail because the
calico-kube-controller can't access the certs, and thus the etcd.

This works in other cases because, as far as I can tell, the apiserver,
which does access etcd, runs as root (the owner of the files, not just
the "group owner").

We also, for some reason, do this twice.

Only create the etcd cert directory with the correct permissions once,
not recursively.
2026-01-27 20:25:52 +05:30
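
The non-recursive variant boils down to a file task without recursion, roughly (a sketch; the mode is illustrative):

```yaml
# Set permissions on the directory itself only; the certs underneath
# keep their own modes, so group readers (e.g. calico-kube-controllers)
# are not locked out.
- name: Ensure etcd cert directory exists
  file:
    path: /etc/ssl/etcd/ssl
    state: directory
    owner: root
    mode: "0700"   # illustrative; the point is the absence of "recurse: true"
```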
Max Gautier
fff7f10a85 Patch versions updates (#12912)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2026-01-27 20:21:53 +05:30
ChengHao Yang
dc09298f7e Docs: cilium_kube_proxy_replacement change boolean (#12898)
Signed-off-by: ChengHao Yang <17496418+tico88612@users.noreply.github.com>
2026-01-27 16:43:48 +05:30
dependabot[bot]
680db0c921 build(deps): bump jmespath from 1.0.1 to 1.1.0 (#12905)
Bumps [jmespath](https://github.com/jmespath/jmespath.py) from 1.0.1 to 1.1.0.
- [Changelog](https://github.com/jmespath/jmespath.py/blob/develop/CHANGELOG.rst)
- [Commits](https://github.com/jmespath/jmespath.py/compare/1.0.1...1.1.0)

---
updated-dependencies:
- dependency-name: jmespath
  dependency-version: 1.1.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-01-27 16:39:49 +05:30
dependabot[bot]
9977d4dc10 build(deps): bump actions/checkout from 6.0.1 to 6.0.2 (#12906)
Bumps [actions/checkout](https://github.com/actions/checkout) from 6.0.1 to 6.0.2.
- [Release notes](https://github.com/actions/checkout/releases)
- [Changelog](https://github.com/actions/checkout/blob/main/CHANGELOG.md)
- [Commits](8e8c483db8...de0fac2e45)

---
updated-dependencies:
- dependency-name: actions/checkout
  dependency-version: 6.0.2
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-01-26 20:41:53 +05:30
dependabot[bot]
1b6129566b build(deps): bump peter-evans/create-pull-request from 8.0.0 to 8.1.0 (#12907)
Bumps [peter-evans/create-pull-request](https://github.com/peter-evans/create-pull-request) from 8.0.0 to 8.1.0.
- [Release notes](https://github.com/peter-evans/create-pull-request/releases)
- [Commits](98357b18bf...c0f553fe54)

---
updated-dependencies:
- dependency-name: peter-evans/create-pull-request
  dependency-version: 8.1.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-01-26 20:37:51 +05:30
Ali Afsharzadeh
c3404c3685 Upgrade cilium from 1.18.5 to 1.18.6 (#12900)
Signed-off-by: Ali Afsharzadeh <afsharzadeh8@gmail.com>
2026-01-26 20:21:50 +05:30
Max Gautier
fba8708486 RELEASE.md: fix minor typo (#12891) 2026-01-22 16:43:29 +05:30
accuROAMC
8dacb9cd16 cri-o: fix duplicate top-level "auths" keys in registry config template (#12845)
The config.json.j2 template was generating invalid JSON when multiple
crio_registry_auth entries were defined, resulting in multiple top-level
"auths" objects being rendered, e.g.:

{
  "auths": { "registry1": { "auth": "xxxx" } },
  "auths": { "registry2": { "auth": "yyyy" } }
}

This change moves the loop inside the "auths" object so that all registries
are rendered as siblings under a single "auths" key, producing valid JSON:

{
  "auths": {
    "registry1": { "auth": "xxxx" },
    "registry2": { "auth": "yyyy" }
  }
}
2026-01-20 19:20:50 +05:30
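
In template form, the fix amounts to moving the loop inside the object, roughly like this (a Jinja2 sketch; the entry keys such as `registry`, `username`, and `password` are assumptions, not necessarily the real crio_registry_auth schema):

```jinja
{# One top-level "auths" object; registries rendered as siblings inside it #}
{
  "auths": {
{% for registry in crio_registry_auth %}
    "{{ registry.registry }}": {
      "auth": "{{ (registry.username ~ ':' ~ registry.password) | b64encode }}"
    }{% if not loop.last %},{% endif %}
{% endfor %}
  }
}
```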
Max Gautier
df3f0a2341 k8s-certs-renew: fix broken script (#12876)
Improper quoting of a variable assignment makes the shell interpret it as
a command; since the variable is unused anyway, just delete it.
2026-01-19 22:57:47 +05:30
Kubernetes Prow Robot
62e90b3122 Merge pull request #12872 from VannTen/fix/defaut_lb_address
Use loadbalancer IP as default apiserver endpoint if no LB hostname is used
2026-01-19 21:45:50 +05:30
Max Gautier
6b5cc5bdfb Fix defaults for apiserver_loadbalancer_domain_name
Since we're no longer injecting pseudo DNS entries into /etc/hosts,
'lb-apiserver.kubernetes.local' (the previous default) won't resolve to
anything.

Instead, default to the loadbalancer IP if defined, or to the node-local
loadbalancer if it's in use.

Make the necessary adjustments at the use sites to deal with IP addresses
as well as hostnames.
2026-01-19 09:43:48 +01:00
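
Sketched as a default, the fallback chain can be pictured like this (illustrative Jinja, not the exact expression that landed; `loadbalancer_apiserver.address` and `loadbalancer_apiserver_localhost` are existing kubespray settings):

```jinja
{# Prefer the external LB address; fall back to the node-local proxy #}
apiserver_loadbalancer_domain_name: >-
  {%- if loadbalancer_apiserver.address is defined -%}
  {{ loadbalancer_apiserver.address }}
  {%- elif loadbalancer_apiserver_localhost | default(false) -%}
  localhost
  {%- endif -%}
```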
dependabot[bot]
a277cfdee7 build(deps): bump stefanbuck/github-issue-parser from 3.2.2 to 3.2.3 (#12874)
Bumps [stefanbuck/github-issue-parser](https://github.com/stefanbuck/github-issue-parser) from 3.2.2 to 3.2.3.
- [Release notes](https://github.com/stefanbuck/github-issue-parser/releases)
- [Commits](25f1485edf...10dcc54158)

---
updated-dependencies:
- dependency-name: stefanbuck/github-issue-parser
  dependency-version: 3.2.3
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-01-19 09:35:10 +05:30
Max Gautier
bc5528f585 Patch versions updates (#12854)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2026-01-17 23:57:09 +05:30
Max Gautier
2740c13c0c Do not use apiserver LB in etcd certificates
etcd does not use the apiserver load balancer, so there is no reason to
include its DNS name in the etcd certificates.
2026-01-15 16:50:45 +01:00
Bas
52b68bccad Fix: ansible_facts.selinux.status added. (#12861) 2026-01-14 23:31:40 +05:30
Will Xiang
82c4c0afdf fix syntax in haproxy.cfg.j2 for IPv6 binding (#12862) 2026-01-14 12:33:35 +05:30
Kirill Statsenko
63a43cf6db add metallb_namespace default value (#12860) 2026-01-13 20:55:43 +05:30
Ali Afsharzadeh
666a3a9500 Upgrade containerd and nerdctl from 2.1.6 to 2.2.1 (#12825) 2026-01-12 15:24:10 +05:30
Max Gautier
28f9c126bf ansible-lint: disable jinja[spacing] warning (#12848)
This pollutes the ansible-lint output and forces us to scroll to check
what the actual issues are.
The spacing issues are minor and very opinionated, so it's no great
loss.
2026-01-12 13:42:07 +05:30
Sivaram Singana
d41b629be3 updated elastx_ubuntu20 to ubuntu24 (#12844)
* Updated the job name to elastx_ubuntu24, the CI matrix, and the test file

Signed-off-by: sivaram <singana.sivaram.naidu@ibm.com>

* remove unused OVH CI tf file (tf-ovh_ubuntu20-calico.yml)

Signed-off-by: sivaram <singana.sivaram.naidu@ibm.com>

* remove ubuntu20 for pre-commit fix

Signed-off-by: sivaram <singana.sivaram.naidu@ibm.com>

---------

Signed-off-by: sivaram <singana.sivaram.naidu@ibm.com>
2026-01-10 23:35:56 +05:30
Ali Afsharzadeh
851abbc2e3 Disable discard_unpacked_layers for containerd >= 2.1 (#12821)
Only set `discard_unpacked_layers` in the CRI image config for containerd
versions earlier than 2.1.0.

Starting with containerd v2.1, the CRI plugin uses the Transfer Service for
image pulls by default. The `discard_unpacked_layers` option is incompatible
with the Transfer Service and triggers containerd to fall back to local
image pulls, logging a warning.

This change prevents unsupported configuration from being applied on newer
containerd versions, avoiding runtime warnings and ensuring default image
pull behavior.

Signed-off-by: Ali Afsharzadeh <afsharzadeh8@gmail.com>
2026-01-08 19:39:40 +05:30
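
The guard reduces to a version test in the config template (a sketch; the enclosing TOML section is omitted, and `containerd_version` is kubespray's containerd version variable):

```jinja
{% if containerd_version is version('2.1.0', '<') %}
discard_unpacked_layers = true
{% endif %}
```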
Qasim Mehmood
17c72367bc kube-vip: Fix template, drop all capabilities and use kube_vip_version (#12835)
* Drop capabilities in kube-vip and use kube_vip_version

* Preserve trailing newline for kube_vip_cidr env var
2026-01-07 07:43:38 +05:30
Max Gautier
d91c7d7576 Fix ansible-lint config error (#12842) 2026-01-06 20:15:11 +05:30
Kubernetes Prow Robot
14b20ad2a2 Merge pull request #12832 from VannTen/cleanup/network_facts
network_facts: streamline set_fact and setup calls
2026-01-06 15:01:10 +05:30
Max Gautier
72cb1356ef ci: make opentofu elastx not optional 2026-01-05 15:55:01 +01:00
Max Gautier
51304d57e2 network_facts: streamline set_fact and setup calls
- invoke the setup module only once to gather IPv4 and IPv6 addresses
- eliminate the remaining uses of `fallback_ip` and `fallback_ip6`, allowing
  us to define (with `set_fact`) all the "computed" IP variables in one
  go, since there is no longer a dependency between them.
2026-01-05 15:54:56 +01:00
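
The gathering half of that, sketched (the `setup` subset names are real Ansible ones; the fact names on the left are illustrative):

```yaml
# One setup call for both address families instead of one per family.
- name: Gather network facts once
  setup:
    gather_subset:
      - "!all"
      - "!min"
      - network

# With no inter-dependency left, all derived IPs can be set together.
- name: Compute address variables in one go
  set_fact:
    main_ip: "{{ ip | default(ansible_default_ipv4.address | default('')) }}"
    main_ip6: "{{ ip6 | default(ansible_default_ipv6.address | default('')) }}"
```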
Goutham K
a0d7bef90e Remove deprecated kubelet flag (#12639) 2026-01-05 18:56:42 +05:30
Max Gautier
a1ec88e290 openstack-cleanup: delete old keypairs as well (#12833)
* openstack-cleanup: format and logging

* openstack-cleanup: delete old keypairs as well
2026-01-05 17:42:37 +05:30
Kubernetes Prow Robot
c9ff62944e Merge pull request #12355 from tico88612/feat/rocky-10-support
RockyLinux 10 support (experimental)
2026-01-05 14:32:37 +05:30
LawiK974
20ab9179af Update kube-vip to v1.0.3 (#12815) 2026-01-04 22:52:37 +05:30
ChengHao Yang
5be35c811a Docs: Rocky Linux 10 experimental description
Signed-off-by: ChengHao Yang <17496418+tico88612@users.noreply.github.com>
2026-01-03 15:40:01 +08:00
ChengHao Yang
ad522d4aab Docs: add rockylinux-10-extra description
Signed-off-by: ChengHao Yang <17496418+tico88612@users.noreply.github.com>
2026-01-03 00:52:30 +08:00
ChengHao Yang
9c511069cc CI: change rockylinux 10 image with kernel-module-extra
How to build RockyLinux 10 + `kernel-module-extra` with dib
https://github.com/kubernetes-sigs/kubespray/pull/12355#issuecomment-3705400093

Signed-off-by: ChengHao Yang <17496418+tico88612@users.noreply.github.com>
2026-01-02 23:10:41 +08:00
ChengHao Yang
ed270fcab4 Docs: update support system RHEL-based variants
Signed-off-by: ChengHao Yang <17496418+tico88612@users.noreply.github.com>
2026-01-02 23:10:41 +08:00
ChengHao Yang
0615929727 CI: add cilium test for rockylinux 10
Signed-off-by: ChengHao Yang <17496418+tico88612@users.noreply.github.com>
2026-01-02 23:10:41 +08:00
ChengHao Yang
48c25d9ebf CI: add calico test for rockylinux 10
Signed-off-by: ChengHao Yang <17496418+tico88612@users.noreply.github.com>
2026-01-02 23:10:40 +08:00
LawiK974
0bffcacbe7 Add rbac for calico kube-controllers to access services (#12828) 2026-01-02 20:04:35 +05:30
R. P. Taylor
c857252225 terraform openstack: allow ICMPv6 by default (#12805) 2026-01-02 14:50:38 +05:30
Ali Afsharzadeh
a0f00761ac Removed deprecated keys from containerd config (#12820)
Signed-off-by: Ali Afsharzadeh <afsharzadeh8@gmail.com>
2026-01-02 14:26:35 +05:30
r3m8
3a3e5d6954 fix(cilium): add dynamic api server endpoint configuration (#12624) 2026-01-01 17:26:34 +05:30
ChengHao Yang
2d6e508084 Fix: molecule 25.12.0 test (#12808)
* Bump molecule to 25.12.0

Signed-off-by: ChengHao Yang <17496418+tico88612@users.noreply.github.com>

* Fixed ansible role not found in molecule after 25.2.0

Signed-off-by: ChengHao Yang <17496418+tico88612@users.noreply.github.com>

---------

Signed-off-by: ChengHao Yang <17496418+tico88612@users.noreply.github.com>
2025-12-31 15:12:34 +05:30
Ali Afsharzadeh
6d850a0dc5 Update pause image to 3.10.1 for Kubernetes 1.34 (#12827) 2025-12-31 13:48:35 +05:30
Max Gautier
6a517e165e Fix kubeadm init retry (#12785)
We currently always fail on the kubeadm init retry, because of the
remnants of the first try.

Ignore the related errors in the retry to unblock it.
2025-12-25 15:14:31 +05:30
Qasim Mehmood
aaaf82f308 Fix cilium portmap (#12814) 2025-12-24 18:12:31 +05:30
Bas
e80087df93 Fix for #12435 - setting timezone under SELinux. (#12436)
Signed-off-by: Bas Meijer <bas.meijer@enexis.nl>
2025-12-24 16:40:31 +05:30
Max Gautier
b7491b957b CI: move vagrant tests to ci-full (#12803)
Those tests are unfortunately broken; move them to ci-full
2025-12-21 19:16:32 -08:00
Max Gautier
5cf8f3eefc Patch versions updates (#12799)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-12-20 02:54:31 -08:00
Ali Afsharzadeh
1cbccf40a5 Upgrade cilium from 1.18.4 to 1.18.5 (#12796)
Signed-off-by: Ali Afsharzadeh <afsharzadeh8@gmail.com>
2025-12-19 07:24:32 -08:00
Max Gautier
bcdd702e19 Enable hash updates on 2.28 (2 latest releases) (#12801) 2025-12-19 01:30:36 -08:00
Max Gautier
20693afe82 cleanup tags deprecated tags 'master' (#12795) 2025-12-17 03:34:08 -08:00
dependabot[bot]
1bbcfd8dd6 build(deps): bump peter-evans/create-pull-request from 7.0.11 to 8.0.0 (#12792)
Bumps [peter-evans/create-pull-request](https://github.com/peter-evans/create-pull-request) from 7.0.11 to 8.0.0.
- [Release notes](https://github.com/peter-evans/create-pull-request/releases)
- [Commits](22a9089034...98357b18bf)

---
updated-dependencies:
- dependency-name: peter-evans/create-pull-request
  dependency-version: 8.0.0
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-12-16 07:12:12 -08:00
dependabot[bot]
8d948f918f build(deps): bump actions/cache from 4 to 5 (#12791)
Bumps [actions/cache](https://github.com/actions/cache) from 4 to 5.
- [Release notes](https://github.com/actions/cache/releases)
- [Changelog](https://github.com/actions/cache/blob/main/RELEASES.md)
- [Commits](https://github.com/actions/cache/compare/v4...v5)

---
updated-dependencies:
- dependency-name: actions/cache
  dependency-version: '5'
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-12-16 06:00:11 -08:00
dependabot[bot]
4d8d1b8aff build(deps): bump octokit/graphql-action from 3.0.0 to 3.0.2 (#12790)
Bumps [octokit/graphql-action](https://github.com/octokit/graphql-action) from 3.0.0 to 3.0.2.
- [Release notes](https://github.com/octokit/graphql-action/releases)
- [Commits](abaeca7ba4...ddde8ebb24)

---
updated-dependencies:
- dependency-name: octokit/graphql-action
  dependency-version: 3.0.2
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-12-16 05:56:18 -08:00
r3m8
d80318301d docs(cilium): update documentation for unprivileged agent configuration (#12628) 2025-12-13 19:39:44 -08:00
Ali Afsharzadeh
31cce09fbc Check calico version in check mode (#12586)
Signed-off-by: Ali Afsharzadeh <afsharzadeh8@gmail.com>
2025-12-10 00:39:29 -08:00
Max Gautier
9a90c9d6c8 Upgrade elastx ubuntu version (#12776)
This could fix the glibc version error we're currently having in CI.
2025-12-09 01:19:32 -08:00
dependabot[bot]
b9e1e8577f build(deps): bump peter-evans/create-pull-request from 7.0.9 to 7.0.11 (#12774)
Bumps [peter-evans/create-pull-request](https://github.com/peter-evans/create-pull-request) from 7.0.9 to 7.0.11.
- [Release notes](https://github.com/peter-evans/create-pull-request/releases)
- [Commits](84ae59a2cd...22a9089034)

---
updated-dependencies:
- dependency-name: peter-evans/create-pull-request
  dependency-version: 7.0.11
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-12-08 04:29:33 -08:00
dependabot[bot]
5d1dd83b07 build(deps): bump stefanbuck/github-issue-parser from 3.2.1 to 3.2.2 (#12773)
Bumps [stefanbuck/github-issue-parser](https://github.com/stefanbuck/github-issue-parser) from 3.2.1 to 3.2.2.
- [Release notes](https://github.com/stefanbuck/github-issue-parser/releases)
- [Commits](2ea9b35a8c...25f1485edf)

---
updated-dependencies:
- dependency-name: stefanbuck/github-issue-parser
  dependency-version: 3.2.2
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-12-07 21:37:27 -08:00
dependabot[bot]
b203586d6b build(deps): bump actions/checkout from 6.0.0 to 6.0.1 (#12772)
Bumps [actions/checkout](https://github.com/actions/checkout) from 6.0.0 to 6.0.1.
- [Release notes](https://github.com/actions/checkout/releases)
- [Changelog](https://github.com/actions/checkout/blob/main/CHANGELOG.md)
- [Commits](1af3b93b68...8e8c483db8)

---
updated-dependencies:
- dependency-name: actions/checkout
  dependency-version: 6.0.1
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-12-07 21:25:28 -08:00
Max Gautier
88df61357b Use run_once for kubeadm init phase upload-certs (#12759)
Before "5ca23e3bf (Changed to use first_kube_control_plane to parse
kubeadm_certificate_key (#11875), 2025-01-14)", kubespray would have
problem adding new control planes when the order of the nodes in kubectl
output and the ansible inventory were not the same.

But the underlying problem is that the operation is fundamentally
something that should be done only once, and recorded for all host in
play.

Since `register` and `sef_fact` when used with `run_once` set the
variable for all the hosts, use it. Also allows to use the variable
directly instead of relying on hostvars to make the task more readable.
2025-12-07 21:07:27 -08:00
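
The Ansible behavior this leans on, sketched (illustrative task bodies):

```yaml
# With run_once, the registered result is propagated to every host in
# the play, so no hostvars[first_kube_control_plane] indirection is needed.
- name: Upload certificates and fetch the certificate key
  command: kubeadm init phase upload-certs --upload-certs
  register: kubeadm_upload_cert
  run_once: true

- name: Use the key on any host
  debug:
    msg: "certificate key: {{ kubeadm_upload_cert.stdout_lines | last }}"
```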
AFz
2edf176294 docs: fixed rendering issue by using generic markdown format (#12652)
* docs: fix github markdown style

* docs: use generic markdown format for quotes
2025-12-07 12:38:57 -08:00
Ali Afsharzadeh
39744146b4 Remove legacy cilium_remove_old_resources task (#12771)
Signed-off-by: Ali Afsharzadeh <afsharzadeh8@gmail.com>
2025-12-07 07:36:55 -08:00
Max Gautier
118b2dce02 Remove checksums for old versions of various components (#12735)
We only keep 3 minor versions for most stuff
2025-12-05 06:30:59 -08:00
Ali Afsharzadeh
4c5eda9f1e Remove legacy tasks that were scheduled for cleanup (#12765)
Signed-off-by: Ali Afsharzadeh <afsharzadeh8@gmail.com>
2025-12-04 23:50:58 -08:00
Max Gautier
2512e0c50c Patch versions updates (#12762)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-12-04 06:44:57 -08:00
Max Gautier
633d39448e Add a default (empty) value for supplementary_addresses_in_ssl_keys (#12761)
Most variables should have a default instead of relying on the default
filter.

(Note that the variable is misnamed: this should be certs, not keys, but
it's not worth breaking compat).
2025-12-04 05:28:57 -08:00
Max Gautier
4d87ac1032 Simplify collection of SubjectAlternativeNames for apiserver (#12507)
Remove a bunch of intermediate variables, which fixes a
"'UndefinedMarker' concatenation" error in ansible-lint v25.8.1.
2025-12-04 02:06:57 -08:00
Chris Ricker
2342d0cd57 Calico: populate kubernetes-services-endpoint for localhost LB (#12598)
When loadbalancer_apiserver_localhost is enabled, Calico falls back to the
Kubernetes service IP because the kubernetes-services-endpoint ConfigMap is
empty. CNI then fails to reach the API server even though an nginx proxy is
listening on localhost.

Update kube_apiserver_global_endpoint to always reference the localhost load
balancer (respecting the configured port) and populate the ConfigMap for both
eBPF and localhost LB modes.
2025-12-03 07:22:19 -08:00
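
For reference, the ConfigMap in question has roughly this shape when populated for the localhost LB case (the host/port values are placeholders):

```yaml
# Calico components read the apiserver endpoint from this well-known
# ConfigMap instead of the in-cluster service IP.
apiVersion: v1
kind: ConfigMap
metadata:
  name: kubernetes-services-endpoint
  namespace: kube-system
data:
  KUBERNETES_SERVICE_HOST: "127.0.0.1"  # the local nginx proxy
  KUBERNETES_SERVICE_PORT: "6443"       # the configured LB port
```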
Azhan Latif
e6a5266bad feat: add noCompileLinks option to Docsify configuration (#12751) 2025-12-02 08:24:26 -08:00
dependabot[bot]
57f7c44718 build(deps): bump redhat-plumbers-in-action/advanced-issue-labeler (#12756)
Bumps [redhat-plumbers-in-action/advanced-issue-labeler](https://github.com/redhat-plumbers-in-action/advanced-issue-labeler) from 3.2.3 to 3.2.4.
- [Release notes](https://github.com/redhat-plumbers-in-action/advanced-issue-labeler/releases)
- [Commits](e38e6809c5...b80ae64e3e)

---
updated-dependencies:
- dependency-name: redhat-plumbers-in-action/advanced-issue-labeler
  dependency-version: 3.2.4
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-12-01 06:54:26 -08:00
Seena Fallah
5789dc839c control-plane: fix first_kube_control_plane delegation with kube_override_hostname (#12636)
* control-plane: fix first_kube_control_plane delegation with kube_override_hostname

When kube_override_hostname is configured, the node names reported by
`kubectl get nodes` differ from the inventory_hostname known to Ansible.
This causes delegation failures in subsequent tasks since Ansible cannot
resolve the hostname from kubectl output to an inventory host.

Signed-off-by: Seena Fallah <seenafallah@gmail.com>

* control-plane: remove fragile first_control_plane selection logic

Current implementation breaks with kube_override_hostname and has
multiple edge cases. Drop until proper kubectl-based node lookup
can be implemented.

Signed-off-by: Seena Fallah <seenafallah@gmail.com>

---------

Signed-off-by: Seena Fallah <seenafallah@gmail.com>
2025-11-25 08:10:38 -08:00
Max Gautier
3de6fa7220 Patch versions updates (#12743)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-11-25 01:04:37 -08:00
dependabot[bot]
9a9e8814e6 build(deps): bump peter-evans/create-pull-request from 7.0.8 to 7.0.9 (#12741)
Bumps [peter-evans/create-pull-request](https://github.com/peter-evans/create-pull-request) from 7.0.8 to 7.0.9.
- [Release notes](https://github.com/peter-evans/create-pull-request/releases)
- [Commits](271a8d0340...84ae59a2cd)

---
updated-dependencies:
- dependency-name: peter-evans/create-pull-request
  dependency-version: 7.0.9
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-11-24 02:58:38 -08:00
dependabot[bot]
87a4f61d76 build(deps): bump actions/checkout from 5.0.0 to 6.0.0 (#12740)
Bumps [actions/checkout](https://github.com/actions/checkout) from 5.0.0 to 6.0.0.
- [Release notes](https://github.com/actions/checkout/releases)
- [Changelog](https://github.com/actions/checkout/blob/main/CHANGELOG.md)
- [Commits](08c6903cd8...1af3b93b68)

---
updated-dependencies:
- dependency-name: actions/checkout
  dependency-version: 6.0.0
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-11-24 02:46:36 -08:00
Max Gautier
9975b5d525 Remove download support for old calico versions (#12724)
we no longer deploy those versions
2025-11-20 04:56:01 -08:00
Max Gautier
9d06ce1a8d CI: enable unsafe_show_logs == true by default (#12702)
* CI: enable unsafe_show_logs == true by default

* Deduplicate defaults vars (unsafe_show_logs)
2025-11-19 23:10:00 -08:00
Ali Afsharzadeh
bce107ce3d Upgrade cilium from 1.18.3 to 1.18.4 (#12717)
Signed-off-by: Ali Afsharzadeh <afsharzadeh8@gmail.com>
2025-11-18 19:51:59 -08:00
Kubernetes Prow Robot
7d7a42d931 Merge pull request #12723 from VannTen/molecule_var_in_inventory
Put molecule variables in molecule inventories
2025-11-18 19:47:58 -08:00
Max Gautier
5183679a89 crio: molecule: move variables to inventory
Fix download/file (which needs the variable to determine the correct
binaries)
2025-11-18 15:44:09 +01:00
Max Gautier
b4fe577203 gvisor: molecule: move variables to inventory
Fix download/file (which needs the variable to determine the correct
binaries)
2025-11-18 15:44:08 +01:00
Max Gautier
bde51ebddf youki: molecule: move variables to inventory
Fix download/file (which needs the variable to determine the correct
binaries)
2025-11-18 15:44:06 +01:00
Max Gautier
381426d6d5 cri-docker: molecule: move container_manager to inventory var 2025-11-18 15:44:05 +01:00
Ali Afsharzadeh
b3ee6d6b75 Adjust hubble export values for cilium 1.18 schema change (#12665)
Signed-off-by: Ali Afsharzadeh <afsharzadeh8@gmail.com>
2025-11-18 00:07:37 -08:00
ChengHao Yang
7436d63faa Patch versions updates (#12678)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-11-17 05:01:39 -08:00
Max Gautier
6138c6a1a2 CI: use a dedicated disk for releases (#12692)
This should make 'no space left on device' problems easier to handle

Use /tmp/releases as local_release_dir on the CI-created machine, while
keeping the same folder on the runner (needed for gitlab-ci runner pods)
2025-11-17 02:57:39 -08:00
Max Gautier
6115eba3c3 CI: label VirtualMachineInstance with PR id and pipeline ids (#12716)
Helps with CI debuggability
2025-11-17 02:21:39 -08:00
Kubernetes Prow Robot
1c008d79b1 Merge pull request #12714 from tico88612/feat/gateway-api-auto-bump
Feat: Gateway API auto bump
2025-11-16 06:27:37 -08:00
ChengHao Yang
b4bbec6772 Feat: Gateway API version always get latest
Signed-off-by: ChengHao Yang <17496418+tico88612@users.noreply.github.com>
2025-11-16 22:11:48 +08:00
ChengHao Yang
5c6ee4852a Bump: Gateway API to 1.4.0 and set latest version
Signed-off-by: ChengHao Yang <17496418+tico88612@users.noreply.github.com>
2025-11-16 21:53:59 +08:00
ChengHao Yang
8190f952c1 Feat: add Gateway API component hash update
Signed-off-by: ChengHao Yang <17496418+tico88612@users.noreply.github.com>
2025-11-16 21:49:43 +08:00
ChengHao Yang
3edc3d7a36 Style: components.py argocd indent fix
Signed-off-by: ChengHao Yang <17496418+tico88612@users.noreply.github.com>
2025-11-16 21:48:32 +08:00
peterw
2f3f1d7e65 crio: add option pull_progress_timeout (#12555) 2025-11-15 19:53:37 -08:00
Max Gautier
71c69ec12c CI: Try a full ssh connection on hosts instead of only checking the port (#12416)
* CI: Try a full ssh connection on hosts instead of only checking the port

If we only check the port, the playbook executed next can try to connect
even though the managed node has not yet completed its boot-up sequence
("System is booting up. Unprivileged users are not permitted to log in
yet. Please come back later. For technical details, see pam_nologin(8).")

This does not account for python-less hosts, but we don't use those in
CI anyway (for now, at least).

* CI: Remove connection method override when creating VMs

This prevented wait_for_connection to work correctly by hijacking the
connection to localhost, thus bypassing the connection check.
2025-11-15 08:37:37 -08:00
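
The difference between the two checks, sketched (both modules are standard Ansible; the timeout is illustrative):

```yaml
# Old approach: only proves sshd is listening, not that logins work yet.
- name: Wait for the SSH port
  wait_for:
    host: "{{ ansible_host }}"
    port: 22
  delegate_to: localhost

# New approach: performs a real connection and remote module run, so
# "System is booting up..." rejections are retried here, not later.
- name: Wait for a working connection
  wait_for_connection:
    timeout: 300
```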
R. P. Taylor
dab0947150 change kubectl_node_schedulable var (#12661) 2025-11-15 07:01:37 -08:00
Max Gautier
5488e7d805 Update pre-commit hooks (#12707) 2025-11-14 07:51:41 -08:00
Max Gautier
ca9873cfcb crictl: remove useless layer of include_tasks (#12656) 2025-11-14 06:57:39 -08:00
Bas
65f33c3ef0 Install the clone as collection in SemaphoreUI if airgapped. (#12660)
Signed-off-by: Bas Meijer <bas.meijer@enexis.nl>
2025-11-14 06:45:40 -08:00
Anurag Ojha
5eccf9ea6c fix(cilium):correct loadBalancer.mode rendering in values.yaml (#12701) 2025-11-14 06:39:38 -08:00
Max Gautier
db599b3475 Patch version updates (#12696) 2025-11-14 04:41:45 -08:00
Chris Ricker
47140083dc Update Calico apiserver RBAC for Kubernetes 1.33+ (#12654)
Add missing RBAC permissions for Calico apiserver to function correctly
with Kubernetes 1.33+

Changes:

1. Add K8s 1.33 ValidatingAdmissionPolicy resources to calico-webhook-reader
   - validatingadmissionpolicies
   - validatingadmissionpolicybindings

Kubernetes 1.33 introduced ValidatingAdmissionPolicy resources (KEP-3488)
that require explicit RBAC permissions. Without these changes, Calico
apiserver on k8s 1.33+ will not work and needless errors are logged
2025-11-14 00:23:38 -08:00
ChengHao Yang
2d179879a0 Bump Sonobuoy to 0.57.3 (#12673)
Signed-off-by: ChengHao Yang <17496418+tico88612@users.noreply.github.com>
2025-11-11 19:00:56 -08:00
Max Gautier
61b8e4ce84 Test the correct version when testing collection + upgrade (#12675)
If we don't rebuild the collection and remove the previous archive, we'll
test the first one built.
2025-11-11 18:56:56 -08:00
Max Gautier
97a3776d8e Remove etcd member by peerURLs (#12682)
* Remove etcd member by peerURLs

The way to obtain the IP of a particular member is convoluted and depends
on multiple variables. The match is also textual, and it's not clear
what we're matching against.

It's also broken for etcd members which are not also Kubernetes nodes,
because the "Lookup node IP in kubernetes" task will fail and abort the
play.

Instead, match against 'peerURLs', which does not need a new variable,
and use JSON output.

* Add testcase for etcd removal on external etcd

* do not merge

* fixup! Remove etcd member by peerURLs

* fixup! Remove etcd member by peerURLs
2025-11-10 03:52:56 -08:00
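
The peerURLs-based matching boils down to something like this (a sketch of the approach, not the exact kubespray tasks; `etcd_peer_url` is an illustrative variable for the leaving member's peer URL):

```yaml
- name: List etcd members as JSON
  command: etcdctl member list -w json
  environment:
    ETCDCTL_API: "3"
  register: etcd_members
  changed_when: false

# member list reports IDs in decimal, but member remove expects hex.
- name: Remove the member whose peerURLs match
  command: "etcdctl member remove {{ '%x' | format(item.ID) }}"
  loop: "{{ (etcd_members.stdout | from_json).members }}"
  when: etcd_peer_url in item.peerURLs
```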
Max Gautier
990695de7b Let containerd create storage / state dir (#12681)
Containerd manages these directories by itself, so there is no need to
override it and change permissions.
2025-11-10 03:42:56 -08:00
dependabot[bot]
4059c699dc build(deps): bump octokit/graphql-action from 2.3.2 to 3.0.0 (#12680)
Bumps [octokit/graphql-action](https://github.com/octokit/graphql-action) from 2.3.2 to 3.0.0.
- [Release notes](https://github.com/octokit/graphql-action/releases)
- [Commits](8ad880e4d4...abaeca7ba4)

---
updated-dependencies:
- dependency-name: octokit/graphql-action
  dependency-version: 3.0.0
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-11-10 00:50:56 -08:00
xin053
e22ce15429 Update cinder-csi image tag for offline install (#12627) 2025-11-08 21:38:52 -08:00
Max Gautier
452d4e63e0 Fix the (upgrade/remove_node) + collection test cases (#12672)
The 'old' playbook and the collection use '-' and '_' as separators,
which breaks the logic in scripts/testcases_run.sh.

Add aliases using the old schemes to make the test work and avoid
breaking anything.

Both '-' and '_' variants will be deleted once we switch to supporting
collection only.
2025-11-07 07:22:55 -08:00
Kubernetes Prow Robot
d2a46b4ff8 Merge pull request #12671 from VannTen/prep/kubectl_apply_ssa
Make kubectl_apply_stdin available to other roles (+ SSA support)
2025-11-07 03:42:52 -08:00
Max Gautier
e090c9ee26 Factor kubectl_apply_stdin into separate "vars only" role
This is needed to make it available to roles other than kubernetes-apps
2025-11-07 09:34:57 +01:00
Max Gautier
0d6d3f5828 kubectl_apply_stdin SSA support 2025-11-07 09:34:29 +01:00
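
The core of applying a rendered manifest from stdin with server-side apply, sketched as a task (the kubectl flags are real; the template name is an illustrative placeholder):

```yaml
- name: Apply manifest via stdin with server-side apply
  command: kubectl apply --server-side --force-conflicts -f -
  args:
    stdin: "{{ lookup('template', 'manifest.yml.j2') }}"
```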
Max Gautier
b9662dbd86 cleanup: don't cleanup runc orphan binary on immutable distros (#12669) 2025-11-06 22:16:53 -08:00
Ali Afsharzadeh
f5a480fdc4 Upgrade cilium from 1.18.2 to 1.18.3 (#12649) 2025-11-06 21:42:52 -08:00
Albin Björk
5dce75d29b upcloud: updated terraform provider version (#12642) 2025-10-24 00:53:34 -07:00
Max Gautier
5acde6cfe2 Get conf checksum directly for localhost CP loadbalancer (#12632)
There is no need to stat the templated file, because the template module
already returns a checksum.
2025-10-23 22:57:36 -07:00
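
This works because the template module already reports the checksum of the rendered file (a sketch; file names are illustrative):

```yaml
- name: Template the local LB config
  template:
    src: loadbalancer.conf.j2
    dest: /etc/kubernetes/loadbalancer.conf
  register: lb_conf

# lb_conf.checksum is the SHA-1 of the destination file; no extra stat needed.
- name: Use the returned checksum
  set_fact:
    lb_conf_checksum: "{{ lb_conf.checksum }}"
```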
Meza
c6926eb2f9 fix(calico): Add missed rbac verb for hostendpoints (#12641)
Signed-off-by: Meza <meza-xyz@proton.me>
2025-10-23 09:29:34 -07:00
Meza
1930ab7ed6 [docs] Fix typos found in the docs (#12638)
Signed-off-by: Meza <meza-xyz@proton.me>
2025-10-22 20:22:38 -07:00
dependabot[bot]
3edc979384 build(deps): bump cryptography from 46.0.2 to 46.0.3 (#12635)
Bumps [cryptography](https://github.com/pyca/cryptography) from 46.0.2 to 46.0.3.
- [Changelog](https://github.com/pyca/cryptography/blob/main/CHANGELOG.rst)
- [Commits](https://github.com/pyca/cryptography/compare/46.0.2...46.0.3)

---
updated-dependencies:
- dependency-name: cryptography
  dependency-version: 46.0.3
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-10-19 23:52:12 -07:00
Max Gautier
cde7b2b022 Remove leftover docs fragment about mitogen (#12630)
This was left behind from 1fb14b746 (docs: remove outdated mitogen
documentation. (#12619), 2025-10-14)
2025-10-17 08:44:43 -07:00
i-yasuda
0d88532f3d [kubernetes] Support kubernetes 1.34 (#12549)
* [kubernetes] Support kubernetes 1.34.0

Update hashes for kubernetes 1.34.0 except for cri-o

* [kubernetes] Support kubernetes 1.34.1

Update hashes for kubernetes 1.34.1

* [cri-o] Update cri-o to 1.34.1

---------

Co-authored-by: Takuya Murakami <tmurakam@tmurakam.org>
2025-10-17 01:56:42 -07:00
Goutham K
1fb14b7463 docs: remove outdated mitogen documentation. (#12619) 2025-10-14 05:39:38 -07:00
ChengHao Yang
a66d00a535 Releng: bump galaxy version 2.30.0 (#12622)
Signed-off-by: ChengHao Yang <17496418+tico88612@users.noreply.github.com>
2025-10-14 02:09:35 -07:00
156 changed files with 673 additions and 2625 deletions

View File

@@ -33,6 +33,8 @@ skip_list:
   # Disable run-once check with free strategy
   # (Disabled in June 2023 after ansible upgrade; FIXME)
   - 'run-once[task]'
+  - 'jinja[spacing]'
 exclude_paths:
   # Generated files
   - tests/files/custom_cni/cilium.yaml

View File

@@ -13,16 +13,16 @@ jobs:
       issues: write
     steps:
-      - uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8
+      - uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd
       - name: Parse issue form
-        uses: stefanbuck/github-issue-parser@2ea9b35a8c584529ed00891a8f7e41dc46d0441e
+        uses: stefanbuck/github-issue-parser@10dcc54158ba4c137713d9d69d70a2da63b6bda3
         id: issue-parser
         with:
           template-path: .github/ISSUE_TEMPLATE/bug-report.yaml
       - name: Set labels based on OS field
-        uses: redhat-plumbers-in-action/advanced-issue-labeler@e38e6809c5420d038eed380d49ee9a6ca7c92dbf
+        uses: redhat-plumbers-in-action/advanced-issue-labeler@b80ae64e3e156e9c111b075bfa04b295d54e8e2e
         with:
           issue-form: ${{ steps.issue-parser.outputs.jsonString }}
           section: os

View File

@@ -13,14 +13,14 @@ jobs:
     outputs:
       branches: ${{ steps.get-branches.outputs.data }}
     steps:
-      - uses: octokit/graphql-action@8ad880e4d437783ea2ab17010324de1075228110
+      - uses: octokit/graphql-action@ddde8ebb2493e79f390e6449c725c21663a67505
         id: get-branches
         with:
           query: |
             query get_release_branches($owner:String!, $name:String!) {
               repository(owner:$owner, name:$name) {
                 refs(refPrefix: "refs/heads/",
-                      first: 1, # TODO increment once we have release branch with the new checksums format
+                      first: 3,
                       query: "release-",
                       orderBy: {
                         field: ALPHABETICAL,

View File

@@ -11,7 +11,7 @@ jobs:
   update-patch-versions:
     runs-on: ubuntu-latest
     steps:
-      - uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8
+      - uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd
         with:
           ref: ${{ inputs.branch }}
       - uses: actions/setup-python@v6
@@ -22,14 +22,14 @@ jobs:
       - run: update-hashes
         env:
           API_KEY: ${{ secrets.GITHUB_TOKEN }}
-      - uses: actions/cache@v4
+      - uses: actions/cache@v5
         with:
           key: pre-commit-hook-propagate
           path: |
             ~/.cache/pre-commit
       - run: pre-commit run --all-files propagate-ansible-variables
         continue-on-error: true
-      - uses: peter-evans/create-pull-request@271a8d0340265f705b14b6d32b9829c1cb33d45e
+      - uses: peter-evans/create-pull-request@c0f553fe549906ede9cf27b5156039d195d2ece0
         with:
           commit-message: Patch versions updates
           title: Patch versions updates - ${{ inputs.branch }}

View File

@@ -43,6 +43,7 @@ pr:
   - fedora39-kube-router
   - openeuler24-calico
   - rockylinux9-cilium
+  - rockylinux10-cilium
   - ubuntu22-calico-all-in-one
   - ubuntu22-calico-all-in-one-upgrade
   - ubuntu24-calico-etcd-datastore
@@ -127,6 +128,7 @@ pr_extended:
   - debian12-docker
   - debian13-calico
   - rockylinux9-calico
+  - rockylinux10-calico
   - ubuntu22-all-in-one-docker
   - ubuntu24-all-in-one-docker
   - ubuntu24-calico-all-in-one

View File

@@ -88,11 +88,10 @@ tf-elastx_cleanup:
     - ./scripts/openstack-cleanup/main.py
   allow_failure: true

-tf-elastx_ubuntu20-calico:
+tf-elastx_ubuntu24-calico:
   extends: .terraform_apply
   stage: deploy-part1
   when: on_success
-  allow_failure: true
   variables:
     <<: *elastx_variables
     PROVIDER: openstack
@@ -115,5 +114,5 @@ tf-elastx_ubuntu20-calico:
     TF_VAR_az_list_node: '["sto1"]'
     TF_VAR_flavor_k8s_master: 3f73fc93-ec61-4808-88df-2580d94c1a9b # v1-standard-2
     TF_VAR_flavor_k8s_node: 3f73fc93-ec61-4808-88df-2580d94c1a9b # v1-standard-2
-    TF_VAR_image: ubuntu-20.04-server-latest
+    TF_VAR_image: ubuntu-24.04-server-latest
     TF_VAR_k8s_allowed_remote_ips: '["0.0.0.0/0"]'

View File

@@ -36,7 +36,7 @@ vagrant:
     policy: pull-push # TODO: change to "pull" when not on main
   stage: deploy-extended
   rules:
-    - if: $PR_LABELS =~ /.*(ci-extended|ci-full).*/
+    - if: $PR_LABELS =~ /.*ci-full.*/
      when: on_success
    - if: $CI_PIPELINE_SOURCE == "schedule" && $CI_PIPELINE_SCHEDULE_DESCRIPTION == "daily-ci"
      when: on_success

View File

@@ -1,7 +1,7 @@
 # syntax=docker/dockerfile:1

-# Use immutable image tags rather than mutable tags (like ubuntu:22.04)
-FROM ubuntu:22.04@sha256:149d67e29f765f4db62aa52161009e99e389544e25a8f43c8c89d4a445a7ca37
+# Use immutable image tags rather than mutable tags (like ubuntu:24.04)
+FROM ubuntu:noble-20260113@sha256:cd1dba651b3080c3686ecf4e3c4220f026b521fb76978881737d24f200828b2b

 # Some tools like yamllint need this
 # Pip needs this as well at the moment to install ansible
@@ -29,14 +29,14 @@ RUN --mount=type=cache,target=/var/cache/apt,sharing=locked \
 RUN --mount=type=bind,source=requirements.txt,target=requirements.txt \
     --mount=type=cache,sharing=locked,id=pipcache,mode=0777,target=/root/.cache/pip \
-    pip install --no-compile --no-cache-dir -r requirements.txt \
+    pip install --break-system-packages --no-compile --no-cache-dir -r requirements.txt \
     && find /usr -type d -name '*__pycache__' -prune -exec rm -rf {} \;

 SHELL ["/bin/bash", "-o", "pipefail", "-c"]

 RUN OS_ARCHITECTURE=$(dpkg --print-architecture) \
-    && curl -L "https://dl.k8s.io/release/v1.33.8/bin/linux/${OS_ARCHITECTURE}/kubectl" -o /usr/local/bin/kubectl \
-    && echo "$(curl -L "https://dl.k8s.io/release/v1.33.8/bin/linux/${OS_ARCHITECTURE}/kubectl.sha256")" /usr/local/bin/kubectl | sha256sum --check \
+    && curl -L "https://dl.k8s.io/release/v1.35.0/bin/linux/${OS_ARCHITECTURE}/kubectl" -o /usr/local/bin/kubectl \
+    && echo "$(curl -L "https://dl.k8s.io/release/v1.35.0/bin/linux/${OS_ARCHITECTURE}/kubectl.sha256")" /usr/local/bin/kubectl | sha256sum --check \
     && chmod a+x /usr/local/bin/kubectl

 COPY *.yml ./

View File

@@ -22,7 +22,7 @@ Ensure you have installed Docker then
 ```ShellSession
 docker run --rm -it --mount type=bind,source="$(pwd)"/inventory/sample,dst=/inventory \
   --mount type=bind,source="${HOME}"/.ssh/id_rsa,dst=/root/.ssh/id_rsa \
-  quay.io/kubespray/kubespray:v2.29.0 bash
+  quay.io/kubespray/kubespray:v2.30.0 bash
 # Inside the container you may now run the kubespray playbooks:
 ansible-playbook -i /inventory/inventory.ini --private-key /root/.ssh/id_rsa cluster.yml
 ```
@@ -89,13 +89,13 @@ vagrant up
 - **Flatcar Container Linux by Kinvolk**
 - **Debian** Bookworm, Bullseye, Trixie
 - **Ubuntu** 22.04, 24.04
-- **CentOS/RHEL** [8, 9](docs/operating_systems/rhel.md#rhel-8)
+- **CentOS Stream / RHEL** [9, 10](docs/operating_systems/rhel.md#rhel-8)
 - **Fedora** 39, 40
 - **Fedora CoreOS** (see [fcos Note](docs/operating_systems/fcos.md))
 - **openSUSE** Leap 15.x/Tumbleweed
-- **Oracle Linux** [8, 9](docs/operating_systems/rhel.md#rhel-8)
+- **Oracle Linux** [9, 10](docs/operating_systems/rhel.md#rhel-8)
-- **Alma Linux** [8, 9](docs/operating_systems/rhel.md#rhel-8)
+- **Alma Linux** [9, 10](docs/operating_systems/rhel.md#rhel-8)
-- **Rocky Linux** [8, 9](docs/operating_systems/rhel.md#rhel-8)
+- **Rocky Linux** [9, 10](docs/operating_systems/rhel.md#rhel-8) (experimental in 10: see [Rocky Linux 10 notes](docs/operating_systems/rhel.md#rocky-linux-10))
 - **Kylin Linux Advanced Server V10** (experimental: see [kylin linux notes](docs/operating_systems/kylinlinux.md))
 - **Amazon Linux 2** (experimental: see [amazon linux notes](docs/operating_systems/amazonlinux.md))
 - **UOS Linux** (experimental: see [uos linux notes](docs/operating_systems/uoslinux.md))
@@ -111,24 +111,23 @@ Note:

 <!-- BEGIN ANSIBLE MANAGED BLOCK -->
 - Core
-  - [kubernetes](https://github.com/kubernetes/kubernetes) 1.33.8
+  - [kubernetes](https://github.com/kubernetes/kubernetes) 1.35.0
   - [etcd](https://github.com/etcd-io/etcd) 3.5.26
   - [docker](https://www.docker.com/) 28.3
-  - [containerd](https://containerd.io/) 2.1.6
+  - [containerd](https://containerd.io/) 2.2.1
-  - [cri-o](http://cri-o.io/) 1.33.9 (experimental: see [CRI-O Note](docs/CRI/cri-o.md). Only on fedora, ubuntu and centos based OS)
+  - [cri-o](http://cri-o.io/) 1.35.0 (experimental: see [CRI-O Note](docs/CRI/cri-o.md). Only on fedora, ubuntu and centos based OS)
 - Network Plugin
   - [cni-plugins](https://github.com/containernetworking/plugins) 1.8.0
   - [calico](https://github.com/projectcalico/calico) 3.30.6
-  - [cilium](https://github.com/cilium/cilium) 1.18.5
+  - [cilium](https://github.com/cilium/cilium) 1.18.6
   - [flannel](https://github.com/flannel-io/flannel) 0.27.3
   - [kube-ovn](https://github.com/alauda/kube-ovn) 1.12.21
   - [kube-router](https://github.com/cloudnativelabs/kube-router) 2.1.1
   - [multus](https://github.com/k8snetworkplumbingwg/multus-cni) 4.2.2
-  - [kube-vip](https://github.com/kube-vip/kube-vip) 0.8.0
+  - [kube-vip](https://github.com/kube-vip/kube-vip) 1.0.3
 - Application
   - [cert-manager](https://github.com/jetstack/cert-manager) 1.15.3
-  - [coredns](https://github.com/coredns/coredns) 1.12.0
+  - [coredns](https://github.com/coredns/coredns) 1.12.4
-  - [ingress-nginx](https://github.com/kubernetes/ingress-nginx) 1.13.3
   - [argocd](https://argoproj.github.io/) 2.14.5
   - [helm](https://helm.sh/) 3.18.4
   - [metallb](https://metallb.universe.tf/) 0.13.9
@@ -202,8 +201,6 @@ See also [Network checker](docs/advanced/netcheck.md).

 ## Ingress Plugins

-- [nginx](https://kubernetes.github.io/ingress-nginx): the NGINX Ingress Controller.
 - [metallb](docs/ingress/metallb.md): the MetalLB bare-metal service LoadBalancer provider.

 ## Community docs and resources

View File

@@ -15,7 +15,7 @@ The Kubespray Project is released on an as-needed basis. The process is as follo
 1. The release issue is closed
 1. An announcement email is sent to `dev@kubernetes.io` with the subject `[ANNOUNCE] Kubespray $VERSION is released`
 1. The topic of the #kubespray channel is updated with `vX.Y.Z is released! | ...`
-1. Create/Update Issue for upgradeing kubernetes and [k8s-conformance](https://github.com/cncf/k8s-conformance)
+1. Create/Update Issue for upgrading kubernetes and [k8s-conformance](https://github.com/cncf/k8s-conformance)

 ## Major/minor releases and milestones

contrib/collection.sh (new executable file, +9 lines)

View File

@@ -0,0 +1,9 @@
+#!/bin/bash -eux
+# Install collection from source assuming dependencies are present.
+# Run in SemaphoreUI this bash script can install Kubespray from the repo
+NAMESPACE=kubernetes_sigs
+COLLECTION=kubespray
+MY_VER=$(grep '^version:' galaxy.yml|cut -d: -f2|sed 's/ //')
+ansible-galaxy collection build --force --output-path .
+ansible-galaxy collection install --offline --force $NAMESPACE-$COLLECTION-$MY_VER.tar.gz

View File

@@ -20,7 +20,6 @@ function create_container_image_tar() {
         kubectl describe cronjobs,jobs,pods --all-namespaces | grep " Image:" | awk '{print $2}' | sort | uniq > "${IMAGES}"
         # NOTE: etcd and pause cannot be seen as pods.
-        # The pause image is used for --pod-infra-container-image option of kubelet.
         kubectl cluster-info dump | grep -E "quay.io/coreos/etcd:|registry.k8s.io/pause:" | sed s@\"@@g >> "${IMAGES}"
     else
         echo "Getting images from file \"${IMAGES_FROM_FILE}\""

View File

@@ -281,9 +281,9 @@ For your cluster, edit `inventory/$CLUSTER/cluster.tfvars`.
 |`k8s_allowed_remote_ips_ipv6` | List of IPv6 CIDR allowed to initiate a SSH connection, empty by default |
 |`k8s_allowed_egress_ipv6_ips` | List of IPv6 CIDRs allowed for egress traffic, `["::/0"]` by default |
 |`worker_allowed_ports` | List of ports to open on worker nodes, `[{ "protocol" = "tcp", "port_range_min" = 30000, "port_range_max" = 32767, "remote_ip_prefix" = "0.0.0.0/0"}]` by default |
-|`worker_allowed_ports_ipv6` | List of ports to open on worker nodes for IPv6 CIDR blocks, `[{ "protocol" = "tcp", "port_range_min" = 30000, "port_range_max" = 32767, "remote_ip_prefix" = "::/0"}]` by default |
+|`worker_allowed_ports_ipv6` | List of ports to open on worker nodes for IPv6 CIDR blocks, `[{ "protocol" = "tcp", "port_range_min" = 30000, "port_range_max" = 32767, "remote_ip_prefix" = "::/0"}, { "protocol" = "ipv6-icmp", "port_range_min" = 0, "port_range_max" = 0, "remote_ip_prefix" = "::/0"}]` by default |
 |`master_allowed_ports` | List of ports to open on master nodes, expected format is `[{ "protocol" = "tcp", "port_range_min" = 443, "port_range_max" = 443, "remote_ip_prefix" = "0.0.0.0/0"}]`, empty by default |
-|`master_allowed_ports_ipv6` | List of ports to open on master nodes for IPv6 CIDR blocks, expected format is `[{ "protocol" = "tcp", "port_range_min" = 443, "port_range_max" = 443, "remote_ip_prefix" = "::/0"}]`, empty by default |
+|`master_allowed_ports_ipv6` | List of ports to open on master nodes for IPv6 CIDR blocks, `[{ "protocol" = "ipv6-icmp", "port_range_min" = 0, "port_range_max" = 0, "remote_ip_prefix" = "::/0"}]` by default |
 |`node_root_volume_size_in_gb` | Size of the root volume for nodes, 0 to use ephemeral storage |
 |`master_root_volume_size_in_gb` | Size of the root volume for masters, 0 to use ephemeral storage |
 |`master_volume_type` | Volume type of the root volume for control_plane, 'Default' by default |

View File

@@ -1006,7 +1006,7 @@ resource "openstack_compute_instance_v2" "glusterfs_node_no_floating_ip" {
name = "${var.cluster_name}-gfs-node-nf-${count.index + 1}" name = "${var.cluster_name}-gfs-node-nf-${count.index + 1}"
count = var.number_of_gfs_nodes_no_floating_ip count = var.number_of_gfs_nodes_no_floating_ip
availability_zone = element(var.az_list, count.index) availability_zone = element(var.az_list, count.index)
image_name = var.gfs_root_volume_size_in_gb == 0 ? local.image_to_use_gfs : null image_id = var.gfs_root_volume_size_in_gb == 0 ? local.image_to_use_gfs : null
flavor_id = var.flavor_gfs_node flavor_id = var.flavor_gfs_node
key_pair = openstack_compute_keypair_v2.k8s.name key_pair = openstack_compute_keypair_v2.k8s.name
@@ -1078,7 +1078,7 @@ resource "openstack_networking_floatingip_associate_v2" "k8s_nodes" {
port_id = openstack_networking_port_v2.k8s_nodes_port[each.key].id port_id = openstack_networking_port_v2.k8s_nodes_port[each.key].id
} }
resource "openstack_blockstorage_volume_v2" "glusterfs_volume" { resource "openstack_blockstorage_volume_v3" "glusterfs_volume" {
name = "${var.cluster_name}-glusterfs_volume-${count.index + 1}" name = "${var.cluster_name}-glusterfs_volume-${count.index + 1}"
count = var.gfs_root_volume_size_in_gb == 0 ? var.number_of_gfs_nodes_no_floating_ip : 0 count = var.gfs_root_volume_size_in_gb == 0 ? var.number_of_gfs_nodes_no_floating_ip : 0
description = "Non-ephemeral volume for GlusterFS" description = "Non-ephemeral volume for GlusterFS"
@@ -1088,5 +1088,5 @@ resource "openstack_blockstorage_volume_v2" "glusterfs_volume" {
resource "openstack_compute_volume_attach_v2" "glusterfs_volume" { resource "openstack_compute_volume_attach_v2" "glusterfs_volume" {
count = var.gfs_root_volume_size_in_gb == 0 ? var.number_of_gfs_nodes_no_floating_ip : 0 count = var.gfs_root_volume_size_in_gb == 0 ? var.number_of_gfs_nodes_no_floating_ip : 0
instance_id = element(openstack_compute_instance_v2.glusterfs_node_no_floating_ip.*.id, count.index) instance_id = element(openstack_compute_instance_v2.glusterfs_node_no_floating_ip.*.id, count.index)
volume_id = element(openstack_blockstorage_volume_v2.glusterfs_volume.*.id, count.index) volume_id = element(openstack_blockstorage_volume_v3.glusterfs_volume.*.id, count.index)
} }

View File

@@ -271,7 +271,14 @@ variable "master_allowed_ports" {
variable "master_allowed_ports_ipv6" { variable "master_allowed_ports_ipv6" {
type = list(any) type = list(any)
default = [] default = [
{
"protocol" = "ipv6-icmp"
"port_range_min" = 0
"port_range_max" = 0
"remote_ip_prefix" = "::/0"
},
]
} }
variable "worker_allowed_ports" { variable "worker_allowed_ports" {
@@ -297,6 +304,12 @@ variable "worker_allowed_ports_ipv6" {
"port_range_max" = 32767 "port_range_max" = 32767
"remote_ip_prefix" = "::/0" "remote_ip_prefix" = "::/0"
}, },
{
"protocol" = "ipv6-icmp"
"port_range_min" = 0
"port_range_max" = 0
"remote_ip_prefix" = "::/0"
},
] ]
} }

View File

@@ -3,7 +3,7 @@ terraform {
  required_providers {
    upcloud = {
      source = "UpCloudLtd/upcloud"
-      version = "~>5.9.0"
+      version = "~>5.29.1"
    }
  }
  required_version = ">= 0.13"

View File

@@ -3,7 +3,7 @@ terraform {
  required_providers {
    upcloud = {
      source = "UpCloudLtd/upcloud"
-      version = "~>5.9.0"
+      version = "~>5.29.1"
    }
  }
  required_version = ">= 0.13"

View File

@@ -1,5 +1,13 @@
# Cilium
+## Unprivileged agent configuration
+By default, Cilium is installed with `securityContext.privileged: false`. You need to set the `kube_owner` variable to `root` in the inventory:
+```yml
+kube_owner: root
+```
## IP Address Management (IPAM)
IP Address Management (IPAM) is responsible for the allocation and management of IP addresses used by network endpoints (container and others) managed by Cilium. The default mode is "Cluster Scope".
@@ -237,7 +245,7 @@ cilium_operator_extra_volume_mounts:
## Choose Cilium version
```yml
-cilium_version: "1.18.5"
+cilium_version: "1.18.6"
```
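Putting the two settings from this page together, a minimal group_vars sketch for an unprivileged Cilium deployment might look like this (the file path is a placeholder; the variable names come from this page):

```yaml
# inventory/mycluster/group_vars/k8s_cluster/k8s_cluster.yml (hypothetical path)
kube_owner: root
kube_network_plugin: cilium
cilium_version: "1.18.6"
```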
## Add variable to config ## Add variable to config

View File

@@ -32,7 +32,7 @@ add `kube_proxy_masquerade_all: true` in `group_vars/all/all.yml`
* Disable nodelocaldns
-The nodelocal dns IP is not reacheable.
+The nodelocal dns IP is not reachable.
Disable it in `sample/group_vars/k8s_cluster/k8s_cluster.yml`

View File

@@ -65,9 +65,8 @@ In kubespray, the default runtime name is "runc", and it can be configured with
containerd_runc_runtime:
  name: runc
  type: "io.containerd.runc.v2"
-  engine: ""
-  root: ""
  options:
+    Root: ""
    SystemdCgroup: "false"
    BinaryName: /usr/local/bin/my-runc
  base_runtime_spec: cri-base.json
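For reference, a complete runtime definition in the current schema might look like the following sketch; the values mirror the examples above, and the kata entry shows the optional additional-runtime form:

```yaml
# Sketch only: per-runtime flags now live under `options`
containerd_runc_runtime:
  name: runc
  type: "io.containerd.runc.v2"
  options:
    Root: ""
    SystemdCgroup: "false"
    BinaryName: /usr/local/bin/my-runc
  base_runtime_spec: cri-base.json
containerd_additional_runtimes:
  - name: kata
    type: "io.containerd.kata.v2"
    options:
      Root: ""
```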

View File

@@ -80,7 +80,7 @@ The `crio_remap_enable` configures the `/etc/subuid` and `/etc/subgid` files to
By default, 16M uids and gids are reserved for user namespaces (256 pods * 65536 uids/gids) at the end of the uid/gid space.
The `crio_default_capabilities` configure the default containers capabilities for the crio.
-Defaults capabilties are:
+Defaults capabilities are:
```yaml
crio_default_capabilities:

docs/_sidebar.md generated
View File

@@ -6,7 +6,6 @@
* [Downloads](/docs/advanced/downloads.md)
* [Gcp-lb](/docs/advanced/gcp-lb.md)
* [Kubernetes-reliability](/docs/advanced/kubernetes-reliability.md)
-* [Mitogen](/docs/advanced/mitogen.md)
* [Netcheck](/docs/advanced/netcheck.md)
* [Ntp](/docs/advanced/ntp.md)
* [Proxy](/docs/advanced/proxy.md)
@@ -58,7 +57,6 @@
* [Setting-up-your-first-cluster](/docs/getting_started/setting-up-your-first-cluster.md)
* Ingress
* [Alb Ingress Controller](/docs/ingress/alb_ingress_controller.md)
-* [Ingress Nginx](/docs/ingress/ingress_nginx.md)
* [Kube-vip](/docs/ingress/kube-vip.md)
* [Metallb](/docs/ingress/metallb.md)
* Operating Systems

View File

@@ -6,7 +6,7 @@
- [Create New TLS Root CA Certificate and Key](#create-new-tls-root-ca-certificate-and-key)
- [Install Cloudflare PKI/TLS `cfssl` Toolkit.](#install-cloudflare-pkitls-cfssl-toolkit)
- [Create Root Certificate Authority (CA) Configuration File](#create-root-certificate-authority-ca-configuration-file)
-- [Create Certficate Signing Request (CSR) Configuration File](#create-certficate-signing-request-csr-configuration-file)
+- [Create Certificate Signing Request (CSR) Configuration File](#create-certificate-signing-request-csr-configuration-file)
- [Create TLS Root CA Certificate and Key](#create-tls-root-ca-certificate-and-key)
Cert-Manager is a native Kubernetes certificate management controller. It can help with issuing certificates from a variety of sources, such as Lets Encrypt, HashiCorp Vault, Venafi, a simple signing key pair, or self signed. It will ensure certificates are valid and up to date, and attempt to renew certificates at a configured time before expiry.
@@ -30,14 +30,7 @@ If you don't have a TLS Root CA certificate and key available, you can create th
A common use-case for cert-manager is requesting TLS signed certificates to secure your ingress resources. This can be done by simply adding annotations to your Ingress resources and cert-manager will facilitate creating the Certificate resource for you. A small sub-component of cert-manager, ingress-shim, is responsible for this.
-To enable the Nginx Ingress controller as part of your Kubespray deployment, simply edit your K8s cluster addons inventory e.g. `inventory\sample\group_vars\k8s_cluster\addons.yml` and set `ingress_nginx_enabled` to true.
-```ini
-# Nginx ingress controller deployment
-ingress_nginx_enabled: true
-```
-For example, if you're using the Nginx ingress controller, you can secure the Prometheus ingress by adding the annotation `cert-manager.io/cluster-issuer: ca-issuer` and the `spec.tls` section to the `Ingress` resource definition.
+For example, if you're using the Traefik ingress controller, you can secure the Prometheus ingress by adding the annotation `cert-manager.io/cluster-issuer: ca-issuer` and the `spec.tls` section to the `Ingress` resource definition.
```yaml ```yaml
apiVersion: networking.k8s.io/v1 apiVersion: networking.k8s.io/v1
@@ -48,9 +41,9 @@ metadata:
  labels:
    prometheus: k8s
  annotations:
-    kubernetes.io/ingress.class: "nginx"
    cert-manager.io/cluster-issuer: ca-issuer
spec:
+  ingressClassName: "traefik"
  tls:
    - hosts:
        - prometheus.example.com
@@ -72,8 +65,8 @@ Once deployed to your K8s cluster, every 3 months cert-manager will automaticall
Please consult the official upstream documentation:
-- [cert-manager Ingress Usage](https://cert-manager.io/v1.5-docs/usage/ingress/)
+- [cert-manager Ingress Usage](https://cert-manager.io/usage/ingress/)
-- [cert-manager Ingress Tutorial](https://cert-manager.io/v1.5-docs/tutorials/acme/ingress/#step-3-assign-a-dns-name)
+- [cert-manager Ingress Tutorial](https://cert-manager.io/tutorials/acme/ingress/#step-3-assign-a-dns-name)
### ACME ### ACME
@@ -81,12 +74,12 @@ The ACME Issuer type represents a single account registered with the Automated C
Certificates issued by public ACME servers are typically trusted by clients computers by default. This means that, for example, visiting a website that is backed by an ACME certificate issued for that URL, will be trusted by default by most clients web browsers. ACME certificates are typically free.
-- [ACME Configuration](https://cert-manager.io/v1.5-docs/configuration/acme/)
+- [ACME Configuration](https://cert-manager.io/docs/configuration/acme/)
-- [ACME HTTP Validation](https://cert-manager.io/v1.5-docs/tutorials/acme/http-validation/)
+- [ACME HTTP Validation](https://cert-manager.io/docs/tutorials/acme/http-validation/)
-- [HTTP01 Challenges](https://cert-manager.io/v1.5-docs/configuration/acme/http01/)
+- [HTTP01 Challenges](https://cert-manager.io/docs/configuration/acme/http01/)
-- [ACME DNS Validation](https://cert-manager.io/v1.5-docs/tutorials/acme/dns-validation/)
+- [ACME DNS Validation](https://cert-manager.io/docs/tutorials/acme/dns-validation/)
-- [DNS01 Challenges](https://cert-manager.io/v1.5-docs/configuration/acme/dns01/)
+- [DNS01 Challenges](https://cert-manager.io/docs/configuration/acme/dns01/)
-- [ACME FAQ](https://cert-manager.io/v1.5-docs/faq/acme/)
+- [ACME FAQ](https://cert-manager.io/docs/troubleshooting/acme/)
#### ACME With An Internal Certificate Authority #### ACME With An Internal Certificate Authority
@@ -134,7 +127,7 @@ $ cat > ca-config.json <<EOF
EOF
```
-#### Create Certficate Signing Request (CSR) Configuration File
+#### Create Certificate Signing Request (CSR) Configuration File
The TLS certificate `names` details can be updated to your own specific requirements.

View File

@@ -1,4 +1,4 @@
-# GCP Load Balancers for type=LoadBalacer of Kubernetes Services
+# GCP Load Balancers for type=LoadBalancer of Kubernetes Services
> **Removed**: Since v1.31 (the Kubespray counterpart is v2.27), Kubernetes no longer supports `cloud_provider`. (except external cloud provider)

View File

@@ -1,30 +0,0 @@
# Mitogen
*Warning:* Mitogen support is now deprecated in kubespray due to upstream not releasing an updated version to support ansible 4.x (ansible-base 2.11.x) and above. The CI support has been stripped for mitogen and we are no longer validating any support or regressions for it. The supporting mitogen install playbook and integration documentation will be removed in a later version.
[Mitogen for Ansible](https://mitogen.networkgenomics.com/ansible_detailed.html) allow a 1.25x - 7x speedup and a CPU usage reduction of at least 2x, depending on network conditions, modules executed, and time already spent by targets on useful work. Mitogen cannot improve a module once it is executing, it can only ensure the module executes as quickly as possible.
## Install
```ShellSession
ansible-playbook contrib/mitogen/mitogen.yml
```
The above playbook sets the ansible `strategy` and `strategy_plugins` in `ansible.cfg` but you can also enable them if you use your own `ansible.cfg` by setting the environment varialbles:
```ShellSession
export ANSIBLE_STRATEGY=mitogen_linear
export ANSIBLE_STRATEGY_PLUGINS=plugins/mitogen/ansible_mitogen/plugins/strategy
```
... or `ansible.cfg` setup:
```ini
[defaults]
strategy_plugins = plugins/mitogen/ansible_mitogen/plugins/strategy
strategy=mitogen_linear
```
## Limitation
If you are experiencing problems, please see the [documentation](https://mitogen.networkgenomics.com/ansible_detailed.html#noteworthy-differences).

View File

@@ -30,9 +30,9 @@ If the latest version supported according to pip is 6.7.0 it means you are runni
Based on the table below and the available python version for your ansible host you should choose the appropriate ansible version to use with kubespray.
| Ansible Version | Python Version |
-|-----------------|----------------|
+|-------------------|----------------|
-| >= 2.17.3 | 3.10-3.12 |
+| >=2.18.0, <2.19.0 | 3.11-3.13 |
## Customize Ansible vars ## Customize Ansible vars
@@ -42,13 +42,10 @@ Kubespray expects users to use one of the following variables sources for settin
|----------------------------------------|------------------------------------------------------------------------------|
| inventory vars | |
| - **inventory group_vars** | most used |
-| - inventory host_vars | host specifc vars overrides, group_vars is usually more practical |
+| - inventory host_vars | host specific vars overrides, group_vars is usually more practical |
| **extra vars** (always win precedence) | override with ``ansible-playbook -e @foo.yml`` |
-[!IMPORTANT]
-Extra vars are best used to override kubespray internal variables, for instances, roles/vars/.
-Those vars are usually **not expected** (by Kubespray developers) to be modified by end users, and not part of Kubespray
-interface. Thus they can change, disappear, or break stuff unexpectedly.
+> Extra vars are best used to override kubespray internal variables, for instances, roles/vars/. Those vars are usually **not expected** (by Kubespray developers) to be modified by end users, and not part of Kubespray interface. Thus they can change, disappear, or break stuff unexpectedly.
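To make the precedence concrete, a minimal sketch of the most common source, a group_vars file (path and value are placeholders):

```yaml
# inventory/mycluster/group_vars/k8s_cluster/k8s_cluster.yml (hypothetical path)
kube_version: v1.35.0
```

The same variable placed in a file passed via `ansible-playbook -e @foo.yml` would take precedence over this group_vars value.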
## Ansible tags ## Ansible tags
@@ -81,7 +78,6 @@ The following tags are defined in playbooks:
| crio | Configuring crio container engine for hosts |
| crun | Configuring crun runtime |
| csi-driver | Configuring csi driver |
-| dashboard | Installing and configuring the Kubernetes Dashboard |
| dns | Remove dns entries when resetting |
| docker | Configuring docker engine runtime for hosts |
| download | Fetching container images to a delegate host |
@@ -122,7 +118,7 @@ The following tags are defined in playbooks:
| metrics_server | Configuring metrics_server |
| netchecker | Installing netchecker K8s app |
| network | Configuring networking plugins for K8s |
-| mounts | Umount kubelet dirs when reseting |
+| mounts | Umount kubelet dirs when resetting |
| multus | Network plugin multus |
| nginx | Configuring LB for kube-apiserver instances |
| node | Configuring K8s minion (compute) node role |
@@ -181,17 +177,13 @@ ansible-playbook -i inventory/sample/hosts.ini cluster.yml \
Note: use `--tags` and `--skip-tags` wisely and only if you're 100% sure what you're doing.
-## Mitogen
-Mitogen support is deprecated, please see [mitogen related docs](/docs/advanced/mitogen.md) for usage and reasons for deprecation.
## Troubleshooting Ansible issues
Having the wrong version of ansible, ansible collections or python dependencies can cause issue.
-In particular, Kubespray ship custom modules which Ansible needs to find, for which you should specify [ANSIBLE_LIBRAY](https://docs.ansible.com/ansible/latest/dev_guide/developing_locally.html#adding-a-module-or-plugin-outside-of-a-collection)
+In particular, Kubespray ship custom modules which Ansible needs to find, for which you should specify [ANSIBLE_LIBRARY](https://docs.ansible.com/ansible/latest/dev_guide/developing_locally.html#adding-a-module-or-plugin-outside-of-a-collection)
```ShellSession
-export ANSIBLE_LIBRAY=<kubespray_dir>/library`
+export ANSIBLE_LIBRARY=<kubespray_dir>/library`
```
A simple way to ensure you get all the correct version of Ansible is to use A simple way to ensure you get all the correct version of Ansible is to use
@@ -200,11 +192,11 @@ You will then need to use [bind mounts](https://docs.docker.com/storage/bind-mou
to access the inventory and SSH key in the container, like this:
```ShellSession
-git checkout v2.29.0
+git checkout v2.30.0
-docker pull quay.io/kubespray/kubespray:v2.29.0
+docker pull quay.io/kubespray/kubespray:v2.30.0
docker run --rm -it --mount type=bind,source="$(pwd)"/inventory/sample,dst=/inventory \
  --mount type=bind,source="${HOME}"/.ssh/id_rsa,dst=/root/.ssh/id_rsa \
-  quay.io/kubespray/kubespray:v2.29.0 bash
+  quay.io/kubespray/kubespray:v2.30.0 bash
# Inside the container you may now run the kubespray playbooks:
ansible-playbook -i /inventory/inventory.ini --private-key /root/.ssh/id_rsa cluster.yml
```

View File

@@ -6,7 +6,7 @@ See [.gitlab-ci.yml](/.gitlab-ci.yml) and the included files for an overview.
## Runners
-Kubespray has 2 types of GitLab runners, both deployed on the Kubespray CI cluster (hosted on Oracle Cloud Infrastucture):
+Kubespray has 2 types of GitLab runners, both deployed on the Kubespray CI cluster (hosted on Oracle Cloud Infrastructure):
- pods: use the [gitlab-ci kubernetes executor](https://docs.gitlab.com/runner/executors/kubernetes/)
- vagrant: custom executor running in pods with access to the libvirt socket on the nodes
@@ -145,7 +145,6 @@ upstream_dns_servers:
- 1.0.0.1
# Extensions
-ingress_nginx_enabled: True
helm_enabled: True
cert_manager_enabled: True
metrics_server_enabled: True
@@ -156,7 +155,7 @@ kube_feature_gates:
  - "NodeSwap=True"
```
-## Aditional files
+## Additional files
This section documents additional files used to complete a deployment of the kubespray CI, these files sit on the control-plane node and assume a working kubernetes cluster.

View File

@@ -15,8 +15,8 @@ fedora39 | :white_check_mark: | :x: | :x: | :x: | :x: | :white_check_mark: | :x
fedora40 | :x: | :x: | :x: | :x: | :x: | :x: | :x: |
flatcar4081 | :white_check_mark: | :x: | :x: | :x: | :x: | :x: | :x: |
openeuler24 | :white_check_mark: | :x: | :x: | :x: | :x: | :x: | :x: |
+rockylinux10 | :white_check_mark: | :white_check_mark: | :x: | :x: | :x: | :x: | :x: |
rockylinux9 | :white_check_mark: | :white_check_mark: | :x: | :x: | :x: | :x: | :x: |
-ubuntu20 | :white_check_mark: | :x: | :x: | :x: | :x: | :x: | :x: |
ubuntu22 | :white_check_mark: | :x: | :x: | :x: | :x: | :x: | :x: |
ubuntu24 | :white_check_mark: | :white_check_mark: | :x: | :white_check_mark: | :x: | :white_check_mark: | :x: |
@@ -33,8 +33,8 @@ fedora39 | :white_check_mark: | :x: | :x: | :x: | :x: | :x: | :x: |
fedora40 | :white_check_mark: | :x: | :x: | :x: | :x: | :x: | :x: |
flatcar4081 | :x: | :x: | :x: | :x: | :x: | :x: | :x: |
openeuler24 | :x: | :x: | :x: | :x: | :x: | :x: | :x: |
+rockylinux10 | :x: | :x: | :x: | :x: | :x: | :x: | :x: |
rockylinux9 | :x: | :x: | :x: | :x: | :x: | :x: | :x: |
-ubuntu20 | :x: | :x: | :x: | :x: | :x: | :x: | :x: |
ubuntu22 | :white_check_mark: | :x: | :x: | :x: | :x: | :x: | :x: |
ubuntu24 | :x: | :x: | :x: | :x: | :x: | :x: | :x: |
@@ -51,7 +51,7 @@ fedora39 | :x: | :x: | :x: | :x: | :x: | :x: | :x: |
fedora40 | :white_check_mark: | :x: | :x: | :x: | :x: | :x: | :x: |
flatcar4081 | :x: | :x: | :x: | :x: | :x: | :x: | :x: |
openeuler24 | :x: | :x: | :x: | :x: | :x: | :x: | :x: |
+rockylinux10 | :x: | :x: | :x: | :x: | :x: | :x: | :x: |
rockylinux9 | :x: | :x: | :x: | :x: | :x: | :x: | :x: |
-ubuntu20 | :x: | :x: | :x: | :x: | :x: | :x: | :x: |
ubuntu22 | :white_check_mark: | :x: | :x: | :x: | :x: | :x: | :x: |
ubuntu24 | :white_check_mark: | :x: | :x: | :x: | :x: | :x: | :x: |

View File

@@ -83,32 +83,6 @@ authentication. One can get a kubeconfig from kube_control_plane hosts
For more information on kubeconfig and accessing a Kubernetes cluster, refer to
the Kubernetes [documentation](https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/).
-## Accessing Kubernetes Dashboard
-Supported version is kubernetes-dashboard v2.0.x :
-- Login option : token/kubeconfig by default
-- Deployed by default in "kube-system" namespace, can be overridden with `dashboard_namespace: kubernetes-dashboard` in inventory,
-- Only serves over https
-Access is described in [dashboard docs](https://github.com/kubernetes/dashboard/tree/master/docs/user/accessing-dashboard). With kubespray's default deployment in kube-system namespace, instead of kubernetes-dashboard :
-- Proxy URL is <http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/#/login>
-- kubectl commands must be run with "-n kube-system"
-Accessing through Ingress is highly recommended. For proxy access, please note that proxy must listen to [localhost](https://github.com/kubernetes/dashboard/issues/692#issuecomment-220492484) (`proxy --address="x.x.x.x"` will not work)
-For token authentication, guide to create Service Account is provided in [dashboard sample user](https://github.com/kubernetes/dashboard/blob/master/docs/user/access-control/creating-sample-user.md) doc. Still take care of default namespace.
-Access can also by achieved via ssh tunnel on a control plane :
-```bash
-# localhost:8081 will be sent to control-plane-1's own localhost:8081
-ssh -L8001:localhost:8001 user@control-plane-1
-sudo -i
-kubectl proxy
-```
## Accessing Kubernetes API
The main client of Kubernetes is `kubectl`. It is installed on each kube_control_plane

View File

@@ -1,203 +0,0 @@
# Installation Guide
## Contents
- [Prerequisite Generic Deployment Command](#prerequisite-generic-deployment-command)
- [Provider Specific Steps](#provider-specific-steps)
- [Docker for Mac](#docker-for-mac)
- [minikube](#minikube)
- [AWS](#aws)
- [GCE - GKE](#gce-gke)
- [Azure](#azure)
- [Bare-metal](#bare-metal)
- [Verify installation](#verify-installation)
- [Detect installed version](#detect-installed-version)
- [Using Helm](#using-helm)
## Prerequisite Generic Deployment Command
!!! attention
The default configuration watches Ingress object from *all the namespaces*.
To change this behavior use the flag `--watch-namespace` to limit the scope to a particular namespace.
!!! warning
If multiple Ingresses define different paths for the same host, the ingress controller will merge the definitions.
!!! attention
If you're using GKE you need to initialize your user as a cluster-admin with the following command:
```console
kubectl create clusterrolebinding cluster-admin-binding \
--clusterrole cluster-admin \
--user $(gcloud config get-value account)
```
The following **Mandatory Command** is required for all deployments except for AWS. See below for the AWS version.
```console
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.13.3/deploy/static/provider/cloud/deploy.yaml
```
### Provider Specific Steps
There are cloud provider specific yaml files.
#### Docker for Mac
Kubernetes is available in Docker for Mac (from [version 18.06.0-ce](https://docs.docker.com/docker-for-mac/release-notes/#stable-releases-of-2018))
First you need to [enable kubernetes](https://docs.docker.com/docker-for-mac/#kubernetes).
Then you have to create a service:
```console
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/provider/cloud-generic.yaml
```
#### minikube
For standard usage:
```console
minikube addons enable ingress
```
For development:
1. Disable the ingress addon:
```console
minikube addons disable ingress
```
1. Execute `make dev-env`
1. Confirm the `nginx-ingress-controller` deployment exists:
```console
$ kubectl get pods -n ingress-nginx
NAME READY STATUS RESTARTS AGE
default-http-backend-66b447d9cf-rrlf9 1/1 Running 0 12s
nginx-ingress-controller-fdcdcd6dd-vvpgs 1/1 Running 0 11s
```
#### AWS
In AWS we use an Elastic Load Balancer (ELB) to expose the NGINX Ingress controller behind a Service of `Type=LoadBalancer`.
Since Kubernetes v1.9.0 it is possible to use a classic load balancer (ELB) or network load balancer (NLB)
Please check the [elastic load balancing AWS details page](https://aws.amazon.com/elasticloadbalancing/details/)
##### Elastic Load Balancer - ELB
This setup requires to choose in which layer (L4 or L7) we want to configure the Load Balancer:
- [Layer 4](https://en.wikipedia.org/wiki/OSI_model#Layer_4:_Transport_Layer): Use an Network Load Balancer (NLB) with TCP as the listener protocol for ports 80 and 443.
- [Layer 7](https://en.wikipedia.org/wiki/OSI_model#Layer_7:_Application_Layer): Use an Elastic Load Balancer (ELB) with HTTP as the listener protocol for port 80 and terminate TLS in the ELB
For L4:
```console
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/provider/aws/deploy.yaml
```
For L7:
Change the value of `service.beta.kubernetes.io/aws-load-balancer-ssl-cert` in the file `provider/aws/deploy-tls-termination.yaml` replacing the dummy id with a valid one. The dummy value is `"arn:aws:acm:us-west-2:XXXXXXXX:certificate/XXXXXX-XXXXXXX-XXXXXXX-XXXXXXXX"`
Check that no change is necessary with regards to the ELB idle timeout. In some scenarios, users may want to modify the ELB idle timeout, so please check the [ELB Idle Timeouts section](#elb-idle-timeouts) for additional information. If a change is required, users will need to update the value of `service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout` in `provider/aws/deploy-tls-termination.yaml`
Then execute:
```console
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/provider/aws/deploy-tls-termination.yaml
```
This example creates an ELB with just two listeners, one in port 80 and another in port 443
![Listeners](https://github.com/kubernetes/ingress-nginx/blob/main/docs/images/elb-l7-listener.png)
##### ELB Idle Timeouts
In some scenarios users will need to modify the value of the ELB idle timeout.
Users need to ensure the idle timeout is less than the [keepalive_timeout](http://nginx.org/en/docs/http/ngx_http_core_module.html#keepalive_timeout) that is configured for NGINX.
By default NGINX `keepalive_timeout` is set to `75s`.
The default ELB idle timeout will work for most scenarios, unless the NGINX [keepalive_timeout](http://nginx.org/en/docs/http/ngx_http_core_module.html#keepalive_timeout) has been modified,
in which case `service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout` will need to be modified to ensure it is less than the `keepalive_timeout` the user has configured.
*Please Note: An idle timeout of `3600s` is recommended when using WebSockets.*
More information with regards to idle timeouts for your Load Balancer can be found in the [official AWS documentation](https://docs.aws.amazon.com/elasticloadbalancing/latest/classic/config-idle-timeout.html).
##### Network Load Balancer (NLB)
This type of load balancer is supported since v1.10.0 as an ALPHA feature.
```console
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/provider/aws/service-nlb.yaml
```
#### GCE-GKE
```console
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/provider/cloud-generic.yaml
```
**Important Note:** proxy protocol is not supported in GCE/GKE
#### Azure
```console
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/provider/cloud-generic.yaml
```
#### Bare-metal
Using [NodePort](https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport):
```console
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/deploy/static/provider/baremetal/deploy.yaml
```
!!! tip
For extended notes regarding deployments on bare-metal, see [Bare-metal considerations](https://github.com/kubernetes/ingress-nginx/blob/main/docs/deploy/baremetal.md).
### Verify installation
To check if the ingress controller pods have started, run the following command:
```console
kubectl get pods --all-namespaces -l app.kubernetes.io/name=ingress-nginx --watch
```
Once the operator pods are running, you can cancel the above command by typing `Ctrl+C`.
Now, you are ready to create your first ingress.
### Detect installed version
To detect which version of the ingress controller is running, exec into the pod and run `nginx-ingress-controller version` command.
```console
POD_NAMESPACE=ingress-nginx
POD_NAME=$(kubectl get pods -n $POD_NAMESPACE -l app.kubernetes.io/component=controller -o jsonpath='{.items[0].metadata.name}')
kubectl exec -it $POD_NAME -n $POD_NAMESPACE -- /nginx-ingress-controller --version
```
## Using Helm
NGINX Ingress controller can be installed via [Helm](https://helm.sh/) using the chart [ingress-nginx/ingress-nginx](https://kubernetes.github.io/ingress-nginx).
Official documentation is [here](https://kubernetes.github.io/ingress-nginx/deploy/#using-helm)
To install the chart with the release name `my-nginx`:
```console
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm install my-nginx ingress-nginx/ingress-nginx
```
Detect installed version:
```console
POD_NAME=$(kubectl get pods -l app.kubernetes.io/name=ingress-nginx -o jsonpath='{.items[0].metadata.name}')
kubectl exec -it $POD_NAME -- /nginx-ingress-controller --version
```

View File

@@ -21,6 +21,12 @@ metallb_enabled: true
metallb_speaker_enabled: true
```
+By default, MetalLB resources are deployed into the `metallb-system` namespace. You can override this namespace using a variable.
+```yaml
+metallb_namespace: woodenlb-system
+```
By default only the MetalLB BGP speaker is allowed to run on control plane nodes. If you have a single node cluster or a cluster where control plane are also worker nodes you may need to enable tolerations for the MetalLB controller:
```yaml
@@ -35,7 +41,7 @@ metallb_config:
    effect: "NoSchedule"
```
-If you'd like to set additional nodeSelector and tolerations values, you can do so in the following fasion:
+If you'd like to set additional nodeSelector and tolerations values, you can do so in the following fashion:
```yaml
metallb_config:

View File

@@ -37,4 +37,12 @@ If you have containers that are using iptables in the host network namespace (`h
you need to ensure they are using iptables-nft.
An example how k8s do the autodetection can be found [in this PR](https://github.com/kubernetes/kubernetes/pull/82966)
-The kernel version is lower than the kubenretes 1.32 system validation, please refer to the [kernel requirements](../operations/kernel-requirements.md).
+The kernel version is lower than the kubernetes 1.32 system validation, please refer to the [kernel requirements](../operations/kernel-requirements.md).
+## Rocky Linux 10
+(Experimental in Kubespray CI)
+The official Rocky Linux 10 cloud image does not include `kernel-module-extra`. Both Kube Proxy and CNI rely on this package, and since it relates to kernel version compatibility (which may require VM reboots, etc.), we haven't found an ideal solution.
+However, some users report that it doesn't affect them (minimal version). Therefore, the Kubespray CI Rocky Linux 10 image is built by Kubespray maintainers using `diskimage-builder`. For detailed methods, please refer to [the comments](https://github.com/kubernetes-sigs/kubespray/pull/12355#issuecomment-3705400093).

View File

@@ -100,8 +100,6 @@ kubelet_make_iptables_util_chains: true
kubelet_feature_gates: ["RotateKubeletServerCertificate=true"]
kubelet_seccomp_default: true
kubelet_systemd_hardening: true
-# To disable kubelet's staticPodPath (for nodes that don't use static pods like worker nodes)
-kubelet_static_pod_path: ""
# In case you have multiple interfaces in your
# control plane nodes and you want to specify the right
# IP addresses, kubelet_secure_addresses allows you

View File

@@ -11,7 +11,7 @@ kubeadm_ignore_preflight_errors:
The Kernel Version Matrix:
-| OS Verion | Kernel Verion | Kernel >=4.19 |
+| OS Version | Kernel Version | Kernel >=4.19 |
|--- | --- | --- |
| RHEL 9 | 5.14 | :white_check_mark: |
| RHEL 8 | 4.18 | :x: |

View File

@@ -31,6 +31,8 @@ That's it.
Append the new host to the inventory and run `cluster.yml`. You can NOT use `scale.yml` for that.
+**Note:** When adding new control plane nodes, always append them to the end of the `kube_control_plane` group in your inventory. Adding control plane nodes in the first position is not supported and will cause the playbook to fail.
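A minimal YAML-inventory sketch (hostnames are hypothetical), with the new control plane host appended last:

```yaml
all:
  children:
    kube_control_plane:
      hosts:
        node1:
        node2:
        node3:   # newly added control plane node, appended at the end
```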
### 2) Restart kube-system/nginx-proxy
In all hosts, restart nginx-proxy pod. This pod is a local proxy for the apiserver. Kubespray will update its static config, but it needs to be restarted in order to reload.

View File

@@ -2,7 +2,7 @@
namespace: kubernetes_sigs
description: Deploy a production ready Kubernetes cluster
name: kubespray
-version: 2.29.2
+version: 2.31.0
readme: README.md
authors:
  - The Kubespray maintainers (https://kubernetes.slack.com/channels/kubespray)

View File

@@ -38,6 +38,7 @@
loadSidebar: 'docs/_sidebar.md',
repo: 'https://github.com/kubernetes-sigs/kubespray',
auto2top: true,
+noCompileLinks: ['.*\.ini'],
logo: '/logo/logo-clear.png'
}
</script>

View File

@@ -11,15 +11,15 @@
# containerd_runc_runtime:
#   name: runc
#   type: "io.containerd.runc.v2"
-#   engine: ""
+#   options:
-#   root: ""
+#     Root: ""
# containerd_additional_runtimes:
# Example for Kata Containers as additional runtime:
# - name: kata
#   type: "io.containerd.kata.v2"
-#   engine: ""
+#   options:
-#   root: ""
+#     Root: ""
# containerd_grpc_max_recv_message_size: 16777216
# containerd_grpc_max_send_message_size: 16777216

View File

@@ -1,8 +1,4 @@
---
-# Kubernetes dashboard
-# RBAC required. see docs/getting-started.md for access details.
-# dashboard_enabled: false
# Helm deployment
helm_enabled: false
@@ -67,39 +63,6 @@ local_volume_provisioner_enabled: false
# Gateway API CRDs
gateway_api_enabled: false
-# Nginx ingress controller deployment
-ingress_nginx_enabled: false
-# ingress_nginx_host_network: false
-# ingress_nginx_service_type: LoadBalancer
-# ingress_nginx_service_annotations:
-#   example.io/loadbalancerIPs: 1.2.3.4
-# ingress_nginx_service_nodeport_http: 30080
-# ingress_nginx_service_nodeport_https: 30081
-ingress_publish_status_address: ""
-# ingress_nginx_nodeselector:
-#   kubernetes.io/os: "linux"
-# ingress_nginx_tolerations:
-#   - key: "node-role.kubernetes.io/control-plane"
-#     operator: "Equal"
-#     value: ""
-#     effect: "NoSchedule"
-# ingress_nginx_namespace: "ingress-nginx"
-# ingress_nginx_insecure_port: 80
-# ingress_nginx_secure_port: 443
-# ingress_nginx_configmap:
-#   map-hash-bucket-size: "128"
-#   ssl-protocols: "TLSv1.2 TLSv1.3"
-# ingress_nginx_configmap_tcp_services:
-#   9000: "default/example-go:8080"
-# ingress_nginx_configmap_udp_services:
-#   53: "kube-system/coredns:53"
-# ingress_nginx_extra_args:
-#   - --default-ssl-certificate=default/foo-tls
-# ingress_nginx_termination_grace_period_seconds: 300
-# ingress_nginx_class: nginx
-# ingress_nginx_without_class: true
-# ingress_nginx_default: false
# ALB ingress controller deployment
ingress_alb_enabled: false
# alb_ingress_aws_region: "us-east-1"

View File

@@ -22,7 +22,8 @@ local_release_dir: "/tmp/releases"
# Random shifts for retrying failed ops like pushing/downloading
retry_stagger: 5
-# This is the user that owns tha cluster installation.
+# This is the user that owns the cluster installation.
+# Note: cilium needs to set kube_owner to root https://kubespray.io/#/docs/CNI/cilium?id=unprivileged-agent-configuration
kube_owner: kube
# This is the group that the cert creation scripts chgrp the

View File

@@ -1,2 +1,2 @@
---
-requires_ansible: ">=2.17.3"
+requires_ansible: ">=2.18.0,<2.19.0"

View File

@@ -1,5 +1,5 @@
-# Use immutable image tags rather than mutable tags (like ubuntu:22.04)
+# Use immutable image tags rather than mutable tags (like ubuntu:24.04)
-FROM ubuntu:jammy-20230308
+FROM ubuntu:noble-20260113@sha256:cd1dba651b3080c3686ecf4e3c4220f026b521fb76978881737d24f200828b2b
# Some tools like yamllint need this
# Pip needs this as well at the moment to install ansible
# (and potentially other packages)
@@ -27,14 +27,14 @@ RUN apt update -q \
  ca-certificates \
  curl \
  gnupg2 \
-  software-properties-common \
  unzip \
  libvirt-clients \
  qemu-utils \
  qemu-kvm \
  dnsmasq \
-  && curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add - \
+  && curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc \
-  && add-apt-repository "deb [arch=$(dpkg --print-architecture)] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" \
+  && echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu \
+  $(. /etc/os-release && echo "${UBUNTU_CODENAME:-$VERSION_CODENAME}") stable" | tee /etc/apt/sources.list.d/docker.list \
  && apt update -q \
  && apt install --no-install-recommends -yq docker-ce \
  && apt autoremove -yqq --purge && apt clean && rm -rf /var/lib/apt/lists/* /var/log/*
@@ -44,11 +44,10 @@ ADD ./requirements.txt /kubespray/requirements.txt
ADD ./tests/requirements.txt /kubespray/tests/requirements.txt
RUN update-alternatives --install /usr/bin/python python /usr/bin/python3 1 \
-  && pip install --no-compile --no-cache-dir pip -U \
-  && pip install --no-compile --no-cache-dir -r tests/requirements.txt \
-  && pip install --no-compile --no-cache-dir -r requirements.txt \
-  && curl -L https://dl.k8s.io/release/v1.33.8/bin/linux/$(dpkg --print-architecture)/kubectl -o /usr/local/bin/kubectl \
-  && echo $(curl -L https://dl.k8s.io/release/v1.33.8/bin/linux/$(dpkg --print-architecture)/kubectl.sha256) /usr/local/bin/kubectl | sha256sum --check \
+  && pip install --break-system-packages --ignore-installed --no-compile --no-cache-dir pip -U \
+  && pip install --break-system-packages --no-compile --no-cache-dir -r tests/requirements.txt \
+  && curl -L https://dl.k8s.io/release/v1.35.0/bin/linux/$(dpkg --print-architecture)/kubectl -o /usr/local/bin/kubectl \
+  && echo $(curl -L https://dl.k8s.io/release/v1.35.0/bin/linux/$(dpkg --print-architecture)/kubectl.sha256) /usr/local/bin/kubectl | sha256sum --check \
  && chmod a+x /usr/local/bin/kubectl \
  # Install Vagrant
  && curl -LO https://releases.hashicorp.com/vagrant/${VAGRANT_VERSION}/vagrant_${VAGRANT_VERSION}-1_$(dpkg --print-architecture).deb \
@@ -56,5 +55,5 @@ RUN update-alternatives --install /usr/bin/python python /usr/bin/python3 1 \
  && rm vagrant_${VAGRANT_VERSION}-1_$(dpkg --print-architecture).deb \
  && vagrant plugin install vagrant-libvirt \
  # Install Kubernetes collections
-  && pip install --no-compile --no-cache-dir kubernetes \
+  && pip install --break-system-packages --no-compile --no-cache-dir kubernetes \
  && ansible-galaxy collection install kubernetes.core

View File

@@ -5,8 +5,8 @@
  become: false
  run_once: true
  vars:
-    minimal_ansible_version: 2.17.3
+    minimal_ansible_version: 2.18.0
-    maximal_ansible_version: 2.18.0
+    maximal_ansible_version: 2.19.0
  tags: always
  tasks:
    - name: "Check {{ minimal_ansible_version }} <= Ansible version < {{ maximal_ansible_version }}"

View File

@@ -55,7 +55,7 @@
- { role: kubernetes-apps/kubelet-csr-approver, tags: kubelet-csr-approver }
- { role: container-engine, tags: "container-engine", when: deploy_container_engine }
- { role: kubernetes/node, tags: node }
-- { role: kubernetes/control-plane, tags: master, upgrade_cluster_setup: true }
+- { role: kubernetes/control-plane, tags: control-plane, upgrade_cluster_setup: true }
- { role: kubernetes/client, tags: client }
- { role: kubernetes/node-label, tags: node-label }
- { role: kubernetes/node-taint, tags: node-taint }
@@ -100,7 +100,7 @@
environment: "{{ proxy_disable_env }}"
roles:
  - { role: kubespray_defaults }
-  - { role: win_nodes/kubernetes_patch, tags: ["master", "win_nodes"] }
+  - { role: win_nodes/kubernetes_patch, tags: ["control-plane", "win_nodes"] }
- name: Install Calico Route Reflector
  hosts: calico_rr

View File

@@ -1,7 +1,7 @@
-ansible==10.7.0
+ansible==11.13.0
# Needed for community.crypto module
-cryptography==46.0.2
+cryptography==46.0.4
# Needed for jinja2 json_query templating
-jmespath==1.0.1
+jmespath==1.1.0
# Needed for ansible.utils.ipaddr
netaddr==1.3.0

View File

@@ -9,6 +9,8 @@ platforms:
    vm_memory: 512
provisioner:
  name: ansible
+  env:
+    ANSIBLE_ROLES_PATH: ../../../
  config_options:
    defaults:
      callbacks_enabled: profile_tasks

View File

@@ -9,6 +9,8 @@ platforms:
    vm_memory: 512
provisioner:
  name: ansible
+  env:
+    ANSIBLE_ROLES_PATH: ../../../
  config_options:
    defaults:
      callbacks_enabled: profile_tasks

View File

@@ -21,6 +21,8 @@ platforms:
    vm_memory: 512
provisioner:
  name: ansible
+  env:
+    ANSIBLE_ROLES_PATH: ../../../
  config_options:
    defaults:
      callbacks_enabled: profile_tasks

View File

@@ -13,10 +13,9 @@ containerd_snapshotter: "overlayfs"
containerd_runc_runtime:
  name: runc
  type: "io.containerd.runc.v2"
-  engine: ""
-  root: ""
  base_runtime_spec: cri-base.json
  options:
+    Root: ""
    SystemdCgroup: "{{ containerd_use_systemd_cgroup | ternary('true', 'false') }}"
    BinaryName: "{{ bin_dir }}/runc"
@@ -24,8 +23,8 @@ containerd_additional_runtimes: []
# Example for Kata Containers as additional runtime:
# - name: kata
#   type: "io.containerd.kata.v2"
-#   engine: ""
+#   options:
-#   root: ""
+#     Root: ""
@@ -36,8 +35,8 @@ containerd_default_base_runtime_spec_patch:
      hard: "{{ containerd_base_runtime_spec_rlimit_nofile }}"
      soft: "{{ containerd_base_runtime_spec_rlimit_nofile }}"
-# Can help reduce disk usage
+# Only for containerd < 2.1; discard unpacked layers to save disk space
-# https://github.com/containerd/containerd/discussions/6295
+# https://github.com/containerd/containerd/blob/release/2.1/docs/cri/config.md#image-pull-configuration-since-containerd-v21
containerd_discard_unpacked_layers: true
containerd_base_runtime_specs:
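Given the clarified comment, the variable now only has an effect on containerd older than 2.1; on such hosts it could be disabled explicitly if unpacked layers need to be kept. A hypothetical group_vars override:

```yaml
# Hypothetical override: keep unpacked layers (relevant only for containerd < 2.1)
containerd_discard_unpacked_layers: false
```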

View File

@@ -52,8 +52,6 @@ oom_score = {{ containerd_oom_score }}
{% for runtime in [containerd_runc_runtime] + containerd_additional_runtimes %}
[plugins."io.containerd.cri.v1.runtime".containerd.runtimes.{{ runtime.name }}]
runtime_type = "{{ runtime.type }}"
-runtime_engine = "{{ runtime.engine }}"
-runtime_root = "{{ runtime.root }}"
{% if runtime.base_runtime_spec is defined %}
base_runtime_spec = "{{ containerd_cfg_dir }}/{{ runtime.base_runtime_spec }}"
{% endif %}
@@ -78,7 +76,9 @@ oom_score = {{ containerd_oom_score }}
[plugins."io.containerd.cri.v1.images"] [plugins."io.containerd.cri.v1.images"]
snapshotter = "{{ containerd_snapshotter }}" snapshotter = "{{ containerd_snapshotter }}"
{% if containerd_discard_unpacked_layers and containerd_version is version('2.1.0', '<') %}
discard_unpacked_layers = {{ containerd_discard_unpacked_layers | lower }} discard_unpacked_layers = {{ containerd_discard_unpacked_layers | lower }}
{% endif %}
image_pull_progress_timeout = "{{ containerd_image_pull_progress_timeout }}" image_pull_progress_timeout = "{{ containerd_image_pull_progress_timeout }}"
[plugins."io.containerd.cri.v1.images".pinned_images] [plugins."io.containerd.cri.v1.images".pinned_images]
sandbox = "{{ pod_infra_image_repo }}:{{ pod_infra_image_tag }}" sandbox = "{{ pod_infra_image_repo }}:{{ pod_infra_image_tag }}"

View File

@@ -25,6 +25,8 @@ provisioner:
     group_vars:
       all:
         become: true
+      k8s_cluster:
+        container_manager: docker
   playbooks:
     create: ../../../../../tests/cloud_playbooks/create-kubevirt.yml
     prepare: ../../../molecule/prepare.yml

View File

@@ -32,6 +32,8 @@ crio_registry_auth: []
 crio_seccomp_profile: ""
 crio_selinux: "{{ (preinstall_selinux_state == 'enforcing') | lower }}"
 crio_signature_policy: "{% if ansible_os_family == 'ClearLinux' %}/usr/share/defaults/crio/policy.json{% endif %}"
+# Set the pull progress timeout
+crio_pull_progress_timeout: "10s"
 # Override system default for storage driver
 # crio_storage_driver: "overlay"
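
Tuning the new knob is then a plain inventory override; for example (illustrative value and path):

# inventory/mycluster/group_vars/k8s_cluster/k8s-cluster.yml
crio_pull_progress_timeout: "30s"  # "0" disables the timeout and the progress output

The "0 disables" behaviour comes from the crio.conf documentation quoted in the template change below.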

View File

@@ -2,8 +2,6 @@
 - name: Converge
   hosts: all
   become: true
-  vars:
-    container_manager: crio
   roles:
     - role: kubespray_defaults
     - role: container-engine/cri-o

View File

@@ -41,6 +41,10 @@ provisioner:
     defaults:
       callbacks_enabled: profile_tasks
       timeout: 120
+  inventory:
+    group_vars:
+      k8s_cluster:
+        container_manager: crio
   playbooks:
     create: ../../../../../tests/cloud_playbooks/create-kubevirt.yml
     prepare: ../../../molecule/prepare.yml
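
Together with the converge-playbook change above, scenario-specific settings now travel through molecule's generated inventory instead of play-level vars, so kubespray_defaults picks them up like any other group variable. A minimal sketch of the pattern (paths assumed):

# molecule.yml (excerpt)
provisioner:
  name: ansible
  inventory:
    group_vars:
      k8s_cluster:
        container_manager: crio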

View File

@@ -2,7 +2,6 @@
 - name: Test CRI-O cri
   import_playbook: ../../../molecule/test_cri.yml
   vars:
-    container_manager: crio
     cri_socket: unix:///var/run/crio/crio.sock
     cri_name: cri-o
 - name: Test running a container with crun

View File

@@ -348,6 +348,12 @@ signature_policy = "{{ crio_signature_policy }}"
 # ignore; the latter will ignore volumes entirely.
 image_volumes = "mkdir"

+# The timeout for an image pull to make progress until the pull operation gets
+# canceled. This value will be also used for calculating the pull progress interval
+# to pull_progress_timeout / 10. Can be set to 0 to disable the timeout as well as
+# the progress output.
+pull_progress_timeout = "{{ crio_pull_progress_timeout }}"
+
 # The crio.network table containers settings pertaining to the management of
 # CNI plugins.
 [crio.network]

View File

@@ -1,22 +0,0 @@
---
- name: Crictl | Download crictl
include_tasks: "../../../download/tasks/download_file.yml"
vars:
download: "{{ download_defaults | combine(downloads.crictl) }}"
- name: Install crictl config
template:
src: crictl.yaml.j2
dest: /etc/crictl.yaml
owner: root
mode: "0644"
- name: Copy crictl binary from download dir
copy:
src: "{{ local_release_dir }}/crictl"
dest: "{{ bin_dir }}/crictl"
mode: "0755"
remote_src: true
notify:
- Get crictl completion
- Install crictl completion

View File

@@ -1,3 +1,22 @@
 ---
-- name: Install crictl
-  include_tasks: crictl.yml
+- name: Crictl | Download crictl
+  include_tasks: "../../../download/tasks/download_file.yml"
+  vars:
+    download: "{{ download_defaults | combine(downloads.crictl) }}"
+
+- name: Install crictl config
+  template:
+    src: crictl.yaml.j2
+    dest: /etc/crictl.yaml
+    owner: root
+    mode: "0644"
+
+- name: Copy crictl binary from download dir
+  copy:
+    src: "{{ local_release_dir }}/crictl"
+    dest: "{{ bin_dir }}/crictl"
+    mode: "0755"
+    remote_src: true
+  notify:
+    - Get crictl completion
+    - Install crictl completion

View File

@@ -55,7 +55,7 @@
   register: keyserver_task_result
   until: keyserver_task_result is succeeded
   retries: 4
-  delay: "{{ retry_stagger | d(3) }}"
+  delay: "{{ retry_stagger }}"
   with_items: "{{ docker_repo_key_info.repo_keys }}"
   environment: "{{ proxy_env }}"
   when: ansible_pkg_mgr == 'apt'
@@ -128,7 +128,7 @@
   register: docker_task_result
   until: docker_task_result is succeeded
   retries: 4
-  delay: "{{ retry_stagger | d(3) }}"
+  delay: "{{ retry_stagger }}"
   notify: Restart docker
   when:
     - not ansible_os_family in ["Flatcar", "Flatcar Container Linux by Kinvolk"]

View File

@@ -21,6 +21,11 @@ provisioner:
     defaults:
       callbacks_enabled: profile_tasks
       timeout: 120
+  inventory:
+    group_vars:
+      k8s_cluster:
+        gvisor_enabled: true
+        container_manager: containerd
   playbooks:
     create: ../../../../../tests/cloud_playbooks/create-kubevirt.yml
     prepare: ../../../molecule/prepare.yml

View File

@@ -12,11 +12,20 @@
     is_ostree: "{{ ostree.stat.exists }}"

 - name: Runc | Uninstall runc package managed by package manager
-  package:
-    name: "{{ runc_package_name }}"
-    state: absent
   when:
-    - not (is_ostree or (ansible_distribution == "Flatcar Container Linux by Kinvolk") or (ansible_distribution == "Flatcar"))
+    - not is_ostree
+    - ansible_distribution != "Flatcar Container Linux by Kinvolk"
+    - ansible_distribution != "Flatcar"
+  block:
+    - name: Runc | Remove package
+      package:
+        name: "{{ runc_package_name }}"
+        state: absent
+    - name: Runc | Remove orphaned binary
+      file:
+        path: /usr/bin/runc
+        state: absent
+      when: runc_bin_dir != "/usr/bin"

 - name: Runc | Download runc binary
   include_tasks: "../../../download/tasks/download_file.yml"
@@ -29,10 +38,3 @@
     dest: "{{ runc_bin_dir }}/runc"
     mode: "0755"
     remote_src: true
-
-- name: Runc | Remove orphaned binary
-  file:
-    path: /usr/bin/runc
-    state: absent
-  when: runc_bin_dir != "/usr/bin"
-  ignore_errors: true  # noqa ignore-errors
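
The refactor relies on a `when` attached to a block being inherited by every task inside it, so the package removal and the orphaned-binary cleanup now share one OS guard; a generic sketch of that Ansible pattern (names illustrative):

- name: Example | Guarded cleanup
  when: not is_ostree  # inherited by each task in the block
  block:
    - name: Example | Always runs under the guard
      debug:
        msg: "block-level when applies here"
    - name: Example | Adds its own condition
      debug:
        msg: "block when and task when are ANDed together"
      when: extra_condition | default(false)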

View File

@@ -21,6 +21,11 @@ provisioner:
     defaults:
       callbacks_enabled: profile_tasks
       timeout: 120
+  inventory:
+    group_vars:
+      k8s_cluster:
+        youki_enabled: true
+        container_manager: crio
   playbooks:
     create: ../../../../../tests/cloud_playbooks/create-kubevirt.yml
     prepare: ../../../molecule/prepare.yml

View File

@@ -11,6 +11,7 @@ dns_nodes_per_replica: 16
 dns_cores_per_replica: 256
 dns_prevent_single_point_failure: "{{ 'true' if dns_min_replicas | int > 1 else 'false' }}"
 enable_coredns_reverse_dns_lookups: true
+coredns_svc_name: "coredns"
 coredns_ordinal_suffix: ""
 # dns_extra_tolerations: [{effect: NoSchedule, operator: "Exists"}]
 coredns_affinity:
@@ -118,29 +119,5 @@ netchecker_agent_log_level: 5
 netchecker_server_log_level: 5
 netchecker_etcd_log_level: info

-# Dashboard
-dashboard_replicas: 1
-# Namespace for dashboard
-dashboard_namespace: kube-system
-# Limits for dashboard
-dashboard_cpu_limit: 100m
-dashboard_memory_limit: 256M
-dashboard_cpu_requests: 50m
-dashboard_memory_requests: 64M
-# Set dashboard_use_custom_certs to true if overriding dashboard_certs_secret_name with a secret that
-# contains dashboard_tls_key_file and dashboard_tls_cert_file instead of using the initContainer provisioned certs
-dashboard_use_custom_certs: false
-dashboard_certs_secret_name: kubernetes-dashboard-certs
-dashboard_tls_key_file: dashboard.key
-dashboard_tls_cert_file: dashboard.crt
-dashboard_master_toleration: true
-# Override dashboard default settings
-dashboard_token_ttl: 900
-dashboard_skip_login: false
-
 # Policy Controllers
 # policy_controller_extra_tolerations: [{effect: NoSchedule, operator: "Exists"}]

View File

@@ -109,15 +109,3 @@
       - netchecker-server-clusterrolebinding.yml.j2
       - netchecker-server-deployment.yml.j2
       - netchecker-server-svc.yml.j2
-
-- name: Kubernetes Apps | Dashboard
-  command:
-    cmd: "{{ kubectl_apply_stdin }}"
-    stdin: "{{ lookup('template', 'dashboard.yml.j2') }}"
-  delegate_to: "{{ groups['kube_control_plane'][0] }}"
-  run_once: true
-  vars:
-    k8s_namespace: "{{ dashboard_namespace }}"
-  when: dashboard_enabled
-  tags:
-    - dashboard

View File

@@ -2,7 +2,7 @@
 apiVersion: v1
 kind: Service
 metadata:
-  name: coredns{{ coredns_ordinal_suffix }}
+  name: {{ coredns_svc_name }}{{ coredns_ordinal_suffix }}
   namespace: kube-system
   labels:
     k8s-app: kube-dns{{ coredns_ordinal_suffix }}
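
With the default wired through the template, renaming the Service becomes a one-line inventory override, e.g. (illustrative value):

coredns_svc_name: "kube-dns"

The k8s-app label stays kube-dns{{ coredns_ordinal_suffix }} (unchanged above), so selectors and tooling keyed on the label keep working.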

View File

@@ -1,323 +0,0 @@
# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Configuration to deploy release version of the Dashboard UI compatible with
# Kubernetes 1.8.
#
# Example usage: kubectl create -f <this_file>
{% if k8s_namespace != 'kube-system' %}
---
apiVersion: v1
kind: Namespace
metadata:
name: {{ k8s_namespace }}
labels:
name: {{ k8s_namespace }}
{% endif %}
---
# ------------------- Dashboard Secrets ------------------- #
apiVersion: v1
kind: Secret
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard-certs
type: Opaque
---
apiVersion: v1
kind: Secret
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard-csrf
type: Opaque
data:
csrf: ""
---
apiVersion: v1
kind: Secret
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard-key-holder
type: Opaque
---
# ------------------- Dashboard ConfigMap ------------------- #
kind: ConfigMap
apiVersion: v1
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard-settings
---
# ------------------- Dashboard Service Account ------------------- #
apiVersion: v1
kind: ServiceAccount
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
---
# ------------------- Dashboard Role & Role Binding ------------------- #
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
rules:
# Allow Dashboard to get, update and delete Dashboard exclusive secrets.
- apiGroups: [""]
resources: ["secrets"]
resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs", "kubernetes-dashboard-csrf"]
verbs: ["get", "update", "delete"]
# Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
- apiGroups: [""]
resources: ["configmaps"]
resourceNames: ["kubernetes-dashboard-settings"]
verbs: ["get", "update"]
# Allow Dashboard to get metrics.
- apiGroups: [""]
resources: ["services"]
resourceNames: ["heapster", "dashboard-metrics-scraper"]
verbs: ["proxy"]
- apiGroups: [""]
resources: ["services/proxy"]
resourceNames: ["heapster", "http:heapster:", "https:heapster:", "dashboard-metrics-scraper", "http:dashboard-metrics-scraper"]
verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: kubernetes-dashboard
subjects:
- kind: ServiceAccount
name: kubernetes-dashboard
namespace: {{ k8s_namespace }}
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: kubernetes-dashboard
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: kubernetes-dashboard
subjects:
- kind: ServiceAccount
name: kubernetes-dashboard
namespace: {{ k8s_namespace }}
---
# ------------------- Dashboard Deployment ------------------- #
kind: Deployment
apiVersion: apps/v1
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
spec:
replicas: {{ dashboard_replicas }}
revisionHistoryLimit: 10
selector:
matchLabels:
k8s-app: kubernetes-dashboard
template:
metadata:
labels:
k8s-app: kubernetes-dashboard
spec:
securityContext:
seccompProfile:
type: RuntimeDefault
priorityClassName: system-cluster-critical
containers:
- name: kubernetes-dashboard
image: {{ dashboard_image_repo }}:{{ dashboard_image_tag }}
imagePullPolicy: {{ k8s_image_pull_policy }}
resources:
limits:
cpu: {{ dashboard_cpu_limit }}
memory: {{ dashboard_memory_limit }}
requests:
cpu: {{ dashboard_cpu_requests }}
memory: {{ dashboard_memory_requests }}
ports:
- containerPort: 8443
protocol: TCP
args:
- --namespace={{ k8s_namespace }}
{% if dashboard_use_custom_certs %}
- --tls-key-file={{ dashboard_tls_key_file }}
- --tls-cert-file={{ dashboard_tls_cert_file }}
{% else %}
- --auto-generate-certificates
{% endif %}
{% if dashboard_skip_login %}
- --enable-skip-login
{% endif %}
- --authentication-mode=token
# Uncomment the following line to manually specify Kubernetes API server Host
# If not specified, Dashboard will attempt to auto discover the API server and connect
# to it. Uncomment only if the default does not work.
# - --apiserver-host=http://my-address:port
- --token-ttl={{ dashboard_token_ttl }}
volumeMounts:
- name: kubernetes-dashboard-certs
mountPath: /certs
# Create on-disk volume to store exec logs
- mountPath: /tmp
name: tmp-volume
livenessProbe:
httpGet:
scheme: HTTPS
path: /
port: 8443
initialDelaySeconds: 30
timeoutSeconds: 30
securityContext:
allowPrivilegeEscalation: false
readOnlyRootFilesystem: true
runAsUser: 1001
runAsGroup: 2001
volumes:
- name: kubernetes-dashboard-certs
secret:
secretName: {{ dashboard_certs_secret_name }}
- name: tmp-volume
emptyDir: {}
serviceAccountName: kubernetes-dashboard
{% if dashboard_master_toleration %}
tolerations:
- key: node-role.kubernetes.io/control-plane
effect: NoSchedule
{% endif %}
---
# ------------------- Dashboard Service ------------------- #
kind: Service
apiVersion: v1
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
spec:
ports:
- port: 443
targetPort: 8443
selector:
k8s-app: kubernetes-dashboard
---
# ------------------- Metrics Scraper Service Account ------------------- #
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
rules:
# Allow Metrics Scraper to get metrics from the Metrics server
- apiGroups: ["metrics.k8s.io"]
resources: ["pods", "nodes"]
verbs: ["get", "list", "watch"]
---
# ------------------- Metrics Scraper Service ------------------- #
kind: Service
apiVersion: v1
metadata:
labels:
k8s-app: kubernetes-metrics-scraper
name: dashboard-metrics-scraper
spec:
ports:
- port: 8000
targetPort: 8000
selector:
k8s-app: kubernetes-metrics-scraper
---
# ------------------- Metrics Scraper Deployment ------------------- #
kind: Deployment
apiVersion: apps/v1
metadata:
labels:
k8s-app: kubernetes-metrics-scraper
name: kubernetes-metrics-scraper
spec:
replicas: 1
revisionHistoryLimit: 10
selector:
matchLabels:
k8s-app: kubernetes-metrics-scraper
template:
metadata:
labels:
k8s-app: kubernetes-metrics-scraper
spec:
securityContext:
seccompProfile:
type: RuntimeDefault
priorityClassName: system-cluster-critical
containers:
- name: kubernetes-metrics-scraper
image: {{ dashboard_metrics_scraper_repo }}:{{ dashboard_metrics_scraper_tag }}
ports:
- containerPort: 8000
protocol: TCP
livenessProbe:
httpGet:
scheme: HTTP
path: /
port: 8000
initialDelaySeconds: 30
timeoutSeconds: 30
securityContext:
allowPrivilegeEscalation: false
readOnlyRootFilesystem: true
runAsUser: 1001
runAsGroup: 2001
volumeMounts:
- mountPath: /tmp
name: tmp-volume
serviceAccountName: kubernetes-dashboard
volumes:
- name: tmp-volume
emptyDir: {}
{% if dashboard_master_toleration %}
tolerations:
- key: node-role.kubernetes.io/control-plane
effect: NoSchedule
{% endif %}

View File

@@ -1,6 +1,5 @@
 ---
 gateway_api_enabled: false
-gateway_api_version: 1.2.1
 # `gateway_api_channel` default is "standard".
 # "standard" release channel includes all resources that have graduated to GA or beta, including GatewayClass, Gateway, HTTPRoute, and ReferenceGrant.

View File

@@ -21,7 +21,7 @@ external_openstack_cacert: "{{ lookup('env', 'OS_CACERT') }}"
 ##   arg1: "value1"
 ##   arg2: "value2"
 external_openstack_cloud_controller_extra_args: {}
-external_openstack_cloud_controller_image_tag: "v1.32.0"
+external_openstack_cloud_controller_image_tag: "v1.35.0"
 external_openstack_cloud_controller_bind_address: 127.0.0.1
 external_openstack_cloud_controller_dns_policy: ClusterFirst

View File

@@ -8,3 +8,4 @@ local_path_provisioner_is_default_storageclass: "true"
 local_path_provisioner_debug: false
 local_path_provisioner_helper_image_repo: "busybox"
 local_path_provisioner_helper_image_tag: "latest"
+local_path_provisioner_resources: {}

View File

@@ -35,6 +35,10 @@ spec:
             valueFrom:
               fieldRef:
                 fieldPath: metadata.namespace
+{% if local_path_provisioner_resources %}
+        resources:
+          {{ local_path_provisioner_resources | to_nice_yaml | indent(10) | trim }}
+{% endif %}
       volumes:
         - name: config-volume
           configMap:
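
An inventory override then flows straight into the Deployment; for example (illustrative requests/limits):

local_path_provisioner_resources:
  requests:
    cpu: 50m
    memory: 64Mi
  limits:
    memory: 128Mi

The default {} leaves the manifest unchanged, because the Jinja guard above skips the resources: stanza for an empty dict.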

View File

@@ -1,28 +0,0 @@
---
ingress_nginx_namespace: "ingress-nginx"
ingress_nginx_host_network: false
ingress_nginx_service_type: LoadBalancer
ingress_nginx_service_nodeport_http: ""
ingress_nginx_service_nodeport_https: ""
ingress_nginx_service_annotations: {}
ingress_publish_status_address: ""
ingress_nginx_publish_service: "{{ ingress_nginx_namespace }}/ingress-nginx"
ingress_nginx_nodeselector:
kubernetes.io/os: "linux"
ingress_nginx_tolerations: []
ingress_nginx_insecure_port: 80
ingress_nginx_secure_port: 443
ingress_nginx_metrics_port: 10254
ingress_nginx_configmap: {}
ingress_nginx_configmap_tcp_services: {}
ingress_nginx_configmap_udp_services: {}
ingress_nginx_extra_args: []
ingress_nginx_termination_grace_period_seconds: 300
ingress_nginx_class: nginx
ingress_nginx_without_class: true
ingress_nginx_default: false
ingress_nginx_webhook_enabled: false
ingress_nginx_webhook_job_ttl: 1800
ingress_nginx_opentelemetry_enabled: false
ingress_nginx_probe_initial_delay_seconds: 10

View File

@@ -1,69 +0,0 @@
---
- name: NGINX Ingress Controller | Create addon dir
file:
path: "{{ kube_config_dir }}/addons/ingress_nginx"
state: directory
owner: root
group: root
mode: "0755"
when:
- inventory_hostname == groups['kube_control_plane'][0]
- name: NGINX Ingress Controller | Templates list
set_fact:
ingress_nginx_templates:
- { name: 00-namespace, file: 00-namespace.yml, type: ns }
- { name: cm-ingress-nginx, file: cm-ingress-nginx.yml, type: cm }
- { name: cm-tcp-services, file: cm-tcp-services.yml, type: cm }
- { name: cm-udp-services, file: cm-udp-services.yml, type: cm }
- { name: sa-ingress-nginx, file: sa-ingress-nginx.yml, type: sa }
- { name: clusterrole-ingress-nginx, file: clusterrole-ingress-nginx.yml, type: clusterrole }
- { name: clusterrolebinding-ingress-nginx, file: clusterrolebinding-ingress-nginx.yml, type: clusterrolebinding }
- { name: role-ingress-nginx, file: role-ingress-nginx.yml, type: role }
- { name: rolebinding-ingress-nginx, file: rolebinding-ingress-nginx.yml, type: rolebinding }
- { name: ingressclass-nginx, file: ingressclass-nginx.yml, type: ingressclass }
- { name: ds-ingress-nginx-controller, file: ds-ingress-nginx-controller.yml, type: ds }
ingress_nginx_template_for_service:
- { name: svc-ingress-nginx, file: svc-ingress-nginx.yml, type: svc }
ingress_nginx_templates_for_webhook:
- { name: admission-webhook-configuration, file: admission-webhook-configuration.yml, type: sa }
- { name: sa-admission-webhook, file: sa-admission-webhook.yml, type: sa }
- { name: clusterrole-admission-webhook, file: clusterrole-admission-webhook.yml, type: clusterrole }
- { name: clusterrolebinding-admission-webhook, file: clusterrolebinding-admission-webhook.yml, type: clusterrolebinding }
- { name: role-admission-webhook, file: role-admission-webhook.yml, type: role }
- { name: rolebinding-admission-webhook, file: rolebinding-admission-webhook.yml, type: rolebinding }
- { name: admission-webhook-job, file: admission-webhook-job.yml, type: job }
- { name: svc-ingress-nginx-controller-admission, file: svc-ingress-nginx-controller-admission.yml, type: svc }
- name: NGINX Ingress Controller | Append extra templates to NGINX Ingress Template list for service
set_fact:
ingress_nginx_templates: "{{ ingress_nginx_templates + ingress_nginx_template_for_service }}"
when: not ingress_nginx_host_network
- name: NGINX Ingress Controller | Append extra templates to NGINX Ingress Templates list for webhook
set_fact:
ingress_nginx_templates: "{{ ingress_nginx_templates + ingress_nginx_templates_for_webhook }}"
when: ingress_nginx_webhook_enabled
- name: NGINX Ingress Controller | Create manifests
template:
src: "{{ item.file }}.j2"
dest: "{{ kube_config_dir }}/addons/ingress_nginx/{{ item.file }}"
mode: "0644"
with_items: "{{ ingress_nginx_templates }}"
register: ingress_nginx_manifests
when:
- inventory_hostname == groups['kube_control_plane'][0]
- name: NGINX Ingress Controller | Apply manifests
kube:
name: "{{ item.item.name }}"
namespace: "{{ ingress_nginx_namespace }}"
kubectl: "{{ bin_dir }}/kubectl"
resource: "{{ item.item.type }}"
filename: "{{ kube_config_dir }}/addons/ingress_nginx/{{ item.item.file }}"
state: "latest"
with_items: "{{ ingress_nginx_manifests.results }}"
when:
- inventory_hostname == groups['kube_control_plane'][0]

View File

@@ -1,7 +0,0 @@
---
apiVersion: v1
kind: Namespace
metadata:
name: {{ ingress_nginx_namespace }}
labels:
name: {{ ingress_nginx_namespace }}

View File

@@ -1,30 +0,0 @@
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
name: ingress-nginx-admission
webhooks:
- admissionReviewVersions:
- v1
clientConfig:
service:
name: ingress-nginx-controller-admission
namespace: {{ ingress_nginx_namespace }}
path: /networking/v1/ingresses
port: 443
failurePolicy: Fail
matchPolicy: Equivalent
name: validate.nginx.ingress.kubernetes.io
rules:
- apiGroups:
- networking.k8s.io
apiVersions:
- v1
operations:
- CREATE
- UPDATE
resources:
- ingresses
sideEffects: None

View File

@@ -1,96 +0,0 @@
---
apiVersion: batch/v1
kind: Job
metadata:
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
name: ingress-nginx-admission-create
namespace: {{ ingress_nginx_namespace }}
spec:
template:
metadata:
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
name: ingress-nginx-admission-create
spec:
containers:
- args:
- create
- --host=ingress-nginx-controller-admission,ingress-nginx-controller-admission.$(POD_NAMESPACE).svc
- --namespace=$(POD_NAMESPACE)
- --secret-name=ingress-nginx-admission
env:
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
image: "{{ ingress_nginx_kube_webhook_certgen_image_repo }}:{{ ingress_nginx_kube_webhook_certgen_image_tag }}"
imagePullPolicy: {{ k8s_image_pull_policy }}
name: create
securityContext:
allowPrivilegeEscalation: false
capabilities:
drop:
- ALL
readOnlyRootFilesystem: true
runAsGroup: 65532
runAsNonRoot: true
runAsUser: 65532
seccompProfile:
type: RuntimeDefault
nodeSelector:
kubernetes.io/os: linux
restartPolicy: OnFailure
serviceAccountName: ingress-nginx-admission
ttlSecondsAfterFinished: {{ ingress_nginx_webhook_job_ttl }}
---
apiVersion: batch/v1
kind: Job
metadata:
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
name: ingress-nginx-admission-patch
namespace: {{ ingress_nginx_namespace }}
spec:
template:
metadata:
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
name: ingress-nginx-admission-patch
spec:
containers:
- args:
- patch
- --webhook-name=ingress-nginx-admission
- --namespace=$(POD_NAMESPACE)
- --patch-mutating=false
- --secret-name=ingress-nginx-admission
- --patch-failure-policy=Fail
env:
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
image: "{{ ingress_nginx_kube_webhook_certgen_image_repo }}:{{ ingress_nginx_kube_webhook_certgen_image_tag }}"
imagePullPolicy: {{ k8s_image_pull_policy }}
name: patch
securityContext:
allowPrivilegeEscalation: false
capabilities:
drop:
- ALL
readOnlyRootFilesystem: true
runAsGroup: 65532
runAsNonRoot: true
runAsUser: 65532
seccompProfile:
type: RuntimeDefault
nodeSelector:
kubernetes.io/os: linux
restartPolicy: OnFailure
serviceAccountName: ingress-nginx-admission
ttlSecondsAfterFinished: {{ ingress_nginx_webhook_job_ttl }}

View File

@@ -1,15 +0,0 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
name: ingress-nginx-admission
rules:
- apiGroups:
- admissionregistration.k8s.io
resources:
- validatingwebhookconfigurations
verbs:
- get
- update

View File

@@ -1,36 +0,0 @@
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: ingress-nginx
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
rules:
- apiGroups: [""]
resources: ["configmaps", "endpoints", "nodes", "pods", "secrets", "namespaces"]
verbs: ["list", "watch"]
- apiGroups: [""]
resources: ["nodes"]
verbs: ["get"]
- apiGroups: [""]
resources: ["services"]
verbs: ["get", "list", "watch"]
- apiGroups: ["networking.k8s.io"]
resources: ["ingresses"]
verbs: ["get", "list", "watch"]
- apiGroups: [""]
resources: ["events"]
verbs: ["create", "patch"]
- apiGroups: ["networking.k8s.io"]
resources: ["ingresses/status"]
verbs: ["update"]
- apiGroups: ["networking.k8s.io"]
resources: ["ingressclasses"]
verbs: ["get", "list", "watch"]
- apiGroups: ["coordination.k8s.io"]
resources: ["leases"]
verbs: ["list", "watch"]
- apiGroups: ["discovery.k8s.io"]
resources: ["endpointslices"]
verbs: ["get", "list", "watch"]

View File

@@ -1,16 +0,0 @@
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
name: ingress-nginx-admission
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: ingress-nginx-admission
subjects:
- kind: ServiceAccount
name: ingress-nginx-admission
namespace: {{ ingress_nginx_namespace }}

View File

@@ -1,16 +0,0 @@
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: ingress-nginx
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: ingress-nginx
subjects:
- kind: ServiceAccount
name: ingress-nginx
namespace: {{ ingress_nginx_namespace }}

View File

@@ -1,13 +0,0 @@
---
apiVersion: v1
kind: ConfigMap
metadata:
name: ingress-nginx
namespace: {{ ingress_nginx_namespace }}
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
{% if ingress_nginx_configmap %}
data:
{{ ingress_nginx_configmap | to_nice_yaml | indent(2) }}
{%- endif %}

View File

@@ -1,13 +0,0 @@
---
apiVersion: v1
kind: ConfigMap
metadata:
name: tcp-services
namespace: {{ ingress_nginx_namespace }}
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
{% if ingress_nginx_configmap_tcp_services %}
data:
{{ ingress_nginx_configmap_tcp_services | to_nice_yaml | indent(2) }}
{%- endif %}

View File

@@ -1,13 +0,0 @@
---
apiVersion: v1
kind: ConfigMap
metadata:
name: udp-services
namespace: {{ ingress_nginx_namespace }}
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
{% if ingress_nginx_configmap_udp_services %}
data:
{{ ingress_nginx_configmap_udp_services | to_nice_yaml | indent(2) }}
{%- endif %}

View File

@@ -1,201 +0,0 @@
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: ingress-nginx-controller
namespace: {{ ingress_nginx_namespace }}
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
spec:
selector:
matchLabels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
template:
metadata:
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
annotations:
prometheus.io/port: "10254"
prometheus.io/scrape: "true"
spec:
serviceAccountName: ingress-nginx
terminationGracePeriodSeconds: {{ ingress_nginx_termination_grace_period_seconds }}
{% if ingress_nginx_opentelemetry_enabled %}
initContainers:
- name: opentelemetry
command:
- /init_module
image: {{ ingress_nginx_opentelemetry_image_repo }}:{{ ingress_nginx_opentelemetry_image_tag }}
securityContext:
allowPrivilegeEscalation: false
capabilities:
add:
- NET_BIND_SERVICE
drop:
- ALL
readOnlyRootFilesystem: false
runAsGroup: 82
runAsNonRoot: true
runAsUser: 101
seccompProfile:
type: RuntimeDefault
volumeMounts:
- mountPath: /modules_mount
name: modules
{% endif %}
{% if ingress_nginx_host_network %}
hostNetwork: true
dnsPolicy: ClusterFirstWithHostNet
{% endif %}
{% if ingress_nginx_nodeselector %}
nodeSelector:
{{ ingress_nginx_nodeselector | to_nice_yaml | indent(width=8) }}
{%- endif %}
{% if ingress_nginx_tolerations %}
tolerations:
{{ ingress_nginx_tolerations | to_nice_yaml(indent=2) | indent(width=8) }}
{% endif %}
priorityClassName: {% if ingress_nginx_namespace == 'kube-system' %}system-node-critical{% else %}k8s-cluster-critical{% endif %}{{ '' }}
containers:
- name: ingress-nginx-controller
image: {{ ingress_nginx_controller_image_repo }}:{{ ingress_nginx_controller_image_tag }}
imagePullPolicy: {{ k8s_image_pull_policy }}
lifecycle:
preStop:
exec:
command:
- /wait-shutdown
args:
- /nginx-ingress-controller
- --configmap=$(POD_NAMESPACE)/ingress-nginx
- --election-id=ingress-controller-leader-{{ ingress_nginx_class }}
- --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
- --udp-services-configmap=$(POD_NAMESPACE)/udp-services
- --annotations-prefix=nginx.ingress.kubernetes.io
- --ingress-class={{ ingress_nginx_class }}
{% if ingress_nginx_without_class %}
- --watch-ingress-without-class=true
{% endif %}
{% if ingress_publish_status_address != "" %}
- --publish-status-address={{ ingress_publish_status_address }}
{% elif ingress_nginx_host_network %}
- --report-node-internal-ip-address
{% elif ingress_nginx_publish_service != "" %}
- --publish-service={{ ingress_nginx_publish_service }}
{% endif %}
{% for extra_arg in ingress_nginx_extra_args %}
- {{ extra_arg }}
{% endfor %}
{% if ingress_nginx_webhook_enabled %}
- --validating-webhook=:8443
- --validating-webhook-certificate=/usr/local/certificates/cert
- --validating-webhook-key=/usr/local/certificates/key
{% endif %}
securityContext:
allowPrivilegeEscalation: false
capabilities:
add:
- NET_BIND_SERVICE
drop:
- ALL
readOnlyRootFilesystem: false
runAsGroup: 82
runAsNonRoot: true
runAsUser: 101
seccompProfile:
type: RuntimeDefault
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: LD_PRELOAD
value: /usr/local/lib/libmimalloc.so
ports:
- name: http
containerPort: 80
hostPort: {{ ingress_nginx_insecure_port }}
- name: https
containerPort: 443
hostPort: {{ ingress_nginx_secure_port }}
- name: metrics
containerPort: 10254
{% if not ingress_nginx_host_network %}
hostPort: {{ ingress_nginx_metrics_port }}
{% endif %}
{% if ingress_nginx_configmap_tcp_services %}
{% for port in ingress_nginx_configmap_tcp_services.keys() %}
- name: tcp-port-{{ port }}
containerPort: {{ port | int }}
protocol: TCP
{% if not ingress_nginx_host_network %}
hostPort: {{ port | int }}
{% endif %}
{% endfor %}
{% endif %}
{% if ingress_nginx_configmap_udp_services %}
{% for port in ingress_nginx_configmap_udp_services.keys() %}
- name: udp-port-{{ port }}
containerPort: {{ port | int }}
protocol: UDP
{% if not ingress_nginx_host_network %}
hostPort: {{ port | int }}
{% endif %}
{% endfor %}
{% endif %}
{% if ingress_nginx_webhook_enabled %}
- name: webhook
containerPort: 8443
protocol: TCP
{% endif %}
livenessProbe:
httpGet:
path: /healthz
port: 10254
scheme: HTTP
initialDelaySeconds: {{ ingress_nginx_probe_initial_delay_seconds }}
periodSeconds: 10
timeoutSeconds: 5
successThreshold: 1
failureThreshold: 3
readinessProbe:
httpGet:
path: /healthz
port: 10254
scheme: HTTP
initialDelaySeconds: {{ ingress_nginx_probe_initial_delay_seconds }}
periodSeconds: 10
timeoutSeconds: 5
successThreshold: 1
failureThreshold: 3
{% if ingress_nginx_webhook_enabled or ingress_nginx_opentelemetry_enabled %}
volumeMounts:
{% if ingress_nginx_webhook_enabled %}
- mountPath: /usr/local/certificates/
name: webhook-cert
readOnly: true
{% endif %}
{% if ingress_nginx_opentelemetry_enabled %}
- name: modules
mountPath: /modules_mount
{% endif %}
{% endif %}
{% if ingress_nginx_webhook_enabled or ingress_nginx_opentelemetry_enabled %}
volumes:
{% if ingress_nginx_webhook_enabled %}
- name: webhook-cert
secret:
secretName: ingress-nginx-admission
{% endif %}
{% if ingress_nginx_opentelemetry_enabled %}
- name: modules
emptyDir: {}
{% endif %}
{% endif %}

View File

@@ -1,13 +0,0 @@
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
name: {{ ingress_nginx_class }}
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
{% if ingress_nginx_default %}
annotations:
ingressclass.kubernetes.io/is-default-class: "true"
{% endif %}
spec:
controller: k8s.io/ingress-nginx

View File

@@ -1,17 +0,0 @@
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
name: ingress-nginx-admission
namespace: {{ ingress_nginx_namespace }}
rules:
- apiGroups:
- ""
resources:
- secrets
verbs:
- get
- create

View File

@@ -1,47 +0,0 @@
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: ingress-nginx
namespace: {{ ingress_nginx_namespace }}
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
rules:
- apiGroups: [""]
resources: ["namespaces"]
verbs: ["get"]
- apiGroups: [""]
resources: ["configmaps", "pods", "secrets", "endpoints"]
verbs: ["get", "list", "watch"]
- apiGroups: [""]
resources: ["services"]
verbs: ["get", "list", "watch"]
- apiGroups: ["networking.k8s.io"]
resources: ["ingresses"]
verbs: ["get", "list", "watch"]
- apiGroups: ["networking.k8s.io"]
resources: ["ingresses/status"]
verbs: ["update"]
- apiGroups: ["networking.k8s.io"]
resources: ["ingressclasses"]
verbs: ["get", "list", "watch"]
- apiGroups: ["coordination.k8s.io"]
resources: ["leases"]
# Defaults to "<election-id>", defined in
# ds-ingress-nginx-controller.yml.js
# by a command-line argument.
#
# This is the correct behaviour for ingress-controller
# version 1.8.1
resourceNames: ["ingress-controller-leader-{{ ingress_nginx_class }}"]
verbs: ["get", "update"]
- apiGroups: [""]
resources: ["events"]
verbs: ["create", "patch"]
- apiGroups: ["coordination.k8s.io"]
resources: ["leases"]
verbs: ["create"]
- apiGroups: ["discovery.k8s.io"]
resources: ["endpointslices"]
verbs: ["get", "list", "watch"]

View File

@@ -1,17 +0,0 @@
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
name: ingress-nginx-admission
namespace: {{ ingress_nginx_namespace }}
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: ingress-nginx-admission
subjects:
- kind: ServiceAccount
name: ingress-nginx-admission
namespace: {{ ingress_nginx_namespace }}

View File

@@ -1,17 +0,0 @@
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: ingress-nginx
namespace: {{ ingress_nginx_namespace }}
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: ingress-nginx
subjects:
- kind: ServiceAccount
name: ingress-nginx
namespace: {{ ingress_nginx_namespace }}

View File

@@ -1,8 +0,0 @@
apiVersion: v1
kind: ServiceAccount
metadata:
name: ingress-nginx-admission
namespace: {{ ingress_nginx_namespace }}
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx

View File

@@ -1,9 +0,0 @@
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: ingress-nginx
namespace: {{ ingress_nginx_namespace }}
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx

View File

@@ -1,18 +0,0 @@
apiVersion: v1
kind: Service
metadata:
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
name: ingress-nginx-controller-admission
namespace: {{ ingress_nginx_namespace }}
spec:
type: ClusterIP
ports:
- appProtocol: https
name: https-webhook
port: 443
targetPort: webhook
selector:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx

View File

@@ -1,50 +0,0 @@
{% if not ingress_nginx_host_network %}
apiVersion: v1
kind: Service
metadata:
name: ingress-nginx
namespace: {{ ingress_nginx_namespace }}
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
{% if ingress_nginx_service_annotations %}
annotations:
{{ ingress_nginx_service_annotations | to_nice_yaml(indent=2, width=1337) | indent(width=4) }}
{% endif %}
spec:
type: {{ ingress_nginx_service_type }}
ports:
- name: http
port: 80
targetPort: 80
protocol: TCP
{% if (ingress_nginx_service_type == 'NodePort' or ingress_nginx_service_type == 'LoadBalancer') and ingress_nginx_service_nodeport_http %}
nodePort: {{ingress_nginx_service_nodeport_http | int}}
{% endif %}
- name: https
port: 443
targetPort: 443
protocol: TCP
{% if (ingress_nginx_service_type == 'NodePort' or ingress_nginx_service_type == 'LoadBalancer') and ingress_nginx_service_nodeport_https %}
nodePort: {{ingress_nginx_service_nodeport_https | int}}
{% endif %}
{% if ingress_nginx_configmap_tcp_services %}
{% for port in ingress_nginx_configmap_tcp_services.keys() %}
- name: tcp-port-{{ port }}
port: {{ port | int }}
targetPort: {{ port | int }}
protocol: TCP
{% endfor %}
{% endif %}
{% if ingress_nginx_configmap_udp_services %}
{% for port in ingress_nginx_configmap_udp_services.keys() %}
- name: udp-port-{{ port }}
port: {{ port | int }}
targetPort: {{ port | int }}
protocol: UDP
{% endfor %}
{% endif %}
selector:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
{% endif %}

View File

@@ -1,12 +1,5 @@
 ---
 dependencies:
-  - role: kubernetes-apps/ingress_controller/ingress_nginx
-    when: ingress_nginx_enabled
-    tags:
-      - apps
-      - ingress-controller
-      - ingress-nginx
   - role: kubernetes-apps/ingress_controller/cert_manager
     when: cert_manager_enabled
     tags:

View File

@@ -1,5 +1,7 @@
 ---
 dependencies:
+  - role: kubernetes-apps/utils
   - role: kubernetes-apps/ansible
     when:
       - inventory_hostname == groups['kube_control_plane'][0]

View File

@@ -1,6 +1,7 @@
 ---
 metallb_enabled: false
 metallb_log_level: info
+metallb_namespace: "metallb-system"
 metallb_port: "7472"
 metallb_memberlist_port: "7946"
 metallb_speaker_enabled: "{{ metallb_enabled }}"
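
Making the namespace a variable turns relocating MetalLB into a one-line override (illustrative value):

metallb_namespace: "network-services"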

View File

@@ -58,12 +58,6 @@ rules:
     verbs:
       - list
      - watch
-  - apiGroups:
-      - ""
-    resources:
-      - nodes/proxy
-    verbs:
-      - get
   - apiGroups:
       - topology.node.k8s.io
     resources:

View File

@@ -43,12 +43,12 @@
     - { name: registry-cm, file: registry-cm.yml, type: cm }
     - { name: registry-rs, file: registry-rs.yml, type: rs }

-- name: Registry | Append nginx ingress templates to Registry Templates list when ingress enabled
+- name: Registry | Append ingress templates to Registry Templates list when ALB ingress enabled
   set_fact:
     registry_templates: "{{ registry_templates + [item] }}"
   with_items:
     - [{ name: registry-ing, file: registry-ing.yml, type: ing }]
-  when: ingress_nginx_enabled or ingress_alb_enabled
+  when: ingress_alb_enabled

 - name: Registry | Create manifests
   template:
View File

@@ -0,0 +1,12 @@
+---
+_kubectl_apply_stdin:
+  - "{{ kubectl }}"
+  - apply
+  - -f
+  - "-"
+  - -n
+  - "{{ k8s_namespace }}"
+  - --server-side="{{ server_side_apply | lower }}"
+# TODO: switch to default SSA
+server_side_apply: false
+kubectl_apply_stdin: "{{ _kubectl_apply_stdin | join(' ') }}"

View File

@@ -1,2 +0,0 @@
----
-kubectl_apply_stdin: "{{ kubectl }} apply -f - -n {{ k8s_namespace }}"

Some files were not shown because too many files have changed in this diff.