Rename ansible groups to use _ instead of - (#7552)

* rename ansible groups to use _ instead of -

k8s-cluster -> k8s_cluster
k8s-node -> k8s_node
calico-rr -> calico_rr
no-floating -> no_floating

Note: kube-node, k8s-cluster groups in upgrade CI
      need clean-up after v2.16 is tagged

* ensure old groups are mapped to the new ones
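One way such a mapping can be expressed is directly in a YAML inventory, by making each new group a parent of its legacy counterpart — a minimal sketch, not necessarily the mechanism used by the playbooks:

```yaml
# Hypothetical compatibility shim: hosts in a legacy group are also
# members of the corresponding new group, so old inventories keep working.
all:
  children:
    k8s_cluster:
      children:
        k8s-cluster: {}   # legacy name
    kube_node:
      children:
        kube-node: {}     # legacy name
    calico_rr:
      children:
        calico-rr: {}     # legacy name
```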
Author: Cristian Calin
Date: 2021-04-29 15:20:50 +03:00
Committed by: GitHub
Parent: d26191373a
Commit: 360aff4a57
106 changed files with 403 additions and 373 deletions

View File

@@ -4,28 +4,28 @@
The inventory is composed of 3 groups:
-* **kube-node** : list of kubernetes nodes where the pods will run.
+* **kube_node** : list of kubernetes nodes where the pods will run.
* **kube_control_plane** : list of servers where kubernetes control plane components (apiserver, scheduler, controller) will run.
* **etcd**: list of servers to compose the etcd server. You should have at least 3 servers for failover purposes.
-Note: do not modify the children of _k8s-cluster_, like putting
-the _etcd_ group into the _k8s-cluster_, unless you are certain
+Note: do not modify the children of _k8s_cluster_, like putting
+the _etcd_ group into the _k8s_cluster_, unless you are certain
you want to do that and have it fully contained in the latter:
```ShellSession
-k8s-cluster ⊂ etcd => kube-node ∩ etcd = etcd
+k8s_cluster ⊂ etcd => kube_node ∩ etcd = etcd
```
-When _kube-node_ contains _etcd_, you define your etcd cluster to also be schedulable for Kubernetes workloads.
+When _kube_node_ contains _etcd_, you define your etcd cluster to also be schedulable for Kubernetes workloads.
If you want it standalone, make sure those groups do not intersect.
If you want the server to act both as control-plane and node, the server must be defined
-in both groups _kube_control_plane_ and _kube-node_. If you want a standalone and
+in both groups _kube_control_plane_ and _kube_node_. If you want a standalone and
unschedulable master, the server must be defined only in the _kube_control_plane_ and
-not _kube-node_.
+not _kube_node_.
There are also two special groups:
-* **calico-rr** : explained for [advanced Calico networking cases](calico.md)
+* **calico_rr** : explained for [advanced Calico networking cases](calico.md)
* **bastion** : configure a bastion host if your nodes are not directly reachable
Below is a complete inventory example:
@@ -49,15 +49,15 @@ node1
node2
node3
-[kube-node]
+[kube_node]
node2
node3
node4
node5
node6
-[k8s-cluster:children]
-kube-node
+[k8s_cluster:children]
+kube_node
kube_control_plane
```
@@ -66,7 +66,7 @@ kube_control_plane
The group variables to control main deployment options are located in the directory ``inventory/sample/group_vars``.
Optional variables are located in the `inventory/sample/group_vars/all.yml`.
Mandatory variables that are common for at least one role (or a node group) can be found in the
-`inventory/sample/group_vars/k8s-cluster.yml`.
+`inventory/sample/group_vars/k8s_cluster.yml`.
There are also role vars for docker, kubernetes preinstall and master roles.
According to the [ansible docs](https://docs.ansible.com/ansible/latest/playbooks_variables.html#variable-precedence-where-should-i-put-a-variable),
those cannot be overridden from the group vars. In order to override, one should use
@@ -79,7 +79,7 @@ Layer | Comment
------|--------
**role defaults** | provides best UX to override things for Kubespray deployments
inventory vars | Unused
-**inventory group_vars** | Expects users to use ``all.yml``, ``k8s-cluster.yml`` etc. to override things
+**inventory group_vars** | Expects users to use ``all.yml``, ``k8s_cluster.yml`` etc. to override things
inventory host_vars | Unused
playbook group_vars | Unused
playbook host_vars | Unused

View File

@@ -8,7 +8,7 @@ To set the number of replicas for the AWS CSI controller, you can change `aws_eb
Make sure to add a role, for your EC2 instances hosting Kubernetes, that allows it to do the actions necessary to request a volume and attach it: [AWS CSI Policy](https://github.com/kubernetes-sigs/aws-ebs-csi-driver/blob/master/docs/example-iam-policy.json)
-If you want to deploy the AWS EBS storage class used with the CSI Driver, you should set `persistent_volumes_enabled` in `group_vars/k8s-cluster/k8s-cluster.yml` to `true`.
+If you want to deploy the AWS EBS storage class used with the CSI Driver, you should set `persistent_volumes_enabled` in `group_vars/k8s_cluster/k8s_cluster.yml` to `true`.
You can now run the kubespray playbook (cluster.yml) to deploy Kubernetes over AWS EC2 with EBS CSI Driver enabled.
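That toggle is a single line — a sketch of the relevant group_vars entry, using the variable name and file path referenced above:

```yaml
# group_vars/k8s_cluster/k8s_cluster.yml
persistent_volumes_enabled: true   # deploy the EBS storage class alongside the CSI driver
```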

View File

@@ -33,16 +33,16 @@ This will produce an inventory that is passed into Ansible that looks like the f
"etcd": [
"ip-172-31-3-xxx.us-east-2.compute.internal"
],
"k8s-cluster": {
"k8s_cluster": {
"children": [
"kube_control_plane",
"kube-node"
"kube_node"
]
},
"kube_control_plane": [
"ip-172-31-3-xxx.us-east-2.compute.internal"
],
"kube-node": [
"kube_node": [
"ip-172-31-8-xxx.us-east-2.compute.internal"
]
}
@@ -51,7 +51,7 @@ This will produce an inventory that is passed into Ansible that looks like the f
Guide:
- Create instances in AWS as needed.
-- Either during or after creation, add tags to the instances with a key of `kubespray-role` and a value of `kube_control_plane`, `etcd`, or `kube-node`. You can also share roles like `kube_control_plane, etcd`
+- Either during or after creation, add tags to the instances with a key of `kubespray-role` and a value of `kube_control_plane`, `etcd`, or `kube_node`. You can also share roles like `kube_control_plane, etcd`
- Copy the `kubespray-aws-inventory.py` script from `kubespray/contrib/aws_inventory` to the `kubespray/inventory` directory.
- Set the following AWS credentials and info as environment variables in your terminal:

View File

@@ -8,7 +8,7 @@ To deploy Azure Disk CSI driver, uncomment the `azure_csi_enabled` option in `gr
## Azure Disk CSI Storage Class
-If you want to deploy the Azure Disk storage class to provision volumes dynamically, you should set `persistent_volumes_enabled` in `group_vars/k8s-cluster/k8s-cluster.yml` to `true`.
+If you want to deploy the Azure Disk storage class to provision volumes dynamically, you should set `persistent_volumes_enabled` in `group_vars/k8s_cluster/k8s_cluster.yml` to `true`.
## Parameters

View File

@@ -61,7 +61,7 @@ calico_network_backend: none
### Optional: Define the default pool CIDRs
By default, `kube_pods_subnet` is used as the IP range CIDR for the default IP Pool, and `kube_pods_subnet_ipv6` for IPv6.
-In some cases you may want to add several pools and not have them considered by Kubernetes as external (which means that they must be within or equal to the range defined in `kube_pods_subnet` and `kube_pods_subnet_ipv6`). It starts with the default IP Pools, whose IP range CIDRs can be defined in group_vars (k8s-cluster/k8s-net-calico.yml):
+In some cases you may want to add several pools and not have them considered by Kubernetes as external (which means that they must be within or equal to the range defined in `kube_pods_subnet` and `kube_pods_subnet_ipv6`). It starts with the default IP Pools, whose IP range CIDRs can be defined in group_vars (k8s_cluster/k8s-net-calico.yml):
```ShellSession
calico_pool_cidr: 10.233.64.0/20
@@ -88,14 +88,14 @@ In order to define peers on a per node basis, the `peers` variable must be defin
NB: Ansible's `hash_behaviour` defaults to "replace"; defining both global and per-node peers would leave you with only the per-node peers. If you need both global and per-node peers, define the global peers in the hostvars of each host (as well as the per-node peers).
Since Calico 3.4, Calico supports advertising Kubernetes service cluster IPs over BGP, just as it advertises pod IPs.
-This can be enabled by setting the following variable as follows in group_vars (k8s-cluster/k8s-net-calico.yml):
+This can be enabled by setting the following variable as follows in group_vars (k8s_cluster/k8s-net-calico.yml):
```yml
calico_advertise_cluster_ips: true
```
Since Calico 3.10, Calico supports advertising Kubernetes service ExternalIPs over BGP in addition to advertising cluster IPs.
-This can be enabled by setting the following variable in group_vars (k8s-cluster/k8s-net-calico.yml):
+This can be enabled by setting the following variable in group_vars (k8s_cluster/k8s-net-calico.yml):
```yml
calico_advertise_service_external_ips:
@@ -121,9 +121,9 @@ recommended here:
You need to edit your inventory and add:
-* `calico-rr` group with nodes in it. `calico-rr` can be combined with
-`kube-node` and/or `kube_control_plane`. `calico-rr` group must also be a child
-group of the `k8s-cluster` group.
+* `calico_rr` group with nodes in it. `calico_rr` can be combined with
+`kube_node` and/or `kube_control_plane`. `calico_rr` group must also be a child
+group of the `k8s_cluster` group.
* `cluster_id` by route reflector node/group (see details
[here](https://hub.docker.com/r/calico/routereflector/))
@@ -147,18 +147,18 @@ node2
node3
node4
-[kube-node]
+[kube_node]
node2
node3
node4
node5
-[k8s-cluster:children]
-kube-node
+[k8s_cluster:children]
+kube_node
kube_control_plane
-calico-rr
+calico_rr
-[calico-rr]
+[calico_rr]
rr0
rr1
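The `cluster_id` mentioned above is typically set per route reflector host — a hedged sketch, assuming host_vars placement (the value is a placeholder in dotted-quad form):

```yaml
# host_vars/rr0.yml (hypothetical placement)
cluster_id: "1.0.0.1"   # BGP cluster ID shared by this set of route reflectors
```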

View File

@@ -10,7 +10,7 @@ You need to source the OpenStack credentials you use to deploy your machines tha
Make sure the hostnames in your `inventory` file are identical to your instance names in OpenStack. Otherwise [cinder](https://docs.openstack.org/cinder/latest/) won't work as expected.
-If you want to deploy the cinder provisioner used with Cinder CSI Driver, you should set `persistent_volumes_enabled` in `group_vars/k8s-cluster/k8s-cluster.yml` to `true`.
+If you want to deploy the cinder provisioner used with Cinder CSI Driver, you should set `persistent_volumes_enabled` in `group_vars/k8s_cluster/k8s_cluster.yml` to `true`.
You can now run the kubespray playbook (cluster.yml) to deploy Kubernetes over OpenStack with Cinder CSI Driver enabled.

View File

@@ -5,7 +5,7 @@ Kubespray supports basic functionality for using containerd as the default conta
_To use the containerd container runtime set the following variables:_
-## k8s-cluster.yml
+## k8s_cluster.yml
```yaml
container_manager: containerd

View File

@@ -16,7 +16,7 @@ skip_downloads: false
etcd_kubeadm_enabled: true
```
-## k8s-cluster/k8s-cluster.yml
+## k8s_cluster/k8s_cluster.yml
```yaml
container_manager: crio
@@ -52,7 +52,7 @@ This parameter controls not just the number of processes but also the amount of
(since a thread is technically a process with shared memory). See [cri-o#1921]
In order to increase the default `pids_limit` for cri-o based deployments you need to set the `crio_pids_limit`
-for your `k8s-cluster` ansible group or per node depending on the use case.
+for your `k8s_cluster` ansible group or per node depending on the use case.
```yaml
crio_pids_limit: 4096

View File

@@ -6,7 +6,7 @@ To deploy GCP Persistent Disk CSI driver, uncomment the `gcp_pd_csi_enabled` opt
## GCP Persistent Disk Storage Class
-If you want to deploy the GCP Persistent Disk storage class to provision volumes dynamically, you should set `persistent_volumes_enabled` in `group_vars/k8s-cluster/k8s-cluster.yml` to `true`.
+If you want to deploy the GCP Persistent Disk storage class to provision volumes dynamically, you should set `persistent_volumes_enabled` in `group_vars/k8s_cluster/k8s_cluster.yml` to `true`.
## GCP credentials

View File

@@ -79,7 +79,7 @@ var in inventory.
By default, Kubespray configures kube_control_plane hosts with insecure access to
kube-apiserver via port 8080. A kubeconfig file is not necessary in this case,
because kubectl will use <http://localhost:8080> to connect. The kubeconfig files
-generated will point to localhost (on kube_control_planes) and kube-node hosts will
+generated will point to localhost (on kube_control_planes) and kube_node hosts will
connect either to a localhost nginx proxy or to a loadbalancer if configured.
More details on this process are in the [HA guide](/docs/ha-mode.md).
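As a usage sketch — assuming `kubeconfig_localhost: true` so that an admin kubeconfig is fetched into the inventory's `artifacts` directory (the exact path is an assumption):

```ShellSession
kubectl --kubeconfig inventory/mycluster/artifacts/admin.conf get nodes
```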

View File

@@ -81,7 +81,7 @@ loadbalancer_apiserver:
port on the VIP address)
This domain name, or default "lb-apiserver.kubernetes.local", will be inserted
-into the `/etc/hosts` file of all servers in the `k8s-cluster` group and wired
+into the `/etc/hosts` file of all servers in the `k8s_cluster` group and wired
into the generated self-signed TLS/SSL certificates as well. Note that
the HAProxy service should itself be highly available and requires VIP management,
which is out of scope of this doc.
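For orientation, a minimal sketch of such a configuration — the address and port are placeholders, and the variable shape is inferred from the excerpt above:

```yml
loadbalancer_apiserver:
  address: 10.0.0.100   # externally managed VIP (placeholder)
  port: 8383            # port the VIP listens on (placeholder)
```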

View File

@@ -52,10 +52,10 @@ You could rename *all.yml* config to something else, i.e. *kubespray.yml* and cr
```ini
...
#Kargo groups:
-[kube-node:children]
+[kube_node:children]
kubenode
-[k8s-cluster:children]
+[k8s_cluster:children]
kubernetes
[etcd:children]

View File

@@ -10,7 +10,7 @@ _Qemu_ is the only hypervisor supported by Kubespray.
To use Kata Containers, set the following variables:
-**k8s-cluster.yml**:
+**k8s_cluster.yml**:
```yaml
container_manager: containerd

View File

@@ -12,7 +12,7 @@ kernel version 3.10.0-862 has a nat related bug that will affect ovs function, p
## How to use it
-Enable kube-ovn in `group_vars/k8s-cluster/k8s-cluster.yml`
+Enable kube-ovn in `group_vars/k8s_cluster/k8s_cluster.yml`
```yml
...

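# The elided setting above selects the network plugin — a sketch, assuming
# the standard Kubespray variable name (the value is an assumption):
kube_network_plugin: kube-ovn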
View File

@@ -37,9 +37,9 @@ For large scale deployments, consider the following configuration changes:
* Tune network prefix sizes. Those are ``kube_network_node_prefix``,
``kube_service_addresses`` and ``kube_pods_subnet``.
-* Add calico-rr nodes if you are deploying with Calico or Canal. Nodes recover
-from host/network interruption much quicker with calico-rr. Note that
-calico-rr role must be on a host without kube_control_plane or kube-node role (but
+* Add calico_rr nodes if you are deploying with Calico or Canal. Nodes recover
+from host/network interruption much quicker with calico_rr. Note that
+calico_rr role must be on a host without kube_control_plane or kube_node role (but
etcd role is okay).
* Check out the

View File

@@ -2,7 +2,7 @@
## How to use it
-* Enable macvlan in `group_vars/k8s-cluster/k8s-cluster.yml`
+* Enable macvlan in `group_vars/k8s_cluster/k8s_cluster.yml`
```yml
...
@@ -10,7 +10,7 @@ kube_network_plugin: macvlan
...
```
-* Adjust the `macvlan_interface` in `group_vars/k8s-cluster/k8s-net-macvlan.yml` or by host in the `host.yml` file:
+* Adjust the `macvlan_interface` in `group_vars/k8s_cluster/k8s-net-macvlan.yml` or by host in the `host.yml` file:
```yml
all:
@@ -34,7 +34,7 @@ add `kube_proxy_masquerade_all: true` in `group_vars/all/all.yml`
The nodelocaldns IP is not reachable.
-Disable it in `sample/group_vars/k8s-cluster/k8s-cluster.yml`
+Disable it in `sample/group_vars/k8s_cluster/k8s_cluster.yml`
```yml
enable_nodelocaldns: false

View File

@@ -17,7 +17,7 @@ Modify the order of your master list by pushing your first entry to any other po
node-1:
node-2:
node-3:
-kube-node:
+kube_node:
hosts:
node-1:
node-2:
@@ -38,7 +38,7 @@ change your inventory to:
node-2:
node-3:
node-1:
-kube-node:
+kube_node:
hosts:
node-2:
node-3:

View File

@@ -4,7 +4,7 @@ Intro to [ovn4nfv-k8s-plugin](https://github.com/opnfv/ovn4nfv-k8s-plugin)
## How to use it
-* Enable ovn4nfv in `group_vars/k8s-cluster/k8s-cluster.yml`
+* Enable ovn4nfv in `group_vars/k8s_cluster/k8s_cluster.yml`
```yml
...

View File

@@ -225,7 +225,7 @@ worker-0, worker-1 and worker-2 are worker nodes. Also update the `ip` to the re
remove the `access_ip`.
The main configuration for the cluster is stored in
-`inventory/mycluster/group_vars/k8s-cluster/k8s-cluster.yml`. In this file we
+`inventory/mycluster/group_vars/k8s_cluster/k8s_cluster.yml`. In this file we
will update the `supplementary_addresses_in_ssl_keys` with a list of the IP
addresses of the controller nodes. In that way we can access the
kubernetes API server as an administrator from outside the VPC network. You
@@ -234,7 +234,7 @@ The main configuration for the cluster is stored in
Kubespray also offers to easily enable popular kubernetes add-ons. You can
modify the
-list of add-ons in `inventory/mycluster/group_vars/k8s-cluster/addons.yml`.
+list of add-ons in `inventory/mycluster/group_vars/k8s_cluster/addons.yml`.
Let's enable the metrics server, as this is a crucial monitoring element for
the kubernetes cluster: just change the 'false' to 'true' for
`metrics_server_enabled`.
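That edit, sketched with the file path and flag named above:

```yaml
# inventory/mycluster/group_vars/k8s_cluster/addons.yml
metrics_server_enabled: true
```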

View File

@@ -2,11 +2,11 @@
There are four node layout types: `default`, `separate`, `ha`, and `scale`.
-`default` is a non-HA two-node setup with one separate `kube-node`
+`default` is a non-HA two-node setup with one separate `kube_node`
and the `etcd` group merged with the `kube_control_plane`.
`separate` layout is when there is only one node of each type, which includes
-a kube_control_plane, kube-node, and etcd cluster member.
+a kube_control_plane, kube_node, and etcd cluster member.
`ha` layout consists of two etcd nodes, two masters and a single worker node,
with role intersection.

View File

@@ -68,9 +68,9 @@ If you want to manually control the upgrade procedure, you can use the variables
For instance, if you're on v2.6.0, then check out v2.7.0, run the upgrade, check out the next tag, and run the next upgrade, etc.
-Assuming you don't explicitly define a kubernetes version in your k8s-cluster.yml, you simply check out the next tag and run the upgrade-cluster.yml playbook.
+Assuming you don't explicitly define a kubernetes version in your k8s_cluster.yml, you simply check out the next tag and run the upgrade-cluster.yml playbook.
-* If you do define a kubernetes version in your inventory (e.g. group_vars/k8s-cluster.yml) then either make sure to update it before running upgrade-cluster, or specify the new version you're upgrading to: `ansible-playbook -i inventory/mycluster/hosts.ini -b upgrade-cluster.yml -e kube_version=v1.11.3`
+* If you do define a kubernetes version in your inventory (e.g. group_vars/k8s_cluster.yml) then either make sure to update it before running upgrade-cluster, or specify the new version you're upgrading to: `ansible-playbook -i inventory/mycluster/hosts.ini -b upgrade-cluster.yml -e kube_version=v1.11.3`
Otherwise, the upgrade will leave your cluster at the same k8s version defined in your inventory vars.
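A sketch of one such graceful step, with the tag and inventory path taken from the examples above:

```ShellSession
git checkout v2.7.0
ansible-playbook -i inventory/mycluster/hosts.ini -b upgrade-cluster.yml
```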
@@ -232,7 +232,7 @@ Previous HEAD position was 6f97687d Release 2.8 robust san handling (#4478)
HEAD is now at a4e65c7c Upgrade to Ansible >2.7.0 (#4471)
```
-:warning: IMPORTANT: Some of the variable formats changed in the k8s-cluster.yml between 2.8.5 and 2.9.0 :warning:
+:warning: IMPORTANT: Some of the variable formats changed in the k8s_cluster.yml between 2.8.5 and 2.9.0 :warning:
If you do not keep your inventory copy up to date, **your upgrade will fail** and your first master will be left non-functional until you fix it and re-run the upgrade.

View File

@@ -38,7 +38,7 @@ Some variables of note include:
and access_ip are undefined
* *loadbalancer_apiserver* - If defined, all hosts will connect to this
address instead of localhost for kube_control_planes and kube_control_plane[0] for
-kube-nodes. See more details in the
+kube_nodes. See more details in the
[HA guide](/docs/ha-mode.md).
* *loadbalancer_apiserver_localhost* - makes all hosts connect to
the internally load-balanced apiserver endpoint. Mutually exclusive with the
@@ -59,14 +59,14 @@ following default cluster parameters:
* *kube_pods_subnet* - Subnet for Pod IPs (default is 10.233.64.0/18). Must not
overlap with kube_service_addresses.
* *kube_network_node_prefix* - Subnet allocated per-node for pod IPs. Remaining
-bits in kube_pods_subnet dictate how many kube-nodes can be in the cluster. Setting this > 25 will
+bits in kube_pods_subnet dictate how many kube_nodes can be in the cluster. Setting this > 25 will
raise an assertion in playbooks if the `kubelet_max_pods` var also isn't adjusted accordingly
(assertion not applicable to calico which doesn't use this as a hard limit, see
[Calico IP block sizes](https://docs.projectcalico.org/reference/resources/ippool#block-sizes)).
* *enable_dual_stack_networks* - Setting this to true will provision both IPv4 and IPv6 networking for pods and services.
* *kube_service_addresses_ipv6* - Subnet for cluster IPv6 IPs (default is ``fd85:ee78:d8a6:8607::1000/116``). Must not overlap with ``kube_pods_subnet_ipv6``.
* *kube_pods_subnet_ipv6* - Subnet for Pod IPv6 IPs (default is ``fd85:ee78:d8a6:8607::1:0000/112``). Must not overlap with ``kube_service_addresses_ipv6``.
-* *kube_network_node_prefix_ipv6* - Subnet allocated per-node for pod IPv6 IPs. Remaining bits in ``kube_pods_subnet_ipv6`` dictate how many kube-nodes can be in the cluster.
+* *kube_network_node_prefix_ipv6* - Subnet allocated per-node for pod IPv6 IPs. Remaining bits in ``kube_pods_subnet_ipv6`` dictate how many kube_nodes can be in the cluster.
* *skydns_server* - Cluster IP for DNS (default is 10.233.0.3)
* *skydns_server_secondary* - Secondary Cluster IP for CoreDNS used with coredns_dual deployment (default is 10.233.0.4)
* *enable_coredns_k8s_external* - If enabled, it configures the [k8s_external plugin](https://coredns.io/plugins/k8s_external/)

View File

@@ -11,7 +11,7 @@ Weave encryption is supported for all communication
* To use Weave encryption, specify a strong password (if no password, no encryption)
```ShellSession
-# In file ./inventory/sample/group_vars/k8s-cluster.yml
+# In file ./inventory/sample/group_vars/k8s_cluster.yml
weave_password: EnterPasswordHere
```
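One way to generate a strong value (any sufficiently random string works; `openssl` here is just an example):

```ShellSession
openssl rand -base64 24
```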