Replace kube-master with kube_control_plane (#7256)

This replaces kube-master with kube_control_plane because of [1]:

  The Kubernetes project is moving away from wording that is
  considered offensive. A new working group WG Naming was created
  to track this work, and the word "master" was declared as offensive.
  A proposal was formalized for replacing the word "master" with
  "control plane". This means it should be removed from source code,
  documentation, and user-facing configuration from Kubernetes and
  its sub-projects.

NOTE: The name kube_control_plane is used rather than kube-control-plane
      because hyphens are not valid characters in Ansible group names.

[1]: https://github.com/kubernetes/enhancements/blob/master/keps/sig-cluster-lifecycle/kubeadm/2067-rename-master-label-taint/README.md#motivation
Author: Kenichi Omichi
Date:   2021-03-23 17:26:05 -07:00 (committed by GitHub)
Parent: d53fd29e34
Commit: 486b223e01
159 changed files with 564 additions and 485 deletions


@@ -5,7 +5,7 @@
The inventory is composed of 3 groups:
* **kube-node** : list of kubernetes nodes where the pods will run.
-* **kube-master** : list of servers where kubernetes master components (apiserver, scheduler, controller) will run.
+* **kube_control_plane** : list of servers where kubernetes control plane components (apiserver, scheduler, controller) will run.
* **etcd**: list of servers to compose the etcd server. You should have at least 3 servers for failover purpose.
Note: do not modify the children of _k8s-cluster_, like putting
@@ -18,9 +18,9 @@ k8s-cluster ⊂ etcd => kube-node ∩ etcd = etcd
When _kube-node_ contains _etcd_, you define your etcd cluster to be as well schedulable for Kubernetes workloads.
If you want it a standalone, make sure those groups do not intersect.
-If you want the server to act both as master and node, the server must be defined
-on both groups _kube-master_ and _kube-node_. If you want a standalone and
-unschedulable master, the server must be defined only in the _kube-master_ and
+If you want the server to act both as control-plane and node, the server must be defined
+on both groups _kube_control_plane_ and _kube-node_. If you want a standalone and
+unschedulable master, the server must be defined only in the _kube_control_plane_ and
not _kube-node_.
There are also two special groups:
@@ -40,7 +40,7 @@ node4 ansible_host=95.54.0.15 ip=10.3.0.4
node5 ansible_host=95.54.0.16 ip=10.3.0.5
node6 ansible_host=95.54.0.17 ip=10.3.0.6
-[kube-master]
+[kube_control_plane]
node1
node2
@@ -58,7 +58,7 @@ node6
[k8s-cluster:children]
kube-node
-kube-master
+kube_control_plane
```
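For reference, a sketch of an equivalent inventory in Ansible's YAML format, using the renamed `kube_control_plane` group (host names and addresses are only illustrative):

```yaml
all:
  hosts:
    node1: {ansible_host: 95.54.0.12, ip: 10.3.0.1}
    node2: {ansible_host: 95.54.0.13, ip: 10.3.0.2}
    node3: {ansible_host: 95.54.0.14, ip: 10.3.0.3}
  children:
    kube_control_plane:        # formerly kube-master
      hosts:
        node1:
        node2:
    kube-node:
      hosts:
        node2:
        node3:
    etcd:
      hosts:
        node1:
        node2:
        node3:
    k8s-cluster:
      children:
        kube_control_plane:
        kube-node:
```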
## Group vars and overriding variables precedence


@@ -35,11 +35,11 @@ This will produce an inventory that is passed into Ansible that looks like the f
],
"k8s-cluster": {
"children": [
"kube-master",
"kube_control_plane",
"kube-node"
]
},
"kube-master": [
"kube_control_plane": [
"ip-172-31-3-xxx.us-east-2.compute.internal"
],
"kube-node": [
@@ -51,7 +51,7 @@ This will produce an inventory that is passed into Ansible that looks like the f
Guide:
- Create instances in AWS as needed.
-- Either during or after creation, add tags to the instances with a key of `kubespray-role` and a value of `kube-master`, `etcd`, or `kube-node`. You can also share roles like `kube-master, etcd`
+- Either during or after creation, add tags to the instances with a key of `kubespray-role` and a value of `kube_control_plane`, `etcd`, or `kube-node`. You can also share roles like `kube_control_plane, etcd`
- Copy the `kubespray-aws-inventory.py` script from `kubespray/contrib/aws_inventory` to the `kubespray/inventory` directory.
- Set the following AWS credentials and info as environment variables in your terminal:


@@ -122,7 +122,7 @@ recommended here:
You need to edit your inventory and add:
* `calico-rr` group with nodes in it. `calico-rr` can be combined with
-`kube-node` and/or `kube-master`. `calico-rr` group also must be a child
+`kube-node` and/or `kube_control_plane`. `calico-rr` group also must be a child
group of `k8s-cluster` group.
* `cluster_id` by route reflector node/group (see details
[here](https://hub.docker.com/r/calico/routereflector/))
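A sketch of that layout in YAML inventory form (host names and the `cluster_id` value are illustrative):

```yaml
all:
  children:
    calico-rr:
      hosts:
        rr0:
        rr1:
      vars:
        cluster_id: "1.0.0.1"   # one cluster_id per route reflector node/group
    k8s-cluster:
      children:
        kube_control_plane:
        kube-node:
        calico-rr:              # calico-rr must be a child of k8s-cluster
```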
@@ -138,7 +138,7 @@ node3 ansible_ssh_host=10.210.1.13 ip=10.210.1.13
node4 ansible_ssh_host=10.210.1.14 ip=10.210.1.14
node5 ansible_ssh_host=10.210.1.15 ip=10.210.1.15
-[kube-master]
+[kube_control_plane]
node2
node3
@@ -155,7 +155,7 @@ node5
[k8s-cluster:children]
kube-node
-kube-master
+kube_control_plane
calico-rr
[calico-rr]


@@ -8,7 +8,7 @@ Kubespray supports several download/upload modes. The default is:
There is also a "pull once, push many" mode as well:
-* Setting ``download_run_once: True`` will make kubespray download container images and binaries only once and then push them to the cluster nodes. The default download delegate node is the first `kube-master`.
+* Setting ``download_run_once: True`` will make kubespray download container images and binaries only once and then push them to the cluster nodes. The default download delegate node is the first `kube_control_plane`.
* Set ``download_localhost: True`` to make localhost the download delegate. This can be useful if cluster nodes cannot access external addresses. To use this requires that docker is installed and running on the ansible master and that the current user is either in the docker group or can do passwordless sudo, to be able to access docker.
NOTE: When `download_run_once` is true and `download_localhost` is false, all downloads will be done on the delegate node, including downloads for container images that are not required on that node. As a consequence, the storage required on that node will probably be more than if download_run_once was false, because all images will be loaded into the docker instance on that node, instead of just the images required for that node.
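As a sketch, these toggles can live in group_vars (the file path below is a common convention, not a requirement):

```yaml
# e.g. inventory/mycluster/group_vars/all/all.yml
download_run_once: true     # download once, then push images/binaries to the other nodes
download_localhost: false   # set to true to make the Ansible host the download delegate
```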


@@ -76,16 +76,16 @@ var in inventory.
## Connecting to Kubernetes
-By default, Kubespray configures kube-master hosts with insecure access to
+By default, Kubespray configures kube_control_plane hosts with insecure access to
kube-apiserver via port 8080. A kubeconfig file is not necessary in this case,
because kubectl will use <http://localhost:8080> to connect. The kubeconfig files
-generated will point to localhost (on kube-masters) and kube-node hosts will
+generated will point to localhost (on kube_control_planes) and kube-node hosts will
connect either to a localhost nginx proxy or to a loadbalancer if configured.
More details on this process are in the [HA guide](/docs/ha-mode.md).
Kubespray permits connecting to the cluster remotely on any IP of any
-kube-master host on port 6443 by default. However, this requires
-authentication. One can get a kubeconfig from kube-master hosts
+kube_control_plane host on port 6443 by default. However, this requires
+authentication. One can get a kubeconfig from kube_control_plane hosts
(see [below](#accessing-kubernetes-api)) or connect with a [username and password](/docs/vars.md#user-accounts).
For more information on kubeconfig and accessing a Kubernetes cluster, refer to
@@ -119,7 +119,7 @@ kubectl proxy
## Accessing Kubernetes API
-The main client of Kubernetes is `kubectl`. It is installed on each kube-master
+The main client of Kubernetes is `kubectl`. It is installed on each kube_control_plane
host and can optionally be configured on your ansible host by setting
`kubectl_localhost: true` and `kubeconfig_localhost: true` in the configuration:
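A minimal sketch of that configuration, for example in a group_vars file (exact file placement is up to you):

```yaml
kubectl_localhost: true      # install kubectl on the Ansible host
kubeconfig_localhost: true   # copy an admin kubeconfig to the Ansible host
```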


@@ -32,7 +32,7 @@ If you choose to NOT use the local internal loadbalancer, you will need to
configure your own loadbalancer to achieve HA. Note that deploying a
loadbalancer is up to a user and is not covered by ansible roles in Kubespray.
By default, it only configures a non-HA endpoint, which points to the
-`access_ip` or IP address of the first server node in the `kube-master` group.
+`access_ip` or IP address of the first server node in the `kube_control_plane` group.
It can also configure clients to use endpoints for a given loadbalancer type.
The following diagram shows how traffic to the apiserver is directed.
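If you do bring your own loadbalancer, a sketch of the client-side variables involved (the address, port, and domain name below are placeholders):

```yaml
apiserver_loadbalancer_domain_name: "lb-apiserver.example.local"
loadbalancer_apiserver:
  address: 192.0.2.10   # VIP of your own loadbalancer
  port: 8443
```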
@@ -102,16 +102,16 @@ exclusive to `loadbalancer_apiserver_localhost`.
Access API endpoints are evaluated automatically, as the following:
-| Endpoint type                | kube-master      | non-master              | external              |
-|------------------------------|------------------|-------------------------|-----------------------|
-| Local LB (default)           | `https://bip:sp` | `https://lc:nsp`        | `https://m[0].aip:sp` |
-| Local LB + Unmanaged here LB | `https://bip:sp` | `https://lc:nsp`        | `https://ext`         |
-| External LB, no internal     | `https://bip:sp` | `<https://lb:lp>`       | `https://lb:lp`       |
-| No ext/int LB                | `https://bip:sp` | `<https://m[0].aip:sp>` | `https://m[0].aip:sp` |
+| Endpoint type                | kube_control_plane | non-master              | external              |
+|------------------------------|--------------------|-------------------------|-----------------------|
+| Local LB (default)           | `https://bip:sp`   | `https://lc:nsp`        | `https://m[0].aip:sp` |
+| Local LB + Unmanaged here LB | `https://bip:sp`   | `https://lc:nsp`        | `https://ext`         |
+| External LB, no internal     | `https://bip:sp`   | `<https://lb:lp>`       | `https://lb:lp`       |
+| No ext/int LB                | `https://bip:sp`   | `<https://m[0].aip:sp>` | `https://m[0].aip:sp` |
Where:
-* `m[0]` - the first node in the `kube-master` group;
+* `m[0]` - the first node in the `kube_control_plane` group;
* `lb` - LB FQDN, `apiserver_loadbalancer_domain_name`;
* `ext` - Externally load balanced VIP:port and FQDN, not managed by Kubespray;
* `lc` - localhost;


@@ -62,7 +62,7 @@ You could rename *all.yml* config to something else, i.e. *kubespray.yml* and cr
kubemaster
kubemaster-ha
-[kube-master:children]
+[kube_control_plane:children]
kubemaster
kubemaster-ha
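The same composition, sketched in YAML inventory form:

```yaml
kube_control_plane:
  children:
    kubemaster:
    kubemaster-ha:
```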


@@ -39,7 +39,7 @@ For a large scaled deployments, consider the following configuration changes:
* Add calico-rr nodes if you are deploying with Calico or Canal. Nodes recover
from host/network interruption much quicker with calico-rr. Note that
-calico-rr role must be on a host without kube-master or kube-node role (but
+calico-rr role must be on a host without kube_control_plane or kube-node role (but
etcd role is okay).
* Check out the


@@ -2,9 +2,9 @@
Modified from [comments in #3471](https://github.com/kubernetes-sigs/kubespray/issues/3471#issuecomment-530036084)
-## Limitation: Removal of first kube-master and etcd-master
+## Limitation: Removal of first kube_control_plane and etcd-master
-Currently you can't remove the first node in your kube-master and etcd-master list. If you still want to remove this node you have to:
+Currently you can't remove the first node in your kube_control_plane and etcd-master list. If you still want to remove this node you have to:
### 1) Change order of current masters
@@ -12,7 +12,7 @@ Modify the order of your master list by pushing your first entry to any other po
```yaml
children:
-kube-master:
+kube_control_plane:
hosts:
node-1:
node-2:
@@ -33,7 +33,7 @@ change your inventory to:
```yaml
children:
-kube-master:
+kube_control_plane:
hosts:
node-2:
node-3:
@@ -103,10 +103,10 @@ You need to make sure there are always an odd number of etcd nodes in the cluste
### 1) Add the new node running cluster.yml
-Update the inventory and run `cluster.yml` passing `--limit=etcd,kube-master -e ignore_assert_errors=yes`.
+Update the inventory and run `cluster.yml` passing `--limit=etcd,kube_control_plane -e ignore_assert_errors=yes`.
If the node you want to add as an etcd node is already a worker or master node in your cluster, you have to remove him first using `remove-node.yml`.
-Run `upgrade-cluster.yml` also passing `--limit=etcd,kube-master -e ignore_assert_errors=yes`. This is necessary to update all etcd configuration in the cluster.
+Run `upgrade-cluster.yml` also passing `--limit=etcd,kube_control_plane -e ignore_assert_errors=yes`. This is necessary to update all etcd configuration in the cluster.
At this point, you will have an even number of nodes.
Everything should still be working, and you should only have problems if the cluster decides to elect a new etcd leader before you remove a node.
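A sketch of the inventory update that precedes those commands (the new host name is hypothetical):

```yaml
etcd:
  hosts:
    node1:
    node2:
    node3:
    new-etcd-node:   # hypothetical host being added as a new etcd member
```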


@@ -5,8 +5,8 @@ To recover from broken nodes in the control plane use the "recover\-control\-pla
* Backup what you can
* Provision new nodes to replace the broken ones
-* Place the surviving nodes of the control plane first in the "etcd" and "kube-master" groups
-* Add the new nodes below the surviving control plane nodes in the "etcd" and "kube-master" groups
+* Place the surviving nodes of the control plane first in the "etcd" and "kube\_control\_plane" groups
+* Add the new nodes below the surviving control plane nodes in the "etcd" and "kube\_control\_plane" groups
Examples of what broken means in this context:
@@ -20,9 +20,9 @@ __Note that you need at least one functional node to be able to recover using th
## Runbook
* Move any broken etcd nodes into the "broken\_etcd" group, make sure the "etcd\_member\_name" variable is set.
-* Move any broken master nodes into the "broken\_kube-master" group.
+* Move any broken master nodes into the "broken\_kube\_control\_plane" group.
-Then run the playbook with ```--limit etcd,kube-master``` and increase the number of ETCD retries by setting ```-e etcd_retries=10``` or something even larger. The amount of retries required is difficult to predict.
+Then run the playbook with ```--limit etcd,kube_control_plane``` and increase the number of ETCD retries by setting ```-e etcd_retries=10``` or something even larger. The amount of retries required is difficult to predict.
When finished you should have a fully working control plane again.
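Putting the runbook together, a sketch of the resulting group layout (host and member names are hypothetical):

```yaml
etcd:
  hosts:
    surviving-node:               # surviving members come first
    replacement-node:             # new nodes go below them
kube_control_plane:
  hosts:
    surviving-node:
    replacement-node:
broken_etcd:
  hosts:
    broken-node:
      etcd_member_name: etcd2     # hypothetical member name
broken_kube_control_plane:
  hosts:
    broken-node:
```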


@@ -3,10 +3,10 @@
There are four node layout types: `default`, `separate`, `ha`, and `scale`.
`default` is a non-HA two nodes setup with one separate `kube-node`
-and the `etcd` group merged with the `kube-master`.
+and the `etcd` group merged with the `kube_control_plane`.
`separate` layout is when there is only node of each type, which includes
-a kube-master, kube-node, and etcd cluster member.
+a kube_control_plane, kube-node, and etcd cluster member.
`ha` layout consists of two etcd nodes, two masters and a single worker node,
with role intersection.
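For instance, the `default` layout described above amounts to roughly the following (host names hypothetical):

```yaml
kube_control_plane:
  hosts:
    instance-1:
etcd:
  hosts:
    instance-1:      # etcd merged with the control plane node
kube-node:
  hosts:
    instance-2:      # the single separate worker
```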


@@ -41,7 +41,7 @@ The var ```-e upgrade_cluster_setup=true``` is needed to be set in order to migr
Kubespray also supports cordon, drain and uncordoning of nodes when performing
a cluster upgrade. There is a separate playbook used for this purpose. It is
important to note that upgrade-cluster.yml can only be used for upgrading an
-existing cluster. That means there must be at least 1 kube-master already
+existing cluster. That means there must be at least 1 kube_control_plane already
deployed.
```ShellSession


@@ -36,7 +36,7 @@ Some variables of note include:
* *ansible_default_ipv4.address* - Not Kubespray-specific, but it is used if ip
and access_ip are undefined
* *loadbalancer_apiserver* - If defined, all hosts will connect to this
-address instead of localhost for kube-masters and kube-master[0] for
+address instead of localhost for kube_control_planes and kube_control_plane[0] for
kube-nodes. See more details in the
[HA guide](/docs/ha-mode.md).
* *loadbalancer_apiserver_localhost* - makes all hosts to connect to