Mirror of https://github.com/kubernetes-sigs/kubespray.git
synced 2026-02-01 09:38:12 -03:30
Added dynamic inventory for AWS as contrib
docs/aws.md | 14
@@ -8,3 +8,17 @@ Prior to creating your instances, you **must** ensure that you have created IAM
The next step is to make sure the hostnames in your `inventory` file are identical to your internal hostnames in AWS. This may look something like `ip-111-222-333-444.us-west-2.compute.internal`. You can then specify how Ansible connects to these instances with `ansible_ssh_host` and `ansible_ssh_user`.
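For example, an entry for a master node in an INI-style inventory might look like the following (the group name, external IP, and SSH user here are placeholders; substitute whatever matches your environment):

```
[kube-master]
ip-111-222-333-444.us-west-2.compute.internal ansible_ssh_host=55.66.77.88 ansible_ssh_user=admin
```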
You can now create your cluster!
### Dynamic Inventory ###
There is also a dynamic inventory script for AWS that can be used if desired. Be aware, however, that it makes certain assumptions about how you'll create your inventory.
Guide:
- Create instances in AWS as needed.
- Add tags to the instances with a key of `kargo-role` and a value of `kube-master`, `etcd`, or `kube-node`. An instance can also combine roles, for example `kube-master, etcd` (an AWS CLI sketch follows this list).
- Copy the `kargo-aws-inventory.py` script from `kargo/contrib/aws_inventory` to the `kargo/inventory` directory.
- Set the following AWS credentials and region as environment variables in your terminal:
```
export AWS_ACCESS_KEY_ID="xxxxx"
export AWS_SECRET_ACCESS_KEY="yyyyy"
export REGION="us-east-2"
```
- We can now create the cluster, with one or two small changes to the usual command. The first is to specify `-i inventory/kargo-aws-inventory.py` as the inventory script. The second is conditional: if your AWS instances are public facing, set the `VPC_VISIBILITY` variable to `public`, which causes public IPs and DNS names to be passed into the inventory. Your cluster.yml command then looks like `VPC_VISIBILITY="public" ansible-playbook ... cluster.yml` (a full example follows this list).
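As a sketch of the tagging and copy steps above, the `kargo-role` tag can be attached with the AWS CLI (the instance ID below is a placeholder, and tags can just as well be set from the AWS console):

```
# Tag an instance with the role the dynamic inventory script should assign to it.
# The instance ID is illustrative; an instance sharing roles would use a value
# such as "kube-master, etcd" instead.
aws ec2 create-tags --resources i-0123456789abcdef0 --tags Key=kargo-role,Value=kube-master

# Copy the dynamic inventory script into the inventory directory.
cp kargo/contrib/aws_inventory/kargo-aws-inventory.py kargo/inventory/
```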
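Putting the pieces together, a run against public-facing instances might then look like this (any additional `ansible-playbook` options you normally pass, such as the SSH user, still apply):

```
VPC_VISIBILITY="public" ansible-playbook -i inventory/kargo-aws-inventory.py cluster.yml
```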