Merge pull request #9490 from shanemcd/delete-old-installer

Delete old installer / update INSTALL.md

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
This commit is contained in:
softwarefactory-project-zuul[bot] 2021-03-05 15:31:37 +00:00 committed by GitHub
commit 3fd0c29a95
No known key found for this signature in database
GPG Key ID: 4AEE18F83AFDEB23
27 changed files with 69 additions and 2751 deletions

@@ -1,410 +1,115 @@
Table of Contents
=================
* [Installing AWX](#installing-awx)
  * [The AWX Operator](#the-awx-operator)
    * [Quickstart with minikube](#quickstart-with-minikube)
      * [Starting minikube](#starting-minikube)
      * [Deploying the AWX Operator](#deploying-the-awx-operator)
        * [Verifying the Operator Deployment](#verifying-the-operator-deployment)
      * [Deploy AWX](#deploy-awx)
        * [Accessing AWX](#accessing-awx)
* [Installing the AWX CLI](#installing-the-awx-cli)
  * [Building the CLI Documentation](#building-the-cli-documentation)
# Installing AWX
This document provides a guide for installing AWX.
## The AWX Operator

Starting in version 18.0, the [AWX Operator](https://github.com/ansible/awx-operator) is the preferred way to install AWX.

## Table of contents
- [Installing AWX](#installing-awx)
  * [Getting started](#getting-started)
    + [Clone the repo](#clone-the-repo)
    + [AWX branding](#awx-branding)
    + [Prerequisites](#prerequisites)
    + [System Requirements](#system-requirements)
    + [Choose a deployment platform](#choose-a-deployment-platform)
    + [Official vs Building Images](#official-vs-building-images)
  * [OpenShift](#openshift)
    + [Prerequisites](#prerequisites-1)
    + [Pre-install steps](#pre-install-steps)
      - [Deploying to Minishift](#deploying-to-minishift)
      - [PostgreSQL](#postgresql)
    + [Run the installer](#run-the-installer)
    + [Post-install](#post-install)
    + [Accessing AWX](#accessing-awx)
  * [Kubernetes](#kubernetes)
    + [Prerequisites](#prerequisites-2)
    + [Pre-install steps](#pre-install-steps-1)
    + [Configuring Helm](#configuring-helm)
    + [Run the installer](#run-the-installer-1)
    + [Post-install](#post-install-1)
    + [Accessing AWX](#accessing-awx-1)
    + [SSL Termination](#ssl-termination)
- [Installing the AWX CLI](#installing-the-awx-cli)
  * [Building the CLI Documentation](#building-the-cli-documentation)
### Quickstart with minikube

If you don't have an existing OpenShift or Kubernetes cluster, minikube is a fast and easy way to get up and running.

To install minikube, follow the steps in their [documentation](https://minikube.sigs.k8s.io/docs/start/).

## Getting started

### Clone the repo

If you have not already done so, you will need to clone, or create a local copy of, the [AWX repo](https://github.com/ansible/awx). We generally recommend that you view the releases page:
https://github.com/ansible/awx/releases
...and clone the latest stable release, e.g.,
`git clone -b x.y.z https://github.com/ansible/awx.git`
Please note that deploying from `HEAD` (or the latest commit) is **not** stable, and that if you want to do this, you should proceed at your own risk (see also the [Official vs Building Images](#official-vs-building-images) section for building your own image).
For more on how to clone the repo, view [git clone help](https://git-scm.com/docs/git-clone).
Once you have a local copy, run the commands in the following sections from the root of the project tree.
### AWX branding
You can optionally install the AWX branding assets from the [awx-logos repo](https://github.com/ansible/awx-logos). Prior to installing, please review and agree to the [trademark guidelines](https://github.com/ansible/awx-logos/blob/master/TRADEMARKS.md).
To install the assets, clone the `awx-logos` repo so that it is next to your `awx` clone. As you progress through the installation steps, you'll be setting variables in the [inventory](./installer/inventory) file. To include the assets in the build, set `awx_official=true`.
### Prerequisites
Before you can run a deployment, you'll need the following installed in your local environment:
- [Ansible](http://docs.ansible.com/ansible/latest/intro_installation.html) Requires Version 2.8+
- [Docker](https://docs.docker.com/engine/installation/)
+ A recent version
- [docker](https://pypi.org/project/docker/) Python module
+ This is incompatible with `docker-py`. If you have previously installed `docker-py`, please uninstall it.
+ We use this module instead of `docker-py` because it is what the `docker-compose` Python module requires.
- [community.general.docker_image collection](https://docs.ansible.com/ansible/latest/collections/community/general/docker_image_module.html)
+ This is only required if you are using Ansible >= 2.10
- [GNU Make](https://www.gnu.org/software/make/)
- [Git](https://git-scm.com/) Requires Version 1.8.4+
- Python 3.6+
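A quick way to sanity-check these prerequisites before running the installer (a sketch; it assumes the tools are on your PATH and that the `docker` module was installed for the same Python 3 interpreter):

```bash
# Verify prerequisite versions
ansible --version   # expect 2.8+
docker --version
git --version       # expect 1.8.4+
python3 --version   # expect 3.6+
python3 -c "import docker; print(docker.__version__)"
```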
### System Requirements
The system that runs the AWX service will need to satisfy the following requirements:
- At least 4GB of memory
- At least 2 CPU cores
- At least 20GB of space
- Running Docker, OpenShift, or Kubernetes
- If you choose to use an external PostgreSQL database, please note that the minimum version is 10.
### Choose a deployment platform
We currently support running AWX as a containerized application using Docker images deployed to either an OpenShift cluster or a Kubernetes cluster. The remainder of this document will walk you through the process of building the images, and deploying them to either platform.
The [installer](./installer) directory contains an [inventory](./installer/inventory) file, and a playbook, [install.yml](./installer/install.yml). You'll begin by setting variables in the inventory file according to the platform you wish to use, and then you'll start the image build and deployment process by running the playbook.
In the sections below, you'll find deployment details and instructions for each platform:
- [OpenShift](#openshift)
- [Kubernetes](#kubernetes)
### Official vs Building Images
When installing AWX, you have the option of building your own image or using the image provided on DockerHub (see [awx](https://hub.docker.com/r/ansible/awx/)).
This is controlled by the following variables in the `inventory` file:

```
dockerhub_base=ansible
dockerhub_version=latest
```

If these variables are present, then all deployments will use these hosted images. If the variables are not present, then the images will be built during the install.

*dockerhub_base*

> The base location on DockerHub where the images are hosted (by default this pulls a container image named `ansible/awx:tag`)

*dockerhub_version*

> Multiple versions are provided. `latest` always pulls the most recent. You may also select version numbers at different granularities: 1, 1.0, 1.0.1, 1.0.0.123

To build your own container, use the `build.yml` playbook:

```
ansible-playbook tools/ansible/build.yml -e awx_version=test-build
```

The resulting image will automatically be pushed to a registry if `docker_registry` is defined.

#### Starting minikube

Once you have installed minikube, run the following command to start it. You may wish to customize these options:

```
$ minikube start --cpus=4 --memory=8g --addons=ingress
```

#### Deploying the AWX Operator

For a comprehensive overview of features, see [README.md](https://github.com/ansible/awx-operator/blob/devel/README.md) in the awx-operator repo. The following steps are the bare minimum to get AWX up and running.

```
$ minikube kubectl -- apply -f https://raw.githubusercontent.com/ansible/awx-operator/devel/deploy/awx-operator.yaml
```
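Before moving on, you may want to confirm that the operator's custom resource definition registered; one hedged check (the `awx.ansible.com` group name is taken from the AWX resource used later in this document) is:

```bash
# The AWX CRD should appear once the operator manifest is applied
$ minikube kubectl -- get crds | grep awx.ansible.com
```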
## OpenShift
### Prerequisites
To complete a deployment to OpenShift, you will need access to an OpenShift cluster. For demo and testing purposes, you can use [Minishift](https://github.com/minishift/minishift) to create a single node cluster running inside a virtual machine.
When using OpenShift to deploy AWX, make sure you have sufficient privileges to add the 'privileged' security context; otherwise the installation will fail. The privileged context is needed because of the use of [the bubblewrap tool](https://github.com/containers/bubblewrap) to add an additional layer of security when using containers.
You will also need to have the `oc` command in your PATH. The `install.yml` playbook will call out to `oc` when logging into, and creating objects on, the cluster.
The default resource requests per deployment require:
> Memory: 6GB
> CPU: 3 cores
This can be tuned by overriding the variables found in [/installer/roles/kubernetes/defaults/main.yml](/installer/roles/kubernetes/defaults/main.yml). Special care should be taken when doing this as undersized instances will experience crashes and resource exhaustion.
For more detail on how resource requests are formed see: [https://docs.openshift.com/container-platform/latest/dev_guide/compute_resources.html#dev-compute-resources](https://docs.openshift.com/container-platform/latest/dev_guide/compute_resources.html#dev-compute-resources)
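For example, the request values can be overridden per run with extra vars; this is a sketch that assumes you are shrinking the web and task containers on a test cluster (the variable names come from the role defaults shown later in this diff), alongside whatever other `-e` values your platform requires:

```bash
$ ansible-playbook -i inventory install.yml \
    -e web_mem_request=1 -e web_cpu_request=500 \
    -e task_mem_request=2 -e task_cpu_request=1500
```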
### Pre-install steps
Before starting the install, review the [inventory](./installer/inventory) file, and uncomment and provide values for the following variables found in the `[all:vars]` section:
*openshift_host*
> IP address or hostname of the OpenShift cluster. If you're using Minishift, this will be the value returned by `minishift ip`.
*openshift_skip_tls_verify*
> Boolean. Set to True if using self-signed certs.
*openshift_project*
> Name of the OpenShift project that will be created, and used as the namespace for the AWX app. Defaults to *awx*.
*openshift_user*
> Username of the OpenShift user that will create the project, and deploy the application. Defaults to *developer*.
*openshift_pg_emptydir*
> Boolean. Set to True to use an emptyDir volume when deploying the PostgreSQL pod. Note: This should only be used for demo and testing purposes.
*docker_registry*
> IP address and port, or URL, for accessing a registry that the OpenShift cluster can access. Defaults to *172.30.1.1:5000*, the internal registry delivered with Minishift. This is not needed if you are using official hosted images.
*docker_registry_repository*
> Namespace to use when pushing and pulling images to and from the registry. Generally this will match the project name. It defaults to *awx*. This is not needed if you are using official hosted images.
*docker_registry_username*
> Username of the user that will push images to the registry. Will generally match the *openshift_user* value. Defaults to *developer*. This is not needed if you are using official hosted images.
#### Deploying to Minishift
Install Minishift by following the [installation guide](https://docs.openshift.org/latest/minishift/getting-started/installing.html).
The recommended minimum resources for your Minishift VM:
```bash
$ minishift start --cpus=4 --memory=8GB
```
The Minishift VM contains a Docker daemon, which you can use to build the AWX images; this is generally the approach you should take, and we recommend doing so. To use this instance, run the following command to set up your environment:
```bash
# Set DOCKER environment variable to point to the Minishift VM
$ eval $(minishift docker-env)
```
**Note**
> If you choose to not use the Docker instance running inside the VM, and build the images externally, you will have to enable the OpenShift cluster to access the images. This involves pushing the images to an external Docker registry, and granting the cluster access to it, or exposing the internal registry, and pushing the images into it.
#### PostgreSQL
By default, AWX will deploy a PostgreSQL pod inside of your cluster. You will need to create a [Persistent Volume Claim](https://docs.openshift.org/latest/dev_guide/persistent_volumes.html) which is named `postgresql` by default, and can be overridden by setting the `openshift_pg_pvc_name` variable. For testing and demo purposes, you may set `openshift_pg_emptydir=yes`.
If you wish to use an external database, in the inventory file, set the value of `pg_hostname`, and update `pg_username`, `pg_password`, `pg_admin_password`, `pg_database`, and `pg_port` with the connection information. When `pg_hostname` is set, the installer assumes you have configured the database in that location and will not launch the postgresql pod.
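For example, the external database settings can also be passed as extra vars at install time; this sketch uses placeholder connection values:

```bash
$ ansible-playbook -i inventory install.yml \
    -e pg_hostname=postgres.example.com -e pg_port=5432 \
    -e pg_database=awx -e pg_username=awx \
    -e pg_password=CHANGEME -e pg_admin_password=CHANGEME
```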
### Run the installer
To start the install, you will pass two *extra* variables on the command line. The first is *openshift_password*, which is the password for the *openshift_user*, and the second is *docker_registry_password*, which is the password associated with *docker_registry_username*.
If you're using the OpenShift internal registry, then you'll pass an access token for the *docker_registry_password* value, rather than a password. The `oc whoami -t` command will generate the required token, as long as you're logged into the cluster via `oc login`.
Run the following command (docker_registry_password is optional if using official images):
```bash
# Start the install
$ ansible-playbook -i inventory install.yml -e openshift_password=developer -e docker_registry_password=$(oc whoami -t)
```
### Post-install
After the playbook run completes, check the status of the deployment by running `oc get pods`:

```bash
# View the running pods
$ oc get pods
NAME                   READY     STATUS    RESTARTS   AGE
awx-3886581826-5mv0l   4/4       Running   0          8s
postgresql-1-l85fh     1/1       Running   0          20m
```

In the above example, the name of the AWX pod is `awx-3886581826-5mv0l`. Before accessing the AWX web interface, setup tasks and database migrations need to complete. These tasks are running in the `awx_task` container inside the AWX pod. To monitor their status, tail the container's STDOUT by running the following command, replacing the AWX pod name with the pod name from your environment:

```bash
# Follow the awx_task log output
$ oc logs -f awx-3886581826-5mv0l -c awx-celery
```

You will see the following indicating that database migrations are running:

```bash
Using /etc/ansible/ansible.cfg as config file
127.0.0.1 | SUCCESS => {
    "changed": false,
    "db": "awx"
}
Operations to perform:
  Synchronize unmigrated apps: solo, api, staticfiles, messages, channels, django_extensions, ui, rest_framework, polymorphic
  Apply all migrations: sso, taggit, sessions, sites, kombu_transport_django, social_auth, contenttypes, auth, conf, main
Synchronizing apps without migrations:
  Creating tables...
    Running deferred SQL...
  Installing custom SQL...
Running migrations:
  Rendering model states... DONE
  Applying contenttypes.0001_initial... OK
  Applying contenttypes.0002_remove_content_type_name... OK
  Applying auth.0001_initial... OK
  Applying auth.0002_alter_permission_name_max_length... OK
  Applying auth.0003_alter_user_email_max_length... OK
  Applying auth.0004_alter_user_username_opts... OK
  Applying auth.0005_alter_user_last_login_null... OK
  Applying auth.0006_require_contenttypes_0002... OK
  Applying taggit.0001_initial... OK
  Applying taggit.0002_auto_20150616_2121... OK
  ...
```

When you see output similar to the following, you'll know that database migrations have completed, and you can access the web interface:

```bash
Python 2.7.5 (default, Nov 6 2016, 00:28:07)
[GCC 4.8.5 20150623 (Red Hat 4.8.5-11)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
(InteractiveConsole)
>>> <User: admin>
>>> Default organization added.
Demo Credential, Inventory, and Job Template added.
Successfully registered instance awx-3886581826-5mv0l
(changed: True)
Creating instance group tower
Added instance awx-3886581826-5mv0l to tower
```

Once database migrations complete, the web interface will be accessible.

### Accessing AWX

The AWX web interface is running in the AWX pod, behind the `awx-web-svc` service. To view the service, and its port value, run the following command:

```bash
# View available services
$ oc get services
NAME          CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
awx-web-svc   172.30.111.74   <nodes>       8052:30083/TCP   37m
postgresql    172.30.102.9    <none>        5432/TCP         38m
```

The deployment process creates a route, `awx-web-svc`, to expose the service. How the ingress is actually created will vary depending on your environment, and how the cluster is configured. You can view the route, and the external IP address and hostname assigned to it, by running the following command:

```bash
# View available routes
$ oc get routes
NAME          HOST/PORT                             PATH      SERVICES      PORT      TERMINATION   WILDCARD
awx-web-svc   awx-web-svc-awx.192.168.64.2.nip.io             awx-web-svc   http      edge/Allow    None
```

The above example is taken from a Minishift instance. From a web browser, use `https` to access the `HOST/PORT` value from your environment. Using the above example, the URL to access the server would be [https://awx-web-svc-awx.192.168.64.2.nip.io](https://awx-web-svc-awx.192.168.64.2.nip.io).

Once you access the AWX server, you will be prompted with a login dialog. The default administrator username is `admin`, and the password is `password`.

##### Verifying the Operator Deployment

After a few seconds, the operator should be up and running. Verify it by running the following command:

```
$ minikube kubectl get pods
NAME                           READY   STATUS    RESTARTS   AGE
awx-operator-7c78bfbfd-xb6th   1/1     Running   0          11s
```

#### Deploy AWX

Once the Operator is running, you can now deploy AWX by creating a simple YAML file:

```
$ cat myawx.yml
---
apiVersion: awx.ansible.com/v1beta1
kind: AWX
metadata:
  name: awx
spec:
  tower_ingress_type: Ingress
```

And then creating the AWX object in the Kubernetes API:

```
$ minikube kubectl apply -- -f myawx.yml
awx.awx.ansible.com/awx created
```

After creating the AWX object in the Kubernetes API, the operator will begin running its reconciliation loop.

To see what's going on, you can tail the logs of the operator pod (note that your pod name will be different):

```
$ minikube kubectl logs -- -f awx-operator-7c78bfbfd-xb6th
```

After a few seconds, you will see the database and application pods show up. On a fresh system, it may take a few minutes for the container images to download.

```
$ minikube kubectl get pods
NAME                           READY   STATUS    RESTARTS   AGE
awx-5ffbfd489c-bvtvf           3/3     Running   0          2m54s
awx-operator-7c78bfbfd-xb6th   1/1     Running   0          6m42s
awx-postgres-0                 1/1     Running   0          2m58s
```
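If you would rather block until the pods are ready than poll `get pods`, one option (a sketch; it assumes AWX is the only workload in the default namespace) is `kubectl wait`:

```bash
$ minikube kubectl -- wait pod --all --for=condition=Ready --timeout=300s
```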
## Kubernetes
### Prerequisites
A Kubernetes deployment will require you to have access to a Kubernetes cluster as well as the following tools:
- [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/)
- [helm](https://helm.sh/docs/intro/quickstart/)
The installation program will reference `kubectl` directly. `helm` is only necessary if you are letting the installer configure PostgreSQL for you.
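A quick check that both tools are installed and on your PATH:

```bash
$ kubectl version --client
$ helm version
```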
The default resource requests per pod require:
> Memory: 6GB
> CPU: 3 cores
This can be tuned by overriding the variables found in [/installer/roles/kubernetes/defaults/main.yml](/installer/roles/kubernetes/defaults/main.yml). Special care should be taken when doing this as undersized instances will experience crashes and resource exhaustion.
For more detail on how resource requests are formed see: [https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/](https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/)
### Pre-install steps
Before starting the install process, review the [inventory](./installer/inventory) file; uncomment and provide values for the following variables found in the `[all:vars]` section. Make sure the openshift and standalone docker sections are commented out:
*kubernetes_context*

> Prior to running the installer, make sure you've configured the context for the cluster you'll be installing to. This is how the installer knows which cluster to connect to and what authentication to use.

*kubernetes_namespace*

> Name of the Kubernetes namespace where the AWX resources will be installed. This will be created if it doesn't exist.

*docker_registry_*

> These settings should be used if you are building your own base images. You'll need access to an external registry, and you are responsible for making sure your kube cluster can talk to it and use it. If these are undefined and the dockerhub_ configuration settings are uncommented, then the images will be pulled from DockerHub instead.
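For example, to verify the context you intend to use before running the installer (the context name is a placeholder; this mirrors the `kubectl config use-context` call the installer makes in `kubernetes_auth.yml` later in this diff):

```bash
$ kubectl config get-contexts
$ kubectl config use-context test-cluster
```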
### Configuring Helm
If you want the AWX installer to manage creating the database pod (rather than installing and configuring PostgreSQL on your own), then you will need to have a working `helm` installation; you can find details here: [https://helm.sh/docs/intro/quickstart/](https://helm.sh/docs/intro/quickstart/).

You do not need to create a [Persistent Volume Claim](https://docs.openshift.org/latest/dev_guide/persistent_volumes.html), as Helm does it for you. However, an existing one may be used by setting the `pg_persistence_existingclaim` variable.

Newer Kubernetes clusters with RBAC enabled will need to have a service account created; make sure to follow the instructions here: [https://helm.sh/docs/topics/rbac/](https://helm.sh/docs/topics/rbac/)
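For reference, the installer's database deployment boils down to the following `helm` invocation (taken from the kubernetes role later in this diff; the release name is derived from your deployment name, and the values file is generated for you, so both are placeholders here):

```bash
$ helm repo add stable https://charts.helm.sh/stable
$ helm repo update
$ helm upgrade awx-postgresql --install \
    --namespace awx --version="8.3.0" \
    --values values.yml stable/postgresql
```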
### Run the installer
After making changes to the `inventory` file, use `ansible-playbook` to begin the install:
```bash
$ ansible-playbook -i inventory install.yml
```
### Post-install

After the playbook run completes, check the status of the deployment by running `kubectl get pods --namespace awx` (replace awx with the namespace you used):

```bash
# View the running pods, it may take a few minutes for everything to be marked in the Running state
$ kubectl get pods --namespace awx
NAME                             READY     STATUS    RESTARTS   AGE
awx-2558692395-2r8ss             4/4       Running   0          29s
awx-postgresql-355348841-kltkn   1/1       Running   0          1m
```

##### Accessing AWX

To access the AWX UI, you'll need to grab the service url from minikube:

```
$ minikube service awx-service --url
http://192.168.59.2:31868
```

On fresh installs, you will see the "AWX is currently upgrading." page until database migrations finish.

Once you are redirected to the login screen, you can now log in by obtaining the generated admin password (note: do not copy the trailing `%`):

```
$ minikube kubectl -- get secret awx-admin-password -o jsonpath='{.data.password}' | base64 --decode
b6ChwVmqEiAsil2KSpH4xGaZPeZvWnWj%
```

Now you can log in at the URL above with the username "admin" and the password above. Happy Automating!

### Accessing AWX

The AWX web interface is running in the AWX pod behind the `awx-web-svc` service:

```bash
# View available services
$ kubectl get svc --namespace awx
NAME             TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
awx-postgresql   ClusterIP   10.7.250.208   <none>        5432/TCP       2m
awx-web-svc      NodePort    10.7.241.35    <none>        80:30177/TCP   1m
```

The deployment process also creates an `Ingress` named `awx-web-svc`. Some Kubernetes cloud providers will automatically handle routing configuration when an Ingress is created; others may require that you configure it more explicitly. You can see what Kubernetes knows about it with:

```bash
kubectl get ing --namespace awx
NAME          HOSTS     ADDRESS      PORTS     AGE
awx-web-svc   *         35.227.x.y   80        3m
```

If your provider is able to allocate an IP address from the Ingress controller, then you can navigate to the address and access the AWX interface. For some providers it can take a few minutes to allocate and make this accessible. For other providers it may require you to manually intervene.

### SSL Termination

Unlike OpenShift's `Route`, the Kubernetes `Ingress` doesn't yet handle SSL termination. As such, the default configuration will only expose AWX through HTTP on port 80. You are responsible for configuring SSL support until support is added (either to Kubernetes or AWX itself).
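One common approach is to terminate TLS at your Ingress controller by creating a TLS secret and pointing the `kubernetes_ingress_tls_secret` inventory variable at it, as the inventory file later in this diff suggests; this sketch uses placeholder certificate files:

```bash
$ kubectl create secret tls awx-cert --cert=awx.crt --key=awx.key --namespace awx
$ ansible-playbook -i inventory install.yml -e kubernetes_ingress_tls_secret=awx-cert
```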
# Installing the AWX CLI

@@ -1,6 +0,0 @@
---
- name: Render AWX Dockerfile and sources
  hosts: localhost
  gather_facts: true
  roles:
    - {role: dockerfile}

@@ -1,6 +0,0 @@
---
- name: Build and deploy AWX
  hosts: all
  roles:
    - {role: check_vars}
    - {role: kubernetes, when: "openshift_host is defined or kubernetes_context is defined"}

@@ -1,173 +0,0 @@
localhost ansible_connection=local ansible_python_interpreter="/usr/bin/env python3"
[all:vars]
# Remove these lines if you want to run a local image build
# Otherwise the setup playbook will install the official Ansible images. Versions may
# be selected based on: latest, 1, 1.0, 1.0.0, 1.0.0.123
# by default the base will be used to search for ansible/awx
dockerhub_base=ansible
# OpenShift Install
# Will need to set -e openshift_password=developer -e docker_registry_password=$(oc whoami -t)
# or set -e openshift_token=TOKEN
# openshift_host=127.0.0.1:8443
# openshift_project=awx
# openshift_user=developer
# openshift_skip_tls_verify=False
# openshift_pg_emptydir=True
# Kubernetes Install
# kubernetes_context=test-cluster
# kubernetes_namespace=awx
# kubernetes_web_svc_type=NodePort
# Optional Kubernetes Variables
# pg_image_registry=docker.io
# pg_serviceaccount=awx
# pg_volume_capacity=5
# pg_persistence_storageClass=StorageClassName
# pg_persistence_existingclaim=postgres_pvc
# pg_cpu_limit=1000
# pg_mem_limit=2
# Kubernetes Ingress Configuration
# You can use the variables below to configure Kubernetes Ingress
# Set hostname
# kubernetes_ingress_hostname=awx.example.org
# Add annotations. The example below shows an annotation to be used with Traefik but other Ingress controllers are also supported
# kubernetes_ingress_annotations={'kubernetes.io/ingress.class': 'traefik', 'traefik.ingress.kubernetes.io/redirect-entry-point': 'https'}
# Specify a secret for TLS termination
# kubernetes_ingress_tls_secret=awx-cert
# Kubernetes and OpenShift Install Resource Requests
# These are the request and limit values for a pod's container for task/web/redis/management.
# The total amount of requested resources for a pod is the sum of all
# resources requested by all containers in the pod
# A cpu_request of 1500 is 1.5 cores for the container to start out with.
# A cpu_limit defines the maximum cores that the container can reserve.
# A mem_request of 2 is for 2 gigabytes of memory for the container.
# A mem_limit defines the maximum memory that the container can reserve.
# Default values for these entries can be found in ./roles/kubernetes/defaults/main.yml
# task_cpu_request=1500
# task_mem_request=2
# task_cpu_limit=2000
# task_mem_limit=4
# web_cpu_limit=1000
# web_mem_limit=2
# redis_cpu_limit=1000
# redis_mem_limit=3
# management_cpu_limit=2000
# management_mem_limit=2
# Common Docker parameters
awx_task_hostname=awx
awx_web_hostname=awxweb
# Local directory that is mounted in the awx_postgres docker container to place the db in
postgres_data_dir="~/.awx/pgdocker"
host_port=80
host_port_ssl=443
#ssl_certificate=
# Optional key file
#ssl_certificate_key=
docker_compose_dir="~/.awx/awxcompose"
# Required for OpenShift when building the image on your own
# Optional for OpenShift if using Dockerhub or another prebuilt registry
# Required for Docker Compose Install if building the image on your own
# Optional for Docker Compose Install if using Dockerhub or another prebuilt registry
# Define if you want the image pushed to a registry. The container definition will also use these images
# docker_registry=172.30.1.1:5000
# docker_registry_repository=awx
# docker_registry_username=developer
# Set pg_hostname if you have an external postgres server, otherwise
# a new postgres service will be created
# pg_hostname=postgresql
pg_username=awx
# pg_password should be a random 10-character alphanumeric string when postgresql is running on kubernetes
# NB: this is a limitation of the "official" postgres helm chart
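# One way to generate such a string (a sketch; assumes /dev/urandom and coreutils):
#   tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 10; echo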
pg_password=awxpass
pg_database=awx
pg_port=5432
#pg_sslmode=require
# If requiring SSL communication (e.g. pg_sslmode='verify-full') with Postgres
# and using a self-signed certificate or a certificate signed by a custom CA
# set pg_root_ca_file to a file containing the self-signed certificate or the
# root CA certificate chain.
# pg_root_ca_file='example_root_ca.crt'
# The following variable is only required when using the provided
# containerized postgres deployment on OpenShift
# pg_admin_password=postgrespass
# This will create or update a default admin (superuser) account in AWX; if not provided,
# then these default values are used
admin_user=admin
# admin_password=password
# Whether or not to create preload data for demonstration purposes
create_preload_data=True
# AWX Secret key
# It's *very* important that this stay the same between upgrades or you will lose the ability to decrypt
# your credentials
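# One way to generate a strong key (a sketch; assumes openssl is installed):
#   openssl rand -hex 32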
secret_key=awxsecret
# By default a broadcast websocket secret will be generated.
# If you intend to *rerun the playbook*, you should set a fixed value here;
# otherwise a new one will be generated on every playbook run.
# broadcast_websocket_secret=
# Build AWX with official logos
# Requires cloning awx-logos repo as a sibling of this project.
# Review the trademark guidelines at https://github.com/ansible/awx-logos/blob/master/TRADEMARKS.md
# awx_official=false
# Proxy
#http_proxy=http://proxy:3128
#https_proxy=http://proxy:3128
#no_proxy=mycorp.org
# Container networking configuration
# Set the awx_task and awx_web containers' search domain(s)
#awx_container_search_domains=example.com,ansible.com
# Alternate DNS servers
#awx_alternate_dns_servers="10.1.2.3,10.2.3.4"
# AWX project data folder. If you need access to the location where AWX stores the projects
# it manages from the docker host, you can set this to turn it into a volume for the container.
#project_data_dir=/var/lib/awx/projects
# AWX custom virtual environment folder. Only usable for local install.
#custom_venv_dir=/opt/my-envs/
# CA Trust directory. If you need to provide custom CA certificates, supplying
# this variable causes this directory on the host to be bind mounted over
# /etc/pki/ca-trust in the awx_task and awx_web containers.
# If you are deploying on openshift or kubernetes, set the variable to /etc/pki/ca-trust instead,
# as the awx_web and awx_task containers will not run the `update-ca-trust` command.
#ca_trust_dir=/etc/pki/ca-trust/source/anchors
# Include /etc/nginx/awx_extra.conf
# Note the use of a glob pattern for nginx,
# which makes the include "optional", i.e. it will not fail
# if the file is absent
#extra_nginx_include="/etc/nginx/awx_extra[.]conf"
# Docker compose explicit subnet. Set to avoid overlapping your existing LAN networks.
#docker_compose_subnet="172.17.0.1/16"
#
# Allow for different docker logging drivers
# By default, the logger will be json-file; you can override
# that by uncommenting the docker_logger below.
# Be aware that journald may rate limit your log messages if you choose it.
# See: https://docs.docker.com/config/containers/logging/configure/
# docker_logger=journald
#
# Add extra hosts to the docker compose file. This might be necessary to
# sneak in server names, for example for DMZ self-signed CA certificates.
# Equivalent to using the --add-host parameter with "docker run".
#docker_compose_extra_hosts="otherserver.local:192.168.0.1,ldap-server.local:192.168.0.2"

@@ -1,48 +0,0 @@
# check_openshift.yml
---
- name: openshift_project should be defined
  assert:
    that:
      - openshift_project is defined and openshift_project != ''
    msg: "Set the value of 'openshift_project' in the inventory file."
- name: openshift_user should be defined
  assert:
    that:
      - openshift_user is defined and openshift_user != ''
    msg: "Set the value of 'openshift_user' in the inventory file."
- name: openshift_password or openshift_token should be defined
  assert:
    that:
      - (openshift_password is defined and openshift_password != '') or
        (openshift_token is defined and openshift_token != '')
    msg: "Set the value of 'openshift_password' or 'openshift_token' in the inventory file."
- name: docker_registry should be defined if not using dockerhub
  assert:
    that:
      - docker_registry is defined and docker_registry != ''
    msg: "Set the value of 'docker_registry' in the inventory file."
  when: dockerhub_base is not defined
- name: docker_registry_repository should be defined if not using dockerhub
  assert:
    that:
      - docker_registry_repository is defined and docker_registry_repository != ''
    msg: "Set the value of 'docker_registry_repository' in the inventory file."
  when: dockerhub_base is not defined
- name: docker_registry_username should be defined if not using dockerhub
  assert:
    that:
      - docker_registry_username is defined and docker_registry_username != ''
    msg: "Set the value of 'docker_registry_username' in the inventory file."
  when: dockerhub_base is not defined
- name: docker_registry_password should be defined
  assert:
    that:
      - docker_registry_password is defined and docker_registry_password != ''
    msg: "Set the value of 'docker_registry_password' in the inventory file."
  when: dockerhub_base is not defined

@@ -1,10 +0,0 @@
# main.yml
---
- name: admin_password should be defined
  assert:
    that:
      - admin_password is defined and admin_password != ''
    msg: "Set the value of 'admin_password' in the inventory file."
- include_tasks: check_openshift.yml
  when: openshift_host is defined and openshift_host != ''

@@ -1,62 +0,0 @@
---
dockerhub_version: "{{ lookup('file', playbook_dir + '/../VERSION') }}"
create_preload_data: true
admin_user: 'admin'
admin_email: 'root@localhost'
admin_password: ''
kubernetes_base_path: "{{ local_base_config_path|default('/tmp') }}/{{ kubernetes_deployment_name }}-config"
kubernetes_awx_version: "{{ dockerhub_version }}"
kubernetes_awx_image: "ansible/awx"
kubernetes_web_svc_type: "NodePort"
awx_psp_create: false
awx_psp_name: 'awx'
awx_psp_privileged: true
web_mem_request: 1
web_cpu_request: 500
web_security_context_enabled: true
web_security_context_privileged: false
task_mem_request: 2
task_cpu_request: 1500
task_security_context_enabled: true
task_security_context_privileged: true
redis_mem_request: 2
redis_cpu_request: 500
redis_security_context_enabled: true
redis_security_context_privileged: false
redis_security_context_user: 1001
kubernetes_redis_image: "redis"
kubernetes_redis_image_tag: "latest"
kubernetes_redis_config_mount_path: "/usr/local/etc/redis/redis.conf"
openshift_pg_emptydir: false
openshift_pg_pvc_name: postgresql
kubernetes_deployment_name: awx
kubernetes_serviceaccount_name: awx
kubernetes_deployment_replica_size: 1
postgress_activate_wait: 60
restore_backup_file: "./tower-openshift-backup-latest.tar.gz"
insights_url_base: "https://example.org"
automation_analytics_url: "https://example.org"
insights_agent_mime: "application/example"
custom_venvs_path: "/opt/custom-venvs"
custom_venvs_python: "python2"
ca_trust_bundle: "/etc/pki/tls/certs/ca-bundle.crt"
container_groups_image: "ansible/ansible-runner"
uwsgi_bash: "bash -c"

@@ -1,5 +0,0 @@
---
- name: remove-rmq_cert_tempdir
  file:
    state: absent
    path: "{{ rmq_cert_tempdir.path }}"

@@ -1,82 +0,0 @@
---
- name: Determine the timestamp for the backup.
  set_fact:
    now: '{{ lookup("pipe", "date +%F-%T") }}'
- include_tasks: openshift_auth.yml
  when: openshift_host is defined
- include_tasks: kubernetes_auth.yml
  when: kubernetes_context is defined
- name: Use kubectl or oc
  set_fact:
    kubectl_or_oc: "{{ openshift_oc_bin if openshift_oc_bin is defined else 'kubectl' }}"
- name: Delete any existing management pod
  shell: |
    {{ kubectl_or_oc }} -n {{ kubernetes_namespace }} \
      delete pod ansible-tower-management --grace-period=0 --ignore-not-found
- name: Template management pod
  set_fact:
    management_pod: "{{ lookup('template', 'management-pod.yml.j2') }}"
- name: Create management pod
  shell: |
    echo {{ management_pod | quote }} | {{ kubectl_or_oc }} apply -f -
- name: Wait for management pod to start
  shell: |
    {{ kubectl_or_oc }} -n {{ kubernetes_namespace }} \
      get pod ansible-tower-management -o jsonpath="{.status.phase}"
  register: result
  until: result.stdout == "Running"
  retries: 60
  delay: 10
- name: Create directory for backup
  file:
    state: directory
    path: "{{ playbook_dir }}/tower-openshift-backup-{{ now }}"
- name: Precreate file for database dump
  file:
    path: "{{ playbook_dir }}/tower-openshift-backup-{{ now }}/tower.db"
    state: touch
    mode: 0600
- name: Dump database
  shell: |
    {{ kubectl_or_oc }} -n {{ kubernetes_namespace }} exec ansible-tower-management -- \
      bash -c "PGPASSWORD={{ pg_password | quote }} \
        pg_dump --clean --create \
          --host='{{ pg_hostname | default('postgresql') }}' \
          --port={{ pg_port | default('5432') }} \
          --username='{{ pg_username }}' \
          --dbname='{{ pg_database }}'" > {{ playbook_dir }}/tower-openshift-backup-{{ now }}/tower.db
  no_log: true
- name: Copy inventory into backup directory
  copy:
    src: "{{ inventory_file }}"
    dest: "{{ playbook_dir }}/tower-openshift-backup-{{ now }}/"
    mode: 0600
- name: Delete management pod
  shell: |
    {{ kubectl_or_oc }} -n {{ kubernetes_namespace }} \
      delete pod ansible-tower-management --grace-period=0 --ignore-not-found
- name: Create backup archive
  archive:
    path: "{{ playbook_dir }}/tower-openshift-backup-{{ now }}"
    dest: "{{ item }}"
  with_items:
    - "{{ playbook_dir }}/tower-openshift-backup-{{ now }}.tar.gz"
    - "{{ playbook_dir }}/tower-openshift-backup-latest.tar.gz"
- name: Remove temporary backup directory
  file:
    path: "{{ playbook_dir }}/tower-openshift-backup-{{ now }}"
    state: absent

@@ -1,23 +0,0 @@
---
- name: Get Namespace Detail
  shell: "kubectl get namespace {{ kubernetes_namespace }}"
  register: namespace_details
  ignore_errors: true
- name: Create AWX Kubernetes Project
  shell: "kubectl create namespace {{ kubernetes_namespace }}"
  when: namespace_details.rc != 0
- name: Set postgresql service name
  set_fact:
    postgresql_service_name: "{{ kubernetes_deployment_name }}-postgresql"
  when: "pg_hostname is not defined or pg_hostname == ''"
- name: Get Kubernetes API version
  command: |
    kubectl version -o json
  register: kube_version
- name: Extract server version from command output
  set_fact:
    kube_api_version: "{{ (kube_version.stdout | from_json).serverVersion.gitVersion[1:] }}"

@@ -1,3 +0,0 @@
---
- name: Set the Kubernetes Context
  shell: "kubectl config use-context {{ kubernetes_context }}"

@@ -1,320 +0,0 @@
---
- name: Generate broadcast websocket secret
  set_fact:
    broadcast_websocket_secret: "{{ lookup('password', '/dev/null length=128') }}"
  run_once: true
  no_log: true
  when: broadcast_websocket_secret is not defined
- fail:
    msg: "Only set one of kubernetes_context or openshift_host"
  when: openshift_host is defined and kubernetes_context is defined
- include_tasks: "{{ tasks }}"
  with_items:
    - openshift_auth.yml
    - openshift.yml
  loop_control:
    loop_var: tasks
  when: openshift_host is defined
- include_tasks: "{{ tasks }}"
  with_items:
    - kubernetes_auth.yml
    - kubernetes.yml
  loop_control:
    loop_var: tasks
  when: kubernetes_context is defined
- name: Use kubectl or oc
  set_fact:
    kubectl_or_oc: "{{ openshift_oc_bin if openshift_oc_bin is defined else 'kubectl' }}"
- set_fact:
    deployment_object: "deployment"
- name: Record deployment size
  shell: |
    {{ kubectl_or_oc }} get {{ deployment_object }} \
      {{ kubernetes_deployment_name }} \
      -n {{ kubernetes_namespace }} -o=jsonpath='{.status.replicas}'
  register: deployment_details
  ignore_errors: true
- name: Set expected post-deployment Replicas value
  set_fact:
    kubernetes_deployment_replica_size: "{{ deployment_details.stdout | int }}"
  when: deployment_details.rc == 0
- name: Delete existing Deployment (or StatefulSet)
  shell: |
    {{ kubectl_or_oc }} delete sts \
      {{ kubernetes_deployment_name }} -n {{ kubernetes_namespace }} --ignore-not-found
    {{ kubectl_or_oc }} delete {{ deployment_object }} \
      {{ kubernetes_deployment_name }} -n {{ kubernetes_namespace }} --ignore-not-found
- name: Get Postgres Service Detail
  shell: "{{ kubectl_or_oc }} describe svc {{ postgresql_service_name }} -n {{ kubernetes_namespace }}"
  register: postgres_svc_details
  ignore_errors: true
  when: "pg_hostname is not defined or pg_hostname == ''"
- name: Deploy PostgreSQL (OpenShift)
  block:
    - name: Template PostgreSQL Deployment (OpenShift)
      template:
        src: postgresql-persistent.yml.j2
        dest: "{{ kubernetes_base_path }}/postgresql-persistent.yml"
        mode: '0600'
    - name: Deploy and Activate Postgres (OpenShift)
      shell: |
        {{ openshift_oc_bin }} new-app --file={{ kubernetes_base_path }}/postgresql-persistent.yml \
          -e MEMORY_LIMIT={{ pg_memory_limit|default('512') }}Mi \
          -e DATABASE_SERVICE_NAME=postgresql \
          -e POSTGRESQL_MAX_CONNECTIONS={{ pg_max_connections|default(1024) }} \
          -e POSTGRESQL_USER={{ pg_username }} \
          -e POSTGRESQL_PASSWORD={{ pg_password | quote }} \
          -e POSTGRESQL_DATABASE={{ pg_database | quote }} \
          -e POSTGRESQL_VERSION=12 \
          -n {{ kubernetes_namespace }}
      register: openshift_pg_activate
      no_log: true
  when:
    - pg_hostname is not defined or pg_hostname == ''
    - postgres_svc_details is defined and postgres_svc_details.rc != 0
    - openshift_host is defined
- name: Deploy PostgreSQL (Kubernetes)
  block:
    - name: Create Temporary Values File (Kubernetes)
      tempfile:
        state: file
        suffix: .yml
      register: values_file
    - name: Populate Temporary Values File (Kubernetes)
      template:
        src: postgresql-values.yml.j2
        dest: "{{ values_file.path }}"
      no_log: true
    - name: Deploy and Activate Postgres (Kubernetes)
      shell: |
        helm repo add stable https://charts.helm.sh/stable
        helm repo update
        helm upgrade {{ postgresql_service_name }} \
          --install \
          --namespace {{ kubernetes_namespace }} \
          --version="8.3.0" \
          --values {{ values_file.path }} \
          stable/postgresql
      register: kubernetes_pg_activate
      no_log: true
    - name: Remove tempfile
      file:
        path: "{{ values_file.path }}"
        state: absent
  when:
    - pg_hostname is not defined or pg_hostname == ''
    - postgres_svc_details is defined and postgres_svc_details.rc != 0
    - kubernetes_context is defined
- name: Set postgresql hostname to helm package service (Kubernetes)
  set_fact:
    pg_hostname: "{{ postgresql_service_name }}"
  when:
    - pg_hostname is not defined or pg_hostname == ''
    - kubernetes_context is defined
- name: Wait for Postgres to activate
  pause:
    seconds: "{{ postgress_activate_wait }}"
  when: openshift_pg_activate.changed or kubernetes_pg_activate.changed
- name: Check postgres version and upgrade Postgres if necessary (Openshift)
  block:
    - name: Check if Postgres 10 is being used
      shell: |
        POD=$({{ kubectl_or_oc }} -n {{ kubernetes_namespace }} \
          get pods -l=name=postgresql --field-selector status.phase=Running -o jsonpath="{.items[0].metadata.name}")
        {{ kubectl_or_oc }} exec $POD -n {{ kubernetes_namespace }} -- bash -c "psql -tAc 'select version()'"
      register: pg_version
    - name: Upgrade postgres if necessary
      block:
        - name: Set new pg image
          shell: |
            IMAGE=registry.redhat.io/rhel-8/postgresql-12
            {{ kubectl_or_oc }} -n {{ kubernetes_namespace }} set image dc/postgresql postgresql=$IMAGE
        - name: Wait for change to take effect
          pause:
            seconds: 5
        - name: Set env var for pg upgrade
          shell: |
            {{ kubectl_or_oc }} -n {{ kubernetes_namespace }} set env dc/postgresql POSTGRESQL_UPGRADE=copy
        - name: Wait for change to take effect
          pause:
            seconds: 5
        - name: Set env var for new pg version
          shell: |
            {{ kubectl_or_oc }} -n {{ kubernetes_namespace }} set env dc/postgresql POSTGRESQL_VERSION=12
        - name: Wait for Postgres to redeploy
          pause:
            seconds: "{{ postgress_activate_wait }}"
        - name: Wait for Postgres to finish upgrading
          shell: |
            POD=$({{ kubectl_or_oc }} -n {{ kubernetes_namespace }} \
              get pods -l=name=postgresql -o jsonpath="{.items[0].metadata.name}")
            {{ kubectl_or_oc }} -n {{ kubernetes_namespace }} logs $POD | grep 'Upgrade DONE'
          register: pg_upgrade_logs
          retries: 360
          delay: 10
          until: pg_upgrade_logs is success
        - name: Unset upgrade env var
          shell: |
            {{ kubectl_or_oc }} -n {{ kubernetes_namespace }} set env dc/postgresql POSTGRESQL_UPGRADE-
        - name: Wait for Postgres to redeploy
          pause:
            seconds: "{{ postgress_activate_wait }}"
      when: "pg_version is success and '10' in pg_version.stdout"
  when:
    - pg_hostname is not defined or pg_hostname == ''
    - postgres_svc_details is defined and postgres_svc_details.rc != 0
    - openshift_host is defined
- name: Set image names if using custom registry
  block:
    - name: Set awx image name
      set_fact:
        kubernetes_awx_image: "{{ docker_registry }}/{{ docker_registry_repository }}/{{ awx_image }}"
      when: kubernetes_awx_image is not defined
  when: docker_registry is defined
- name: Determine Deployment api version
  set_fact:
    kubernetes_deployment_api_version: "{{ 'apps/v1' if kube_api_version is version('1.9', '>=') else 'apps/v1beta1' }}"
- name: Use Custom Root CA file for PostgreSQL SSL communication
  block:
    - name: Get Root CA file contents
      set_fact:
        postgres_root_ca_cert: "{{ lookup('file', pg_root_ca_file) }}"
      no_log: true
    - name: Render Root CA template
      set_fact:
        postgres_root_ca: "{{ lookup('template', 'postgres_root_ca.yml.j2') }}"
      no_log: true
    - name: Apply Root CA template
      shell: |
        echo {{ postgres_root_ca | quote }} | {{ kubectl_or_oc }} apply -f -
      no_log: true
    - name: Set Root CA file name
      set_fact:
        postgres_root_ca_filename: 'postgres_root_ca.crt'
    - name: Set Root CA file location
      set_fact:
        ca_trust_bundle: '/etc/tower/{{ postgres_root_ca_filename }}'
  when:
    - pg_root_ca_file is defined
    - pg_root_ca_file != ''
- name: Render deployment templates
  set_fact:
    "{{ item }}": "{{ lookup('template', item + '.yml.j2') }}"
  with_items:
    - 'configmap'
    - 'secret'
    - 'deployment'
    - 'supervisor'
  no_log: true
- name: Apply Deployment
  shell: |
    echo {{ item | quote }} | {{ kubectl_or_oc }} apply -f -
  with_items:
    - "{{ configmap }}"
    - "{{ secret }}"
    - "{{ deployment }}"
    - "{{ supervisor }}"
  no_log: true
- name: Delete any existing management pod
  shell: |
    {{ kubectl_or_oc }} -n {{ kubernetes_namespace }} \
      delete pod ansible-tower-management --grace-period=0 --ignore-not-found
- name: Template management pod
  set_fact:
    management_pod: "{{ lookup('template', 'management-pod.yml.j2') }}"
- name: Create management pod
  shell: |
    echo {{ management_pod | quote }} | {{ kubectl_or_oc }} apply -f -
- name: Wait for management pod to start
  shell: |
    {{ kubectl_or_oc }} -n {{ kubernetes_namespace }} \
      get pod ansible-tower-management -o jsonpath="{.status.phase}"
  register: result
  until: result.stdout == "Running"
  retries: 60
  delay: 10
- name: Migrate database
  shell: |
    {{ kubectl_or_oc }} -n {{ kubernetes_namespace }} exec ansible-tower-management -- \
      bash -c "awx-manage migrate --noinput"
- name: Check for Tower Super users
  shell: |
    {{ kubectl_or_oc }} -n {{ kubernetes_namespace }} exec ansible-tower-management -- \
      bash -c "echo 'from django.contrib.auth.models import User; nsu = User.objects.filter(is_superuser=True).count(); exit(0 if nsu > 0 else 1)' | awx-manage shell"
  register: super_check
  ignore_errors: true
  changed_when: super_check.rc > 0
- name: create django super user if it does not exist
  shell: |
    {{ kubectl_or_oc }} -n {{ kubernetes_namespace }} exec ansible-tower-management -- \
      bash -c "echo \"from django.contrib.auth.models import User; User.objects.create_superuser('{{ admin_user }}', '{{ admin_email }}', '{{ admin_password }}')\" | awx-manage shell"
  no_log: true
  when: super_check.rc > 0
- name: update django super user password
  shell: |
    {{ kubectl_or_oc }} -n {{ kubernetes_namespace }} exec ansible-tower-management -- \
      bash -c "awx-manage update_password --username='{{ admin_user }}' --password='{{ admin_password }}'"
  no_log: true
  register: result
  changed_when: "'Password updated' in result.stdout"
- name: Create the default organization if it is needed.
  shell: |
    {{ kubectl_or_oc }} -n {{ kubernetes_namespace }} exec ansible-tower-management -- \
      bash -c "awx-manage create_preload_data"
  register: cdo
  changed_when: "'added' in cdo.stdout"
  when: create_preload_data | bool
- name: Delete management pod
  shell: |
    {{ kubectl_or_oc }} -n {{ kubernetes_namespace }} \
      delete pod ansible-tower-management --grace-period=0 --ignore-not-found
- name: Scale up deployment
  shell: |
    {{ kubectl_or_oc }} -n {{ kubernetes_namespace }} \
      scale {{ deployment_object }} {{ kubernetes_deployment_name }} --replicas={{ replicas | default(kubernetes_deployment_replica_size) }}

@@ -1,76 +0,0 @@
---
- name: Get Project Detail
  shell: "{{ openshift_oc_bin }} get project {{ openshift_project }}"
  register: project_details
  ignore_errors: true
- name: Create AWX Openshift Project
  shell: "{{ openshift_oc_bin }} new-project {{ openshift_project }}"
  when: project_details.rc != 0
- name: Ensure PostgreSQL PVC is available
  block:
    - name: Check PVC status
      command: "{{ openshift_oc_bin }} get pvc {{ openshift_pg_pvc_name }} -n {{ openshift_project }} -o=jsonpath='{.status.phase}'"
      register: pg_pvc_status
      ignore_errors: true
    - name: Ensure PostgreSQL PVC is available
      assert:
        that:
          - pg_pvc_status.stdout in ["Bound", "Pending"]
        msg: "Ensure a PVC named '{{ openshift_pg_pvc_name }}' is available in the namespace '{{ openshift_project }}'."
  when:
    - pg_hostname is not defined or pg_hostname == ''
    - openshift_pg_emptydir is defined and (openshift_pg_emptydir | bool) != true
- name: Set postgresql service name
  set_fact:
    postgresql_service_name: "postgresql"
  when: "pg_hostname is not defined or pg_hostname == ''"
- name: Add privileged SCC to service account
  shell: |
    {{ openshift_oc_bin }} adm policy add-scc-to-user privileged system:serviceaccount:{{ openshift_project }}:awx
# https://github.com/openshift/origin/issues/19182#issuecomment-378233606
# If oc version ever grows a -o json option, remove the following tasks
# and go with the approach in kubernetes.yml.
- name: Get Kubernetes Config
  command: |
    {{ openshift_oc_bin }} config view -o json
  register: kube_config_cmd
  no_log: true
- name: Convert kube config to dictionary
  set_fact:
    kube_config: "{{ kube_config_cmd.stdout | from_json }}"
  no_log: true
- name: Extract current context from kube config
  set_fact:
    current_kube_context: "{{ kube_config['current-context'] }}"
- name: Find cluster for current context
  set_fact:
    kube_cluster: |
      {{ (kube_config.contexts |
          selectattr("name", "match", current_kube_context) |
          list)[0].context.cluster }}
- name: Find server for current context
  set_fact:
    kube_server: |
      {{ (kube_config.clusters |
          selectattr("name", "match", kube_cluster|trim) |
          list)[0].cluster.server }}
- name: Get kube version from api server
  uri:
    url: "{{ kube_server | trim }}/version"
    validate_certs: false
  register: kube_version
- name: Extract server version from command output
  set_fact:
    kube_api_version: "{{ kube_version.json.gitVersion[1:] }}"

@@ -1,56 +0,0 @@
---
- include_vars: openshift.yml
- name: Set kubernetes_namespace
  set_fact:
    kubernetes_namespace: "{{ openshift_project }}"
- name: Ensure workspace directories exist
  file:
    path: "{{ item }}"
    state: directory
  with_items:
    - "{{ kubernetes_base_path }}"
    - "{{ openshift_oc_config_file | dirname }}"
- name: Authenticate with OpenShift via user and password
  shell: |
    {{ openshift_oc_bin }} login {{ openshift_host }} \
      -u {{ openshift_user }} \
      -p {{ openshift_password | quote }} \
      --insecure-skip-tls-verify={{ openshift_skip_tls_verify | default(false) | bool }}
  when:
    - openshift_user is defined
    - openshift_password is defined
    - openshift_token is not defined
  register: openshift_auth_result
  ignore_errors: true
  no_log: true
- name: OpenShift authentication failed on TLS verification
  fail:
    msg: "Failed to verify TLS, consider setting openshift_skip_tls_verify=True {{ openshift_auth_result.stderr | default('certificate does not match hostname') }}"
  when:
    - openshift_skip_tls_verify is not defined or not openshift_skip_tls_verify
    - openshift_auth_result.rc is defined and openshift_auth_result.rc != 0
    - openshift_auth_result.stderr is defined and (openshift_auth_result.stderr | search("certificate that does not match its hostname"))
- name: OpenShift authentication failed
  fail:
    msg: "{{ openshift_auth_result.stderr | default('Invalid credentials') }}"
  when: openshift_auth_result.rc is defined and openshift_auth_result.rc != 0
- name: Authenticate with OpenShift via token
  shell: |
    {{ openshift_oc_bin }} login {{ openshift_host }} \
      --token {{ openshift_token }} \
      --insecure-skip-tls-verify={{ openshift_skip_tls_verify | default(false) | bool }}
  when: openshift_token is defined
  register: openshift_auth_result
  ignore_errors: true
  no_log: true
- name: OpenShift authentication failed
  fail:
    msg: "{{ openshift_auth_result.stderr | default('Invalid token') }}"
  when: openshift_auth_result.rc is defined and openshift_auth_result.rc != 0

@@ -1,72 +0,0 @@
---
- include_tasks: openshift_auth.yml
  when: openshift_host is defined
- include_tasks: kubernetes_auth.yml
  when: kubernetes_context is defined
- name: Use kubectl or oc
  set_fact:
    kubectl_or_oc: "{{ openshift_oc_bin if openshift_oc_bin is defined else 'kubectl' }}"
- set_fact:
    deployment_object: "deployment"
- name: Record deployment size
  shell: |
    {{ kubectl_or_oc }} -n {{ kubernetes_namespace }} \
      get {{ deployment_object }} {{ kubernetes_deployment_name }} -o jsonpath="{.status.replicas}"
  register: deployment_size
- name: Scale deployment down
  shell: |
    {{ kubectl_or_oc }} -n {{ kubernetes_namespace }} \
      scale {{ deployment_object }} {{ kubernetes_deployment_name }} --replicas=0
- name: Wait for scale down
  shell: |
    {{ kubectl_or_oc }} -n {{ kubernetes_namespace }} get pods \
      -o jsonpath='{.items[*].metadata.name}' \
      | tr -s '[[:space:]]' '\n' \
      | grep {{ kubernetes_deployment_name }} \
      | grep -v postgres | wc -l
  register: tower_pods
  until: (tower_pods.stdout | trim) == '0'
  retries: 30
- name: Delete any existing management pod
  shell: |
    {{ kubectl_or_oc }} -n {{ kubernetes_namespace }} \
      delete pod ansible-tower-management --grace-period=0 --ignore-not-found
- name: Template management pod
  set_fact:
    management_pod: "{{ lookup('template', 'management-pod.yml.j2') }}"
- name: Create management pod
  shell: |
    echo {{ management_pod | quote }} | {{ kubectl_or_oc }} apply -f -
- name: Wait for management pod to start
  shell: |
    {{ kubectl_or_oc }} -n {{ kubernetes_namespace }} \
      get pod ansible-tower-management -o jsonpath="{.status.phase}"
  register: result
  until: result.stdout == "Running"
  retries: 60
  delay: 10
- name: generate a new SECRET_KEY
  shell: |
    {{ kubectl_or_oc }} -n {{ kubernetes_namespace }} \
      exec -i ansible-tower-management -- bash -c "awx-manage regenerate_secret_key"
  register: new_key
- name: print the new SECRET_KEY
  debug:
    msg: "{{ new_key.stdout }}"
- name: Delete management pod
  shell: |
    {{ kubectl_or_oc }} -n {{ kubernetes_namespace }} \
      delete pod ansible-tower-management --grace-period=0 --ignore-not-found

@@ -1,145 +0,0 @@
---
- include_tasks: openshift_auth.yml
when: openshift_host is defined
- include_tasks: kubernetes_auth.yml
when: kubernetes_context is defined
- name: Use kubectl or oc
set_fact:
kubectl_or_oc: "{{ openshift_oc_bin if openshift_oc_bin is defined else 'kubectl' }}"
- name: Remove any present restore directories
file:
state: absent
path: "{{ playbook_dir }}/tower-openshift-restore"
- name: Create directory for restore data
file:
state: directory
path: "{{ playbook_dir }}/tower-openshift-restore"
- name: Unarchive Tower backup
unarchive:
src: "{{ restore_backup_file }}"
dest: "{{ playbook_dir }}/tower-openshift-restore"
extra_opts: [--strip-components=1]
- name: Verify if common.tar.gz exists
stat:
path: "{{ playbook_dir }}/tower-openshift-restore/common.tar.gz"
register: common_tarball
- name: Unarchive Tower backup from common.tar.gz
unarchive:
src: "{{ playbook_dir }}/tower-openshift-restore/common.tar.gz"
dest: "{{ playbook_dir }}/tower-openshift-restore"
extra_opts: [--strip-components=1]
when: common_tarball.stat.exists
- set_fact:
deployment_object: "deployment"
- name: Record deployment size
shell: |
{{ kubectl_or_oc }} -n {{ kubernetes_namespace }} \
get {{ deployment_object }} {{ kubernetes_deployment_name }} -o jsonpath="{.status.replicas}"
register: deployment_size
- name: Scale deployment down
shell: |
{{ kubectl_or_oc }} -n {{ kubernetes_namespace }} \
scale {{ deployment_object }} {{ kubernetes_deployment_name }} --replicas=0
- name: Delete management pod
shell: |
{{ kubectl_or_oc }} -n {{ kubernetes_namespace }} \
delete pod ansible-tower-management --grace-period=0 --ignore-not-found
- name: Wait for scale down
shell: |
{{ kubectl_or_oc }} -n {{ kubernetes_namespace }} get pods \
-o jsonpath='{.items[*].metadata.name}' \
| tr -s '[[:space:]]' '\n' \
| grep {{ kubernetes_deployment_name }} \
| grep -v postgres | wc -l
register: tower_pods
until: (tower_pods.stdout | trim) == '0'
retries: 30
- name: Setup Management Pod & Restore (External DB)
block:
- name: Delete any existing management pod
shell: |
{{ kubectl_or_oc }} -n {{ kubernetes_namespace }} \
delete pod ansible-tower-management --grace-period=0 --ignore-not-found
- name: Template management pod
set_fact:
management_pod: "{{ lookup('template', 'management-pod.yml.j2') }}"
- name: Create management pod
shell: |
echo {{ management_pod | quote }} | {{ kubectl_or_oc }} apply -f -
- name: Wait for management pod to start
shell: |
{{ kubectl_or_oc }} -n {{ kubernetes_namespace }} \
get pod ansible-tower-management -o jsonpath="{.status.phase}"
register: result
until: result.stdout == "Running"
retries: 60
delay: 10
- name: Perform a PostgreSQL restore (for External Postgres)
shell: |
{{ kubectl_or_oc }} -n {{ kubernetes_namespace }} \
exec -i ansible-tower-management -- bash -c "PGPASSWORD={{ pg_password | quote }} \
psql \
--host={{ pg_hostname | default('postgresql') }} \
--port={{ pg_port | default('5432') }} \
--username={{ pg_username }} \
--dbname=template1" < {{ playbook_dir }}/tower-openshift-restore/tower.db
no_log: true
- name: Delete management pod
shell: |
{{ kubectl_or_oc }} -n {{ kubernetes_namespace }} \
delete pod ansible-tower-management --grace-period=0 --ignore-not-found
when: pg_hostname is defined and pg_hostname != ''
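# External-DB path: the tower.db dump is streamed straight through psql in a
# short-lived management pod. It connects to template1 rather than the awx
# database, presumably so the dump itself can drop and recreate the database.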
- name: Restore (Containerized DB)
block:
- name: Temporarily grant createdb role
shell: |
POD=$({{ kubectl_or_oc }} -n {{ kubernetes_namespace }} \
get pods -l=name=postgresql --field-selector status.phase=Running -o jsonpath="{.items[0].metadata.name}")
{{ kubectl_or_oc }} exec $POD -n {{ kubernetes_namespace }} -- bash -c "\
psql --dbname=template1 -c 'ALTER USER \"{{ pg_username }}\" CREATEDB;'"
- name: Perform a PostgreSQL restore
shell: |
POD=$({{ kubectl_or_oc }} -n {{ kubernetes_namespace }} \
get pods -l=name=postgresql --field-selector status.phase=Running -o jsonpath="{.items[0].metadata.name}")
{{ kubectl_or_oc }} exec -i $POD -n {{ kubernetes_namespace }} -- bash -c "\
psql --dbname=template1" < {{ playbook_dir }}/tower-openshift-restore/tower.db
no_log: true
- name: Revoke createdb role
shell: |
POD=$({{ kubectl_or_oc }} -n {{ kubernetes_namespace }} \
get pods -l=name=postgresql --field-selector status.phase=Running -o jsonpath="{.items[0].metadata.name}")
{{ kubectl_or_oc }} exec $POD -n {{ kubernetes_namespace }} -- bash -c "\
psql --dbname=template1 -c 'ALTER USER \"{{ pg_username }}\" NOCREATEDB;'"
when: pg_hostname is not defined or pg_hostname == ''
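# Containerized-DB path: CREATEDB is granted to the application role only for
# the duration of the restore and revoked immediately afterwards, since
# applying the dump via template1 means the role has to recreate the database.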
- name: Remove restore directory
file:
state: absent
path: "{{ playbook_dir }}/tower-openshift-restore"
- name: Scale deployment back up
shell: |
{{ kubectl_or_oc }} -n {{ kubernetes_namespace }} \
scale {{ deployment_object }} {{ kubernetes_deployment_name }} --replicas={{ deployment_size.stdout }}
when: deployment_size.stdout != ''

@@ -1,206 +0,0 @@
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ kubernetes_deployment_name }}-config
namespace: {{ kubernetes_namespace }}
data:
{{ kubernetes_deployment_name }}_nginx_conf: |
#user awx;
worker_processes 1;
pid /tmp/nginx.pid;
events {
worker_connections 1024;
}
http {
include /etc/nginx/mime.types;
default_type application/octet-stream;
server_tokens off;
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /dev/stdout main;
map $http_upgrade $connection_upgrade {
default upgrade;
'' close;
}
sendfile on;
#tcp_nopush on;
#gzip on;
upstream uwsgi {
server 127.0.0.1:8050;
}
upstream daphne {
server 127.0.0.1:8051;
}
{% if ssl_certificate is defined %}
server {
listen 8052 default_server;
server_name _;
# Redirect all HTTP links to the matching HTTPS page
return 301 https://$host$request_uri;
}
{% endif %}
server {
{% if ssl_certificate is defined %}
listen 8053 ssl;
ssl_certificate /etc/nginx/awxweb.pem;
ssl_certificate_key /etc/nginx/awxweb.pem;
{% else %}
listen 8052 default_server;
{% endif %}
# If you have a domain name, this is where to add it
server_name _;
keepalive_timeout 65;
# HSTS (ngx_http_headers_module is required) (15768000 seconds = 6 months)
add_header Strict-Transport-Security max-age=15768000;
# Protect against click-jacking https://www.owasp.org/index.php/Testing_for_Clickjacking_(OTG-CLIENT-009)
add_header X-Frame-Options "DENY";
location /nginx_status {
stub_status on;
access_log off;
allow 127.0.0.1;
deny all;
}
location /static/ {
alias /var/lib/awx/public/static/;
}
location /favicon.ico { alias /var/lib/awx/public/static/favicon.ico; }
location /websocket {
# Pass request to the upstream alias
proxy_pass http://daphne;
# Require http version 1.1 to allow for upgrade requests
proxy_http_version 1.1;
# We want proxy_buffering off for proxying to websockets.
proxy_buffering off;
# http://en.wikipedia.org/wiki/X-Forwarded-For
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
# enable this if you use HTTPS:
proxy_set_header X-Forwarded-Proto https;
# pass the Host: header from the client for the sake of redirects
proxy_set_header Host $http_host;
# We've set the Host header, so we don't need Nginx to muddle
# about with redirects
proxy_redirect off;
# Depending on the request value, set the Upgrade and
# connection headers
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $connection_upgrade;
}
location / {
# Add trailing / if missing
rewrite ^(.*)$http_host(.*[^/])$ $1$http_host$2/ permanent;
uwsgi_read_timeout 120s;
uwsgi_pass uwsgi;
include /etc/nginx/uwsgi_params;
{%- if extra_nginx_include is defined %}
include {{ extra_nginx_include }};
{%- endif %}
proxy_set_header X-Forwarded-Port 443;
uwsgi_param HTTP_X_FORWARDED_PORT 443;
}
}
}
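# Request routing summary: 8052 serves plain HTTP (or, when ssl_certificate
# is set, redirects to HTTPS while 8053 terminates TLS), /static/ is served
# from disk, /websocket is proxied to daphne (8051), and everything else goes
# to uwsgi (8050) via the upstream blocks above.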
{{ kubernetes_deployment_name }}_settings: |
import os
import socket
ADMINS = ()
AWX_PROOT_ENABLED = True
# Automatically deprovision pods that go offline
AWX_AUTO_DEPROVISION_INSTANCES = True
SYSTEM_TASK_ABS_CPU = {{ ((task_cpu_request|int / 1000) * 4)|int }}
SYSTEM_TASK_ABS_MEM = {{ ((task_mem_request|int * 1024) / 100)|int }}
INSIGHTS_URL_BASE = "{{ insights_url_base }}"
INSIGHTS_AGENT_MIME = "{{ insights_agent_mime }}"
AUTOMATION_ANALYTICS_URL = "{{ automation_analytics_url }}"
#Autoprovisioning should replace this
CLUSTER_HOST_ID = socket.gethostname()
SYSTEM_UUID = os.environ.get('MY_POD_UID', '00000000-0000-0000-0000-000000000000')
SESSION_COOKIE_SECURE = False
CSRF_COOKIE_SECURE = False
REMOTE_HOST_HEADERS = ['HTTP_X_FORWARDED_FOR']
STATIC_ROOT = '/var/lib/awx/public/static'
PROJECTS_ROOT = '/var/lib/awx/projects'
AWX_ANSIBLE_COLLECTIONS_PATHS = '/var/lib/awx/vendor/awx_ansible_collections'
JOBOUTPUT_ROOT = '/var/lib/awx/job_status'
SECRET_KEY = open('/etc/tower/SECRET_KEY', 'rb').read().strip()
ALLOWED_HOSTS = ['*']
SERVER_EMAIL = 'root@localhost'
DEFAULT_FROM_EMAIL = 'webmaster@localhost'
EMAIL_SUBJECT_PREFIX = '[AWX] '
EMAIL_HOST = 'localhost'
EMAIL_PORT = 25
EMAIL_HOST_USER = ''
EMAIL_HOST_PASSWORD = ''
EMAIL_USE_TLS = False
LOGGING['handlers']['console'] = {
'()': 'logging.StreamHandler',
'level': 'DEBUG',
'formatter': 'simple',
'filters': ['guid'],
}
LOGGING['loggers']['django.request']['handlers'] = ['console']
LOGGING['loggers']['rest_framework.request']['handlers'] = ['console']
LOGGING['loggers']['awx']['handlers'] = ['console', 'external_logger']
LOGGING['loggers']['awx.main.commands.run_callback_receiver']['handlers'] = ['console']
LOGGING['loggers']['awx.main.commands.inventory_import']['handlers'] = ['console']
LOGGING['loggers']['awx.main.tasks']['handlers'] = ['console', 'external_logger']
LOGGING['loggers']['awx.main.scheduler']['handlers'] = ['console', 'external_logger']
LOGGING['loggers']['django_auth_ldap']['handlers'] = ['console']
LOGGING['loggers']['social']['handlers'] = ['console']
LOGGING['loggers']['system_tracking_migrations']['handlers'] = ['console']
LOGGING['loggers']['rbac_migrations']['handlers'] = ['console']
LOGGING['loggers']['awx.isolated.manager.playbooks']['handlers'] = ['console']
LOGGING['handlers']['callback_receiver'] = {'class': 'logging.NullHandler'}
LOGGING['handlers']['fact_receiver'] = {'class': 'logging.NullHandler'}
LOGGING['handlers']['task_system'] = {'class': 'logging.NullHandler'}
LOGGING['handlers']['tower_warnings'] = {'class': 'logging.NullHandler'}
LOGGING['handlers']['rbac_migrations'] = {'class': 'logging.NullHandler'}
LOGGING['handlers']['system_tracking_migrations'] = {'class': 'logging.NullHandler'}
LOGGING['handlers']['management_playbooks'] = {'class': 'logging.NullHandler'}
USE_X_FORWARDED_PORT = True
AWX_CONTAINER_GROUP_DEFAULT_IMAGE = "{{ container_groups_image }}"
REDHAT_CANDLEPIN_HOST = "{{ candlepin_host | default(omit) }}"
REDHAT_CANDLEPIN_VERIFY = "{{ candlepin_verify | default(omit) }}"
BROADCAST_WEBSOCKET_PORT = 8052
BROADCAST_WEBSOCKET_PROTOCOL = 'http'
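# These settings are shared by the web and task containers. Per-pod identity
# comes from the environment at import time: SYSTEM_UUID from the
# downward-API MY_POD_UID variable, CLUSTER_HOST_ID from the pod hostname.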
{{ kubernetes_deployment_name }}_redis_conf: |
unixsocket /var/run/redis/redis.sock
unixsocketperm 660
port 0
bind 127.0.0.1
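# "port 0" disables the TCP listener entirely; AWX reaches redis only over
# the unix socket shared through the redis-socket emptyDir volume.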

@@ -1,16 +0,0 @@
DATABASES = {
'default': {
'ATOMIC_REQUESTS': True,
'ENGINE': 'awx.main.db.profiled_pg',
'NAME': "{{ pg_database }}",
'USER': "{{ pg_username }}",
'PASSWORD': "{{ pg_password }}",
'HOST': "{{ pg_hostname|default('postgresql') }}",
'PORT': "{{ pg_port }}",
'OPTIONS': { 'sslmode': '{{ pg_sslmode|default("prefer") }}',
'sslrootcert': '{{ ca_trust_bundle }}',
},
}
}
BROADCAST_WEBSOCKET_SECRET = "{{ broadcast_websocket_secret | b64encode }}"
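# sslmode defaults to "prefer" (opportunistic TLS); setting pg_sslmode to
# verify-ca/verify-full together with a mounted ca_trust_bundle would enforce
# certificate verification against the Postgres root CA.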

@@ -1,556 +0,0 @@
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: {{ kubernetes_serviceaccount_name }}
namespace: {{ kubernetes_namespace }}
{% if kubernetes_service_account_annotations is defined %}
annotations:
{% for key, value in kubernetes_service_account_annotations.items() %}
{{ key }}: "{{ value }}"
{% endfor %}
{% endif %}
{% if kubernetes_image_pull_secrets is defined %}
imagePullSecrets:
- name: "{{ kubernetes_image_pull_secrets }}"
{% endif %}
{% if awx_psp_create is defined and awx_psp_create | bool %}
---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
name: {{ awx_psp_name }}-psp
spec:
{% if awx_psp_privileged is defined %}
privileged: {{ awx_psp_privileged }}
allowPrivilegeEscalation: {{ awx_psp_privileged }}
{% endif %}
requiredDropCapabilities:
- ALL
volumes:
- 'configMap'
- 'emptyDir'
- 'projected'
- 'secret'
- 'downwardAPI'
- 'persistentVolumeClaim'
hostNetwork: false
hostIPC: false
hostPID: false
runAsUser:
rule: 'MustRunAsNonRoot'
seLinux:
rule: 'RunAsAny'
supplementalGroups:
rule: 'RunAsAny'
fsGroup:
rule: 'RunAsAny'
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
namespace: {{ kubernetes_namespace }}
name: {{ awx_psp_name }}-role
rules:
- apiGroups:
- policy
resources:
- podsecuritypolicies
resourceNames:
- {{ awx_psp_name }}-psp
verbs:
- use
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: {{ awx_psp_name }}-role-binding
namespace: {{ kubernetes_namespace }}
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: {{ awx_psp_name }}-role
subjects:
- kind: ServiceAccount
name: {{ kubernetes_serviceaccount_name }}
namespace: {{ kubernetes_namespace }}
{% endif %}
---
apiVersion: {{ kubernetes_deployment_api_version }}
kind: Deployment
metadata:
name: {{ kubernetes_deployment_name }}
namespace: {{ kubernetes_namespace }}
{% if kubernetes_deployment_annotations is defined %}
annotations:
{% for key, value in kubernetes_deployment_annotations.items() %}
{{ key }}: "{{ value }}"
{% endfor %}
{% endif %}
{% if openshift_host is defined %}
labels:
app: {{ kubernetes_deployment_name }}
{% endif %}
spec:
replicas: 1
{% if kubernetes_deployment_api_version == "apps/v1" %}
selector:
matchLabels:
app: {{ kubernetes_deployment_name }}
{% endif %}
template:
metadata:
{% if kubernetes_pod_annotations is defined %}
annotations:
{% for key, value in kubernetes_pod_annotations.items() %}
{{ key }}: "{{ value }}"
{% endfor %}
{% endif %}
labels:
name: {{ kubernetes_deployment_name }}-web-deploy
service: django
app: {{ kubernetes_deployment_name }}
spec:
serviceAccountName: {{ kubernetes_serviceaccount_name }}
terminationGracePeriodSeconds: 10
{% if custom_venvs is defined %}
{% set trusted_hosts = "" %}
initContainers:
- image: 'centos:7'
name: init-custom-venvs
{% if http_proxy is defined or https_proxy is defined %}
{% set trusted_hosts = "--trusted-host pypi.org --trusted-host files.pythonhosted.org --trusted-host pypi.python.org" %}
env:
{% if http_proxy is defined %}
- name: http_proxy
value: {{ http_proxy }}
{% endif %}
{% if https_proxy is defined %}
- name: https_proxy
value: {{ https_proxy }}
{% endif %}
{% if no_proxy is defined %}
- name: no_proxy
value: {{ no_proxy }}
{% endif %}
{% endif %}
command:
- sh
- '-c'
- >-
yum install -y ansible curl python-setuptools epel-release \
openssl openssl-devel gcc python-devel &&
yum install -y python-virtualenv python36 python36-devel &&
mkdir -p {{ custom_venvs_path }} &&
{% for custom_venv in custom_venvs %}
virtualenv -p {{ custom_venv.python | default(custom_venvs_python) }} \
{{ custom_venvs_path }}/{{ custom_venv.name }} &&
source {{ custom_venvs_path }}/{{ custom_venv.name }}/bin/activate &&
{{ custom_venvs_path }}/{{ custom_venv.name }}/bin/pip install {{ trusted_hosts }} -U pip &&
{{ custom_venvs_path }}/{{ custom_venv.name }}/bin/pip install {{ trusted_hosts }} -U psutil \
"ansible=={{ custom_venv.python_ansible_version }}" &&
{% if custom_venv.python_modules is defined %}
{{ custom_venvs_path }}/{{ custom_venv.name }}/bin/pip install {{ trusted_hosts }} -U \
{% for module in custom_venv.python_modules %}{{ module }} {% endfor %} &&
{% endif %}
deactivate &&
{% endfor %}
:
volumeMounts:
- name: custom-venvs
mountPath: {{ custom_venvs_path }}
{% endif %}
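# The init script above chains every command with "&&"; the bare ":" at the
# end is a shell no-op that terminates the final "&&" emitted by the loop.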
containers:
- name: {{ kubernetes_deployment_name }}-web
{% if web_security_context_enabled is defined and web_security_context_enabled | bool %}
securityContext:
{% if web_security_context_privileged is defined %}
privileged: {{ web_security_context_privileged }}
{% endif %}
{% endif %}
image: "{{ kubernetes_awx_image }}:{{ kubernetes_awx_version }}"
imagePullPolicy: Always
ports:
- containerPort: 8052
{% if ca_trust_dir is defined %}
env:
- name: REQUESTS_CA_BUNDLE
value: /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem
{% endif %}
volumeMounts:
{% if postgres_root_ca_cert is defined %}
- name: {{ kubernetes_deployment_name }}-postgres-root-ca-cert
mountPath: {{ ca_trust_bundle }}
subPath: {{ postgres_root_ca_filename }}
readOnly: true
{% endif %}
- name: supervisor-socket
mountPath: "/var/run/supervisor"
- name: rsyslog-socket
mountPath: "/var/run/awx-rsyslog"
- name: rsyslog-dir
mountPath: "/var/lib/awx/rsyslog"
{% if ca_trust_dir is defined %}
- name: {{ kubernetes_deployment_name }}-ca-trust-dir
mountPath: "{{ ca_trust_dir }}"
readOnly: true
{% endif %}
{% if project_data_dir is defined %}
- name: {{ kubernetes_deployment_name }}-project-data-dir
mountPath: "/var/lib/awx/projects"
readOnly: false
{% endif %}
{% if custom_venvs is defined %}
- name: custom-venvs
mountPath: {{ custom_venvs_path }}
{% endif %}
- name: {{ kubernetes_deployment_name }}-application-config
mountPath: "/etc/tower/settings.py"
subPath: settings.py
readOnly: true
- name: {{ kubernetes_deployment_name }}-nginx-config
mountPath: /etc/nginx/nginx.conf
subPath: nginx.conf
readOnly: true
- name: "{{ kubernetes_deployment_name }}-application-credentials"
mountPath: "/etc/tower/conf.d/"
readOnly: true
- name: {{ kubernetes_deployment_name }}-supervisor-web-config
mountPath: "/etc/supervisord.conf"
subPath: supervisor.conf
readOnly: true
- name: {{ kubernetes_deployment_name }}-supervisor-task-config
mountPath: "/etc/supervisord_task.conf"
subPath: supervisor_task.conf
readOnly: true
- name: {{ kubernetes_deployment_name }}-secret-key
mountPath: "/etc/tower/SECRET_KEY"
subPath: SECRET_KEY
readOnly: true
- name: {{ kubernetes_deployment_name }}-redis-socket
mountPath: "/var/run/redis"
resources:
requests:
memory: "{{ web_mem_request }}Gi"
cpu: "{{ web_cpu_request }}m"
{% if web_mem_limit is defined or web_cpu_limit is defined %}
limits:
{% endif %}
{% if web_mem_limit is defined %}
memory: "{{ web_mem_limit }}Gi"
{% endif %}
{% if web_cpu_limit is defined %}
cpu: "{{ web_cpu_limit }}m"
{% endif %}
- name: {{ kubernetes_deployment_name }}-task
{% if task_security_context_enabled is defined and task_security_context_enabled | bool %}
securityContext:
{% if task_security_context_privileged is defined %}
privileged: {{ task_security_context_privileged }}
{% endif %}
{% endif %}
image: "{{ kubernetes_awx_image }}:{{ kubernetes_awx_version }}"
command:
- /usr/bin/launch_awx_task.sh
imagePullPolicy: Always
volumeMounts:
{% if postgres_root_ca_cert is defined %}
- name: {{ kubernetes_deployment_name }}-postgres-root-ca-cert
mountPath: {{ ca_trust_bundle }}
subPath: {{ postgres_root_ca_filename }}
readOnly: true
{% endif %}
- name: supervisor-socket
mountPath: "/var/run/supervisor"
- name: rsyslog-socket
mountPath: "/var/run/awx-rsyslog"
- name: rsyslog-dir
mountPath: "/var/lib/awx/rsyslog"
{% if ca_trust_dir is defined %}
- name: {{ kubernetes_deployment_name }}-ca-trust-dir
mountPath: "{{ ca_trust_dir }}"
readOnly: true
{% endif %}
{% if custom_venvs is defined %}
- name: custom-venvs
mountPath: {{ custom_venvs_path }}
{% endif %}
- name: {{ kubernetes_deployment_name }}-application-config
mountPath: "/etc/tower/settings.py"
subPath: settings.py
readOnly: true
- name: "{{ kubernetes_deployment_name }}-application-credentials"
mountPath: "/etc/tower/conf.d/"
readOnly: true
- name: {{ kubernetes_deployment_name }}-supervisor-web-config
mountPath: "/etc/supervisord.conf"
subPath: supervisor.conf
readOnly: true
- name: {{ kubernetes_deployment_name }}-supervisor-task-config
mountPath: "/etc/supervisord_task.conf"
subPath: supervisor_task.conf
readOnly: true
- name: {{ kubernetes_deployment_name }}-secret-key
mountPath: "/etc/tower/SECRET_KEY"
subPath: SECRET_KEY
readOnly: true
- name: {{ kubernetes_deployment_name }}-redis-socket
mountPath: "/var/run/redis"
env:
- name: SUPERVISOR_WEB_CONFIG_PATH
value: "/etc/supervisord.conf"
- name: AWX_SKIP_MIGRATIONS
value: "1"
- name: MY_POD_UID
valueFrom:
fieldRef:
fieldPath: metadata.uid
- name: MY_POD_IP
valueFrom:
fieldRef:
fieldPath: status.podIP
{% if ca_trust_dir is defined %}
- name: REQUESTS_CA_BUNDLE
value: /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem
{% endif %}
resources:
requests:
memory: "{{ task_mem_request }}Gi"
cpu: "{{ task_cpu_request }}m"
{% if task_mem_limit is defined or task_cpu_limit is defined %}
limits:
{% endif %}
{% if task_mem_limit is defined %}
memory: "{{ task_mem_limit }}Gi"
{% endif %}
{% if task_cpu_limit is defined %}
cpu: "{{ task_cpu_limit }}m"
{% endif %}
- name: {{ kubernetes_deployment_name }}-redis
{% if redis_security_context_enabled is defined and redis_security_context_enabled | bool %}
securityContext:
{% if redis_security_context_privileged is defined %}
privileged: {{ redis_security_context_privileged }}
{% endif %}
{% if redis_security_context_user is defined %}
runAsUser: {{ redis_security_context_user }}
{% endif %}
{% endif %}
image: {{ kubernetes_redis_image }}:{{ kubernetes_redis_image_tag }}
imagePullPolicy: Always
args: ["redis-server", "{{ kubernetes_redis_config_mount_path }}"]
volumeMounts:
- name: {{ kubernetes_deployment_name }}-redis-config
mountPath: "{{ kubernetes_redis_config_mount_path }}"
subPath: redis.conf
readOnly: true
- name: {{ kubernetes_deployment_name }}-redis-socket
mountPath: "/var/run/redis"
resources:
requests:
memory: "{{ redis_mem_request }}Gi"
cpu: "{{ redis_cpu_request }}m"
{% if redis_mem_limit is defined or redis_cpu_limit is defined %}
limits:
{% endif %}
{% if redis_mem_limit is defined %}
memory: "{{ redis_mem_limit }}Gi"
{% endif %}
{% if redis_cpu_limit is defined %}
cpu: "{{ redis_cpu_limit }}m"
{% endif %}
{% if tolerations is defined %}
tolerations:
{{ tolerations | to_nice_yaml(indent=2) | indent(width=8, indentfirst=True) }}
{% endif %}
{% if node_selector is defined %}
nodeSelector:
{{ node_selector | to_nice_yaml(indent=2) | indent(width=8, indentfirst=True) }}
{% endif %}
{% if affinity is defined %}
affinity:
{{ affinity | to_nice_yaml(indent=2) | indent(width=8, indentfirst=True) }}
{% endif %}
volumes:
{% if postgres_root_ca_cert is defined %}
- name: {{ kubernetes_deployment_name }}-postgres-root-ca-cert
configMap:
name: {{ kubernetes_deployment_name }}-postgres-root-ca-cert
items:
- key: postgres_root_ca.crt
path: postgres_root_ca.crt
{% endif %}
- name: supervisor-socket
emptyDir: {}
- name: rsyslog-socket
emptyDir: {}
- name: rsyslog-dir
emptyDir: {}
{% if ca_trust_dir is defined %}
- name: {{ kubernetes_deployment_name }}-ca-trust-dir
hostPath:
path: "{{ ca_trust_dir }}"
type: Directory
{% endif %}
{% if project_data_dir is defined %}
- name: {{ kubernetes_deployment_name }}-project-data-dir
hostPath:
path: "{{ project_data_dir }}"
type: Directory
{% endif %}
{% if custom_venvs is defined %}
- name: custom-venvs
emptyDir: {}
{% endif %}
- name: {{ kubernetes_deployment_name }}-application-config
configMap:
name: {{ kubernetes_deployment_name }}-config
items:
- key: {{ kubernetes_deployment_name }}_settings
path: settings.py
- name: {{ kubernetes_deployment_name }}-nginx-config
configMap:
name: {{ kubernetes_deployment_name }}-config
items:
- key: {{ kubernetes_deployment_name }}_nginx_conf
path: nginx.conf
- name: {{ kubernetes_deployment_name }}-redis-config
configMap:
name: {{ kubernetes_deployment_name }}-config
items:
- key: {{ kubernetes_deployment_name }}_redis_conf
path: redis.conf
- name: "{{ kubernetes_deployment_name }}-application-credentials"
secret:
secretName: "{{ kubernetes_deployment_name }}-secrets"
items:
- key: credentials_py
path: 'credentials.py'
- key: environment_sh
path: 'environment.sh'
- name: {{ kubernetes_deployment_name }}-supervisor-web-config
configMap:
name: {{ kubernetes_deployment_name }}-supervisor-config
items:
- key: supervisor-web-config
path: 'supervisor.conf'
- name: {{ kubernetes_deployment_name }}-supervisor-task-config
configMap:
name: {{ kubernetes_deployment_name }}-supervisor-config
items:
- key: supervisor-task-config
path: 'supervisor_task.conf'
- name: {{ kubernetes_deployment_name }}-secret-key
secret:
secretName: "{{ kubernetes_deployment_name }}-secrets"
items:
- key: secret_key
path: SECRET_KEY
- name: {{ kubernetes_deployment_name }}-redis-socket
emptyDir: {}
---
apiVersion: v1
kind: Service
metadata:
name: {{ kubernetes_deployment_name }}-web-svc
namespace: {{ kubernetes_namespace }}
labels:
name: {{ kubernetes_deployment_name }}-web-svc
{% if kubernetes_service_annotations is defined %}
annotations:
{% for key, value in kubernetes_service_annotations.items() %}
{{ key }}: "{{ value }}"
{% endfor %}
{% endif %}
spec:
type: {{ kubernetes_web_svc_type }}
ports:
- name: http
port: 80
{% if kubernetes_web_svc_type == "ClusterIP" %}
nodePort: null
{% endif %}
targetPort: 8052
selector:
name: {{ kubernetes_deployment_name }}-web-deploy
{% if kubernetes_context is defined %}
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: {{ kubernetes_deployment_name }}-web-svc
namespace: {{ kubernetes_namespace }}
{% if kubernetes_ingress_annotations is defined %}
annotations:
{% for key, value in kubernetes_ingress_annotations.items() %}
{{ key }}: "{{ value }}"
{% endfor %}
{% endif %}
spec:
{% if kubernetes_ingress_hostname is defined %}
rules:
- host: {{ kubernetes_ingress_hostname }}
http:
paths:
- path: /
backend:
serviceName: {{ kubernetes_deployment_name }}-web-svc
servicePort: 80
{% else %}
backend:
serviceName: {{ kubernetes_deployment_name }}-web-svc
servicePort: 80
{% endif %}
{% if kubernetes_ingress_tls_secret is defined %}
tls:
- hosts:
- {{ kubernetes_ingress_hostname }}
secretName: {{ kubernetes_ingress_tls_secret }}
{% endif %}
{% endif %}
{% if openshift_host is defined %}
---
apiVersion: v1
kind: Route
metadata:
name: {{ kubernetes_deployment_name }}-web-svc
namespace: {{ kubernetes_namespace }}
spec:
port:
targetPort: http
tls:
insecureEdgeTerminationPolicy: Redirect
termination: edge
to:
kind: Service
name: {{ kubernetes_deployment_name }}-web-svc
weight: 100
wildcardPolicy: None
{% endif %}
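# One template renders the whole stack: ServiceAccount, optional
# PodSecurityPolicy plus Role/RoleBinding, a three-container Deployment
# (web, task, redis), the web Service, and either an Ingress (plain
# Kubernetes) or a Route (OpenShift), depending on which context is defined.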

@@ -1,5 +0,0 @@
DATABASE_USER={{ pg_username }}
DATABASE_NAME={{ pg_database }}
DATABASE_HOST={{ pg_hostname|default('postgresql') }}
DATABASE_PORT={{ pg_port|default('5432') }}
DATABASE_PASSWORD={{ pg_password | quote }}
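# Rendered into the "<deployment>-secrets" Secret as environment_sh and
# mounted under /etc/tower/conf.d/ alongside credentials.py.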

@@ -1,106 +0,0 @@
---
apiVersion: v1
kind: Pod
metadata:
name: ansible-tower-management
namespace: {{ kubernetes_namespace }}
{% if kubernetes_pod_annotations is defined %}
annotations:
{% for key, value in kubernetes_pod_annotations.items() %}
{{ key }}: {{ value | quote }}
{% endfor %}
{% endif %}
spec:
{% if kubernetes_image_pull_secrets is defined %}
imagePullSecrets:
- name: "{{ kubernetes_image_pull_secrets }}"
{% endif %}
containers:
- name: ansible-tower-management
image: "{{ kubernetes_awx_image }}:{{ kubernetes_awx_version }}"
imagePullPolicy: Always
command: ["sleep", "infinity"]
volumeMounts:
{% if ca_trust_dir is defined %}
- name: {{ kubernetes_deployment_name }}-ca-trust-dir
mountPath: "/etc/pki/ca-trust/source/anchors/"
readOnly: true
{% endif %}
- name: {{ kubernetes_deployment_name }}-application-config
mountPath: "/etc/tower/settings.py"
subPath: settings.py
readOnly: true
{% if postgres_root_ca_cert is defined %}
- name: {{ kubernetes_deployment_name }}-postgres-root-ca-cert
mountPath: {{ ca_trust_bundle }}
subPath: {{ postgres_root_ca_filename }}
readOnly: true
{% endif %}
- name: "{{ kubernetes_deployment_name }}-application-credentials"
mountPath: "/etc/tower/conf.d/"
readOnly: true
- name: {{ kubernetes_deployment_name }}-secret-key
mountPath: "/etc/tower/SECRET_KEY"
subPath: SECRET_KEY
readOnly: true
resources:
{% if management_mem_limit is defined or management_cpu_limit is defined %}
limits:
{% endif %}
{% if management_mem_limit is defined %}
memory: "{{ management_mem_limit }}Gi"
{% endif %}
{% if management_cpu_limit is defined %}
cpu: "{{ management_cpu_limit }}m"
{% endif %}
{% if tolerations is defined %}
tolerations:
{{ tolerations | to_nice_yaml(indent=2) | indent(width=4, indentfirst=True) }}
{% endif %}
{% if node_selector is defined %}
nodeSelector:
{{ node_selector | to_nice_yaml(indent=2) | indent(width=4, indentfirst=True) }}
{% endif %}
{% if affinity is defined %}
affinity:
{{ affinity | to_nice_yaml(indent=2) | indent(width=4, indentfirst=True) }}
{% endif %}
volumes:
{% if ca_trust_dir is defined %}
- name: {{ kubernetes_deployment_name }}-ca-trust-dir
hostPath:
path: "{{ ca_trust_dir }}"
type: Directory
{% endif %}
- name: {{ kubernetes_deployment_name }}-application-config
configMap:
name: {{ kubernetes_deployment_name }}-config
items:
- key: {{ kubernetes_deployment_name }}_settings
path: settings.py
{% if postgres_root_ca_cert is defined %}
- name: {{ kubernetes_deployment_name }}-postgres-root-ca-cert
configMap:
name: {{ kubernetes_deployment_name }}-postgres-root-ca-cert
items:
- key: postgres_root_ca.crt
path: postgres_root_ca.crt
{% endif %}
- name: {{ kubernetes_deployment_name }}-secret-key
secret:
secretName: "{{ kubernetes_deployment_name }}-secrets"
items:
- key: secret_key
path: SECRET_KEY
- name: "{{ kubernetes_deployment_name }}-application-credentials"
secret:
secretName: "{{ kubernetes_deployment_name }}-secrets"
items:
- key: credentials_py
path: 'credentials.py'
restartPolicy: Never
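# The management pod only sleeps; the backup/restore and key-regeneration
# tasks exec into it for awx-manage and psql work, then delete it when done,
# hence restartPolicy: Never.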

@@ -1,8 +0,0 @@
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ kubernetes_deployment_name }}-postgres-root-ca-cert
namespace: {{ kubernetes_namespace }}
data:
postgres_root_ca.crt: |
{{ postgres_root_ca_cert | indent(width=4) }}

@@ -1,176 +0,0 @@
apiVersion: v1
kind: Template
labels:
template: postgresql-persistent-template
message: |-
The following service(s) have been created in your project: ${DATABASE_SERVICE_NAME}.
Username: ${POSTGRESQL_USER}
Password: ${POSTGRESQL_PASSWORD}
Database Name: ${POSTGRESQL_DATABASE}
Connection URL: postgresql://${DATABASE_SERVICE_NAME}:5432/
For more information about using this template, including OpenShift considerations, see https://github.com/sclorg/postgresql-container/.
metadata:
annotations:
description: |-
PostgreSQL database service, with persistent storage. For more information about using this template, including OpenShift considerations, see https://github.com/sclorg/postgresql-container/.
NOTE: Scaling to more than one replica is not supported. You must have persistent volumes available in your cluster to use this template.
iconClass: icon-postgresql
openshift.io/display-name: PostgreSQL (Persistent)
tags: database,postgresql
template.openshift.io/documentation-url: https://docs.openshift.org/latest/using_images/db_images/postgresql.html
template.openshift.io/long-description: This template provides a standalone
PostgreSQL server with a database created. The database is stored on persistent
storage. The database name, username, and password are chosen via parameters
when provisioning this service.
template.openshift.io/provider-display-name: Red Hat, Inc.
template.openshift.io/support-url: https://access.redhat.com
name: postgresql-persistent
objects:
- apiVersion: v1
kind: Secret
metadata:
annotations:
template.openshift.io/expose-database_name: '{.data[''database-name'']}'
template.openshift.io/expose-password: '{.data[''database-password'']}'
template.openshift.io/expose-admin_password: '{.data[''database-admin-password'']}'
template.openshift.io/expose-username: '{.data[''database-user'']}'
name: ${DATABASE_SERVICE_NAME}
stringData:
database-name: ${POSTGRESQL_DATABASE}
database-password: ${POSTGRESQL_PASSWORD}
database-admin-password: ${POSTGRESQL_PASSWORD}
database-user: ${POSTGRESQL_USER}
- apiVersion: v1
kind: Service
metadata:
annotations:
template.openshift.io/expose-uri: postgres://{.spec.clusterIP}:{.spec.ports[?(.name=="postgresql")].port}
name: ${DATABASE_SERVICE_NAME}
spec:
ports:
- name: postgresql
nodePort: 0
port: 5432
protocol: TCP
targetPort: 5432
selector:
name: ${DATABASE_SERVICE_NAME}
sessionAffinity: None
type: ClusterIP
status:
loadBalancer: {}
- apiVersion: v1
kind: DeploymentConfig
metadata:
annotations:
template.alpha.openshift.io/wait-for-ready: "true"
name: ${DATABASE_SERVICE_NAME}
spec:
replicas: 1
selector:
name: ${DATABASE_SERVICE_NAME}
strategy:
type: Recreate
template:
metadata:
labels:
name: ${DATABASE_SERVICE_NAME}
spec:
containers:
- capabilities: {}
env:
- name: POSTGRESQL_USER
valueFrom:
secretKeyRef:
key: database-user
name: ${DATABASE_SERVICE_NAME}
- name: POSTGRESQL_PASSWORD
valueFrom:
secretKeyRef:
key: database-password
name: ${DATABASE_SERVICE_NAME}
- name: POSTGRESQL_DATABASE
valueFrom:
secretKeyRef:
key: database-name
name: ${DATABASE_SERVICE_NAME}
- name: POSTGRESQL_MAX_CONNECTIONS
value: ${POSTGRESQL_MAX_CONNECTIONS}
image: registry.redhat.io/rhel8/postgresql-12
imagePullPolicy: IfNotPresent
livenessProbe:
exec:
command:
- /usr/libexec/check-container
- --live
initialDelaySeconds: 120
timeoutSeconds: 10
name: postgresql
ports:
- containerPort: 5432
protocol: TCP
readinessProbe:
exec:
command:
- /usr/libexec/check-container
initialDelaySeconds: 5
timeoutSeconds: 1
resources:
limits:
memory: ${MEMORY_LIMIT}
securityContext:
capabilities: {}
privileged: false
terminationMessagePath: /dev/termination-log
volumeMounts:
- mountPath: /var/lib/pgsql/data
name: ${DATABASE_SERVICE_NAME}-data
dnsPolicy: ClusterFirst
restartPolicy: Always
volumes:
- name: ${DATABASE_SERVICE_NAME}-data
{% if openshift_pg_emptydir | bool %}
emptyDir: {}
{% else %}
persistentVolumeClaim:
claimName: {{ openshift_pg_pvc_name }}
{% endif %}
triggers:
- type: ConfigChange
status: {}
parameters:
- description: Maximum amount of memory the container can use.
displayName: Memory Limit
name: MEMORY_LIMIT
required: true
value: 512Mi
- description: The OpenShift Namespace where the ImageStream resides.
displayName: Namespace
name: NAMESPACE
value: openshift
- description: The name of the OpenShift Service exposed for the database.
displayName: Database Service Name
name: DATABASE_SERVICE_NAME
required: true
value: postgresql
- description: Username for PostgreSQL user that will be used for accessing the
database.
displayName: PostgreSQL Connection Username
from: user[A-Z0-9]{3}
generate: expression
name: POSTGRESQL_USER
required: true
- description: Password for the PostgreSQL connection user.
displayName: PostgreSQL Connection Password
from: '[a-zA-Z0-9]{16}'
generate: expression
name: POSTGRESQL_PASSWORD
required: true
- description: Name of the PostgreSQL database accessed.
displayName: PostgreSQL Database Name
name: POSTGRESQL_DATABASE
required: true
value: sampledb
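# Based on the upstream sclorg postgresql-persistent template, with the image
# pinned to registry.redhat.io/rhel8/postgresql-12 and an optional emptyDir
# data volume (openshift_pg_emptydir) for ephemeral installs.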

@@ -1,64 +0,0 @@
postgresqlUsername: {{ pg_username }}
postgresqlPassword: {{ pg_password }}
postgresqlDatabase: {{ pg_database }}
persistence:
size: {{ pg_volume_capacity|default('5') }}Gi
{% if pg_persistence_storageClass is defined %}
storageClass: {{ pg_persistence_storageClass }}
{% endif %}
{% if pg_persistence_existingclaim is defined %}
existingClaim: {{ pg_persistence_existingclaim }}
{% endif %}
{% if pg_cpu_limit is defined or pg_mem_limit is defined %}
resources:
limits:
{% if pg_cpu_limit is defined %}
cpu: {{ pg_cpu_limit | string }}m
{% endif %}
{% if pg_mem_limit is defined %}
memory: {{ pg_mem_limit | string }}Gi
{% endif %}
{% endif %}
{% if tolerations is defined or node_selector is defined or affinity is defined %}
master:
{% if tolerations is defined %}
tolerations:
{{ tolerations | to_nice_yaml(indent=2) | indent(width=4, indentfirst=True) }}
{% endif %}
{% if node_selector is defined %}
nodeSelector:
{{ node_selector | to_nice_yaml(indent=2) | indent(width=4, indentfirst=True) }}
{% endif %}
{% if affinity is defined %}
affinity:
{{ affinity | to_nice_yaml(indent=2) | indent(width=4, indentfirst=True) }}
{% endif %}
{% endif %}
image:
{% if pg_image_registry is defined %}
# The default bitnami image from the chart doesn't work on ARM
registry: {{ pg_image_registry }}
{% endif %}
{% if pg_image_registry is not defined %}
registry: docker.io/bitnami
{% endif %}
repository: postgresql
tag: '12.5.0'
volumePermissions:
image:
{% if pg_image_registry is defined %}
registry: {{ pg_image_registry }}
{% endif %}
# The default bitnami image from the chart doesn't work on ARM
repository: alpine
tag: '3'
{% if pg_image_registry is defined %}
metrics:
image:
registry: {{ pg_image_registry }}
{% endif %}
{% if pg_serviceaccount is defined %}
serviceAccount:
enabled: true
name: {{ pg_serviceaccount }}
{% endif %}
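# Values file consumed by the postgresql Helm chart (bitnami/postgresql by
# default); pg_image_registry exists so ARM (and, presumably, mirrored or
# air-gapped) installs can point every image at a mirror, per the comments above.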

@@ -1,11 +0,0 @@
---
apiVersion: v1
kind: Secret
metadata:
namespace: {{ kubernetes_namespace }}
name: "{{ kubernetes_deployment_name }}-secrets"
type: Opaque
data:
secret_key: "{{ secret_key | b64encode }}"
credentials_py: "{{ lookup('template', 'credentials.py.j2') | b64encode }}"
environment_sh: "{{ lookup('template', 'environment.sh.j2') | b64encode }}"

@@ -1,149 +0,0 @@
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ kubernetes_deployment_name }}-supervisor-config
namespace: {{ kubernetes_namespace }}
data:
supervisor-web-config: |
[supervisord]
nodaemon = True
umask = 022
logfile = /dev/stdout
logfile_maxbytes = 0
pidfile = /var/run/supervisor/supervisor.web.pid
[program:nginx]
command = nginx -g "daemon off;"
autostart = true
autorestart = true
stopwaitsecs = 5
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
stderr_logfile=/dev/stderr
stderr_logfile_maxbytes=0
[program:uwsgi]
command = {{ uwsgi_bash }} '/var/lib/awx/venv/awx/bin/uwsgi --socket 127.0.0.1:8050 --module=awx.wsgi:application --vacuum --processes=5 --harakiri=120 --no-orphans --master --max-requests=1000 --master-fifo=/var/lib/awx/awxfifo --lazy-apps -b 32768'
directory = /var/lib/awx
autostart = true
autorestart = true
stopwaitsecs = 15
stopsignal = INT
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
stderr_logfile=/dev/stderr
stderr_logfile_maxbytes=0
[program:daphne]
command = {{ uwsgi_bash }} '/var/lib/awx/venv/awx/bin/daphne -b 127.0.0.1 -p 8051 awx.asgi:channel_layer'
directory = /var/lib/awx
autostart = true
autorestart = true
stopwaitsecs = 5
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
stderr_logfile=/dev/stderr
stderr_logfile_maxbytes=0
[program:wsbroadcast]
command = awx-manage run_wsbroadcast
directory = /var/lib/awx
autostart = true
autorestart = true
stopwaitsecs = 5
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
stderr_logfile=/dev/stderr
stderr_logfile_maxbytes=0
[program:awx-rsyslogd]
command = rsyslogd -n -i /var/run/awx-rsyslog/rsyslog.pid -f /var/lib/awx/rsyslog/rsyslog.conf
autostart = true
autorestart = true
stopwaitsecs = 5
startretries = 10
stopsignal=TERM
stopasgroup=true
killasgroup=true
redirect_stderr=true
stdout_logfile=/dev/stderr
stdout_logfile_maxbytes=0
[group:tower-processes]
programs=nginx,uwsgi,daphne,wsbroadcast,awx-rsyslogd
priority=5
# TODO: Exit Handler
[eventlistener:awx-config-watcher]
command=/usr/bin/config-watcher
stderr_logfile=/dev/stdout
stderr_logfile_maxbytes=0
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
events=TICK_60
priority=0
[unix_http_server]
file=/var/run/supervisor/supervisor.web.sock
[supervisorctl]
serverurl=unix:///var/run/supervisor/supervisor.web.sock ; use a unix:// URL for a unix socket
[rpcinterface:supervisor]
supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface
supervisor-task-config: |
[supervisord]
nodaemon = True
umask = 022
logfile = /dev/stdout
logfile_maxbytes = 0
pidfile = /var/run/supervisor/supervisor.pid
[program:dispatcher]
command = awx-manage run_dispatcher
directory = /var/lib/awx
environment = LANGUAGE="en_US.UTF-8",LANG="en_US.UTF-8",LC_ALL="en_US.UTF-8",LC_CTYPE="en_US.UTF-8"
autostart = true
autorestart = true
stopwaitsecs = 5
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
stderr_logfile=/dev/stderr
stderr_logfile_maxbytes=0
[program:callback-receiver]
command = awx-manage run_callback_receiver
directory = /var/lib/awx
autostart = true
autorestart = true
stopwaitsecs = 5
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
stderr_logfile=/dev/stderr
stderr_logfile_maxbytes=0
[group:tower-processes]
programs=dispatcher,callback-receiver
priority=5
# TODO: Exit Handler
[eventlistener:awx-config-watcher]
command=/usr/bin/config-watcher
stderr_logfile=/dev/stdout
stderr_logfile_maxbytes=0
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
events=TICK_60
priority=0
[unix_http_server]
file=/var/run/supervisor/supervisor.sock
[supervisorctl]
serverurl=unix:///var/run/supervisor/supervisor.sock ; use a unix:// URL for a unix socket
[rpcinterface:supervisor]
supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface
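; supervisor-web-config drives the web container (nginx, uwsgi, daphne,
; wsbroadcast, rsyslogd); supervisor-task-config drives the task container,
; which is started via /usr/bin/launch_awx_task.sh and runs only the
; dispatcher and callback receiver.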

@@ -1,3 +0,0 @@
---
openshift_oc_config_file: "{{ kubernetes_base_path }}/.kube/config"
openshift_oc_bin: "oc --kubeconfig={{ openshift_oc_config_file }}"
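# These defaults back the kubectl_or_oc fact set in the task files above:
# when openshift_host is defined, oc (with this kubeconfig) replaces kubectl.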