Remove unneeded roles from the installer directory

This commit is contained in:
Christian M. Adams 2021-02-22 13:40:13 -05:00
parent 70325fd249
commit af6af052d0
No known key found for this signature in database
GPG Key ID: F41796178F693C8E
21 changed files with 10 additions and 1022 deletions

View File

@ -407,216 +407,6 @@ If your provider is able to allocate an IP Address from the Ingress controller t
Unlike OpenShift's `Route`, the Kubernetes `Ingress` doesn't yet handle SSL termination. As such, the default configuration will only expose AWX over HTTP on port 80. You are responsible for configuring SSL support until support is added (either to Kubernetes or to AWX itself).
<<<<<<< HEAD
## Docker-Compose
### Prerequisites
- [Docker](https://docs.docker.com/engine/installation/) on the host where AWX will be deployed. After installing Docker, the Docker service must be started (depending on your OS, you may have to add the local user that uses Docker to the ``docker`` group; refer to the documentation for details)
- [docker-compose](https://pypi.org/project/docker-compose/) Python module.
+ This also installs the `docker` Python module, which is incompatible with `docker-py`. If you have previously installed `docker-py`, please uninstall it.
- [Docker Compose](https://docs.docker.com/compose/install/).
### Pre-install steps
#### Deploying to a remote host
By default, the delivered [installer/inventory](./installer/inventory) file will deploy AWX to the local host. It is possible, however, to deploy to a remote host. The [installer/install.yml](./installer/install.yml) playbook can be used to build images on the local host, and ship the built images to, and run deployment tasks on, a remote host. To do this, modify the [installer/inventory](./installer/inventory) file, by commenting out `localhost`, and adding the remote host.
For example, suppose you wish to build images locally on your CI/CD host, and deploy them to a remote host named *awx-server*. To do this, add *awx-server* to the [installer/inventory](./installer/inventory) file, and comment out or remove `localhost`, as demonstrated by the following:
```yaml
# localhost ansible_connection=local
awx-server
[all:vars]
...
```
In the above example, image build tasks will be delegated to `localhost`, which is typically where the clone of the AWX project exists. Built images will be archived, copied to the remote host, and imported into the remote Docker image cache. Tasks to start the AWX containers will then execute on the remote host.
If you choose to use the official images, the remote host will be the one to pull those images.
**Note**
> You may also want to set additional variables to control how Ansible connects to the host. For more information about this, view [Behavioral Inventory Parameters](http://docs.ansible.com/ansible/latest/intro_inventory.html#id12).
> As mentioned in [Prerequisites](#prerequisites-1) above, the prerequisites are also required on the remote host.
> When deploying to a remote host, the playbook does not execute tasks with the `become` option. For this reason, make sure the user that connects to the remote host has privileges to run the `docker` command. This typically means that non-privileged users need to be part of the `docker` group.
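For example, a minimal sketch of such connection settings in the inventory file; the username and key path below are illustrative placeholders, not values shipped with the installer:
```yaml
# Connect to the remote host as a specific user with a specific SSH key (placeholder values)
awx-server ansible_user=deploy ansible_ssh_private_key_file=~/.ssh/awx_deploy

[all:vars]
...
```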
#### Inventory variables
Before starting the install process, review the [inventory](./installer/inventory) file, and uncomment and provide values for the following variables found in the `[all:vars]` section:
*admin_password*
> Provide a strong password to prevent malicious logins after the installation.
*postgres_data_dir*
> If you're using the default PostgreSQL container (see [PostgreSQL](#postgresql-1) below), provide a path that can be mounted to the container, and where the database can be persisted.
*host_port*
> Provide a port number that can be mapped from the Docker daemon host to the web server running inside the AWX container. If undefined, no port will be exposed. Defaults to *80*.
*host_port_ssl*
> Provide a port number that can be mapped from the Docker daemon host to the web server running inside the AWX container for SSL support. If undefined, no port will be exposed. Defaults to *443*; this only takes effect if you also set `ssl_certificate` (see below).
*ssl_certificate*
> Optionally, provide the path to a file that contains a certificate and its private key. This must be a `.pem` file.
*docker_compose_dir*
> When using docker-compose, the directory in which the `docker-compose.yml` file will be created (default `~/.awx/awxcompose`).
*custom_venv_dir*
> Path to custom Python virtual environments on the local host that will be made available inside the containers at install time.
*ca_trust_dir*
> If you're using a non-trusted CA, provide the path on your host where its certificates are stored.
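Taken together, a hedged sketch of an `[all:vars]` section with these variables uncommented might look like the following; every value shown is an illustrative placeholder:
```yaml
[all:vars]
# Placeholder values -- adjust for your environment
admin_password=please-change-me
postgres_data_dir=/var/lib/pgdocker
host_port=80
host_port_ssl=443
ssl_certificate=/etc/awx/awx.pem
docker_compose_dir=~/.awx/awxcompose
```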
#### Docker registry
If you wish to tag and push built images to a Docker registry, set the following variables in the inventory file:
*docker_registry*
> IP address and port, or URL, for accessing a registry.
*docker_registry_repository*
> Namespace to use when pushing and pulling images to and from the registry. Defaults to *awx*.
*docker_registry_username*
> Username of the user that will push images to the registry. Defaults to *developer*.
**Note**
> These settings are ignored when using the official images.
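For illustration, a sketch of these registry settings in the inventory file; the registry address is a placeholder, and the repository and username simply restate the defaults:
```yaml
# Placeholder registry address; repository and username shown at their defaults
docker_registry=registry.example.com:5000
docker_registry_repository=awx
docker_registry_username=developer
```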
#### Proxy settings
*http_proxy*
> IP address and port, or URL, for using an http_proxy.
*https_proxy*
> IP address and port, or URL, for using an https_proxy.
*no_proxy*
> IP addresses or URLs to exclude from the proxy.
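A sketch of these proxy settings, using placeholder proxy addresses:
```yaml
# Placeholder proxy addresses
http_proxy=http://proxy.example.com:3128
https_proxy=http://proxy.example.com:3128
no_proxy=localhost,127.0.0.1
```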
#### PostgreSQL
AWX requires access to a PostgreSQL database, and by default, one will be created and deployed in a container, and data will be persisted to a host volume. In this scenario, you must set the value of `postgres_data_dir` to a path that can be mounted to the container. When the container is stopped, the database files will still exist in the specified path.
If you wish to use an external database, in the inventory file, set the value of `pg_hostname`, and update `pg_username`, `pg_password`, `pg_admin_password`, `pg_database`, and `pg_port` with the connection information.
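For an external database, the relevant inventory variables might look like the following sketch; the hostname, credentials, and port are placeholders:
```yaml
# Placeholder connection details for an external PostgreSQL instance
pg_hostname=postgres.example.com
pg_username=awx
pg_password=awxpass
pg_admin_password=postgrespass
pg_database=awx
pg_port=5432
```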
### Run the installer
If you are not pushing images to a Docker registry, start the install by running the following:
```bash
# Set the working directory to installer
$ cd installer
# Run the Ansible playbook
$ ansible-playbook -i inventory install.yml
```
If you're pushing built images to a repository, then use the `-e` option to pass the registry password as follows, replacing *password* with the password of the username assigned to `docker_registry_username` (note that you will also need to remove `dockerhub_base` and `dockerhub_version` from the inventory file):
```bash
# Set the working directory to installer
$ cd installer
# Run the Ansible playbook
$ ansible-playbook -i inventory -e docker_registry_password=password install.yml
```
### Post-install
After the playbook run completes, Docker starts a series of containers that provide the services that make up AWX. You can view the running containers using the `docker ps` command.
If you're deploying using Docker Compose, container names will be prefixed by the name of the folder where the `docker-compose.yml` file is created (by default, `awx`).
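For example, to list just the AWX containers (the `awx` name filter assumes the default prefix):
```bash
# List running containers whose names contain "awx"
$ docker ps --filter name=awx
```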
Immediately after the containers start, the *awx_task* container will perform required setup tasks, including database migrations. These tasks need to complete before the web interface can be accessed. To monitor the progress, you can follow the container's STDOUT by running the following:
```bash
# Tail the awx_task log
$ docker logs -f awx_task
```
You will see output similar to the following:
```bash
Using /etc/ansible/ansible.cfg as config file
127.0.0.1 | SUCCESS => {
"changed": false,
"db": "awx"
}
Operations to perform:
Synchronize unmigrated apps: solo, api, staticfiles, messages, channels, django_extensions, ui, rest_framework, polymorphic
Apply all migrations: sso, taggit, sessions, sites, kombu_transport_django, social_auth, contenttypes, auth, conf, main
Synchronizing apps without migrations:
Creating tables...
Running deferred SQL...
Installing custom SQL...
Running migrations:
Rendering model states... DONE
Applying contenttypes.0001_initial... OK
Applying contenttypes.0002_remove_content_type_name... OK
Applying auth.0001_initial... OK
Applying auth.0002_alter_permission_name_max_length... OK
Applying auth.0003_alter_user_email_max_length... OK
Applying auth.0004_alter_user_username_opts... OK
Applying auth.0005_alter_user_last_login_null... OK
Applying auth.0006_require_contenttypes_0002... OK
Applying taggit.0001_initial... OK
Applying taggit.0002_auto_20150616_2121... OK
Applying main.0001_initial... OK
...
```
Once migrations complete, you will see log output similar to the following:
```bash
Python 2.7.5 (default, Nov 6 2016, 00:28:07)
[GCC 4.8.5 20150623 (Red Hat 4.8.5-11)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
(InteractiveConsole)
>>> <User: admin>
>>> Default organization added.
Demo Credential, Inventory, and Job Template added.
Successfully registered instance awx
(changed: True)
Creating instance group tower
Added instance awx to tower
(changed: True)
...
```
### Accessing AWX
The AWX web server is accessible on the deployment host, using the *host_port* value set in the *inventory* file. The default URL is [http://localhost](http://localhost).
You will be prompted with a login dialog. The default administrator username is `admin`, and the password is `password`.
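As a quick sanity check (assuming the default *host_port* of 80), you can confirm that the web server is responding before logging in:
```bash
# Expect an HTTP response (200 or a redirect to the login page)
$ curl -I http://localhost
```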
=======
>>>>>>> c4d87ec843... Consolidate the Local Docker installer and the dev env
# Installing the AWX CLI
`awx` is the official command-line client for AWX. It:

View File

@ -1,8 +0,0 @@
---
- name: Build AWX Docker Images
hosts: localhost
gather_facts: true
roles:
- {role: dockerfile}
- {role: image_build}
- {role: image_push, when: "docker_registry is defined"}

View File

@ -3,7 +3,4 @@
hosts: all
roles:
- {role: check_vars}
- {role: image_build, when: "dockerhub_base is not defined"}
- {role: image_push, when: "docker_registry is defined and dockerhub_base is not defined"}
- {role: kubernetes, when: "openshift_host is defined or kubernetes_context is defined"}
- {role: local_docker, when: "openshift_host is not defined and kubernetes_context is not defined"}

View File

@ -1,8 +0,0 @@
# check_docker.yml
---
- name: postgres_data_dir should be defined
assert:
that:
- postgres_data_dir is defined and postgres_data_dir != ''
msg: "Set the value of 'postgres_data_dir' in the inventory file."
when: pg_hostname is not defined or pg_hostname == ''

View File

@ -8,6 +8,3 @@
- include_tasks: check_openshift.yml
when: openshift_host is defined and openshift_host != ''
- include_tasks: check_docker.yml
when: openshift_host is not defined or openshift_host == ''

View File

@ -1,6 +0,0 @@
---
build_dev: false
kube_dev: false
dockerfile_dest: '..'
dockerfile_name: 'Dockerfile'
template_dest: '_build'

View File

@ -1,52 +0,0 @@
-----BEGIN PGP PUBLIC KEY BLOCK-----
Version: GnuPG/MacGPG2 v2.0.19 (Darwin)
mQINBFVfhqABEAC6EEEPv57spTUSQvtgqbVZI7d5ooCTMXEo5KJGVPVSfKtO8+BV
ZTKPukUazbtplDlIe9csfbP7CBaaBn5CtDgIrbROzazxoWv7mIP6hjUaTQSd5tvv
ONDQvnCDD5SKcy+XhqkmALSvREsN9tNtKETGXgNOLwJAlzxcpt8JLXnuiCCbefum
gaDoPQsIkegFa/r6XhY6kLi2lpQOJ3v72IXNDpdau1vtp/xPHclfCI1iQ7gnfEdw
rRJRGeOx1qikyqAVFgXXiI/NAQrsyIsO0ECGSBLQeDna/bGrqpCGKnrbJhfGAIWA
aXUTRCQRemiansk0Whu4ATZz8iM9zJPi1R7CeMXgwe7VtD4KOd1y7UBHKwAhIWdu
4Q4lsOpm2tzYFQUrY6mQ/3BkywDHkdVqmQKTGCuwcNO9PMOBLSE99yCIjxXL04VM
dPWIqMvh15TLjd6UahNFucowX3312z4JpWFHWA075MdkvVVcqfMxohViOLUCYt/C
74xFmT+uZUKnSQFYT/JaGqxFLjkYHmnFrb710fBjniDlaB4Ii3Tft/yXsgx8P9xb
y2cWA/W6yFeRqXM49C3/KA6RhDWU90P55O/MWbYUSGiGu+eYT3rMAV2cI6r4+U7e
YgQvntpc9GbAzab6co8ceJ3lpTHtSl+QZJUhSoPYg5VbSilf0AqZgUSUUQARAQAB
tC5BbnNpYmxlLCBJbmMuIChyZWxlYXNlKSA8c2VjdXJpdHlAYW5zaWJsZS5jb20+
iQI4BBMBAgAiBQJVX4agAhsDBgsJCAcDAgYVCAIJCgsEFgIDAQIeAQIXgAAKCRC4
TjOcRCZnqfN+D/9LvJVLW/zMPcZ4qK+/KpNiD+MXducBpQiUfj2AoEqkR2LwL4/G
v5N3GCpBHSrzK1PBp5uW1+6TcdotBO5ePtmvJlSjVMVxHkcTTBfuzqtErcw/zequ
sNsagllPlCePT1Osi34onGm7zMdillh/uw30bojYRwtDpfXiXJAqPc3vqXrER8EE
r+ZFj4MJcqDGWdIguWX8hnIFkzYZ7Gyvwo4ETWrdRhRfHoIdwyiAawnxkgpMVHxv
2+JlqDb+qqY2Wjffd5WC2uaxK88sCsScJF+aE+WlBIVRA4POu4gZneLfuGzzpg2e
9PSWmtDo5X7ECRnfTIMvAbbpt08x/zSZxwRUwLdQ+F9vN7RZ/ibaXE2rG1xrWOO/
wel8cfhDl7YgZhKw8R/RURliOB5FFJ336zGWm5HTHyhblbiNn+LcUAy8ipYp6y+C
ATLnHzF6J5CtIRpG4Bs5Ar3xePNGUnEHXiLv3wYeq3uUkrFcRpmcMUSBrtk6QHbD
fmJvWpdA4twmoBFBMyvvTJmBb52teNzoqBgeNflXl+SVAMT2eZSezqbvevuHQTOX
uRw0GXsKCQ/hyR9f1fd0yGRMtPqNTRLwBb4lzpU70/rRmU9gHzY4Yhwg3E9Tv+rM
a5Lj3YmlJRax5gUVQN02E1zlBDsiNrGqpmDG1Mxo7YPpbxgu0PFPqeqcFbkCDQRV
X4agARAAwO4MA+7uIRV+oHmyMPLFWqiKp2nFy5McQByJxSQchn93/9qud4JYd1i6
8pIiKN6XJqtpt16UCTewcZHM5oOJQVNwAS8TP9imfg73TfUaoOoUbp0qfGKub/Q4
6Ktnwe940qEqYG1/QsPWNE/4G1O3b/O7m6qlozEEmxep8bRviRChz4/Mw75S1W6i
jlKYI8yZOUco9oiFJcKqyYtaKkgEg18cNuY8uvAlvULezaZyCqVjoVbKGUUAPSVg
CBixqLQ7UmBMA6xxptVuBvaRJAaF0VvvcyBZo4SzybtrHbUD1VWIzmWKKD/sDS2J
MQbnQ0FnhRzTjhvQhAp2LVPeAQVbQNFdG7y+ROCHeE9mqutTZLOilut+CQ8HDWuQ
/eCQU5yV7vh3FL/SVYS0ahZj+FdfTq8rbeIsDT42Z/MjDB54jxB5ajCHLomi4LhC
09zeb7HgwUc5wzoN7nU1OLmmn0AFwKJVD1R5UgySggv2xJym1H/mjJiR0MDweJDc
xj3bf4qGRDLVFRkZcO3cmMDLhL9gb1MIU3zBVotOBt2dig/Je+K6CUFHAA237Vcg
VKUrLIi6OdG3ecAdflGsaNKQ5XPv2mfhbieXu9N/S7HBvjeHIBD2xjWNz9UE1ymu
QPwR6+zTxD4Nx1xIiink0MN1PaCkGJ03YBSsXnHoyiOhqAfceRcAEQEAAYkCHwQY
AQIACQUCVV+GoAIbDAAKCRC4TjOcRCZnqSvGD/wP8y6fz2PsrgspHCraNuWTJaVA
DesQgOxJS6uHskW/jnHkvAMTNzlVhov1hN7g+QjPMISQDCn+913kyqZ0lU3lYmvz
nByPAbgzZvmAaTqb8v79zY6UH4NzbBuz4dhYN65dxhiMpNrXVvMRQjPFRXG0GG5d
7ypM1b9eoRTRlJNAwQ/ONoQxZdzVpmpXjcMOaifs75lkGAfNT0bcG/o/Qh/p4MRF
t/VSmH8tM8jJuHbIPcs8FWP4J8xzum8uhF2ZlKEQsR2C9cBJSBrs5jdOjgMqwFv5
2qCg0PpEKKNQdu9MabapBprFMwJWIl+dOjUE3fdMrOSJBZZusQq9nwtDNAaaLcD7
RwStw7AXi6CxYuB/uikKRviLqRCwASdj5Cdjtu6mohS8DdVkpEYbpuPjEdqc7UyW
fAZQqYMkwIfaxE25/S+FxqISSCFIOCL3QNTk0Q9u2W6Fh+KUACZobtwUL/XytPBz
7Fn5wXeOCPoAbOXoiT7kPsFGvIsFHpF3K7Fy+cMrqr5dqhywGK5ckIKXRKmCAu8H
iDeBqVjBn143WJPZ8uiu+7TiaGLuOqDdiDSchM24W4hs5DbD9zdVYy6IFi1OWSot
HUQyZisiIgD1hSHhkn2LTYrJqIdvJ/q8buMKywB9Avs5fwP/CnsrSP9z+RWJ8HKP
OwWvTVGXCPUZTxHiYg==
=msBf
-----END PGP PUBLIC KEY BLOCK-----

View File

@ -1,25 +0,0 @@
#!/usr/bin/env bash
if [ `id -u` -ge 500 ]; then
echo "awx:x:`id -u`:`id -g`:,,,:/var/lib/awx:/bin/bash" >> /tmp/passwd
cat /tmp/passwd > /etc/passwd
rm /tmp/passwd
fi
if [ -n "${AWX_KUBE_DEVEL}" ]; then
pushd /awx_devel
make awx-link
popd
export SDB_NOTIFY_HOST=$(ip route | head -n1 | awk '{print $3}')
fi
source /etc/tower/conf.d/environment.sh
ANSIBLE_REMOTE_TEMP=/tmp ANSIBLE_LOCAL_TEMP=/tmp ansible -i "127.0.0.1," -c local -v -m wait_for -a "host=$DATABASE_HOST port=$DATABASE_PORT" all
ANSIBLE_REMOTE_TEMP=/tmp ANSIBLE_LOCAL_TEMP=/tmp ansible -i "127.0.0.1," -c local -v -m postgresql_db --become-user $DATABASE_USER -a "name=$DATABASE_NAME owner=$DATABASE_USER login_user=$DATABASE_USER login_host=$DATABASE_HOST login_password=$DATABASE_PASSWORD port=$DATABASE_PORT" all
awx-manage collectstatic --noinput --clear
unset $(cut -d = -f -1 /etc/tower/conf.d/environment.sh)
supervisord -c /etc/supervisord.conf

View File

@ -1,40 +0,0 @@
#!/usr/bin/env bash
if [ `id -u` -ge 500 ]; then
echo "awx:x:`id -u`:`id -g`:,,,:/var/lib/awx:/bin/bash" >> /tmp/passwd
cat /tmp/passwd > /etc/passwd
rm /tmp/passwd
fi
if [ -n "${AWX_KUBE_DEVEL}" ]; then
pushd /awx_devel
make awx-link
popd
export SDB_NOTIFY_HOST=$(ip route | head -n1 | awk '{print $3}')
fi
source /etc/tower/conf.d/environment.sh
ANSIBLE_REMOTE_TEMP=/tmp ANSIBLE_LOCAL_TEMP=/tmp ansible -i "127.0.0.1," -c local -v -m wait_for -a "host=$DATABASE_HOST port=$DATABASE_PORT" all
ANSIBLE_REMOTE_TEMP=/tmp ANSIBLE_LOCAL_TEMP=/tmp ansible -i "127.0.0.1," -c local -v -m postgresql_db --become-user $DATABASE_USER -a "name=$DATABASE_NAME owner=$DATABASE_USER login_user=$DATABASE_USER login_host=$DATABASE_HOST login_password=$DATABASE_PASSWORD port=$DATABASE_PORT" all
if [ -z "$AWX_SKIP_MIGRATIONS" ]; then
awx-manage migrate --noinput
fi
if [ -z "$AWX_SKIP_PROVISION_INSTANCE" ]; then
awx-manage provision_instance --hostname=$(hostname)
fi
if [ -z "$AWX_SKIP_REGISTERQUEUE" ]; then
awx-manage register_queue --queuename=tower --instance_percent=100
fi
if [ ! -z "$AWX_ADMIN_USER" ]&&[ ! -z "$AWX_ADMIN_PASSWORD" ]; then
echo "from django.contrib.auth.models import User; User.objects.create_superuser('$AWX_ADMIN_USER', 'root@localhost', '$AWX_ADMIN_PASSWORD')" | awx-manage shell
fi
echo 'from django.conf import settings; x = settings.AWX_TASK_ENV; x["HOME"] = "/var/lib/awx"; settings.AWX_TASK_ENV = x' | awx-manage shell
unset $(cut -d = -f -1 /etc/tower/conf.d/environment.sh)
supervisord -c /etc/supervisord_task.conf

View File

@ -1,7 +0,0 @@
$WorkDirectory /var/lib/awx/rsyslog
$MaxMessageSize 700000
$IncludeConfig /var/lib/awx/rsyslog/conf.d/*.conf
module(load="imuxsock" SysSock.Use="off")
input(type="imuxsock" Socket="/var/run/awx-rsyslog/rsyslog.sock" unlink="on")
template(name="awx" type="string" string="%msg%")
action(type="omfile" file="/dev/null")

View File

@ -1,89 +0,0 @@
# AWX settings file
import os
def get_secret():
if os.path.exists("/etc/tower/SECRET_KEY"):
return open('/etc/tower/SECRET_KEY', 'rb').read().strip()
ADMINS = ()
STATIC_ROOT = '/var/lib/awx/public/static'
PROJECTS_ROOT = '/var/lib/awx/projects'
AWX_ANSIBLE_COLLECTIONS_PATHS = '/var/lib/awx/vendor/awx_ansible_collections'
JOBOUTPUT_ROOT = '/var/lib/awx/job_status'
SECRET_KEY = get_secret()
ALLOWED_HOSTS = ['*']
# Container environments don't like chroots
AWX_PROOT_ENABLED = False
CLUSTER_HOST_ID = "awx"
SYSTEM_UUID = '00000000-0000-0000-0000-000000000000'
CSRF_COOKIE_SECURE = False
SESSION_COOKIE_SECURE = False
###############################################################################
# EMAIL SETTINGS
###############################################################################
SERVER_EMAIL = 'root@localhost'
DEFAULT_FROM_EMAIL = 'webmaster@localhost'
EMAIL_SUBJECT_PREFIX = '[AWX] '
EMAIL_HOST = 'localhost'
EMAIL_PORT = 25
EMAIL_HOST_USER = ''
EMAIL_HOST_PASSWORD = ''
EMAIL_USE_TLS = False
LOGGING['handlers']['console'] = {
'()': 'logging.StreamHandler',
'level': 'DEBUG',
'formatter': 'simple',
}
LOGGING['loggers']['django.request']['handlers'] = ['console']
LOGGING['loggers']['rest_framework.request']['handlers'] = ['console']
LOGGING['loggers']['awx']['handlers'] = ['console', 'external_logger']
LOGGING['loggers']['awx.main.commands.run_callback_receiver']['handlers'] = ['console']
LOGGING['loggers']['awx.main.tasks']['handlers'] = ['console', 'external_logger']
LOGGING['loggers']['awx.main.scheduler']['handlers'] = ['console', 'external_logger']
LOGGING['loggers']['django_auth_ldap']['handlers'] = ['console']
LOGGING['loggers']['social']['handlers'] = ['console']
LOGGING['loggers']['system_tracking_migrations']['handlers'] = ['console']
LOGGING['loggers']['rbac_migrations']['handlers'] = ['console']
LOGGING['loggers']['awx.isolated.manager.playbooks']['handlers'] = ['console']
LOGGING['handlers']['callback_receiver'] = {'class': 'logging.NullHandler'}
LOGGING['handlers']['task_system'] = {'class': 'logging.NullHandler'}
LOGGING['handlers']['tower_warnings'] = {'class': 'logging.NullHandler'}
LOGGING['handlers']['rbac_migrations'] = {'class': 'logging.NullHandler'}
LOGGING['handlers']['system_tracking_migrations'] = {'class': 'logging.NullHandler'}
LOGGING['handlers']['management_playbooks'] = {'class': 'logging.NullHandler'}
DATABASES = {
'default': {
'ATOMIC_REQUESTS': True,
'ENGINE': 'awx.main.db.profiled_pg',
'NAME': os.getenv("DATABASE_NAME", None),
'USER': os.getenv("DATABASE_USER", None),
'PASSWORD': os.getenv("DATABASE_PASSWORD", None),
'HOST': os.getenv("DATABASE_HOST", None),
'PORT': os.getenv("DATABASE_PORT", None),
}
}
if os.getenv("DATABASE_SSLMODE", False):
DATABASES['default']['OPTIONS'] = {'sslmode': os.getenv("DATABASE_SSLMODE")}
USE_X_FORWARDED_HOST = True
USE_X_FORWARDED_PORT = True

View File

@ -1,19 +0,0 @@
---
- name: Create .build directory
file:
path: "{{ dockerfile_dest }}/{{ template_dest }}"
state: directory
- name: Render supervisor configs
template:
src: "{{ item }}.j2"
dest: "{{ dockerfile_dest }}/{{ template_dest }}/{{ item }}"
with_items:
- "supervisor.conf"
- "supervisor_task.conf"
- name: Render Dockerfile
template:
src: Dockerfile.j2
dest: "{{ dockerfile_dest }}/{{ dockerfile_name }}"

View File

@ -1,267 +0,0 @@
### This file is generated from
### installer/roles/dockerfile/templates/Dockerfile.j2
###
### DO NOT EDIT
###
# Locations - set globally to be used across stages
ARG COLLECTION_BASE="/var/lib/awx/vendor/awx_ansible_collections"
# Build container
FROM centos:8 as builder
ENV LANG en_US.UTF-8
ENV LANGUAGE en_US:en
ENV LC_ALL en_US.UTF-8
USER root
# Install build dependencies
RUN dnf -y module enable 'postgresql:12'
RUN dnf -y update && \
dnf -y install epel-release 'dnf-command(config-manager)' && \
dnf module -y enable 'postgresql:12' && \
dnf config-manager --set-enabled powertools && \
dnf -y install ansible \
gcc \
gcc-c++ \
git-core \
glibc-langpack-en \
libcurl-devel \
libffi-devel \
libtool-ltdl-devel \
make \
nodejs \
nss \
openldap-devel \
patch \
@postgresql:12 \
postgresql-devel \
python3-devel \
python3-pip \
python3-psycopg2 \
python3-setuptools \
swig \
unzip \
xmlsec1-devel \
xmlsec1-openssl-devel
RUN python3 -m ensurepip && pip3 install "virtualenv < 20"
# Install & build requirements
ADD Makefile /tmp/Makefile
RUN mkdir /tmp/requirements
ADD requirements/requirements_ansible.txt \
requirements/requirements_ansible_uninstall.txt \
requirements/requirements_ansible_git.txt \
requirements/requirements.txt \
requirements/requirements_tower_uninstall.txt \
requirements/requirements_git.txt \
requirements/collections_requirements.yml \
/tmp/requirements/
RUN cd /tmp && make requirements_awx requirements_ansible_py3
RUN cd /tmp && make requirements_collections
{% if (build_dev|bool) or (kube_dev|bool) %}
ADD requirements/requirements_dev.txt /tmp/requirements
RUN cd /tmp && make requirements_awx_dev requirements_ansible_dev
{% else %}
# Use the distro provided npm to bootstrap our required version of node
RUN npm install -g n && n 14.15.1 && dnf remove -y nodejs
# Copy source into builder, build sdist, install it into awx venv
COPY . /tmp/src/
WORKDIR /tmp/src/
RUN make sdist && \
/var/lib/awx/venv/awx/bin/pip install dist/awx-$(cat VERSION).tar.gz
{% endif %}
# Final container(s)
FROM centos:8
ARG COLLECTION_BASE
ENV LANG en_US.UTF-8
ENV LANGUAGE en_US:en
ENV LC_ALL en_US.UTF-8
USER root
# Install runtime requirements
RUN dnf -y module enable 'postgresql:12'
RUN dnf -y update && \
dnf -y install epel-release 'dnf-command(config-manager)' && \
dnf module -y enable 'postgresql:12' && \
dnf config-manager --set-enabled powertools && \
dnf -y install acl \
ansible \
bubblewrap \
git-core \
git-lfs \
glibc-langpack-en \
krb5-workstation \
libcgroup-tools \
nginx \
@postgresql:12 \
python3-devel \
python3-libselinux \
python3-pip \
python3-psycopg2 \
python3-setuptools \
rsync \
subversion \
sudo \
vim-minimal \
which \
unzip \
xmlsec1-openssl && \
dnf -y install centos-release-stream && dnf -y install "rsyslog >= 8.1911.0" && dnf -y remove centos-release-stream && \
dnf -y clean all
# Install kubectl
RUN curl -L -o /usr/bin/kubectl https://storage.googleapis.com/kubernetes-release/release/v1.17.8/bin/linux/{{ kubectl_architecture | default('amd64') }}/kubectl && \
chmod a+x /usr/bin/kubectl
RUN curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3 && \
chmod 700 get_helm.sh && \
./get_helm.sh
# Install tini
RUN curl -L -o /usr/bin/tini https://github.com/krallin/tini/releases/download/v0.19.0/tini-{{ tini_architecture | default('amd64') }} && \
chmod +x /usr/bin/tini
RUN python3 -m ensurepip && pip3 install "virtualenv < 20" supervisor {% if build_dev|bool %}flake8{% endif %}
RUN rm -rf /root/.cache && rm -rf /tmp/*
# Install OpenShift CLI
RUN cd /usr/local/bin && \
curl -L https://github.com/openshift/origin/releases/download/v3.11.0/openshift-origin-client-tools-v3.11.0-0cbc58b-linux-64bit.tar.gz | \
tar -xz --strip-components=1 --wildcards --no-anchored 'oc'
{% if (build_dev|bool) or (kube_dev|bool) %}
# Install development/test requirements
RUN dnf -y install \
gdb \
gtk3 \
gettext \
alsa-lib \
libX11-xcb \
libXScrnSaver \
strace \
vim \
nmap-ncat \
nodejs \
nss \
make \
patch \
socat \
tmux \
wget \
diffutils \
unzip && \
npm install -g n && n 14.15.1 && dnf remove -y nodejs
# This package randomly fails to download.
# It is nice to have in the dev env, but not necessary.
# Add it back to the list above if the repo ever straighten up.
RUN dnf --enablerepo=debuginfo -y install python3-debuginfo || :
{% endif %}
# Copy app from builder
COPY --from=builder /var/lib/awx /var/lib/awx
RUN ln -s /var/lib/awx/venv/awx/bin/awx-manage /usr/bin/awx-manage
{%if build_dev|bool %}
RUN openssl req -nodes -newkey rsa:2048 -keyout /etc/nginx/nginx.key -out /etc/nginx/nginx.csr \
-subj "/C=US/ST=North Carolina/L=Durham/O=Ansible/OU=AWX Development/CN=awx.localhost" && \
openssl x509 -req -days 365 -in /etc/nginx/nginx.csr -signkey /etc/nginx/nginx.key -out /etc/nginx/nginx.crt && \
chmod 640 /etc/nginx/nginx.{csr,key,crt}
{% endif %}
# Create default awx rsyslog config
ADD installer/roles/dockerfile/files/rsyslog.conf /var/lib/awx/rsyslog/rsyslog.conf
## File mappings
{% if build_dev|bool %}
ADD tools/docker-compose/launch_awx.sh /usr/bin/launch_awx.sh
ADD tools/docker-compose/nginx.conf /etc/nginx/nginx.conf
ADD tools/docker-compose/nginx.vh.default.conf /etc/nginx/conf.d/nginx.vh.default.conf
ADD tools/docker-compose/start_tests.sh /start_tests.sh
ADD tools/docker-compose/bootstrap_development.sh /usr/bin/bootstrap_development.sh
ADD tools/docker-compose/entrypoint.sh /entrypoint.sh
{% else %}
ADD installer/roles/dockerfile/files/launch_awx.sh /usr/bin/launch_awx.sh
ADD installer/roles/dockerfile/files/launch_awx_task.sh /usr/bin/launch_awx_task.sh
ADD installer/roles/dockerfile/files/settings.py /etc/tower/settings.py
ADD {{ template_dest }}/supervisor.conf /etc/supervisord.conf
ADD {{ template_dest }}/supervisor_task.conf /etc/supervisord_task.conf
ADD tools/scripts/config-watcher /usr/bin/config-watcher
{% endif %}
{% if (build_dev|bool) or (kube_dev|bool) %}
ADD tools/docker-compose/awx.egg-link /tmp/awx.egg-link
ADD tools/docker-compose/awx-manage /usr/local/bin/awx-manage
ADD tools/scripts/awx-python /usr/bin/awx-python
{% endif %}
# Pre-create things we need to access
RUN for dir in \
/var/lib/awx \
/var/lib/awx/rsyslog \
/var/lib/awx/rsyslog/conf.d \
/var/run/awx-rsyslog \
/var/log/tower \
/var/log/nginx \
/var/lib/postgresql \
/var/run/supervisor \
/var/lib/nginx ; \
do mkdir -m 0775 -p $dir ; chmod g+rw $dir ; chgrp root $dir ; done && \
for file in \
/etc/passwd ; \
do touch $file ; chmod g+rw $file ; chgrp root $file ; done
# Adjust any remaining permissions
RUN chmod u+s /usr/bin/bwrap ; \
chgrp -R root ${COLLECTION_BASE} ; \
chmod -R g+rw ${COLLECTION_BASE}
{% if (build_dev|bool) or (kube_dev|bool) %}
RUN for dir in \
/var/lib/awx/venv \
/var/lib/awx/venv/awx/lib/python3.6 \
/var/lib/awx/projects \
/var/lib/awx/rsyslog \
/var/run/awx-rsyslog \
/.ansible \
/var/lib/awx/vendor ; \
do mkdir -m 0775 -p $dir ; chmod g+rw $dir ; chgrp root $dir ; done && \
for file in \
/var/run/nginx.pid \
/var/lib/awx/venv/awx/lib/python3.6/site-packages/awx.egg-link ; \
do touch $file ; chmod g+rw $file ; done
{% endif %}
{% if not build_dev|bool %}
RUN ln -sf /dev/stdout /var/log/nginx/access.log && \
ln -sf /dev/stderr /var/log/nginx/error.log
{% endif %}
ENV HOME="/var/lib/awx"
ENV PATH="/usr/pgsql-10/bin:${PATH}"
{% if build_dev|bool %}
EXPOSE 8043 8013 8080 22
ENTRYPOINT ["/entrypoint.sh"]
CMD ["/bin/bash"]
{% else %}
USER 1000
EXPOSE 8052
ENTRYPOINT ["/usr/bin/tini", "--"]
CMD /usr/bin/launch_awx.sh
VOLUME /var/lib/nginx
{% endif %}

View File

@ -1,117 +0,0 @@
[supervisord]
nodaemon = True
umask = 022
logfile = /dev/stdout
logfile_maxbytes = 0
pidfile = /var/run/supervisor/supervisor.web.pid
[program:nginx]
{% if kube_dev | bool %}
command = make nginx
directory = /awx_devel
{% else %}
command = nginx -g "daemon off;"
{% endif %}
autostart = true
autorestart = true
stopwaitsecs = 5
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
stderr_logfile=/dev/stderr
stderr_logfile_maxbytes=0
[program:uwsgi]
{% if kube_dev | bool %}
command = make uwsgi
directory = /awx_devel
environment =
UWSGI_DEV_RELOAD_COMMAND='supervisorctl -c /etc/supervisord_task.conf restart all; supervisorctl restart tower-processes:daphne tower-processes:wsbroadcast'
{% else %}
command = /var/lib/awx/venv/awx/bin/uwsgi --socket 127.0.0.1:8050 --module=awx.wsgi:application --vacuum --processes=5 --harakiri=120 --no-orphans --master --max-requests=1000 --master-fifo=/var/lib/awx/awxfifo --lazy-apps -b 32768
directory = /var/lib/awx
{% endif %}
autostart = true
autorestart = true
stopwaitsecs = 15
stopasgroup=true
killasgroup=true
stopsignal=KILL
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
stderr_logfile=/dev/stderr
stderr_logfile_maxbytes=0
[program:daphne]
{% if kube_dev | bool %}
command = make daphne
directory = /awx_devel
{% else %}
command = /var/lib/awx/venv/awx/bin/daphne -b 127.0.0.1 -p 8051 --websocket_timeout -1 awx.asgi:channel_layer
directory = /var/lib/awx
{% endif %}
autostart = true
stopsignal=KILL
autorestart = true
stopwaitsecs = 5
stopasgroup=true
killasgroup=true
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
stderr_logfile=/dev/stderr
stderr_logfile_maxbytes=0
[program:wsbroadcast]
{% if kube_dev | bool %}
command = make wsbroadcast
directory = /awx_devel
{% else %}
command = awx-manage run_wsbroadcast
directory = /var/lib/awx
{% endif %}
autostart = true
autorestart = true
stopwaitsecs = 5
stopasgroup=true
killasgroup=true
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
stderr_logfile=/dev/stderr
stderr_logfile_maxbytes=0
[program:awx-rsyslogd]
command = rsyslogd -n -i /var/run/awx-rsyslog/rsyslog.pid -f /var/lib/awx/rsyslog/rsyslog.conf
autostart = true
autorestart = true
startretries = 10
stopwaitsecs = 5
stopsignal=TERM
stopasgroup=true
killasgroup=true
redirect_stderr=true
stdout_logfile=/dev/stderr
stdout_logfile_maxbytes=0
[group:tower-processes]
programs=nginx,uwsgi,daphne,wsbroadcast,awx-rsyslogd
priority=5
# TODO: Exit Handler
[eventlistener:awx-config-watcher]
command=/usr/bin/config-watcher
stderr_logfile=/dev/stdout
stderr_logfile_maxbytes=0
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
events=TICK_60
priority=0
[unix_http_server]
file=/var/run/supervisor/supervisor.web.sock
[supervisorctl]
serverurl=unix:///var/run/supervisor/supervisor.web.sock ; use a unix:// URL for a unix socket
[rpcinterface:supervisor]
supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface

View File

@ -1,62 +0,0 @@
[supervisord]
nodaemon = True
umask = 022
logfile = /dev/stdout
logfile_maxbytes = 0
pidfile = /var/run/supervisor/supervisor.pid
[program:dispatcher]
{% if kube_dev | bool %}
command = make dispatcher
directory = /awx_devel
{% else %}
command = awx-manage run_dispatcher
directory = /var/lib/awx
{% endif %}
autostart = true
autorestart = true
stopwaitsecs = 5
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
stderr_logfile=/dev/stderr
stderr_logfile_maxbytes=0
[program:callback-receiver]
{% if kube_dev | bool %}
command = make receiver
directory = /awx_devel
{% else %}
command = awx-manage run_callback_receiver
directory = /var/lib/awx
{% endif %}
autostart = true
autorestart = true
stopwaitsecs = 5
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
stderr_logfile=/dev/stderr
stderr_logfile_maxbytes=0
[group:tower-processes]
programs=dispatcher,callback-receiver
priority=5
# TODO: Exit Handler
[eventlistener:awx-config-watcher]
command=/usr/bin/config-watcher
stderr_logfile=/dev/stdout
stderr_logfile_maxbytes=0
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
events=TICK_60
priority=0
[unix_http_server]
file=/var/run/supervisor/supervisor.sock
[supervisorctl]
serverurl=unix:///var/run/supervisor/supervisor.sock ; use a unix:// URL for a unix socket
[rpcinterface:supervisor]
supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface

View File

@ -1,6 +0,0 @@
---
create_preload_data: true
# Helper vars to construct the proper download URL for the current architecture
tini_architecture: '{{ { "x86_64": "amd64", "aarch64": "arm64", "armv7": "arm" }[ansible_facts.architecture] }}'
kubectl_architecture: '{{ { "x86_64": "amd64", "aarch64": "arm64", "armv7": "arm" }[ansible_facts.architecture] }}'

View File

@ -1,31 +0,0 @@
---
- name: Set global version if not provided
set_fact:
awx_version: "{{ lookup('file', playbook_dir + '/../VERSION') }}"
when: awx_version is not defined
- name: Verify awx-logos directory exists for official install
stat:
path: "../../awx-logos"
register: logosdir
failed_when: logosdir.stat.isdir is not defined or not logosdir.stat.isdir
when: awx_official|default(false)|bool
- name: Copy logos for inclusion in sdist
copy:
src: "../../awx-logos/awx/ui/client/assets/"
dest: "../awx/ui_next/public/static/media/"
when: awx_official|default(false)|bool
- name: Set awx image name
set_fact:
awx_image: "{{ awx_image|default('awx') }}"
# Calling Docker directly because docker-py doesn't support BuildKit
- name: Build AWX image
command: docker build -t {{ awx_image }}:{{ awx_version }} ..
- name: Tag awx images as latest
command: "docker tag {{ item }}:{{ awx_version }} {{ item }}:latest"
with_items:
- "{{ awx_image }}"

View File

@ -1,33 +0,0 @@
---
- name: Authenticate with Docker registry if registry password given
docker_login:
registry: "{{ docker_registry }}"
username: "{{ docker_registry_username }}"
password: "{{ docker_registry_password }}"
reauthorize: true
when: docker_registry is defined and docker_registry_password is defined
- name: Remove local images to ensure proper push behavior
block:
- name: Remove awx image
docker_image:
name: "{{ docker_registry }}/{{ docker_registry_repository }}/{{ awx_image }}"
tag: "{{ awx_version }}"
state: absent
- name: Tag and Push Container Images
block:
- name: Tag and push awx image to registry
docker_image:
name: "{{ awx_image }}"
repository: "{{ docker_registry }}/{{ docker_registry_repository }}/{{ awx_image }}"
tag: "{{ item }}"
push: true
with_items:
- "latest"
- "{{ awx_version }}"
- name: Set full image path for Registry
set_fact:
awx_docker_actual_image: >-
{{ docker_registry }}/{{ docker_registry_repository }}/{{ awx_image }}:{{ awx_version }}

View File

@ -4,18 +4,18 @@
Here are the main make targets:
* `docker-compose-build` - used for building the development image, which is used by both `docker-compose`
* `docker-compose` - Make target for development, passes awx_devel image and tag.
* `docker-compose-build` - used for building the development image, which is used by the `docker-compose` target
* `docker-compose` - make target for development, passes awx_devel image and tag
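As a hedged usage sketch, building the development image and then bringing up the environment with these targets might look like:
```bash
# Build the awx_devel image, then start the development environment
$ make docker-compose-build
$ make docker-compose
```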
Notable files:
* `tools/docker-compose/inventory` file - used to configure the local AWX development deployment.
* `migrate.yml` - playbook for migrating data from Local Docker to the Development Environment.
* `tools/docker-compose/inventory` file - used to configure the local AWX development deployment
* `migrate.yml` - playbook for migrating data from Local Docker to the Development Environment
### Prerequisites
- [Docker](https://docs.docker.com/engine/installation/) on the host where AWX will be deployed. After installing Docker, the Docker service must be started (depending on your OS, you may have to add the local user that uses Docker to the ``docker`` group; refer to the documentation for details)
- [docker-compose](https://pypi.org/project/docker-compose/) Python module.
+ This also installs the `docker` Python module, which is incompatible with `docker-py`. If you have previously installed `docker-py`, please uninstall it.
+ This also installs the `docker` Python module, which is incompatible with [`docker-py`](https://pypi.org/project/docker-py/). If you have previously installed `docker-py`, please uninstall it.
- [Docker Compose](https://docs.docker.com/compose/install/).
## Configuration

View File

@ -4,6 +4,7 @@
- name: Remove awx_postgres to ensure consistent start state
shell: |
docker rm -f awx_postgres
ignore_errors: true
- name: Start Local Docker database container
docker_compose:
@ -13,6 +14,10 @@
state: present
recreate: always
- name: Wait for postgres to initialize
wait_for:
timeout: 3
- name: Database dump to local filesystem
shell: |
docker-compose -f {{ old_docker_compose_dir }}/docker-compose.yml exec -T postgres pg_dumpall -U {{ pg_username }} > awx_dump.sql

View File

@ -1,31 +0,0 @@
---
version: '2'
services:
# Primary Tower Development Container link
awx:
links:
- hashivault
- conjur
hashivault:
image: vault
container_name: tools_hashivault_1
ports:
- '8200:8200'
cap_add:
- IPC_LOCK
environment:
VAULT_DEV_ROOT_TOKEN_ID: 'vaultdev'
conjur:
image: cyberark/conjur
container_name: tools_conjur_1
command: server -p 8300
environment:
DATABASE_URL: postgres://awx@postgres/postgres
CONJUR_DATA_KEY: 'dveUwOI/71x9BPJkIgvQRRBF3SdASc+HP4CUGL7TKvM='
depends_on:
- postgres
links:
- postgres
ports:
- "8300:8300"