mirror of
https://github.com/ansible/awx.git
synced 2026-02-19 20:20:06 -03:30
Merge branch 'devel' into patch-1
@@ -241,6 +241,11 @@ $ make docker-compose
- [Start a shell](#start-a-shell)
- [Start AWX from the container shell](#start-awx-from-the-container-shell)
- [Using Logstash](./docs/logstash.md)
- [Start a Cluster](#start-a-cluster)
- [Start with Minikube](#start-with-minikube)
- [Keycloak Integration](#keycloak-integration)
- [OpenLDAP Integration](#openldap-integration)
- [Splunk Integration](#splunk-integration)
### Start a Shell

@@ -311,3 +316,163 @@ If you want to clean all things once you are done, you can do:

```bash
(host)$ make docker-compose-container-group-clean
```
### Keycloak Integration

Keycloak is a SAML provider and can be used to test AWX social auth. This section describes how to build a reference Keycloak instance and plumb it into AWX for testing purposes.

First, be sure that you have the awx.awx collection installed by running `make install_collection`.
Next, make sure your containers are running via `make docker-compose`.

Note: The following instructions assume you are using the built-in PostgreSQL database container. If you are not using the internal database, you can still use this guide as a reference, updating the database fields as required for your connection.

We are now ready to run two one-time commands to build and pre-populate the Keycloak database.

The first one-time command creates a Keycloak database in your PostgreSQL instance:

```bash
docker exec tools_postgres_1 /usr/bin/psql -U awx --command "create database keycloak with encoding 'UTF8';"
```

After running this command the following message should appear, and you should be returned to your prompt:

```bash
CREATE DATABASE
```

The second one-time command starts a Keycloak container to build the admin user; be sure that pg_username and pg_password are correct for your installation. Note: the command below sets the username to admin with a password of admin; you can change this if you want. Also, if you are using your own container or have changed the pg_username, please update the command accordingly.

```bash
PG_PASSWORD=`cat tools/docker-compose/_sources/secrets/pg_password.yml | cut -f 2 -d \'`
docker run --rm -e KEYCLOAK_USER=admin -e KEYCLOAK_PASSWORD=admin --net=_sources_default \
    -e DB_VENDOR=postgres -e DB_ADDR=postgres -e DB_DATABASE=keycloak -e DB_USER=awx -e DB_PASSWORD=${PG_PASSWORD} \
    quay.io/keycloak/keycloak:15.0.2
```
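
The `PG_PASSWORD=` line above pulls the password out of the generated secrets file by splitting the line on single quotes. A quick standalone illustration against a throwaway file, assuming the same `pg_password: 'value'` format:

```bash
# Hypothetical stand-in for tools/docker-compose/_sources/secrets/pg_password.yml
secrets_file=$(mktemp)
printf "pg_password: 'example_secret'\n" > "$secrets_file"

# Field 2, when the line is split on single quotes, is the bare password value
PG_PASSWORD=$(cut -f 2 -d \' "$secrets_file")
echo "$PG_PASSWORD"   # prints: example_secret

rm -f "$secrets_file"
```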

Once you see a message like `WFLYSRV0051: Admin console listening on http://127.0.0.1:9990` you can stop the container.

Now that the one-time setup is done, any time you want to run a Keycloak instance alongside AWX you can start docker-compose with the KEYCLOAK option:

```bash
KEYCLOAK=true make docker-compose
```

Go ahead and stop your existing docker-compose run and restart with Keycloak before proceeding to the next steps.

Once the containers come up, a new port (8443) should be exposed and the Keycloak interface should be running on that port. Connect to it through a URL like `https://localhost:8443` to confirm that Keycloak has started. If you want to log in and look at Keycloak itself, select the "Administration console" link and log into the UI with the username/password set in the previous `docker run` command. For more information about Keycloak and links to their documentation, see their project at https://github.com/keycloak/keycloak.

Now we are ready to configure and plumb Keycloak into AWX. To do this we have provided a playbook which will:
* Create a certificate for data exchange between Keycloak and AWX.
* Create a realm in Keycloak with a client for AWX and 3 users.
* Back up and configure the SAML adapter in AWX. NOTE: the private key of any existing SAML adapters cannot be backed up through the API; you need a DB backup to recover it.

Before we can run the playbook we need to understand that SAML works by sending redirects between AWX and Keycloak through the browser. Because of this, we have to tell both AWX and Keycloak how they will construct the redirect URLs. On the Keycloak side this is done within the realm configuration, and on the AWX side it is done through the SAML settings. The playbook requires a variable called `container_reference` to be set; it needs to be an address through which your browser can reach the running containers. Here are some examples of how to choose a proper container_reference:

* If you develop on a Mac which runs a Fedora VM, AWX runs inside that VM, and the browser you use to access AWX runs on the Mac, then the VM has its own IP that may be mapped to a name like `tower.home.net`. In this scenario your container_reference could be either the IP of the VM or the tower.home.net friendly name.
* If you are on a Fedora workstation running AWX and also using a browser on your workstation, you could use localhost, your workstation's IP, or its hostname as the container_reference.
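
Whichever value you pick ends up embedded in the redirect and metadata URLs the two sides exchange. As a rough illustration, pre-Quarkus Keycloak distributions such as 15.0.2 typically serve a realm's SAML IdP metadata at a path like the one below; the realm name `awx` and the exact path are assumptions for illustration, not something this guide guarantees:

```bash
# Hypothetical container_reference value; substitute your own
container_reference=tower.home.net

# Where a realm's SAML descriptor would typically live on this Keycloak version
saml_metadata_url="https://${container_reference}:8443/auth/realms/awx/protocol/saml/descriptor"
echo "$saml_metadata_url"
```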

In addition to container_reference, there are some additional variables which you can override if you need or choose to do so. Here are their names and default values:
```yaml
keycloak_user: admin
keycloak_pass: admin
cert_subject: "/C=US/ST=NC/L=Durham/O=awx/CN="
```

* keycloak_user and keycloak_pass need to change if you modified the user when starting the initial container above.
* cert_subject will be the subject line of the certificate shared between AWX and Keycloak; you can change this if you like or just use the defaults.

To override any of the variables above, add more `-e` arguments to the playbook run below. For example, if you simply need to change the `keycloak_pass`, add the argument `-e keycloak_pass=my_secret_pass` to the next command.

In addition, you may need to override the username or password used to get into your AWX instance. We log into AWX in order to read and write the SAML settings. Because we are using the awx.awx collection, this can be done in several ways; the easiest is to set environment variables such as `CONTROLLER_USERNAME`. See the awx.awx documentation for more information on setting environment variables. The example below specifies a username/password for authentication.

Now that we have all of our variables covered, we can run the playbook like:
```bash
export CONTROLLER_USERNAME=<your username>
export CONTROLLER_PASSWORD=<your password>
ansible-playbook tools/docker-compose/ansible/plumb_keycloak.yml -e container_reference=<your container_reference here>
```

Once the playbook is done running, SAML should be set up in your development environment. The realm has three users with the following username/passwords:
1. awx_unpriv:unpriv123
2. awx_admin:admin123
3. awx_auditor:audit123

The first account is a normal user. The second account has the is_superuser attribute set in Keycloak, so it will be a superuser in AWX. The third account has the is_system_auditor attribute in Keycloak, so it will be a system auditor in AWX. To log in with one of these Keycloak users, go to the AWX login screen and click the small "Sign In With SAML Keycloak" button at the bottom of the login box.

### OpenLDAP Integration

OpenLDAP is an LDAP provider that can be used to test AWX with LDAP integration. This section describes how to build a reference OpenLDAP instance and plumb it into your AWX for testing purposes.

First, be sure that you have the awx.awx collection installed by running `make install_collection`.

Any time you want to run an OpenLDAP instance alongside AWX, you can start docker-compose with the LDAP option:

```bash
LDAP=true make docker-compose
```

Once the containers come up, two new ports (389, 636) should be exposed and the LDAP server should be running on those ports. The first port (389) is non-SSL and the second port (636) is SSL-enabled.

Now we are ready to configure and plumb OpenLDAP into AWX. To do this we have provided a playbook which will:
* Back up and configure the LDAP adapter in AWX. NOTE: this will back up your existing settings, but the password fields cannot be backed up through the API; you need a DB backup to recover them.

Note: The default configuration will use the non-TLS connection. If you want to use the TLS configuration, you will need to work through TLS negotiation issues because the LDAP server is using a self-signed certificate.

Before we can run the playbook we need to understand that LDAP is contacted from within the AWX container. Because of this, we have to tell AWX how to route traffic to the LDAP container through the `LDAP Server URI` settings. The playbook requires a variable called container_reference to be set; it needs to be an address through which your AWX container can reach the LDAP container. See the SAML section for some examples of how to select a `container_reference`.
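
For example, with a `container_reference` of `tower.home.net` the playbook's template renders the server URI setting roughly like this (a sketch of one setting, not the full rendered file):

```json
{
    "AUTH_LDAP_1_SERVER_URI": "ldap://tower.home.net:389"
}
```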

Once you have your container reference, you can run the playbook like:
```bash
export CONTROLLER_USERNAME=<your username>
export CONTROLLER_PASSWORD=<your password>
ansible-playbook tools/docker-compose/ansible/plumb_ldap.yml -e container_reference=<your container_reference here>
```

Once the playbook is done running, LDAP should be set up in your development environment. The directory has four users with the following username/passwords:
1. awx_ldap_unpriv:unpriv123
2. awx_ldap_admin:admin123
3. awx_ldap_auditor:audit123
4. awx_ldap_org_admin:orgadmin123

The first account is a normal user. The second account will be a superuser in AWX. The third account will be a system auditor in AWX. The fourth account is an org admin. All users belong to an org called "LDAP Organization". To log in with one of these users, go to the AWX login screen and enter the username/password.

### Splunk Integration

Splunk is a log aggregation tool that can be used to test AWX with external logging integration. This section describes how to build a reference Splunk instance and plumb it into your AWX for testing purposes.

First, be sure that you have the awx.awx collection installed by running `make install_collection`.

Next, install the splunk.es collection by running `ansible-galaxy collection install splunk.es`.

Any time you want to run a Splunk instance alongside AWX, you can start docker-compose with the SPLUNK option:

```bash
SPLUNK=true make docker-compose
```

Once the containers come up, three new ports (8000, 8089 and 9199) should be exposed, and the Splunk server should be running on some of those ports (the 9199 listener will be created later by the plumbing playbook). The first port (8000) is the non-SSL admin port; you can log into Splunk with the credentials admin/splunk_admin at a URL like http://<server>:8000/ (referenced below). Port 8089 is the API port that the Ansible modules will use to connect to and configure Splunk. Port 9199 will be used to construct a TCP listener in Splunk that AWX will forward messages to.

Once the containers are up, we are ready to configure and plumb Splunk into AWX. To do this we have provided a playbook which will:
* Back up and configure the External Logging adapter in AWX. NOTE: this will back up your existing settings, but the password fields cannot be backed up through the API; you need a DB backup to recover them.
* Create a TCP port in Splunk for log forwarding.

For routing traffic between AWX and Splunk we will use the internal docker-compose network. The `Logging Aggregator` will be configured using the internal network machine name `splunk`.
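
In AWX's logging settings this corresponds to values along these lines (these are AWX's standard external-logging setting names, shown as a hand-written sketch rather than the playbook's actual template):

```json
{
    "LOG_AGGREGATOR_ENABLED": true,
    "LOG_AGGREGATOR_TYPE": "splunk",
    "LOG_AGGREGATOR_HOST": "splunk",
    "LOG_AGGREGATOR_PORT": 9199,
    "LOG_AGGREGATOR_PROTOCOL": "tcp"
}
```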

Once you have the collections installed (from above), you can run the playbook like:
```bash
export CONTROLLER_USERNAME=<your username>
export CONTROLLER_PASSWORD=<your password>
ansible-playbook tools/docker-compose/ansible/plumb_splunk.yml
```

Once the playbook is done running, Splunk should be set up in your development environment. You can log into the admin console (see above for the username/password) and click on "Searching and Reporting" in the left-hand navigation. In the search box enter `source="http:tower_logging_collections"` and click search.

### Prometheus and Grafana integration

Prometheus is a metrics-collecting tool, and AWX serves Prometheus-formatted data at the `api/v2/metrics` endpoint.

1. Change the `username` and `password` in `tools/prometheus/prometheus.yml`. You can also change the scrape interval.
2. (optional) If you are in a clustered environment, you can change the target to `haproxy:8043` so that incoming Prometheus requests go through the load balancer. Leaving it set to `awx1` also works.
3. Run `make prometheus`.
4. Navigate to `http://localhost:9090/targets` and check that the metrics endpoint State is Up.
5. Click the Graph tab and start typing a metric name, or use the Open metrics explorer button (next to the `Execute` button) to find a metric to display.
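
The `prometheus.yml` referenced in step 1 follows the standard Prometheus scrape-config shape. A minimal sketch, assuming the dev environment's self-signed certificate and placeholder credentials (all values here are assumptions to adjust):

```yaml
# Assumed minimal scrape config for the AWX metrics endpoint
scrape_configs:
  - job_name: awx
    metrics_path: /api/v2/metrics
    scheme: https
    tls_config:
      insecure_skip_verify: true   # the dev environment uses a self-signed cert
    basic_auth:
      username: admin
      password: password
    static_configs:
      - targets:
          - awx1:8043   # or haproxy:8043 in a clustered environment
```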

Prometheus can display basic graphs of your data, but it is minimal. Prometheus is often paired with an app like Grafana for better visualization features.

1. Run `make grafana` to start a local Grafana container.
2. Navigate to `http://localhost:3001` and sign in, using `admin` for both the username and password.
3. Now you can create a dashboard and add panels for whichever metrics you like.
81
tools/docker-compose/ansible/plumb_keycloak.yml
Normal file
@@ -0,0 +1,81 @@
---
- name: Plumb a keycloak instance
  hosts: localhost
  connection: local
  gather_facts: False
  vars:
    private_key_file: ../_sources/keycloak.key
    public_key_file: ../_sources/keycloak.cert
    awx_host: "https://localhost:8043"
    keycloak_realm_template: ../_sources/keycloak.awx.realm.json
    keycloak_user: admin
    keycloak_pass: admin
    cert_subject: "/C=US/ST=NC/L=Durham/O=awx/CN="
  tasks:
    - name: Generate certificates for keycloak
      command: 'openssl req -new -x509 -days 365 -nodes -out {{ public_key_file }} -keyout {{ private_key_file }} -subj "{{ cert_subject }}"'
      args:
        creates: "{{ public_key_file }}"

    - name: Load certs, existing and new SAML settings
      set_fact:
        private_key: "{{ private_key_content }}"
        public_key: "{{ public_key_content }}"
        public_key_trimmed: "{{ public_key_content | regex_replace('-----BEGIN CERTIFICATE-----\\\\n', '') | regex_replace('\\\\n-----END CERTIFICATE-----', '') }}"
        existing_saml: "{{ lookup('awx.awx.controller_api', 'settings/saml', host=awx_host, verify_ssl=false) }}"
        new_saml: "{{ lookup('template', 'saml_settings.json.j2') }}"
      vars:
        # We add the extra \\ in here so that when jinja is templating out the files we end up with \n in the strings.
        public_key_content: "{{ lookup('file', public_key_file) | regex_replace('\n', '\\\\n') }}"
        private_key_content: "{{ lookup('file', private_key_file) | regex_replace('\n', '\\\\n') }}"

    - name: Display existing SAML configuration
      debug:
        msg:
          - "Here is your existing SAML configuration for reference:"
          - "{{ existing_saml }}"

    - pause:
        prompt: "Continuing to run this will replace your existing saml settings (displayed above). They will all be captured except for your private key. Be sure that is backed up before continuing"

    - name: Write out the existing content
      copy:
        dest: "../_sources/existing_saml_adapter_settings.json"
        content: "{{ existing_saml }}"

    - name: Configure AWX SAML adapter
      awx.awx.settings:
        settings: "{{ new_saml }}"
        controller_host: "{{ awx_host }}"
        validate_certs: False

    - name: Get a keycloak token
      uri:
        url: "https://localhost:8443/auth/realms/master/protocol/openid-connect/token"
        method: POST
        body_format: form-urlencoded
        body:
          client_id: "admin-cli"
          username: "{{ keycloak_user }}"
          password: "{{ keycloak_pass }}"
          grant_type: "password"
        validate_certs: False
      register: keycloak_response

    - name: Template the AWX realm
      template:
        src: keycloak.awx.realm.json.j2
        dest: "{{ keycloak_realm_template }}"

    - name: Create the AWX realm
      uri:
        url: "https://localhost:8443/auth/admin/realms"
        method: POST
        body_format: json
        body: "{{ lookup('file', keycloak_realm_template) }}"
        validate_certs: False
        headers:
          Authorization: "Bearer {{ keycloak_response.json.access_token }}"
        status_code: 201
      register: realm_creation
      changed_when: True
32
tools/docker-compose/ansible/plumb_ldap.yml
Normal file
@@ -0,0 +1,32 @@
---
- name: Plumb an ldap instance
  hosts: localhost
  connection: local
  gather_facts: False
  vars:
    awx_host: "https://localhost:8043"
  tasks:
    - name: Load existing and new LDAP settings
      set_fact:
        existing_ldap: "{{ lookup('awx.awx.controller_api', 'settings/ldap', host=awx_host, verify_ssl=false) }}"
        new_ldap: "{{ lookup('template', 'ldap_settings.json.j2') }}"

    - name: Display existing LDAP configuration
      debug:
        msg:
          - "Here is your existing LDAP configuration for reference:"
          - "{{ existing_ldap }}"

    - pause:
        prompt: "Continuing to run this will replace your existing ldap settings (displayed above). They will all be captured. Be sure that is backed up before continuing"

    - name: Write out the existing content
      copy:
        dest: "../_sources/existing_ldap_adapter_settings.json"
        content: "{{ existing_ldap }}"

    - name: Configure AWX LDAP adapter
      awx.awx.settings:
        settings: "{{ new_ldap }}"
        controller_host: "{{ awx_host }}"
        validate_certs: False
51
tools/docker-compose/ansible/plumb_splunk.yml
Normal file
@@ -0,0 +1,51 @@
---
- name: Plumb a splunk instance
  hosts: localhost
  connection: local
  gather_facts: False
  vars:
    awx_host: "https://localhost:8043"
  collections:
    - splunk.es

  tasks:
    - name: create splunk_data_input_network
      splunk.es.data_input_network:
        name: "9199"
        protocol: "tcp"
        source: "http:tower_logging_collections"
        sourcetype: "httpevent"
        state: "present"
      vars:
        ansible_network_os: splunk.es.splunk
        ansible_user: admin
        ansible_httpapi_pass: splunk_admin
        ansible_httpapi_port: 8089
        ansible_httpapi_use_ssl: yes
        ansible_httpapi_validate_certs: False
        ansible_connection: httpapi

    - name: Load existing and new Logging settings
      set_fact:
        existing_logging: "{{ lookup('awx.awx.controller_api', 'settings/logging', host=awx_host, verify_ssl=false) }}"
        new_logging: "{{ lookup('template', 'logging.json.j2') }}"

    - name: Display existing Logging configuration
      debug:
        msg:
          - "Here is your existing Logging configuration for reference:"
          - "{{ existing_logging }}"

    - pause:
        prompt: "Continuing to run this will replace your existing logging settings (displayed above). They will all be captured except for your connection password. Be sure that is backed up before continuing"

    - name: Write out the existing content
      copy:
        dest: "../_sources/existing_logging.json"
        content: "{{ existing_logging }}"

    - name: Configure AWX logging adapter
      awx.awx.settings:
        settings: "{{ new_logging }}"
        controller_host: "{{ awx_host }}"
        validate_certs: False
@@ -1,7 +1,7 @@
---
sources_dest: '../_sources'
compose_name: 'docker-compose.yml'
-awx_image: 'quay.io/ansible/awx_devel'
+awx_image: 'ghcr.io/ansible/awx_devel'
pg_port: 5432
pg_username: 'awx'
pg_database: 'awx'
@@ -16,3 +16,16 @@ receptor_work_sign_reconfigure: false
work_sign_key_dir: '../_sources/receptor'
work_sign_private_keyfile: "{{ work_sign_key_dir }}/work_private_key.pem"
work_sign_public_keyfile: "{{ work_sign_key_dir }}/work_public_key.pem"

enable_keycloak: false

enable_ldap: false
ldap_public_key_file_name: 'ldap.cert'
ldap_private_key_file_name: 'ldap.key'
ldap_cert_dir: '{{ sources_dest }}/ldap_certs'
ldap_diff_dir: '{{ sources_dest }}/ldap_diffs'
ldap_public_key_file: '{{ ldap_cert_dir }}/{{ ldap_public_key_file_name }}'
ldap_private_key_file: '{{ ldap_cert_dir }}/{{ ldap_private_key_file_name }}'
ldap_cert_subject: "/C=US/ST=NC/L=Durham/O=awx/CN="

enable_splunk: false
86
tools/docker-compose/ansible/roles/sources/files/ldap.ldif
Normal file
@@ -0,0 +1,86 @@
dn: dc=example,dc=org
objectClass: dcObject
objectClass: organization
dc: example
o: example

dn: ou=users,dc=example,dc=org
ou: users
objectClass: organizationalUnit

dn: cn=awx_ldap_admin,ou=users,dc=example,dc=org
mail: admin@example.org
sn: LdapAdmin
cn: awx_ldap_admin
objectClass: top
objectClass: person
objectClass: organizationalPerson
objectClass: inetOrgPerson
userPassword: admin123
givenName: awx

dn: cn=awx_ldap_auditor,ou=users,dc=example,dc=org
mail: auditor@example.org
sn: LdapAuditor
cn: awx_ldap_auditor
objectClass: top
objectClass: person
objectClass: organizationalPerson
objectClass: inetOrgPerson
userPassword: audit123
givenName: awx

dn: cn=awx_ldap_unpriv,ou=users,dc=example,dc=org
mail: unpriv@example.org
sn: LdapUnpriv
cn: awx_ldap_unpriv
objectClass: top
objectClass: person
objectClass: organizationalPerson
objectClass: inetOrgPerson
givenName: awx
userPassword: unpriv123

dn: ou=groups,dc=example,dc=org
ou: groups
objectClass: top
objectClass: organizationalUnit

dn: cn=awx_users,ou=groups,dc=example,dc=org
cn: awx_users
objectClass: top
objectClass: groupOfNames
member: cn=awx_ldap_admin,ou=users,dc=example,dc=org
member: cn=awx_ldap_auditor,ou=users,dc=example,dc=org
member: cn=awx_ldap_unpriv,ou=users,dc=example,dc=org
member: cn=awx_ldap_org_admin,ou=users,dc=example,dc=org

dn: cn=awx_admins,ou=groups,dc=example,dc=org
cn: awx_admins
objectClass: top
objectClass: groupOfNames
member: cn=awx_ldap_admin,ou=users,dc=example,dc=org

dn: cn=awx_auditors,ou=groups,dc=example,dc=org
cn: awx_auditors
objectClass: top
objectClass: groupOfNames
member: cn=awx_ldap_auditor,ou=users,dc=example,dc=org

dn: cn=awx_ldap_org_admin,ou=users,dc=example,dc=org
mail: org.admin@example.org
sn: LdapOrgAdmin
cn: awx_ldap_org_admin
objectClass: top
objectClass: person
objectClass: organizationalPerson
objectClass: inetOrgPerson
givenName: awx
userPassword: orgadmin123

dn: cn=awx_org_admins,ou=groups,dc=example,dc=org
cn: awx_org_admins
objectClass: top
objectClass: groupOfNames
member: cn=awx_ldap_org_admin,ou=users,dc=example,dc=org
18
tools/docker-compose/ansible/roles/sources/tasks/ldap.yml
Normal file
@@ -0,0 +1,18 @@
---
- name: Create LDAP cert directory
  file:
    path: "{{ item }}"
    state: directory
  loop:
    - "{{ ldap_cert_dir }}"
    - "{{ ldap_diff_dir }}"

- name: Generate LDAP cert
  command: 'openssl req -new -x509 -days 365 -nodes -out {{ ldap_public_key_file }} -keyout {{ ldap_private_key_file }} -subj "{{ ldap_cert_subject }}"'
  args:
    creates: "{{ ldap_public_key_file }}"

- name: Copy ldap.ldif
  copy:
    src: "ldap.ldif"
    dest: "{{ ldap_diff_dir }}/ldap.ldif"
@@ -26,6 +26,8 @@
    mode: '0600'
  when: not lookup('vars', item.item, default='') and not item.stat.exists
  loop: "{{ secrets.results }}"
  loop_control:
    label: '{{ item.item }}'

- name: Include generated secrets unless they are explicitly passed in
  include_vars: "{{ sources_dest }}/secrets/{{ item.item }}.yml"
@@ -89,6 +91,10 @@
  args:
    creates: "{{ work_sign_public_keyfile }}"

- name: Include LDAP tasks if enabled
  include_tasks: ldap.yml
  when: enable_ldap | bool

- name: Render Docker-Compose
  template:
    src: docker-compose.yml.j2
@@ -19,6 +19,8 @@ services:
      AWX_GROUP_QUEUES: tower
      MAIN_NODE_TYPE: "${MAIN_NODE_TYPE:-hybrid}"
      RECEPTORCTL_SOCKET: {{ receptor_socket_file }}
      CONTROL_PLANE_NODE_COUNT: {{ control_plane_node_count|int }}
      EXECUTION_NODE_COUNT: {{ execution_node_count|int }}
{% if loop.index == 1 %}
      RUN_MIGRATIONS: 1
{% endif %}
@@ -54,6 +56,7 @@ services:
      - "8013:8013"  # http
      - "8043:8043"  # https
      - "2222:2222"  # receptor foo node
      - "3000:3001"  # used by the UI dev env
{% endif %}
  redis_{{ container_postfix }}:
    image: redis:latest
@@ -79,6 +82,59 @@ services:
{% set container_postfix = loop.index %}
      - "awx_{{ container_postfix }}"
{% endfor %}
{% endif %}
{% if enable_keycloak|bool %}
  keycloak:
    image: quay.io/keycloak/keycloak:15.0.2
    container_name: tools_keycloak_1
    hostname: keycloak
    user: "{{ ansible_user_uid }}"
    ports:
      - "8443:8443"
    environment:
      DB_VENDOR: postgres
      DB_ADDR: postgres
      DB_DATABASE: keycloak
      DB_USER: {{ pg_username }}
      DB_PASSWORD: {{ pg_password }}
    depends_on:
      - postgres
{% endif %}
{% if enable_ldap|bool %}
  ldap:
    image: bitnami/openldap:2
    container_name: tools_ldap_1
    hostname: ldap
    user: "{{ ansible_user_uid }}"
    ports:
      - "389:1389"
      - "636:1636"
    environment:
      LDAP_ADMIN_USERNAME: admin
      LDAP_ADMIN_PASSWORD: admin
      LDAP_CUSTOM_LDIF_DIR: /opt/bitnami/openldap/ldiffs
      LDAP_ENABLE_TLS: "yes"
      LDAP_LDAPS_PORT_NUMBER: 1636
      LDAP_TLS_CERT_FILE: /opt/bitnami/openldap/certs/{{ ldap_public_key_file_name }}
      LDAP_TLS_CA_FILE: /opt/bitnami/openldap/certs/{{ ldap_public_key_file_name }}
      LDAP_TLS_KEY_FILE: /opt/bitnami/openldap/certs/{{ ldap_private_key_file_name }}
    volumes:
      - 'openldap_data:/bitnami/openldap'
      - '../../docker-compose/_sources/ldap_certs:/opt/bitnami/openldap/certs'
      - '../../docker-compose/_sources/ldap_diffs:/opt/bitnami/openldap/ldiffs'
{% endif %}
{% if enable_splunk|bool %}
  splunk:
    image: splunk/splunk:latest
    container_name: tools_splunk_1
    hostname: splunk
    ports:
      - "8000:8000"
      - "8089:8089"
      - "9199:9199"
    environment:
      SPLUNK_START_ARGS: --accept-license
      SPLUNK_PASSWORD: splunk_admin
{% endif %}
  # A useful container that simply passes through log messages to the console
  # helpful for testing awx/tower logging
@@ -121,6 +177,7 @@ services:
    links:
      - receptor-hop
    volumes:
      - "../../../:/awx_devel"  # not used, but mounted so that any in-place installs can be used for whole cluster
      - "../../docker-compose/_sources/receptor/receptor-worker-{{ loop.index }}.conf:/etc/receptor/receptor.conf"
      - "/sys/fs/cgroup:/sys/fs/cgroup"
      - "../../docker-compose/_sources/receptor/work_public_key.pem:/etc/receptor/work_public_key.pem"
@@ -136,6 +193,11 @@ volumes:
  redis_socket_{{ container_postfix }}:
    name: tools_redis_socket_{{ container_postfix }}
{% endfor -%}
{% if enable_ldap %}
  openldap_data:
    name: tools_ldap_1
    driver: local
{% endif %}
{% if minikube_container_group|bool %}
networks:
  default:
@@ -46,4 +46,3 @@ listen stats
  bind *:1936
  stats enable
  stats uri /

@@ -10,3 +10,6 @@
- tcp-listener:
    port: 5555

- control-service:
    service: control
77
tools/docker-compose/ansible/smoke-test.yml
Normal file
@@ -0,0 +1,77 @@
|
||||
---
#
# This is used by a CI check in GitHub Actions and isn't really
# meant to be run locally.
#
# The development environment does some unfortunate things to
# make rootless podman work inside of a docker container.
# The goal here is essentially to test that the awx user is
# able to run `podman run`.
#
- name: Test that the development environment is able to launch a job
  hosts: localhost
  tasks:
    - name: Boot the development environment
      command: |
        make docker-compose
      environment:
        COMPOSE_UP_OPTS: -d
      args:
        chdir: "{{ repo_dir }}"

    # Takes a while for migrations to finish
    - name: Wait for the dev environment to be ready
      uri:
        url: "http://localhost:8013/api/v2/ping/"
      register: _result
      until: _result.status == 200
      retries: 120
      delay: 5

    - name: Reset admin password
      shell: |
        docker exec -i tools_awx_1 bash <<EOSH
        awx-manage update_password --username=admin --password=password
        awx-manage create_preload_data
        EOSH

    - block:
        - name: Launch Demo Job Template
          awx.awx.job_launch:
            name: Demo Job Template
            wait: yes
            validate_certs: no
            controller_host: "http://localhost:8013"
            controller_username: "admin"
            controller_password: "password"
      rescue:
        - name: Get list of project updates and jobs
          uri:
            url: "http://localhost:8013/api/v2/{{ resource }}/"
            user: admin
            password: "password"
            force_basic_auth: yes
          register: job_lists
          loop:
            - project_updates
            - jobs
          loop_control:
            loop_var: resource

        - name: Get all job and project details
          uri:
            url: "http://localhost:8013{{ endpoint }}"
            user: admin
            password: "password"
            force_basic_auth: yes
          loop: |
            {{ job_lists.results | map(attribute='json') | map(attribute='results') | flatten | map(attribute='url') }}
          loop_control:
            loop_var: endpoint

        - name: Re-emit failure
          vars:
            failed_task:
              result: '{{ ansible_failed_result }}'
          fail:
            msg: '{{ failed_task }}'
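The rescue path in smoke-test.yml collects the URL of every failed-run artifact with a chained Jinja2 filter expression: `map(attribute='json') | map(attribute='results') | flatten | map(attribute='url')`. A sketch of the equivalent transformation in plain Python, using made-up sample data shaped like the registered `job_lists` variable:

```python
# Hypothetical sample shaped like the Ansible `job_lists` register:
# one entry per loop resource, each holding a paginated API response.
job_lists = {
    "results": [
        {"json": {"results": [{"url": "/api/v2/project_updates/1/"}]}},
        {"json": {"results": [{"url": "/api/v2/jobs/3/"}, {"url": "/api/v2/jobs/4/"}]}},
    ]
}

# Equivalent of: map(attribute='json') | map(attribute='results')
#                | flatten | map(attribute='url')
urls = [
    item["url"]
    for entry in job_lists["results"]       # one entry per resource type
    for item in entry["json"]["results"]    # flatten the per-resource lists
]
print(urls)  # → ['/api/v2/project_updates/1/', '/api/v2/jobs/3/', '/api/v2/jobs/4/']
```

Each URL then becomes one `endpoint` iteration of the "Get all job and project details" task.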
tools/docker-compose/ansible/templates/keycloak.awx.realm.json.j2  (Normal file, 1770 lines)
File diff suppressed because it is too large
tools/docker-compose/ansible/templates/ldap_settings.json.j2  (Normal file, 52 lines)
@@ -0,0 +1,52 @@
{
  "AUTH_LDAP_1_SERVER_URI": "ldap://{{ container_reference }}:389",
  "AUTH_LDAP_1_BIND_DN": "cn=admin,dc=example,dc=org",
  "AUTH_LDAP_1_BIND_PASSWORD": "admin",
  "AUTH_LDAP_1_START_TLS": false,
  "AUTH_LDAP_1_CONNECTION_OPTIONS": {
    "OPT_REFERRALS": 0,
    "OPT_NETWORK_TIMEOUT": 30
  },
  "AUTH_LDAP_1_USER_SEARCH": [
    "ou=users,dc=example,dc=org",
    "SCOPE_SUBTREE",
    "(cn=%(user)s)"
  ],
  "AUTH_LDAP_1_USER_DN_TEMPLATE": "cn=%(user)s,ou=users,dc=example,dc=org",
  "AUTH_LDAP_1_USER_ATTR_MAP": {
    "first_name": "givenName",
    "last_name": "sn",
    "email": "mail"
  },
  "AUTH_LDAP_1_GROUP_SEARCH": [
    "ou=groups,dc=example,dc=org",
    "SCOPE_SUBTREE",
    "(objectClass=groupOfNames)"
  ],
  "AUTH_LDAP_1_GROUP_TYPE": "MemberDNGroupType",
  "AUTH_LDAP_1_GROUP_TYPE_PARAMS": {
    "member_attr": "member",
    "name_attr": "cn"
  },
  "AUTH_LDAP_1_REQUIRE_GROUP": "cn=awx_users,ou=groups,dc=example,dc=org",
  "AUTH_LDAP_1_DENY_GROUP": null,
  "AUTH_LDAP_1_USER_FLAGS_BY_GROUP": {
    "is_superuser": [
      "cn=awx_admins,ou=groups,dc=example,dc=org"
    ],
    "is_system_auditor": [
      "cn=awx_auditors,ou=groups,dc=example,dc=org"
    ]
  },
  "AUTH_LDAP_1_ORGANIZATION_MAP": {
    "LDAP Organization": {
      "users": true,
      "remove_admins": false,
      "remove_users": true,
      "admins": [
        "cn=awx_org_admins,ou=groups,dc=example,dc=org"
      ]
    }
  },
  "AUTH_LDAP_1_TEAM_MAP": {}
}
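The `AUTH_LDAP_1_USER_DN_TEMPLATE` and the search filter in `AUTH_LDAP_1_USER_SEARCH` use Python `%`-style placeholders that are filled with the login name at bind time. A minimal sketch of that substitution, using a hypothetical login (this only illustrates the template expansion, not the django-auth-ldap library itself):

```python
# Values copied from ldap_settings.json.j2; `login` is a hypothetical user.
dn_template = "cn=%(user)s,ou=users,dc=example,dc=org"
search_filter = "(cn=%(user)s)"

login = "awx_user"
bind_dn = dn_template % {"user": login}
ldap_filter = search_filter % {"user": login}

print(bind_dn)      # → cn=awx_user,ou=users,dc=example,dc=org
print(ldap_filter)  # → (cn=awx_user)
```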
tools/docker-compose/ansible/templates/logging.json.j2  (Normal file, 10 lines)
@@ -0,0 +1,10 @@
{
  "LOG_AGGREGATOR_HOST": "splunk",
  "LOG_AGGREGATOR_PORT": 9199,
  "LOG_AGGREGATOR_TYPE": "splunk",
  "LOG_AGGREGATOR_USERNAME": "admin",
  "LOG_AGGREGATOR_PASSWORD": "splunk_admin",
  "LOG_AGGREGATOR_ENABLED": true,
  "LOG_AGGREGATOR_PROTOCOL": "tcp",
  "LOG_AGGREGATOR_VERIFY_CERT": false
}
tools/docker-compose/ansible/templates/saml_settings.json.j2  (Normal file, 51 lines)
@@ -0,0 +1,51 @@
{
  "SAML_AUTO_CREATE_OBJECTS": true,
  "SOCIAL_AUTH_SAML_SP_ENTITY_ID": "{{ container_reference }}:8043",
  "SOCIAL_AUTH_SAML_SP_PUBLIC_CERT": "{{ public_key_content | regex_replace('\\n', '') }}",
  "SOCIAL_AUTH_SAML_SP_PRIVATE_KEY": "{{ private_key_content | regex_replace('\\n', '') }}",
  "SOCIAL_AUTH_SAML_ORG_INFO": {
    "en-US": {
      "url": "https://{{ container_reference }}:8443",
      "name": "Keycloak",
      "displayname": "Keycloak Solutions Engineering"
    }
  },
  "SOCIAL_AUTH_SAML_TECHNICAL_CONTACT": {
    "givenName": "Me Myself",
    "emailAddress": "noone@nowhere.com"
  },
  "SOCIAL_AUTH_SAML_SUPPORT_CONTACT": {
    "givenName": "Me Myself",
    "emailAddress": "noone@nowhere.com"
  },
  "SOCIAL_AUTH_SAML_ENABLED_IDPS": {
    "Keycloak": {
      "attr_user_permanent_id": "name_id",
      "entity_id": "https://{{ container_reference }}:8443/auth/realms/awx",
      "attr_groups": "groups",
      "url": "https://{{ container_reference }}:8443/auth/realms/awx/protocol/saml",
      "attr_first_name": "first_name",
      "x509cert": "{{ public_key_content | regex_replace('\\n', '') }}",
      "attr_email": "email",
      "attr_last_name": "last_name",
      "attr_username": "username"
    }
  },
  "SOCIAL_AUTH_SAML_SECURITY_CONFIG": {
    "requestedAuthnContext": false
  },
  "SOCIAL_AUTH_SAML_SP_EXTRA": null,
  "SOCIAL_AUTH_SAML_EXTRA_DATA": null,
  "SOCIAL_AUTH_SAML_ORGANIZATION_MAP": {
    "Default": {
      "users": true
    }
  },
  "SOCIAL_AUTH_SAML_TEAM_MAP": null,
  "SOCIAL_AUTH_SAML_ORGANIZATION_ATTR": {},
  "SOCIAL_AUTH_SAML_TEAM_ATTR": {},
  "SOCIAL_AUTH_SAML_USER_FLAGS_BY_ATTR": {
    "is_superuser_attr": "is_superuser",
    "is_system_auditor_attr": "is_system_auditor"
  }
}
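The SP cert, SP private key, and IdP `x509cert` fields all pipe the PEM content through `regex_replace('\\n', '')` so the key material fits on a single JSON line. The equivalent in Python, using a dummy PEM stand-in rather than a real certificate:

```python
import re

# Stand-in PEM content; a real certificate body would appear between the markers.
pem = "-----BEGIN CERTIFICATE-----\nMIIB...\nAAAA\n-----END CERTIFICATE-----\n"

# Same effect as the template's regex_replace('\n', ''): strip all newlines.
flat = re.sub(r"\n", "", pem)
print(flat)
```

The result is one unbroken string, which is what the AWX SAML settings expect for inline key fields.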
tools/docker-compose/awx-autoreload  (Executable file, 20 lines)
@@ -0,0 +1,20 @@
#!/usr/bin/env bash

if [ $# -lt 2 ]; then
    echo "Usage:"
    echo "  autoreload directory command"
    exit 1
fi

last_reload=`date +%s`

inotifywait -mrq -e create,delete,attrib,close_write,move --exclude '/awx_devel/awx/ui' $1 | while read directory action file; do
    this_reload=`date +%s`
    since_last=$((this_reload-last_reload))
    if [[ "$file" =~ .*py$ ]] && [[ "$since_last" -gt 1 ]]; then
        echo "File changed: $file"
        echo "Running command: $2"
        eval $2
        last_reload=`date +%s`
    fi
done
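The `since_last` check in awx-autoreload debounces the inotify stream: a changed `.py` file only triggers the reload command if more than one second has passed since the previous reload. A sketch of that logic in Python, with a hypothetical helper and made-up event timestamps:

```python
# Hypothetical model of the awx-autoreload debounce: given the script's
# start time and a sorted list of event timestamps (seconds), return the
# timestamps at which the reload command would actually run.
def reload_times(event_times, start, min_gap=1):
    last_reload = start
    fired = []
    for t in event_times:
        if t - last_reload > min_gap:  # mirrors: since_last -gt 1
            fired.append(t)
            last_reload = t
    return fired

# Events at 0, 1, 3, 4, and 10 seconds after start: only the 3s and 10s
# events are far enough from the previous reload to fire.
print(reload_times([0, 1, 3, 4, 10], start=0))  # → [3, 10]
```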
@@ -19,9 +19,6 @@ else
    wait-for-migrations
fi

make init

if output=$(awx-manage createsuperuser --noinput --username=admin --email=admin@localhost 2> /dev/null); then
    echo $output
    admin_password=$(openssl rand -base64 12)

@@ -35,6 +32,27 @@ mkdir -p /awx_devel/awx/public/static
mkdir -p /awx_devel/awx/ui/static
mkdir -p /awx_devel/awx/ui/build/static

awx-manage provision_instance --hostname="$(hostname)" --node_type="$MAIN_NODE_TYPE"
awx-manage register_queue --queuename=controlplane --instance_percent=100
awx-manage register_queue --queuename=default --instance_percent=100

if [[ -n "$RUN_MIGRATIONS" ]]; then
    for (( i=1; i<$CONTROL_PLANE_NODE_COUNT; i++ )); do
        for (( j=i + 1; j<=$CONTROL_PLANE_NODE_COUNT; j++ )); do
            awx-manage register_peers "awx_$i" --peers "awx_$j"
        done
    done

    if [[ $EXECUTION_NODE_COUNT > 0 ]]; then
        awx-manage provision_instance --hostname="receptor-hop" --node_type="hop"
        awx-manage register_peers "receptor-hop" --peers "awx_1"
        for (( e=1; e<=$EXECUTION_NODE_COUNT; e++ )); do
            awx-manage provision_instance --hostname="receptor-$e" --node_type="execution"
            awx-manage register_peers "receptor-$e" --peers "receptor-hop"
        done
    fi
fi

# Create resource entries when using Minikube
if [[ -n "$MINIKUBE_CONTAINER_GROUP" ]]; then
    awx-manage shell < /awx_devel/tools/docker-compose-minikube/_sources/bootstrap_minikube.py
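The nested `for` loops in the bootstrap script register every control-plane pair exactly once: node `awx_i` is peered with each `awx_j` where `j > i`. A sketch of the pairs generated for an N-node cluster, using a hypothetical helper:

```python
# Hypothetical model of the register_peers loops in bootstrap_development.sh:
# outer loop i runs 1..N-1, inner loop j runs i+1..N, so each unordered
# pair of control-plane nodes is registered exactly once.
def peer_pairs(n):
    return [(i, j) for i in range(1, n) for j in range(i + 1, n + 1)]

# For CONTROL_PLANE_NODE_COUNT=3, awx_1..awx_3 are peered as:
print(peer_pairs(3))  # → [(1, 2), (1, 3), (2, 3)]
```

This yields N*(N-1)/2 `awx-manage register_peers` calls, a full mesh over the control plane.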
@@ -15,5 +15,5 @@ localhost ansible_connection=local ansible_python_interpreter="/usr/bin/env pyth
# pg_username=""
# pg_hostname=""

# awx_image="quay.io/awx/awx_devel"
# awx_image="ghcr.io/ansible/awx_devel"
# migrate_local_docker=false

@@ -5,4 +5,4 @@ bootstrap_development.sh

cd /awx_devel
# Start the services
exec tini -- make supervisor
exec make supervisor
@@ -22,6 +22,7 @@ server {

    # HSTS (ngx_http_headers_module is required) (15768000 seconds = 6 months)
    add_header Strict-Transport-Security max-age=15768000;
    add_header X-Content-Type-Options nosniff;

    location /static/ {
        root /awx_devel;

@@ -84,6 +85,7 @@ server {

    # HSTS (ngx_http_headers_module is required) (15768000 seconds = 6 months)
    add_header Strict-Transport-Security max-age=15768000;
    add_header X-Content-Type-Options nosniff;

    location /static/ {
        root /awx_devel;

@@ -6,4 +6,9 @@ make clean
make awx-link

cp tools/docker-compose/ansible/roles/sources/files/local_settings.py awx/settings/local_settings.py
make "${1:-test}"

if [[ ! $@ ]]; then
    make test
else
    make $@
fi
@@ -5,79 +5,70 @@ nodaemon=true

[program:awx-dispatcher]
command = make dispatcher
autostart = true
autorestart = true
stopwaitsecs = 1
stopsignal=KILL
stopasgroup=true
killasgroup=true
redirect_stderr=true
stdout_events_enabled = true
stderr_events_enabled = true

stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
stderr_logfile=/dev/stderr
stderr_logfile_maxbytes=0

[program:awx-receiver]
command = make receiver
autostart = true
autorestart = true
stopwaitsecs = 1
stopsignal=KILL
stopasgroup=true
killasgroup=true
redirect_stderr=true
stdout_events_enabled = true
stderr_events_enabled = true
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
stderr_logfile=/dev/stderr
stderr_logfile_maxbytes=0

[program:awx-wsbroadcast]
command = make wsbroadcast
autostart = true
autorestart = true
stopwaitsecs = 1
stopsignal=KILL
autorestart = true
stopasgroup=true
killasgroup=true
redirect_stderr=true
stdout_events_enabled = true
stderr_events_enabled = true
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
stderr_logfile=/dev/stderr
stderr_logfile_maxbytes=0

[program:awx-uwsgi]
command = make uwsgi
autostart = true
autorestart = true
redirect_stderr=true
stopwaitsecs = 1
stopsignal=KILL
stopasgroup=true
killasgroup=true
stdout_events_enabled = true
stderr_events_enabled = true
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
stderr_logfile=/dev/stderr
stderr_logfile_maxbytes=0

[program:awx-daphne]
command = make daphne
autostart = true
autorestart = true
redirect_stderr=true
stopwaitsecs = 1
stopsignal=KILL
stopasgroup=true
killasgroup=true
stdout_events_enabled = true
stderr_events_enabled = true
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
stderr_logfile=/dev/stderr
stderr_logfile_maxbytes=0

[program:awx-nginx]
command = make nginx
autostart = true
autorestart = true
redirect_stderr=true
stdout_events_enabled = true
stderr_events_enabled = true
stopasgroup=true
killasgroup=true
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
stderr_logfile=/dev/stderr
stderr_logfile_maxbytes=0

[program:awx-rsyslogd]
command = rsyslogd -n -i /var/run/awx-rsyslog/rsyslog.pid -f /var/lib/awx/rsyslog/rsyslog.conf
autostart = true
autorestart = true
stopwaitsecs = 5
stopsignal=TERM
stopasgroup=true
killasgroup=true
redirect_stderr=true
@@ -86,19 +77,40 @@ stderr_events_enabled = true

[program:awx-receptor]
command = receptor --config /etc/receptor/receptor.conf
autostart = true
autorestart = true
stopsignal = KILL
stopasgroup = true
killasgroup = true
redirect_stderr=true
stdout_events_enabled = true
stderr_events_enabled = true
stopasgroup=true
killasgroup=true
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
stderr_logfile=/dev/stderr
stderr_logfile_maxbytes=0

[group:tower-processes]
programs=awx-dispatcher,awx-receiver,awx-uwsgi,awx-daphne,awx-nginx,awx-wsbroadcast,awx-rsyslogd
priority=5

[program:awx-autoreload]
command = make awx-autoreload
autostart = true
autorestart = true
stopasgroup=true
killasgroup=true
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
stderr_logfile=/dev/stderr
stderr_logfile_maxbytes=0
stdout_events_enabled = true
stderr_events_enabled = true

[eventlistener:superwatcher]
command=stop-supervisor
events=PROCESS_STATE_FATAL
autorestart = true
stderr_logfile=/dev/stdout
stderr_logfile_maxbytes=0
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0

[unix_http_server]
file=/var/run/supervisor/supervisor.sock