Compare commits


97 Commits

Author SHA1 Message Date
Hao Liu
20f5b255c9 Fix "upgrade in progress" status page not showing up while migration is in progress (#14579)
The web container no longer needs to wait for migrations.

If the database is running and responsive but migrations have not finished, the web container now starts serving and users get the "upgrade in progress" page.

Previously, waiting on wait-for-migrations kept nginx and uwsgi from starting, so the "upgrade in progress" status page could not be served.
2023-10-24 14:27:09 -04:00
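A rough sketch of the gating behavior being removed for the web container (illustrative only, not the actual AWX entrypoint; it assumes Django's standard `migrate --check` behavior of exiting non-zero while migrations are pending):

    # Hypothetical gate: block until migrations finish, then start the web processes.
    # The fix above drops this gate so nginx/uwsgi can serve the status page sooner.
    until awx-manage migrate --check >/dev/null 2>&1; do
        echo "Waiting for database migrations to finish..."
        sleep 5
    done
    exec supervisord   # placeholder for whatever actually launches nginx and uwsgi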
Oleksii Baranov
3bcf46555d Fix swagger generation on rhel (#14317) (#14589) 2023-10-24 14:19:02 -04:00
Don Naro
94703ccf84 Pip compile docsite requirements (#14449)
Co-authored-by: Sviatoslav Sydorenko <578543+webknjaz@users.noreply.github.com>
Co-authored-by: Sviatoslav Sydorenko <wk.cvs.github@sydorenko.org.ua>
2023-10-24 12:53:41 -04:00
BHANUTEJA
6cdea1909d Alt text for Execution Env section of Userguide (#14576)
Co-authored-by: TVo <thavo@redhat.com>
2023-10-20 18:48:07 +00:00
Mike Mwanje
f133580172 Adds alt text to instance_groups.rst images (#14571)
Co-authored-by: TVo <thavo@redhat.com>
2023-10-20 16:11:17 +00:00
Kishan Mehta
4b90a7fcd1 Add alt text for image directives in credential_types.rst (#14551)
Co-authored-by: Kishan Mehta <kishan@scrapinghub.com>
Co-authored-by: TVo <thavo@redhat.com>
2023-10-20 09:36:05 -06:00
Marliana Lara
95bfedad5b Format constructed inventory hint example as valid YAML (#14568) 2023-10-20 10:24:47 -04:00
Kishan Mehta
1081f2d8e9 Add alt text for image directives in credentials.rst (#14550)
Co-authored-by: Kishan Mehta <kishan@scrapinghub.com>
Co-authored-by: TVo <thavo@redhat.com>
2023-10-20 14:13:49 +00:00
Kishan Mehta
c4ab54d7f3 Add alt text for image directives in job_capacity.rst & job_slices.rst (#14549)
Co-authored-by: Kishan Mehta <kishan@scrapinghub.com>
Co-authored-by: TVo <thavo@redhat.com>
2023-10-20 13:34:04 +00:00
Hao Liu
bcefcd8cf8 Remove specific version for receptorctl (#14593) 2023-10-19 22:49:42 -04:00
Kishan Mehta
0bd057529d Add alt text for image directives in job_templates.rst (#14548)
Co-authored-by: Kishan Mehta <kishan@scrapinghub.com>
2023-10-19 20:24:32 +00:00
Sayyed Faisal Ali
a82c03e2e2 added alt-text in projects.rst (#14544)
Signed-off-by: c0de-slayer <fsali315@gmail.com>
Co-authored-by: TVo <thavo@redhat.com>
2023-10-19 12:39:58 -06:00
TVo
447ac77535 Corrected missing text replacement directives (#14592) 2023-10-19 16:36:41 +00:00
Andrew Klychkov
72d0928f1b [DOCS] EE guide: fix a ref to Get started with EE (#14587) 2023-10-19 03:30:21 -04:00
Deepshri M
6d727d4bc4 Adding alt text for image (#14541)
Signed-off-by: Deepshri M <deepshrim613@gmail.com>
2023-10-17 14:53:18 -06:00
Rohit Raj
6040e44d9d docs: Update teams.rst (#14539)
Co-authored-by: TVo <thavo@redhat.com>
2023-10-17 20:16:09 +00:00
Rohit Raj
b99ce5cd62 docs: Update users.rst (#14538)
Co-authored-by: TVo <thavo@redhat.com>
2023-10-17 14:58:40 +00:00
Rohit Raj
ba8a90c55f docs: Update security.rst (#14540)
Co-authored-by: Don Naro <dnaro@redhat.com>
2023-10-16 17:56:46 -06:00
Sayyed Faisal Ali
7ee2172517 added alt-text in project-sign.rst (#14545)
Signed-off-by: c0de-slayer <fsali315@gmail.com>
Co-authored-by: TVo <thavo@redhat.com>
2023-10-16 09:25:34 -06:00
Alan Rominger
07f49f5925 AAP-16926 Delete unpartitioned tables in a separate transaction (#14572) 2023-10-13 15:50:51 -04:00
Hao Liu
376993077a Removing mailing list from get involved (#14580) 2023-10-13 17:49:34 +00:00
Hao Liu
48f586bac4 Make wait-for-migrations wait forever (#14566) 2023-10-13 13:48:12 +00:00
Surendran
16dab57c63 Added alt-text for images in notifications.rst (#14555)
Signed-off-by: Surendran Gokul <surendrangokul55@gmail.com>
Co-authored-by: Don Naro <dnaro@redhat.com>
2023-10-12 15:22:37 -06:00
Surendran
75a71492fd Added alt-text for images in organizations.rst (#14556)
Signed-off-by: Surendran Gokul <surendrangokul55@gmail.com>
Co-authored-by: Don Naro <dnaro@redhat.com>
2023-10-12 15:15:45 -06:00
Hao Liu
e9bd99c1ff Fix CVE-2023-43665 (#14561) 2023-10-12 14:00:32 -04:00
Daniel Gonçalves
56878b4910 Add customizable batch_size for cleanup_activitystream and cleanup_jobs (#14412)
Signed-off-by: Daniel Gonçalves <daniel.gonc@lves.fr>
2023-10-11 20:09:16 +00:00
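Judging from the command-line changes in the diff below, the new flag is used roughly like this (values are illustrative; the defaults are 500 events and 100000 jobs per batch):

    # Delete old records in smaller batches to keep individual DELETEs manageable
    awx-manage cleanup_activitystream --days 90 --batch-size 1000
    awx-manage cleanup_jobs --days 90 --batch-size 50000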
Alan Rominger
19ca480078 Upgrade client library for dsv since tss already landed (#14362) 2023-10-11 16:01:22 -04:00
Steffen Scheib
64eb963025 Cleaning SOS report passwords (#14557) 2023-10-11 19:54:28 +00:00
Will Thames
dc34d0887a Execution environment image should not be required (#14488) 2023-10-11 15:39:51 -04:00
Andrew Klychkov
160634fb6f ee_reference.rst: refer to Builder's definition docs instead of duplicating its content (#14562) 2023-10-11 13:54:12 +01:00
Alan Rominger
9745058546 Only block commits if black fails for certain paths (#14531) 2023-10-10 10:12:57 -04:00
Aviral Katiyar
c97a48b165 Fix: #14510 Add alt-text codeblock to Images for Userguide: jobs.rst (#14530)
Signed-off-by: maskboyAvi <aviralofficial1729@gmail.com>
Co-authored-by: Don Naro <dnaro@redhat.com>
2023-10-09 16:40:56 -06:00
Rohit Raj
259bca0113 docs: Update workflows.rst (#14537) 2023-10-06 15:30:47 -06:00
Aviral Katiyar
92c2b4e983 Fix: #14500 Added alt text to images for Userguide: credential_plugins.rst (#14527)
Signed-off-by: maskboyAvi <aviralofficial1729@gmail.com>
Co-authored-by: Don Naro <dnaro@redhat.com>
2023-10-06 14:53:23 -06:00
Seth Foster
127a0cff23 Set ip_address to empty string
ip_address cannot be null, so set to empty instead of None

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2023-10-05 22:53:16 -04:00
Aviral Katiyar
a0ef25006a Fix: #14499 Added alt text to images for Userguide: applications_auth.rst (#14526)
Signed-off-by: maskboyAvi <aviralofficial1729@gmail.com>
Co-authored-by: Don Naro <dnaro@redhat.com>
2023-10-05 14:22:10 -06:00
Chris Meyers
50c98a52f7 Update setting_up.rst (#14542) 2023-10-05 15:06:40 -04:00
Michelle McCausland
4008d72af6 issue-14522: Add alt-text codeblock to Images for Userguide: webhooks.rst (#14529)
Signed-off-by: Michelle McCausland <mmccausl@redhat.com>
2023-10-05 17:40:07 +01:00
Alan Rominger
e72e9f94b9 Fix collection test flake due to successful canceled command (#14519) 2023-10-04 09:09:29 -04:00
Sasa Jovicic
9d60b0b9c6 Fix #12815 Direct links to AWX do not reroute the user after authentication (#14399)
Signed-off-by: Sasa993 <jovicic.sasa@hotmail.com>
Co-authored-by: Sasa Jovicic <sjovicic@anexia-it.com>
2023-10-03 16:55:22 -04:00
Aviral Katiyar
05b58c4df6 Fix: #14490 Fixed the required spelling errors (#14507)
Signed-off-by: maskboyAvi <aviralofficial1729@gmail.com>
2023-10-03 14:15:13 -06:00
TVo
b1b960fd17 Updated Forum terminology and removed mailing list (#14491) 2023-10-03 19:24:19 +01:00
Jakub Laskowski
3c8f71e559 Fixed wrong arguments order in DomainPasswordGrantAuthorizer (#14441)
Signed-off-by: Jakub Laskowski <jakub.laskowski9@gmail.com>
Co-authored-by: Seth Foster <fosterseth@users.noreply.github.com>
2023-10-03 11:54:57 -04:00
Alan Rominger
f5922f76fa DROP unnecessary unpartitioned event tables (#14055) 2023-10-03 11:49:23 -04:00
kurokobo
05582702c6 fix: make type conversions work correctly (related #14487) (#14489)
Signed-off-by: kurokobo <2920259+kurokobo@users.noreply.github.com>
Co-authored-by: Alan Rominger <arominge@redhat.com>
2023-09-30 04:02:10 +00:00
Alan Rominger
1d340c5b4e Add a section for postgres max_connections value (#14482) 2023-09-28 10:28:52 -04:00
TVo
15925f1416 Simplified release notes for AWX (#14485) 2023-09-27 14:50:57 -06:00
Salma Kochay
6e06a20cca add subscription usage page 2023-09-27 10:57:04 -04:00
Hao Liu
bb3acbb8ad Debug log for scheduler commit duration (#14035)
Co-authored-by: Alan Rominger <arominge@redhat.com>
2023-09-27 09:46:55 -04:00
Hao Liu
a88e47930c Update django version to address CVE-2023-41164 (#14460) 2023-09-27 09:36:02 -04:00
Hao Liu
a0d4515ba4 Explicitly set collection version during promotion (#14484) 2023-09-26 14:19:22 -04:00
Alan Rominger
770cc10a78 Get rid of names_digest hack no longer needed (#14459) 2023-09-26 12:09:30 -04:00
Alan Rominger
159dd62d84 Add null value handling in create_partition (#14480) 2023-09-25 18:28:44 -04:00
TVo
640e5db9c6 Removed references of IRC and fixed formatting in "Work Items" section. (#14478)
Co-authored-by: Don Naro <dnaro@redhat.com>
2023-09-25 11:24:39 -06:00
Alan Rominger
9ed527eb26 Consolidate image and server setup in several checks (#14477) 2023-09-25 09:02:20 -04:00
Alan Rominger
29ad6e1eaa Fix bug, None was used instead of empty for DB outage (#14463) 2023-09-21 14:30:25 -04:00
Alan Rominger
3e607f8964 AAP-15927 Use ATTACH PARTITION to avoid exclusive table lock for events (#14433) 2023-09-21 14:27:04 -04:00
TVo
c9d1a4d063 Added release notes for version 23.1.0 (#14471) 2023-09-21 11:02:38 -06:00
Hao Liu
a290b082db Use ldap container hostname for LDAP config (#14473) 2023-09-21 11:31:51 -04:00
Hao Liu
6d3c22e801 Update how to get involved with matrix and forum (#14472) 2023-09-20 18:33:04 +00:00
Michael Abashian
1f91773a3c Simplify docs string base generation 2023-09-20 13:16:54 -04:00
Hao Liu
7b846e1e49 Add makefile target to load dev image into Kind (#13775)
Signed-off-by: Rick Elrod <rick@elrod.me>
Co-authored-by: Rick Elrod <rick@elrod.me>
2023-09-19 13:34:10 -04:00
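Based on the Makefile changes further down (the KIND_BIN variable and the kind-dev-load target), loading the development image into a local Kind cluster is roughly:

    # Builds the awx_kube_devel image and loads it into the Kind cluster found on PATH;
    # COMPOSE_TAG is illustrative here and normally defaults to the current git branch.
    make kind-dev-load COMPOSE_TAG=devel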
Don Naro
f7a2de8a07 Contributor guide and adjusted titles (#14447)
Co-authored-by: Thanhnguyet Vo <tvo@ansible.com>
2023-09-18 10:40:47 -06:00
Andrew Klychkov
194c214f03 userguide/execution_environments.rst: replace building paragraphs with ref to Get started EE guide (#14429) 2023-09-15 10:20:46 -04:00
Christian Adams
77e30dd4b2 Add link to script for publishing operator on OperatorHub (#14442) 2023-09-15 09:32:19 -04:00
jessicamack
9d7421b9bc Update README (#14452)
Signed-off-by: jessicamack <jmack@redhat.com>
2023-09-14 20:20:06 +00:00
Alan Rominger
3b8e662916 Remove conditional paths due to conflict with required checks (#14450) 2023-09-14 16:19:42 -04:00
Alan Rominger
aa3228eec9 Fix continue-on-error GH actions bug, always run archive step instead 2023-09-14 19:45:07 +00:00
Alan Rominger
7b0598c7d8 Continue workflow steps to save logs from failed tests (#14448) 2023-09-14 18:23:22 +00:00
Ivan Aragonés Muniesa
49832d6379 don't pass the 'organization' or other fields to the search of the instance group or execution environments (#14223) 2023-09-14 09:31:05 -04:00
Alan Rominger
8feeb5f1fa Allow saving github creds in user folder (#14435) 2023-09-12 15:47:12 -04:00
Michael Abashian
56230ba5d1 Show a toast when the job is already in the process of launching 2023-09-06 16:56:34 -04:00
Michael Abashian
480aaeace5 Prevent the user from launching multiple jobs by rapidly clicking on buttons 2023-09-06 16:56:34 -04:00
Joe Garcia
3eaea396be Add base64 check on JWT from authn 2023-09-06 15:58:36 -04:00
Keith Grant
deef8669c9 rebuild package-lock (#14423) 2023-09-06 12:36:50 -07:00
Don Naro
63223a2cc7 allow list for example secrets in docs 2023-09-06 15:15:58 -04:00
Keith Grant
a28bc2eb3f bump babel dependencies (#14370) 2023-09-06 09:14:04 -07:00
Alan Rominger
09168e5832 Edit docker-compose instructions for correctness (#14418) 2023-09-06 11:55:25 -04:00
Alan Rominger
6df1de4262 Avoid activity stream entries for instance going offline (#14385) 2023-09-06 11:18:52 -04:00
Alan Rominger
e072bb7668 Declare license for unique module that uses BSD-2
Co-authored-by: Maxwell G <maxwell@gtmx.me>
2023-09-06 10:43:25 -04:00
Alan Rominger
ec579fd637 Fix collection metadata license to match intent 2023-09-06 10:43:25 -04:00
Marliana Lara
b95d521162 Update missing inventory error message (#14416) 2023-09-06 10:24:25 -04:00
Rick Elrod
d03a6a809d Enable collection integration tests on GHA
There are a number of changes here:

- Abstract out a GHA composite action for running the dev environment
- Update the e2e tests to use that new abstracted action
- Introduce a new (matrixed) job for running collection integration
  tests. This splits the jobs up based on filename.
- Collect coverage info and generate an html report that people can
  download easily to see collection coverage info.
- Do some hacks to delete the intermediary coverage file artifacts
  which aren't needed after the job finishes.

Signed-off-by: Rick Elrod <rick@elrod.me>
2023-09-05 16:10:48 -05:00
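The filename-based split for the matrixed integration job can be seen in the workflow diff below; each matrix entry effectively runs something like the following (shown here for the a-h bucket):

    # Select integration targets whose directory name matches the matrix regex
    TARGETS="$(ls awx_collection/tests/integration/targets | grep '^[a-h]' | tr '\n' ' ')"
    make COLLECTION_VERSION=100.100.100-git \
         COLLECTION_TEST_TARGET="--coverage --requirements $TARGETS" \
         test_collection_integration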
TVo
4466976e10 Added relnotes for 23.0.0 (#14409) 2023-09-05 15:07:53 -06:00
Don Naro
5733f78fd8 Add readthedocs configuration (#14413) 2023-09-05 15:07:32 -06:00
Alan Rominger
20fc7c702a Add check for building docsite (#14406) 2023-09-05 16:07:48 -04:00
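Per the new Docsite CI workflow and .readthedocs.yaml shown below, the local equivalent of this check is approximately:

    # Build the documentation site the same way the CI job does
    pip install tox
    tox -e docs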
Lila Yasin
6ce5799689 Incorrect capacity for remote execution nodes 14051 (#14315) 2023-09-05 11:20:36 -04:00
Don Naro
dc81aa46d0 Create AWX docsite with RST content (#14328)
Co-authored-by: Thanhnguyet Vo <tvo@ansible.com>
Co-authored-by: TVo <thavo@redhat.com>
2023-09-01 09:24:03 -06:00
Alan Rominger
ab3ceaecad Remove extra scheduler state save that does nothing (#14396) 2023-08-31 10:35:07 -04:00
John Westcott IV
1bb4240a6b Allow saml_admin_attr to work in conjunction with SAML Org Map (#14285)
Co-authored-by: Alan Rominger <arominge@redhat.com>
2023-08-31 09:41:30 -03:00
Rick Elrod
5e105c2cbd [CI] Update GHA actions to sate some warnings emitted by test infrastructure (#14398)
Signed-off-by: Rick Elrod <rick@elrod.me>
2023-08-30 23:58:57 -05:00
Alan Rominger
cdb4f0b7fd Consume job_explanation from runner, fix error reporting error (#13482) 2023-08-30 16:45:50 -04:00
Ivanilson Junior
cf1e448577 Fix undefined property error when task is of type yum/debug and was s… (#14372)
Signed-off-by: Ivanilson Junior <ivanilsonaraujojr@gmail.com>
2023-08-30 15:37:28 -04:00
Andrew Klychkov
224e9e0324 [DOCS] tools/docker-compose/README.md: add way to solve postgresql issue (#14225) 2023-08-30 10:45:50 -04:00
Martin Slemr
660dab439b HostMetrics: Hard auto-cleanup (#14255)
Fix host metric settings

Cleanup_host_metric command with default params

Fix order of host metric cleanups
2023-08-30 09:18:59 -04:00
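Per the reworked management command in the diff below, the cleanup now reads both thresholds from settings (CLEANUP_HOST_METRICS_SOFT_THRESHOLD and CLEANUP_HOST_METRICS_HARD_THRESHOLD) instead of a --months-ago argument, so the invocation is simply:

    # Soft-deletes metrics last automated before the soft threshold, then hard-deletes
    # records that were soft-deleted before the hard threshold
    awx-manage cleanup_host_metrics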
sean-m-sullivan
5ce2055431 update collection workflow example and tests 2023-08-30 09:15:54 -04:00
Alan Rominger
951bd1cc87 Re-run the updater script after upstream removal of future (#14265) 2023-08-29 15:36:42 -04:00
885 changed files with 19955 additions and 1520 deletions


@@ -0,0 +1,28 @@
name: Setup images for AWX
description: Builds new awx_devel image
inputs:
github-token:
description: GitHub Token for registry access
required: true
runs:
using: composite
steps:
- name: Get python version from Makefile
shell: bash
run: echo py_version=`make PYTHON_VERSION` >> $GITHUB_ENV
- name: Log in to registry
shell: bash
run: |
echo "${{ inputs.github-token }}" | docker login ghcr.io -u ${{ github.actor }} --password-stdin
- name: Pre-pull latest devel image to warm cache
shell: bash
run: docker pull ghcr.io/${{ github.repository_owner }}/awx_devel:${{ github.base_ref }}
- name: Build image for current source checkout
shell: bash
run: |
DEV_DOCKER_TAG_BASE=ghcr.io/${{ github.repository_owner }} \
COMPOSE_TAG=${{ github.base_ref }} \
make docker-compose-build


@@ -0,0 +1,73 @@
name: Run AWX docker-compose
description: Runs AWX with `make docker-compose`
inputs:
github-token:
description: GitHub Token to pass to awx_devel_image
required: true
build-ui:
description: Should the UI be built?
required: false
default: false
type: boolean
outputs:
ip:
description: The IP of the tools_awx_1 container
value: ${{ steps.data.outputs.ip }}
admin-token:
description: OAuth token for admin user
value: ${{ steps.data.outputs.admin_token }}
runs:
using: composite
steps:
- name: Build awx_devel image for running checks
uses: ./.github/actions/awx_devel_image
with:
github-token: ${{ inputs.github-token }}
- name: Upgrade ansible-core
shell: bash
run: python3 -m pip install --upgrade ansible-core
- name: Install system deps
shell: bash
run: sudo apt-get install -y gettext
- name: Start AWX
shell: bash
run: |
DEV_DOCKER_TAG_BASE=ghcr.io/${{ github.repository_owner }} \
COMPOSE_TAG=${{ github.base_ref }} \
COMPOSE_UP_OPTS="-d" \
make docker-compose
- name: Update default AWX password
shell: bash
run: |
while [[ "$(curl -s -o /dev/null -w ''%{http_code}'' -k https://localhost:8043/api/v2/ping/)" != "200" ]]
do
echo "Waiting for AWX..."
sleep 5
done
echo "AWX is up, updating the password..."
docker exec -i tools_awx_1 sh <<-EOSH
awx-manage update_password --username=admin --password=password
EOSH
- name: Build UI
# This must be a string comparison in composite actions:
# https://github.com/actions/runner/issues/2238
if: ${{ inputs.build-ui == 'true' }}
shell: bash
run: |
docker exec -i tools_awx_1 sh <<-EOSH
make ui-devel
EOSH
- name: Get instance data
id: data
shell: bash
run: |
AWX_IP=$(docker inspect -f '{{range.NetworkSettings.Networks}}{{.IPAddress}}{{end}}' tools_awx_1)
ADMIN_TOKEN=$(docker exec -i tools_awx_1 awx-manage create_oauth2_token --user admin)
echo "ip=$AWX_IP" >> $GITHUB_OUTPUT
echo "admin_token=$ADMIN_TOKEN" >> $GITHUB_OUTPUT


@@ -0,0 +1,19 @@
name: Upload logs
description: Upload logs from `make docker-compose` devel environment to GitHub as an artifact
inputs:
log-filename:
description: "*Unique* name of the log file"
required: true
runs:
using: composite
steps:
- name: Get AWX logs
shell: bash
run: |
docker logs tools_awx_1 > ${{ inputs.log-filename }}
- name: Upload AWX logs as artifact
uses: actions/upload-artifact@v3
with:
name: docker-compose-logs
path: ${{ inputs.log-filename }}


@@ -35,29 +35,40 @@ jobs:
- name: ui-test-general
command: make ui-test-general
steps:
- uses: actions/checkout@v2
- uses: actions/checkout@v3
- name: Build awx_devel image for running checks
uses: ./.github/actions/awx_devel_image
with:
github-token: ${{ secrets.GITHUB_TOKEN }}
- name: Run check ${{ matrix.tests.name }}
run: AWX_DOCKER_CMD='${{ matrix.tests.command }}' make github_ci_runner
run: AWX_DOCKER_CMD='${{ matrix.tests.command }}' make docker-runner
dev-env:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
- uses: actions/checkout@v3
- uses: ./.github/actions/run_awx_devel
id: awx
with:
build-ui: false
github-token: ${{ secrets.GITHUB_TOKEN }}
- name: Run smoke test
run: make github_ci_setup && ansible-playbook tools/docker-compose/ansible/smoke-test.yml -v
run: ansible-playbook tools/docker-compose/ansible/smoke-test.yml -v
awx-operator:
runs-on: ubuntu-latest
steps:
- name: Checkout awx
uses: actions/checkout@v2
uses: actions/checkout@v3
with:
path: awx
- name: Checkout awx-operator
uses: actions/checkout@v2
uses: actions/checkout@v3
with:
repository: ansible/awx-operator
path: awx-operator
@@ -67,7 +78,7 @@ jobs:
run: echo py_version=`make PYTHON_VERSION` >> $GITHUB_ENV
- name: Install python ${{ env.py_version }}
uses: actions/setup-python@v2
uses: actions/setup-python@v4
with:
python-version: ${{ env.py_version }}
@@ -102,7 +113,7 @@ jobs:
strategy:
fail-fast: false
steps:
- uses: actions/checkout@v2
- uses: actions/checkout@v3
# The containers that GitHub Actions use have Ansible installed, so upgrade to make sure we have the latest version.
- name: Upgrade ansible-core
@@ -114,3 +125,137 @@ jobs:
# needed due to cgroupsv2. This is fixed, but a stable release
# with the fix has not been made yet.
ANSIBLE_TEST_PREFER_PODMAN: 1
collection-integration:
name: awx_collection integration
runs-on: ubuntu-latest
strategy:
fail-fast: false
matrix:
target-regex:
- name: a-h
regex: ^[a-h]
- name: i-p
regex: ^[i-p]
- name: r-z0-9
regex: ^[r-z0-9]
steps:
- uses: actions/checkout@v3
- uses: ./.github/actions/run_awx_devel
id: awx
with:
build-ui: false
github-token: ${{ secrets.GITHUB_TOKEN }}
- name: Install dependencies for running tests
run: |
python3 -m pip install -e ./awxkit/
python3 -m pip install -r awx_collection/requirements.txt
- name: Run integration tests
run: |
echo "::remove-matcher owner=python::" # Disable annoying annotations from setup-python
echo '[general]' > ~/.tower_cli.cfg
echo 'host = https://${{ steps.awx.outputs.ip }}:8043' >> ~/.tower_cli.cfg
echo 'oauth_token = ${{ steps.awx.outputs.admin-token }}' >> ~/.tower_cli.cfg
echo 'verify_ssl = false' >> ~/.tower_cli.cfg
TARGETS="$(ls awx_collection/tests/integration/targets | grep '${{ matrix.target-regex.regex }}' | tr '\n' ' ')"
make COLLECTION_VERSION=100.100.100-git COLLECTION_TEST_TARGET="--coverage --requirements $TARGETS" test_collection_integration
env:
ANSIBLE_TEST_PREFER_PODMAN: 1
# Upload coverage report as artifact
- uses: actions/upload-artifact@v3
if: always()
with:
name: coverage-${{ matrix.target-regex.name }}
path: ~/.ansible/collections/ansible_collections/awx/awx/tests/output/coverage/
- uses: ./.github/actions/upload_awx_devel_logs
if: always()
with:
log-filename: collection-integration-${{ matrix.target-regex.name }}.log
collection-integration-coverage-combine:
name: combine awx_collection integration coverage
runs-on: ubuntu-latest
needs:
- collection-integration
strategy:
fail-fast: false
steps:
- uses: actions/checkout@v3
- name: Upgrade ansible-core
run: python3 -m pip install --upgrade ansible-core
- name: Download coverage artifacts
uses: actions/download-artifact@v3
with:
path: coverage
- name: Combine coverage
run: |
make COLLECTION_VERSION=100.100.100-git install_collection
mkdir -p ~/.ansible/collections/ansible_collections/awx/awx/tests/output/coverage
cd coverage
for i in coverage-*; do
cp -rv $i/* ~/.ansible/collections/ansible_collections/awx/awx/tests/output/coverage/
done
cd ~/.ansible/collections/ansible_collections/awx/awx
ansible-test coverage combine --requirements
ansible-test coverage html
echo '## AWX Collection Integration Coverage' >> $GITHUB_STEP_SUMMARY
echo '```' >> $GITHUB_STEP_SUMMARY
ansible-test coverage report >> $GITHUB_STEP_SUMMARY
echo '```' >> $GITHUB_STEP_SUMMARY
echo >> $GITHUB_STEP_SUMMARY
echo '## AWX Collection Integration Coverage HTML' >> $GITHUB_STEP_SUMMARY
echo 'Download the HTML artifacts to view the coverage report.' >> $GITHUB_STEP_SUMMARY
# This is a huge hack, there's no official action for removing artifacts currently.
# Also ACTIONS_RUNTIME_URL and ACTIONS_RUNTIME_TOKEN aren't available in normal run
# steps, so we have to use github-script to get them.
#
# The advantage of doing this, though, is that we save on artifact storage space.
- name: Get secret artifact runtime URL
uses: actions/github-script@v6
id: get-runtime-url
with:
result-encoding: string
script: |
const { ACTIONS_RUNTIME_URL } = process.env;
return ACTIONS_RUNTIME_URL;
- name: Get secret artifact runtime token
uses: actions/github-script@v6
id: get-runtime-token
with:
result-encoding: string
script: |
const { ACTIONS_RUNTIME_TOKEN } = process.env;
return ACTIONS_RUNTIME_TOKEN;
- name: Remove intermediary artifacts
env:
ACTIONS_RUNTIME_URL: ${{ steps.get-runtime-url.outputs.result }}
ACTIONS_RUNTIME_TOKEN: ${{ steps.get-runtime-token.outputs.result }}
run: |
echo "::add-mask::${ACTIONS_RUNTIME_TOKEN}"
artifacts=$(
curl -H "Authorization: Bearer $ACTIONS_RUNTIME_TOKEN" \
${ACTIONS_RUNTIME_URL}_apis/pipelines/workflows/${{ github.run_id }}/artifacts?api-version=6.0-preview \
| jq -r '.value | .[] | select(.name | startswith("coverage-")) | .url'
)
for artifact in $artifacts; do
curl -i -X DELETE -H "Accept: application/json;api-version=6.0-preview" -H "Authorization: Bearer $ACTIONS_RUNTIME_TOKEN" "$artifact"
done
- name: Upload coverage report as artifact
uses: actions/upload-artifact@v3
with:
name: awx-collection-integration-coverage-html
path: ~/.ansible/collections/ansible_collections/awx/awx/tests/output/reports/coverage


@@ -16,7 +16,7 @@ jobs:
packages: write
contents: read
steps:
- uses: actions/checkout@v2
- uses: actions/checkout@v3
- name: Get python version from Makefile
run: echo py_version=`make PYTHON_VERSION` >> $GITHUB_ENV
@@ -28,7 +28,7 @@ jobs:
OWNER: '${{ github.repository_owner }}'
- name: Install python ${{ env.py_version }}
uses: actions/setup-python@v2
uses: actions/setup-python@v4
with:
python-version: ${{ env.py_version }}

.github/workflows/docs.yml

@@ -0,0 +1,16 @@
---
name: Docsite CI
on:
pull_request:
jobs:
docsite-build:
name: docsite test build
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- name: install tox
run: pip install tox
- name: Assure docs can be built
run: tox -e docs


@@ -19,41 +19,20 @@ jobs:
job: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24]
steps:
- uses: actions/checkout@v2
- uses: actions/checkout@v3
- name: Get python version from Makefile
run: echo py_version=`make PYTHON_VERSION` >> $GITHUB_ENV
- name: Install python ${{ env.py_version }}
uses: actions/setup-python@v2
- uses: ./.github/actions/run_awx_devel
id: awx
with:
python-version: ${{ env.py_version }}
- name: Install system deps
run: sudo apt-get install -y gettext
- name: Log in to registry
run: |
echo "${{ secrets.GITHUB_TOKEN }}" | docker login ghcr.io -u ${{ github.actor }} --password-stdin
- name: Pre-pull image to warm build cache
run: |
docker pull ghcr.io/${{ github.repository_owner }}/awx_devel:${{ github.base_ref }}
- name: Build UI
run: |
DEV_DOCKER_TAG_BASE=ghcr.io/${{ github.repository_owner }} COMPOSE_TAG=${{ github.base_ref }} make ui-devel
- name: Start AWX
run: |
DEV_DOCKER_TAG_BASE=ghcr.io/${{ github.repository_owner }} COMPOSE_TAG=${{ github.base_ref }} make docker-compose &> make-docker-compose-output.log &
build-ui: true
github-token: ${{ secrets.GITHUB_TOKEN }}
- name: Pull awx_cypress_base image
run: |
docker pull quay.io/awx/awx_cypress_base:latest
- name: Checkout test project
uses: actions/checkout@v2
uses: actions/checkout@v3
with:
repository: ${{ github.repository_owner }}/tower-qa
ssh-key: ${{ secrets.QA_REPO_KEY }}
@@ -65,18 +44,6 @@ jobs:
cd ${{ secrets.E2E_PROJECT }}/ui-tests/awx-pf-tests
docker build -t awx-pf-tests .
- name: Update default AWX password
run: |
while [[ "$(curl -s -o /dev/null -w ''%{http_code}'' -k https://localhost:8043/api/v2/ping/)" != "200" ]]
do
echo "Waiting for AWX..."
sleep 5;
done
echo "AWX is up, updating the password..."
docker exec -i tools_awx_1 sh <<-EOSH
awx-manage update_password --username=admin --password=password
EOSH
- name: Run E2E tests
env:
CYPRESS_RECORD_KEY: ${{ secrets.CYPRESS_RECORD_KEY }}
@@ -86,7 +53,7 @@ jobs:
export COMMIT_INFO_SHA=$GITHUB_SHA
export COMMIT_INFO_REMOTE=$GITHUB_REPOSITORY_OWNER
cd ${{ secrets.E2E_PROJECT }}/ui-tests/awx-pf-tests
AWX_IP=$(docker inspect -f '{{range.NetworkSettings.Networks}}{{.IPAddress}}{{end}}' tools_awx_1)
AWX_IP=${{ steps.awx.outputs.ip }}
printenv > .env
echo "Executing tests:"
docker run \
@@ -102,8 +69,7 @@ jobs:
-w /e2e \
awx-pf-tests run --project .
- name: Save AWX logs
uses: actions/upload-artifact@v2
- uses: ./.github/actions/upload_awx_devel_logs
if: always()
with:
name: AWX-logs-${{ matrix.job }}
path: make-docker-compose-output.log
log-filename: e2e-${{ matrix.job }}.log


@@ -28,7 +28,7 @@ jobs:
runs-on: ubuntu-latest
name: Label Issue - Community
steps:
- uses: actions/checkout@v2
- uses: actions/checkout@v3
- uses: actions/setup-python@v4
- name: Install python requests
run: pip install requests


@@ -27,7 +27,7 @@ jobs:
runs-on: ubuntu-latest
name: Label PR - Community
steps:
- uses: actions/checkout@v2
- uses: actions/checkout@v3
- uses: actions/setup-python@v4
- name: Install python requests
run: pip install requests


@@ -17,13 +17,13 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Checkout awx
uses: actions/checkout@v2
uses: actions/checkout@v3
- name: Get python version from Makefile
run: echo py_version=`make PYTHON_VERSION` >> $GITHUB_ENV
- name: Install python ${{ env.py_version }}
uses: actions/setup-python@v2
uses: actions/setup-python@v4
with:
python-version: ${{ env.py_version }}
@@ -40,8 +40,12 @@ jobs:
if: ${{ github.repository_owner != 'ansible' }}
- name: Build collection and publish to galaxy
env:
COLLECTION_NAMESPACE: ${{ env.collection_namespace }}
COLLECTION_VERSION: ${{ github.event.release.tag_name }}
COLLECTION_TEMPLATE_VERSION: true
run: |
COLLECTION_TEMPLATE_VERSION=true COLLECTION_NAMESPACE=${{ env.collection_namespace }} make build_collection
make build_collection
if [ "$(curl --head -sw '%{http_code}' https://galaxy.ansible.com/download/${{ env.collection_namespace }}-awx-${{ github.event.release.tag_name }}.tar.gz | tail -1)" == "302" ] ; then \
echo "Galaxy release already done"; \
else \


@@ -44,7 +44,7 @@ jobs:
exit 0
- name: Checkout awx
uses: actions/checkout@v2
uses: actions/checkout@v3
with:
path: awx
@@ -52,18 +52,18 @@ jobs:
run: echo py_version=`make PYTHON_VERSION` >> $GITHUB_ENV
- name: Install python ${{ env.py_version }}
uses: actions/setup-python@v2
uses: actions/setup-python@v4
with:
python-version: ${{ env.py_version }}
- name: Checkout awx-logos
uses: actions/checkout@v2
uses: actions/checkout@v3
with:
repository: ansible/awx-logos
path: awx-logos
- name: Checkout awx-operator
uses: actions/checkout@v2
uses: actions/checkout@v3
with:
repository: ${{ github.repository_owner }}/awx-operator
path: awx-operator


@@ -17,13 +17,13 @@ jobs:
packages: write
contents: read
steps:
- uses: actions/checkout@v2
- uses: actions/checkout@v3
- name: Get python version from Makefile
run: echo py_version=`make PYTHON_VERSION` >> $GITHUB_ENV
- name: Install python ${{ env.py_version }}
uses: actions/setup-python@v2
uses: actions/setup-python@v4
with:
python-version: ${{ env.py_version }}

.gitignore

@@ -165,3 +165,7 @@ use_dev_supervisor.txt
awx/ui_next/src
awx/ui_next/build
# Docs build stuff
docs/docsite/build/
_readthedocs/

.gitleaks.toml

@@ -0,0 +1,5 @@
[allowlist]
description = "Documentation contains example secrets and passwords"
paths = [
"docs/docsite/rst/administration/oauth2_token_auth.rst",
]

.pip-tools.toml

@@ -0,0 +1,5 @@
[tool.pip-tools]
resolver = "backtracking"
allow-unsafe = true
strip-extras = true
quiet = true

.readthedocs.yaml

@@ -0,0 +1,15 @@
# Read the Docs configuration file
# See https://docs.readthedocs.io/en/stable/config-file/v2.html for details
version: 2
build:
os: ubuntu-22.04
tools:
python: >-
3.11
commands:
- pip install --user tox
- python3 -m tox -e docs
- mkdir -p _readthedocs/html/
- mv docs/docsite/build/html/* _readthedocs/html/


@@ -10,6 +10,7 @@ ignore: |
tools/docker-compose/_sources
# django template files
awx/api/templates/instance_install_bundle/**
.readthedocs.yaml
extends: default


@@ -6,6 +6,7 @@ DOCKER_COMPOSE ?= docker-compose
OFFICIAL ?= no
NODE ?= node
NPM_BIN ?= npm
KIND_BIN ?= $(shell which kind)
CHROMIUM_BIN=/tmp/chrome-linux/chrome
GIT_BRANCH ?= $(shell git rev-parse --abbrev-ref HEAD)
MANAGEMENT_COMMAND ?= awx-manage
@@ -78,7 +79,7 @@ I18N_FLAG_FILE = .i18n_built
sdist \
ui-release ui-devel \
VERSION PYTHON_VERSION docker-compose-sources \
.git/hooks/pre-commit github_ci_setup github_ci_runner
.git/hooks/pre-commit
clean-tmp:
rm -rf tmp/
@@ -323,21 +324,10 @@ test:
cd awxkit && $(VENV_BASE)/awx/bin/tox -re py3
awx-manage check_migrations --dry-run --check -n 'missing_migration_file'
## Login to Github container image registry, pull image, then build image.
github_ci_setup:
# GITHUB_ACTOR is automatic github actions env var
# CI_GITHUB_TOKEN is defined in .github files
echo $(CI_GITHUB_TOKEN) | docker login ghcr.io -u $(GITHUB_ACTOR) --password-stdin
docker pull $(DEVEL_IMAGE_NAME) || : # Pre-pull image to warm build cache
$(MAKE) docker-compose-build
## Runs AWX_DOCKER_CMD inside a new docker container.
docker-runner:
docker run -u $(shell id -u) --rm -v $(shell pwd):/awx_devel/:Z --workdir=/awx_devel $(DEVEL_IMAGE_NAME) $(AWX_DOCKER_CMD)
## Builds image and runs AWX_DOCKER_CMD in it, mainly for .github checks.
github_ci_runner: github_ci_setup docker-runner
test_collection:
rm -f $(shell ls -d $(VENV_BASE)/awx/lib/python* | head -n 1)/no-global-site-packages.txt
if [ "$(VENV_BASE)" ]; then \
@@ -383,7 +373,7 @@ test_collection_sanity:
cd $(COLLECTION_INSTALL) && ansible-test sanity $(COLLECTION_SANITY_ARGS)
test_collection_integration: install_collection
cd $(COLLECTION_INSTALL) && ansible-test integration $(COLLECTION_TEST_TARGET)
cd $(COLLECTION_INSTALL) && ansible-test integration -vvv $(COLLECTION_TEST_TARGET)
test_unit:
@if [ "$(VENV_BASE)" ]; then \
@@ -664,6 +654,9 @@ awx-kube-dev-build: Dockerfile.kube-dev
-t $(DEV_DOCKER_TAG_BASE)/awx_kube_devel:$(COMPOSE_TAG) .
kind-dev-load: awx-kube-dev-build
$(KIND_BIN) load docker-image $(DEV_DOCKER_TAG_BASE)/awx_kube_devel:$(COMPOSE_TAG)
# Translation TASKS
# --------------------------------------


@@ -1,5 +1,5 @@
[![CI](https://github.com/ansible/awx/actions/workflows/ci.yml/badge.svg?branch=devel)](https://github.com/ansible/awx/actions/workflows/ci.yml) [![Code of Conduct](https://img.shields.io/badge/code%20of%20conduct-Ansible-yellow.svg)](https://docs.ansible.com/ansible/latest/community/code_of_conduct.html) [![Apache v2 License](https://img.shields.io/badge/license-Apache%202.0-brightgreen.svg)](https://github.com/ansible/awx/blob/devel/LICENSE.md) [![AWX Mailing List](https://img.shields.io/badge/mailing%20list-AWX-orange.svg)](https://groups.google.com/g/awx-project)
[![IRC Chat - #ansible-awx](https://img.shields.io/badge/IRC-%23ansible--awx-blueviolet.svg)](https://libera.chat)
[![Ansible Matrix](https://img.shields.io/badge/matrix-Ansible%20Community-blueviolet.svg?logo=matrix)](https://chat.ansible.im/#/welcome) [![Ansible Discourse](https://img.shields.io/badge/discourse-Ansible%20Community-yellowgreen.svg?logo=discourse)](https://forum.ansible.com)
<img src="https://raw.githubusercontent.com/ansible/awx-logos/master/awx/ui/client/assets/logo-login.svg?sanitize=true" width=200 alt="AWX" />
@@ -30,12 +30,12 @@ If you're experiencing a problem that you feel is a bug in AWX or have ideas for
Code of Conduct
---------------
We ask all of our community members and contributors to adhere to the [Ansible code of conduct](http://docs.ansible.com/ansible/latest/community/code_of_conduct.html). If you have questions or need assistance, please reach out to our community team at [codeofconduct@ansible.com](mailto:codeofconduct@ansible.com)
We ask all of our community members and contributors to adhere to the [Ansible code of conduct](http://docs.ansible.com/ansible/latest/community/code_of_conduct.html). If you have questions or need assistance, please reach out to our community team at [codeofconduct@ansible.com](mailto:codeofconduct@ansible.com)
Get Involved
------------
We welcome your feedback and ideas. Here's how to reach us with feedback and questions:
- Join the `#ansible-awx` channel on irc.libera.chat
- Join the [mailing list](https://groups.google.com/forum/#!forum/awx-project)
- Join the [Ansible AWX channel on Matrix](https://matrix.to/#/#awx:ansible.com)
- Join the [Ansible Community Forum](https://forum.ansible.com)


@@ -52,39 +52,14 @@ try:
except ImportError: # pragma: no cover
MODE = 'production'
import hashlib
try:
import django # noqa: F401
HAS_DJANGO = True
except ImportError:
HAS_DJANGO = False
pass
else:
from django.db.backends.base import schema
from django.db.models import indexes
from django.db.backends.utils import names_digest
from django.db import connection
if HAS_DJANGO is True:
# See upgrade blocker note in requirements/README.md
try:
names_digest('foo', 'bar', 'baz', length=8)
except ValueError:
def names_digest(*args, length):
"""
Generate a 32-bit digest of a set of arguments that can be used to shorten
identifying names. Support for use in FIPS environments.
"""
h = hashlib.md5(usedforsecurity=False)
for arg in args:
h.update(arg.encode())
return h.hexdigest()[:length]
schema.names_digest = names_digest
indexes.names_digest = names_digest
def find_commands(management_dir):
# Modified version of function from django/core/management/__init__.py.


@@ -3233,7 +3233,7 @@ class JobTemplateSerializer(JobTemplateMixin, UnifiedJobTemplateSerializer, JobO
if get_field_from_model_or_attrs('host_config_key') and not inventory:
raise serializers.ValidationError({'host_config_key': _("Cannot enable provisioning callback without an inventory set.")})
prompting_error_message = _("Must either set a default value or ask to prompt on launch.")
prompting_error_message = _("You must either set a default value or ask to prompt on launch.")
if project is None:
raise serializers.ValidationError({'project': _("Job Templates must have a project assigned.")})
elif inventory is None and not get_field_from_model_or_attrs('ask_inventory_on_launch'):


@@ -418,6 +418,10 @@ class SettingsWrapper(UserSettingsHolder):
"""Get value while accepting the in-memory cache if key is available"""
with _ctit_db_wrapper(trans_safe=True):
return self._get_local(name)
# If the last line did not return, that means we hit a database error
# in that case, we should not have a local cache value
# thus, return empty as a signal to use the default
return empty
def __getattr__(self, name):
value = empty


@@ -13,6 +13,7 @@ from unittest import mock
from django.conf import LazySettings
from django.core.cache.backends.locmem import LocMemCache
from django.core.exceptions import ImproperlyConfigured
from django.db.utils import Error as DBError
from django.utils.translation import gettext_lazy as _
import pytest
@@ -331,3 +332,18 @@ def test_in_memory_cache_works(settings):
with mock.patch.object(settings, '_get_local') as mock_get:
assert settings.AWX_VAR == 'DEFAULT'
mock_get.assert_not_called()
@pytest.mark.defined_in_file(AWX_VAR=[])
def test_getattr_with_database_error(settings):
"""
If a setting is defined via the registry and has a null-ish default which is not None
then referencing that setting during a database outage should give that default
this is regression testing for a bug where it would return None
"""
settings.registry.register('AWX_VAR', field_class=fields.StringListField, default=[], category=_('System'), category_slug='system')
settings._awx_conf_memoizedcache.clear()
with mock.patch('django.db.backends.base.base.BaseDatabaseWrapper.ensure_connection') as mock_ensure:
mock_ensure.side_effect = DBError('for test')
assert settings.AWX_VAR == []


@@ -4,6 +4,8 @@ from urllib.parse import urljoin, quote
from django.utils.translation import gettext_lazy as _
import requests
import base64
import binascii
conjur_inputs = {
@@ -50,6 +52,13 @@ conjur_inputs = {
}
def _is_base64(s: str) -> bool:
try:
return base64.b64encode(base64.b64decode(s.encode("utf-8"))) == s.encode("utf-8")
except binascii.Error:
return False
def conjur_backend(**kwargs):
url = kwargs['url']
api_key = kwargs['api_key']
@@ -77,7 +86,7 @@ def conjur_backend(**kwargs):
token = resp.content.decode('utf-8')
lookup_kwargs = {
'headers': {'Authorization': 'Token token="{}"'.format(token)},
'headers': {'Authorization': 'Token token="{}"'.format(token if _is_base64(token) else base64.b64encode(token.encode('utf-8')).decode('utf-8'))},
'allow_redirects': False,
}


@@ -2,7 +2,11 @@ from .plugin import CredentialPlugin
from django.conf import settings
from django.utils.translation import gettext_lazy as _
from thycotic.secrets.vault import SecretsVault
try:
from delinea.secrets.vault import SecretsVault
except ImportError:
from thycotic.secrets.vault import SecretsVault
dsv_inputs = {


@@ -54,7 +54,9 @@ tss_inputs = {
def tss_backend(**kwargs):
if kwargs.get("domain"):
authorizer = DomainPasswordGrantAuthorizer(kwargs['server_url'], kwargs['username'], kwargs['password'], kwargs['domain'])
authorizer = DomainPasswordGrantAuthorizer(
base_url=kwargs['server_url'], username=kwargs['username'], domain=kwargs['domain'], password=kwargs['password']
)
else:
authorizer = PasswordGrantAuthorizer(kwargs['server_url'], kwargs['username'], kwargs['password'])
secret_server = SecretServer(kwargs['server_url'], authorizer)


@@ -24,6 +24,9 @@ class Command(BaseCommand):
def add_arguments(self, parser):
parser.add_argument('--days', dest='days', type=int, default=90, metavar='N', help='Remove activity stream events more than N days old')
parser.add_argument('--dry-run', dest='dry_run', action='store_true', default=False, help='Dry run mode (show items that would be removed)')
parser.add_argument(
'--batch-size', dest='batch_size', type=int, default=500, metavar='X', help='Remove activity stream events in batch of X events. Defaults to 500.'
)
def init_logging(self):
log_levels = dict(enumerate([logging.ERROR, logging.INFO, logging.DEBUG, 0]))
@@ -48,7 +51,7 @@ class Command(BaseCommand):
else:
pks_to_delete.add(asobj.pk)
# Cleanup objects in batches instead of deleting each one individually.
if len(pks_to_delete) >= 500:
if len(pks_to_delete) >= self.batch_size:
ActivityStream.objects.filter(pk__in=pks_to_delete).delete()
n_deleted_items += len(pks_to_delete)
pks_to_delete.clear()
@@ -63,4 +66,5 @@ class Command(BaseCommand):
self.days = int(options.get('days', 30))
self.cutoff = now() - datetime.timedelta(days=self.days)
self.dry_run = bool(options.get('dry_run', False))
self.batch_size = int(options.get('batch_size', 500))
self.cleanup_activitystream()


@@ -1,22 +1,22 @@
from awx.main.models import HostMetric
from django.core.management.base import BaseCommand
from django.conf import settings
from awx.main.tasks.host_metrics import HostMetricTask
class Command(BaseCommand):
"""
Run soft-deleting of HostMetrics
This command provides cleanup task for HostMetric model.
There are two modes, which run in following order:
- soft cleanup
- - Perform soft-deletion of all host metrics last automated 12 months ago or before.
This is the same as issuing a DELETE request to /api/v2/host_metrics/N/ for all host metrics that match the criteria.
- - updates columns delete, deleted_counter and last_deleted
- hard cleanup
- - Permanently erase from the database all host metrics last automated 36 months ago or before.
This operation happens after the soft deletion has finished.
"""
help = 'Run soft-deleting of HostMetrics'
def add_arguments(self, parser):
parser.add_argument('--months-ago', type=int, dest='months-ago', action='store', help='Threshold in months for soft-deleting')
help = 'Run soft and hard-deletion of HostMetrics'
def handle(self, *args, **options):
months_ago = options.get('months-ago') or None
if not months_ago:
months_ago = getattr(settings, 'CLEANUP_HOST_METRICS_SOFT_THRESHOLD', 12)
HostMetric.cleanup_task(months_ago)
HostMetricTask().cleanup(soft_threshold=settings.CLEANUP_HOST_METRICS_SOFT_THRESHOLD, hard_threshold=settings.CLEANUP_HOST_METRICS_HARD_THRESHOLD)


@@ -9,6 +9,7 @@ import re
# Django
from django.apps import apps
from django.core.management.base import BaseCommand, CommandError
from django.db import transaction, connection
from django.db.models import Min, Max
@@ -150,6 +151,9 @@ class Command(BaseCommand):
def add_arguments(self, parser):
parser.add_argument('--days', dest='days', type=int, default=90, metavar='N', help='Remove jobs/updates executed more than N days ago. Defaults to 90.')
parser.add_argument('--dry-run', dest='dry_run', action='store_true', default=False, help='Dry run mode (show items that would be removed)')
parser.add_argument(
'--batch-size', dest='batch_size', type=int, default=100000, metavar='X', help='Remove jobs in batch of X jobs. Defaults to 100000.'
)
parser.add_argument('--jobs', dest='only_jobs', action='store_true', default=False, help='Remove jobs')
parser.add_argument('--ad-hoc-commands', dest='only_ad_hoc_commands', action='store_true', default=False, help='Remove ad hoc commands')
parser.add_argument('--project-updates', dest='only_project_updates', action='store_true', default=False, help='Remove project updates')
@@ -195,18 +199,58 @@ class Command(BaseCommand):
delete_meta.delete_jobs()
return (delete_meta.jobs_no_delete_count, delete_meta.jobs_to_delete_count)
def _cascade_delete_job_events(self, model, pk_list):
def has_unpartitioned_table(self, model):
tblname = unified_job_class_to_event_table_name(model)
with connection.cursor() as cursor:
cursor.execute(f"SELECT 1 FROM pg_tables WHERE tablename = '_unpartitioned_{tblname}';")
row = cursor.fetchone()
if row is None:
return False
return True
def _delete_unpartitioned_table(self, model):
"If the unpartitioned table is no longer necessary, it will drop the table"
tblname = unified_job_class_to_event_table_name(model)
if not self.has_unpartitioned_table(model):
self.logger.debug(f'Table _unpartitioned_{tblname} does not exist, you are fully migrated.')
return
with connection.cursor() as cursor:
# same as UnpartitionedJobEvent.objects.aggregate(Max('created'))
cursor.execute(f'SELECT MAX("_unpartitioned_{tblname}"."created") FROM "_unpartitioned_{tblname}";')
row = cursor.fetchone()
last_created = row[0]
if last_created:
self.logger.info(f'Last event created in _unpartitioned_{tblname} was {last_created.isoformat()}')
else:
self.logger.info(f'Table _unpartitioned_{tblname} has no events in it')
if (last_created is None) or (last_created < self.cutoff):
self.logger.warning(
f'Dropping table _unpartitioned_{tblname} since no records are newer than {self.cutoff}\n'
'WARNING - this will happen in a separate transaction so a failure will not roll back prior cleanup'
)
with connection.cursor() as cursor:
cursor.execute(f'DROP TABLE _unpartitioned_{tblname};')
def _delete_unpartitioned_events(self, model, pk_list):
"If unpartitioned job events remain, it will cascade those from jobs in pk_list"
tblname = unified_job_class_to_event_table_name(model)
rel_name = model().event_parent_key
# Bail if the unpartitioned table does not exist anymore
if not self.has_unpartitioned_table(model):
return
# Table still exists, delete individual unpartitioned events
if pk_list:
with connection.cursor() as cursor:
tblname = unified_job_class_to_event_table_name(model)
self.logger.debug(f'Deleting {len(pk_list)} events from _unpartitioned_{tblname}, use a longer cleanup window to delete the table.')
pk_list_csv = ','.join(map(str, pk_list))
rel_name = model().event_parent_key
cursor.execute(f"DELETE FROM _unpartitioned_{tblname} WHERE {rel_name} IN ({pk_list_csv})")
cursor.execute(f"DELETE FROM _unpartitioned_{tblname} WHERE {rel_name} IN ({pk_list_csv});")
def cleanup_jobs(self):
batch_size = 100000
# Hack to avoid doing N+1 queries as each item in the Job query set does
# an individual query to get the underlying UnifiedJob.
Job.polymorphic_super_sub_accessors_replaced = True
@@ -221,13 +265,14 @@ class Command(BaseCommand):
deleted = 0
info = qs.aggregate(min=Min('id'), max=Max('id'))
if info['min'] is not None:
for start in range(info['min'], info['max'] + 1, batch_size):
qs_batch = qs.filter(id__gte=start, id__lte=start + batch_size)
for start in range(info['min'], info['max'] + 1, self.batch_size):
qs_batch = qs.filter(id__gte=start, id__lte=start + self.batch_size)
pk_list = qs_batch.values_list('id', flat=True)
_, results = qs_batch.delete()
deleted += results['main.Job']
self._cascade_delete_job_events(Job, pk_list)
# Avoid dropping the job event table in case we have interacted with it already
self._delete_unpartitioned_events(Job, pk_list)
return skipped, deleted
@@ -250,7 +295,7 @@ class Command(BaseCommand):
deleted += 1
if not self.dry_run:
self._cascade_delete_job_events(AdHocCommand, pk_list)
self._delete_unpartitioned_events(AdHocCommand, pk_list)
skipped += AdHocCommand.objects.filter(created__gte=self.cutoff).count()
return skipped, deleted
@@ -278,7 +323,7 @@ class Command(BaseCommand):
deleted += 1
if not self.dry_run:
self._cascade_delete_job_events(ProjectUpdate, pk_list)
self._delete_unpartitioned_events(ProjectUpdate, pk_list)
skipped += ProjectUpdate.objects.filter(created__gte=self.cutoff).count()
return skipped, deleted
@@ -306,7 +351,7 @@ class Command(BaseCommand):
deleted += 1
if not self.dry_run:
self._cascade_delete_job_events(InventoryUpdate, pk_list)
self._delete_unpartitioned_events(InventoryUpdate, pk_list)
skipped += InventoryUpdate.objects.filter(created__gte=self.cutoff).count()
return skipped, deleted
@@ -330,7 +375,7 @@ class Command(BaseCommand):
deleted += 1
if not self.dry_run:
self._cascade_delete_job_events(SystemJob, pk_list)
self._delete_unpartitioned_events(SystemJob, pk_list)
skipped += SystemJob.objects.filter(created__gte=self.cutoff).count()
return skipped, deleted
@@ -375,12 +420,12 @@ class Command(BaseCommand):
skipped += Notification.objects.filter(created__gte=self.cutoff).count()
return skipped, deleted
@transaction.atomic
def handle(self, *args, **options):
self.verbosity = int(options.get('verbosity', 1))
self.init_logging()
self.days = int(options.get('days', 90))
self.dry_run = bool(options.get('dry_run', False))
self.batch_size = int(options.get('batch_size', 100000))
try:
self.cutoff = now() - datetime.timedelta(days=self.days)
except OverflowError:
@@ -402,19 +447,29 @@ class Command(BaseCommand):
del s.receivers[:]
s.sender_receivers_cache.clear()
for m in model_names:
if m not in models_to_cleanup:
continue
with transaction.atomic():
for m in models_to_cleanup:
skipped, deleted = getattr(self, 'cleanup_%s' % m)()
skipped, deleted = getattr(self, 'cleanup_%s' % m)()
func = getattr(self, 'cleanup_%s_partition' % m, None)
if func:
skipped_partition, deleted_partition = func()
skipped += skipped_partition
deleted += deleted_partition
func = getattr(self, 'cleanup_%s_partition' % m, None)
if func:
skipped_partition, deleted_partition = func()
skipped += skipped_partition
deleted += deleted_partition
if self.dry_run:
self.logger.log(99, '%s: %d would be deleted, %d would be skipped.', m.replace('_', ' '), deleted, skipped)
else:
self.logger.log(99, '%s: %d deleted, %d skipped.', m.replace('_', ' '), deleted, skipped)
if self.dry_run:
self.logger.log(99, '%s: %d would be deleted, %d would be skipped.', m.replace('_', ' '), deleted, skipped)
else:
self.logger.log(99, '%s: %d deleted, %d skipped.', m.replace('_', ' '), deleted, skipped)
# Deleting unpartitioned tables cannot be done in same transaction as updates to related tables
if not self.dry_run:
with transaction.atomic():
for m in models_to_cleanup:
unified_job_class_name = m[:-1].title().replace('Management', 'System').replace('_', '')
unified_job_class = apps.get_model('main', unified_job_class_name)
try:
unified_job_class().event_class
except (NotImplementedError, AttributeError):
continue # no need to run this for models without events
self._delete_unpartitioned_table(unified_job_class)


@@ -125,14 +125,15 @@ class InstanceManager(models.Manager):
with advisory_lock('instance_registration_%s' % hostname):
if settings.AWX_AUTO_DEPROVISION_INSTANCES:
# detect any instances with the same IP address.
# if one exists, set it to None
inst_conflicting_ip = self.filter(ip_address=ip_address).exclude(hostname=hostname)
if inst_conflicting_ip.exists():
for other_inst in inst_conflicting_ip:
other_hostname = other_inst.hostname
other_inst.ip_address = None
other_inst.save(update_fields=['ip_address'])
logger.warning("IP address {0} conflict detected, ip address unset for host {1}.".format(ip_address, other_hostname))
# if one exists, set it to ""
if ip_address:
inst_conflicting_ip = self.filter(ip_address=ip_address).exclude(hostname=hostname)
if inst_conflicting_ip.exists():
for other_inst in inst_conflicting_ip:
other_hostname = other_inst.hostname
other_inst.ip_address = ""
other_inst.save(update_fields=['ip_address'])
logger.warning("IP address {0} conflict detected, ip address unset for host {1}.".format(ip_address, other_hostname))
# Return existing instance that matches hostname or UUID (default to UUID)
if node_uuid is not None and node_uuid != UUID_DEFAULT and self.filter(uuid=node_uuid).exists():


@@ -289,7 +289,10 @@ class Instance(HasPolicyEditsMixin, BaseModel):
if update_last_seen:
update_fields += ['last_seen']
if perform_save:
self.save(update_fields=update_fields)
from awx.main.signals import disable_activity_stream
with disable_activity_stream():
self.save(update_fields=update_fields)
return update_fields
def set_capacity_value(self):
@@ -309,8 +312,8 @@ class Instance(HasPolicyEditsMixin, BaseModel):
self.cpu_capacity = 0
self.mem_capacity = 0 # formula has a non-zero offset, so we make sure it is 0 for hop nodes
else:
self.cpu_capacity = get_cpu_effective_capacity(self.cpu)
self.mem_capacity = get_mem_effective_capacity(self.memory)
self.cpu_capacity = get_cpu_effective_capacity(self.cpu, is_control_node=bool(self.node_type in (Instance.Types.CONTROL, Instance.Types.HYBRID)))
self.mem_capacity = get_mem_effective_capacity(self.memory, is_control_node=bool(self.node_type in (Instance.Types.CONTROL, Instance.Types.HYBRID)))
self.set_capacity_value()
def save_health_data(self, version=None, cpu=0, memory=0, uuid=None, update_last_seen=False, errors=''):
@@ -333,12 +336,17 @@ class Instance(HasPolicyEditsMixin, BaseModel):
self.version = version
update_fields.append('version')
new_cpu = get_corrected_cpu(cpu)
if self.node_type == Instance.Types.EXECUTION:
new_cpu = cpu
new_memory = memory
else:
new_cpu = get_corrected_cpu(cpu)
new_memory = get_corrected_memory(memory)
if new_cpu != self.cpu:
self.cpu = new_cpu
update_fields.append('cpu')
new_memory = get_corrected_memory(memory)
if new_memory != self.memory:
self.memory = new_memory
update_fields.append('memory')


@@ -10,7 +10,6 @@ import copy
import os.path
from urllib.parse import urljoin
import dateutil.relativedelta
import yaml
# Django
@@ -890,23 +889,6 @@ class HostMetric(models.Model):
self.deleted = False
self.save(update_fields=['deleted'])
@classmethod
def cleanup_task(cls, months_ago):
try:
months_ago = int(months_ago)
if months_ago <= 0:
raise ValueError()
last_automation_before = now() - dateutil.relativedelta.relativedelta(months=months_ago)
logger.info(f'cleanup_host_metrics: soft-deleting records last automated before {last_automation_before}')
HostMetric.active_objects.filter(last_automation__lt=last_automation_before).update(
deleted=True, deleted_counter=models.F('deleted_counter') + 1, last_deleted=now()
)
settings.CLEANUP_HOST_METRICS_LAST_TS = now()
except (TypeError, ValueError):
logger.error(f"cleanup_host_metrics: months_ago({months_ago}) has to be a positive integer value")
class HostMetricSummaryMonthly(models.Model):
"""


@@ -124,6 +124,13 @@ class TaskBase:
self.record_aggregate_metrics()
sys.exit(1)
def get_local_metrics(self):
data = {}
for k, metric in self.subsystem_metrics.METRICS.items():
if k.startswith(self.prefix) and metric.metric_has_changed:
data[k[len(self.prefix) + 1 :]] = metric.current_value
return data
def schedule(self):
# Always be able to restore the original signal handler if we finish
original_sigusr1 = signal.getsignal(signal.SIGUSR1)
@@ -146,10 +153,14 @@ class TaskBase:
signal.signal(signal.SIGUSR1, original_sigusr1)
commit_start = time.time()
logger.debug(f"Commiting {self.prefix} Scheduler changes")
if self.prefix == "task_manager":
self.subsystem_metrics.set(f"{self.prefix}_commit_seconds", time.time() - commit_start)
local_metrics = self.get_local_metrics()
self.record_aggregate_metrics()
logger.debug(f"Finishing {self.prefix} Scheduler")
logger.debug(f"Finished {self.prefix} Scheduler, timing data:\n{local_metrics}")
class WorkflowManager(TaskBase):


@@ -208,9 +208,10 @@ class RunnerCallback:
# We opened a connection just for that save, close it here now
connections.close_all()
elif status_data['status'] == 'error':
result_traceback = status_data.get('result_traceback', None)
if result_traceback:
self.delay_update(result_traceback=result_traceback)
for field_name in ('result_traceback', 'job_explanation'):
field_value = status_data.get(field_name, None)
if field_value:
self.delay_update(**{field_name: field_value})
def artifacts_handler(self, artifact_dir):
self.artifacts_processed = True

awx/main/tasks/helpers.py

@@ -0,0 +1,10 @@
from django.utils.timezone import now
from rest_framework.fields import DateTimeField
def is_run_threshold_reached(setting, threshold_seconds):
last_time = DateTimeField().to_internal_value(setting) if setting else None
if not last_time:
return True
else:
return (now() - last_time).total_seconds() > threshold_seconds


@@ -3,33 +3,90 @@ from dateutil.relativedelta import relativedelta
import logging
from django.conf import settings
from django.db.models import Count
from django.db.models import Count, F
from django.db.models.functions import TruncMonth
from django.utils.timezone import now
from rest_framework.fields import DateTimeField
from awx.main.dispatch import get_task_queuename
from awx.main.dispatch.publish import task
from awx.main.models.inventory import HostMetric, HostMetricSummaryMonthly
from awx.main.tasks.helpers import is_run_threshold_reached
from awx.conf.license import get_license
logger = logging.getLogger('awx.main.tasks.host_metric_summary_monthly')
logger = logging.getLogger('awx.main.tasks.host_metrics')
@task(queue=get_task_queuename)
def cleanup_host_metrics():
if is_run_threshold_reached(getattr(settings, 'CLEANUP_HOST_METRICS_LAST_TS', None), getattr(settings, 'CLEANUP_HOST_METRICS_INTERVAL', 30) * 86400):
logger.info(f"Executing cleanup_host_metrics, last ran at {getattr(settings, 'CLEANUP_HOST_METRICS_LAST_TS', '---')}")
HostMetricTask().cleanup(
soft_threshold=getattr(settings, 'CLEANUP_HOST_METRICS_SOFT_THRESHOLD', 12),
hard_threshold=getattr(settings, 'CLEANUP_HOST_METRICS_HARD_THRESHOLD', 36),
)
logger.info("Finished cleanup_host_metrics")
@task(queue=get_task_queuename)
def host_metric_summary_monthly():
"""Run cleanup host metrics summary monthly task each week"""
if _is_run_threshold_reached(
getattr(settings, 'HOST_METRIC_SUMMARY_TASK_LAST_TS', None), getattr(settings, 'HOST_METRIC_SUMMARY_TASK_INTERVAL', 7) * 86400
):
if is_run_threshold_reached(getattr(settings, 'HOST_METRIC_SUMMARY_TASK_LAST_TS', None), getattr(settings, 'HOST_METRIC_SUMMARY_TASK_INTERVAL', 7) * 86400):
logger.info(f"Executing host_metric_summary_monthly, last ran at {getattr(settings, 'HOST_METRIC_SUMMARY_TASK_LAST_TS', '---')}")
HostMetricSummaryMonthlyTask().execute()
logger.info("Finished host_metric_summary_monthly")
def _is_run_threshold_reached(setting, threshold_seconds):
last_time = DateTimeField().to_internal_value(setting) if setting else DateTimeField().to_internal_value('1970-01-01')
class HostMetricTask:
"""
This class provides the cleanup task for the HostMetric model.
There are two modes:
- soft cleanup (updates columns deleted, deleted_counter and last_deleted)
- hard cleanup (deletes from the db)
"""
return (now() - last_time).total_seconds() > threshold_seconds
def cleanup(self, soft_threshold=None, hard_threshold=None):
"""
Main entrypoint, runs either soft cleanup, hard cleanup or both
:param soft_threshold: (int)
:param hard_threshold: (int)
"""
if hard_threshold is not None:
self.hard_cleanup(hard_threshold)
if soft_threshold is not None:
self.soft_cleanup(soft_threshold)
settings.CLEANUP_HOST_METRICS_LAST_TS = now()
@staticmethod
def soft_cleanup(threshold=None):
if threshold is None:
threshold = getattr(settings, 'CLEANUP_HOST_METRICS_SOFT_THRESHOLD', 12)
try:
threshold = int(threshold)
except (ValueError, TypeError) as e:
raise type(e)("soft_threshold has to be convertible to number") from e
last_automation_before = now() - relativedelta(months=threshold)
rows = HostMetric.active_objects.filter(last_automation__lt=last_automation_before).update(
deleted=True, deleted_counter=F('deleted_counter') + 1, last_deleted=now()
)
logger.info(f'cleanup_host_metrics: soft-deleted records last automated before {last_automation_before}, affected rows: {rows}')
@staticmethod
def hard_cleanup(threshold=None):
if threshold is None:
threshold = getattr(settings, 'CLEANUP_HOST_METRICS_HARD_THRESHOLD', 36)
try:
threshold = int(threshold)
except (ValueError, TypeError) as e:
raise type(e)("hard_threshold has to be convertible to number") from e
last_deleted_before = now() - relativedelta(months=threshold)
queryset = HostMetric.objects.filter(deleted=True, last_deleted__lt=last_deleted_before)
rows = queryset.delete()
logger.info(f'cleanup_host_metrics: hard-deleted records which were soft deleted before {last_deleted_before}, affected rows: {rows[0]}')
class HostMetricSummaryMonthlyTask:
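
As a usage sketch (not part of the diff), the new class can also be driven directly, for example from a Django shell; cleanup() runs the hard pass before the soft pass and then stamps CLEANUP_HOST_METRICS_LAST_TS:

from awx.main.tasks.host_metrics import HostMetricTask

# Hard-delete rows that were soft-deleted more than 36 months ago, then
# soft-delete metrics whose last automation is older than 12 months.
HostMetricTask().cleanup(soft_threshold=12, hard_threshold=36)

# Each mode can also be invoked on its own:
HostMetricTask().soft_cleanup(12)
HostMetricTask().hard_cleanup(36)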

View File

@@ -1873,6 +1873,8 @@ class RunSystemJob(BaseTask):
if system_job.job_type in ('cleanup_jobs', 'cleanup_activitystream'):
if 'days' in json_vars:
args.extend(['--days', str(json_vars.get('days', 60))])
if 'batch_size' in json_vars:
args.extend(['--batch-size', str(json_vars['batch_size'])])
if 'dry_run' in json_vars and json_vars['dry_run']:
args.extend(['--dry-run'])
if system_job.job_type == 'cleanup_jobs':

View File

@@ -432,16 +432,16 @@ class AWXReceptorJob:
# massive, only ask for last 1000 bytes
startpos = max(stdout_size - 1000, 0)
resultsock, resultfile = receptor_ctl.get_work_results(self.unit_id, startpos=startpos, return_socket=True, return_sockfile=True)
resultsock.setblocking(False) # this makes resultfile reads non blocking
lines = resultfile.readlines()
receptor_output = b"".join(lines).decode()
if receptor_output:
self.task.runner_callback.delay_update(result_traceback=receptor_output)
self.task.runner_callback.delay_update(result_traceback=f'Worker output:\n{receptor_output}')
elif detail:
self.task.runner_callback.delay_update(result_traceback=detail)
self.task.runner_callback.delay_update(result_traceback=f'Receptor detail:\n{detail}')
else:
logger.warning(f'No result details or output from {self.task.instance.log_format}, status:\n{state_name}')
except Exception:
logger.exception(f'Work results error from job id={self.task.instance.id} work_unit={self.task.instance.work_unit_id}')
raise RuntimeError(detail)
return res

View File

@@ -48,7 +48,6 @@ from awx.main.models import (
Inventory,
SmartInventoryMembership,
Job,
HostMetric,
convert_jsonfields,
)
from awx.main.constants import ACTIVE_STATES
@@ -64,6 +63,7 @@ from awx.main.utils.common import (
from awx.main.utils.reload import stop_local_services
from awx.main.utils.pglock import advisory_lock
from awx.main.tasks.helpers import is_run_threshold_reached
from awx.main.tasks.receptor import get_receptor_ctl, worker_info, worker_cleanup, administrative_workunit_reaper, write_receptor_config
from awx.main.consumers import emit_channel_notification
from awx.main import analytics
@@ -368,9 +368,7 @@ def send_notifications(notification_list, job_id=None):
@task(queue=get_task_queuename)
def gather_analytics():
from awx.conf.models import Setting
if is_run_threshold_reached(Setting.objects.filter(key='AUTOMATION_ANALYTICS_LAST_GATHER').first(), settings.AUTOMATION_ANALYTICS_GATHER_INTERVAL):
if is_run_threshold_reached(getattr(settings, 'AUTOMATION_ANALYTICS_LAST_GATHER', None), settings.AUTOMATION_ANALYTICS_GATHER_INTERVAL):
analytics.gather()
@@ -427,29 +425,6 @@ def cleanup_images_and_files():
_cleanup_images_and_files()
@task(queue=get_task_queuename)
def cleanup_host_metrics():
"""Run cleanup host metrics ~each month"""
# TODO: move whole method to host_metrics in follow-up PR
from awx.conf.models import Setting
if is_run_threshold_reached(
Setting.objects.filter(key='CLEANUP_HOST_METRICS_LAST_TS').first(), getattr(settings, 'CLEANUP_HOST_METRICS_INTERVAL', 30) * 86400
):
months_ago = getattr(settings, 'CLEANUP_HOST_METRICS_SOFT_THRESHOLD', 12)
logger.info("Executing cleanup_host_metrics")
HostMetric.cleanup_task(months_ago)
logger.info("Finished cleanup_host_metrics")
def is_run_threshold_reached(setting, threshold_seconds):
from rest_framework.fields import DateTimeField
last_time = DateTimeField().to_internal_value(setting.value) if setting and setting.value else DateTimeField().to_internal_value('1970-01-01')
return (now() - last_time).total_seconds() > threshold_seconds
@task(queue=get_task_queuename)
def cluster_node_health_check(node):
"""
@@ -491,7 +466,6 @@ def execution_node_health_check(node):
data = worker_info(node)
prior_capacity = instance.capacity
instance.save_health_data(
version='ansible-runner-' + data.get('runner_version', '???'),
cpu=data.get('cpu_count', 0),
@@ -789,7 +763,6 @@ def awx_periodic_scheduler():
new_unified_job.save(update_fields=['status', 'job_explanation'])
new_unified_job.websocket_emit_status("failed")
emit_channel_notification('schedules-changed', dict(id=schedule.id, group_name="schedules"))
state.save()
def schedule_manager_success_or_error(instance):

View File

@@ -0,0 +1,78 @@
import pytest
from awx.main.tasks.host_metrics import HostMetricTask
from awx.main.models.inventory import HostMetric
from awx.main.tests.factories.fixtures import mk_host_metric
from dateutil.relativedelta import relativedelta
from django.conf import settings
from django.utils import timezone
@pytest.mark.django_db
def test_no_host_metrics():
"""No-crash test"""
assert HostMetric.objects.count() == 0
HostMetricTask().cleanup(soft_threshold=0, hard_threshold=0)
HostMetricTask().cleanup(soft_threshold=24, hard_threshold=42)
assert HostMetric.objects.count() == 0
@pytest.mark.django_db
def test_delete_exception():
"""Crash test"""
with pytest.raises(ValueError):
HostMetricTask().soft_cleanup("")
with pytest.raises(TypeError):
HostMetricTask().hard_cleanup(set())
@pytest.mark.django_db
@pytest.mark.parametrize('threshold', [settings.CLEANUP_HOST_METRICS_SOFT_THRESHOLD, 20])
def test_soft_delete(threshold):
"""Metrics with last_automation < threshold are updated to deleted=True"""
mk_host_metric('host_1', first_automation=ago(months=1), last_automation=ago(months=1), deleted=False)
mk_host_metric('host_2', first_automation=ago(months=1), last_automation=ago(months=1), deleted=True)
mk_host_metric('host_3', first_automation=ago(months=1), last_automation=ago(months=threshold, hours=-1), deleted=False)
mk_host_metric('host_4', first_automation=ago(months=1), last_automation=ago(months=threshold, hours=-1), deleted=True)
mk_host_metric('host_5', first_automation=ago(months=1), last_automation=ago(months=threshold, hours=1), deleted=False)
mk_host_metric('host_6', first_automation=ago(months=1), last_automation=ago(months=threshold, hours=1), deleted=True)
mk_host_metric('host_7', first_automation=ago(months=1), last_automation=ago(months=42), deleted=False)
mk_host_metric('host_8', first_automation=ago(months=1), last_automation=ago(months=42), deleted=True)
assert HostMetric.objects.count() == 8
assert HostMetric.active_objects.count() == 4
for i in range(2):
HostMetricTask().cleanup(soft_threshold=threshold)
assert HostMetric.objects.count() == 8
hostnames = set(HostMetric.objects.filter(deleted=False).order_by('hostname').values_list('hostname', flat=True))
assert hostnames == {'host_1', 'host_3'}
@pytest.mark.django_db
@pytest.mark.parametrize('threshold', [settings.CLEANUP_HOST_METRICS_HARD_THRESHOLD, 20])
def test_hard_delete(threshold):
"""Metrics with last_deleted < threshold and deleted=True are deleted from the db"""
mk_host_metric('host_1', first_automation=ago(months=1), last_deleted=ago(months=1), deleted=False)
mk_host_metric('host_2', first_automation=ago(months=1), last_deleted=ago(months=1), deleted=True)
mk_host_metric('host_3', first_automation=ago(months=1), last_deleted=ago(months=threshold, hours=-1), deleted=False)
mk_host_metric('host_4', first_automation=ago(months=1), last_deleted=ago(months=threshold, hours=-1), deleted=True)
mk_host_metric('host_5', first_automation=ago(months=1), last_deleted=ago(months=threshold, hours=1), deleted=False)
mk_host_metric('host_6', first_automation=ago(months=1), last_deleted=ago(months=threshold, hours=1), deleted=True)
mk_host_metric('host_7', first_automation=ago(months=1), last_deleted=ago(months=42), deleted=False)
mk_host_metric('host_8', first_automation=ago(months=1), last_deleted=ago(months=42), deleted=True)
assert HostMetric.objects.count() == 8
assert HostMetric.active_objects.count() == 4
for i in range(2):
HostMetricTask().cleanup(hard_threshold=threshold)
assert HostMetric.objects.count() == 6
hostnames = set(HostMetric.objects.order_by('hostname').values_list('hostname', flat=True))
assert hostnames == {'host_1', 'host_2', 'host_3', 'host_4', 'host_5', 'host_7'}
def ago(months=0, hours=0):
return timezone.now() - relativedelta(months=months, hours=hours)

View File

@@ -76,3 +76,24 @@ def test_hashivault_handle_auth_kubernetes():
def test_hashivault_handle_auth_not_enough_args():
with pytest.raises(Exception):
hashivault.handle_auth()
class TestDelineaImports:
"""
These modules have a try-except for ImportError which allows using the older library,
but we do not want the awx_devel image to have the older library,
so these tests are designed to fail if they wind up using the fallback import
"""
def test_dsv_import(self):
from awx.main.credential_plugins.dsv import SecretsVault # noqa
# assert this module as opposed to older thycotic.secrets.vault
assert SecretsVault.__module__ == 'delinea.secrets.vault'
def test_tss_import(self):
from awx.main.credential_plugins.tss import DomainPasswordGrantAuthorizer, PasswordGrantAuthorizer, SecretServer, ServerSecret # noqa
for cls in (DomainPasswordGrantAuthorizer, PasswordGrantAuthorizer, SecretServer, ServerSecret):
# assert this module as opposed to older thycotic.secrets.server
assert cls.__module__ == 'delinea.secrets.server'
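
For context, the fallback the docstring refers to looks roughly like this in the credential plugin modules (a sketch, not the exact upstream code):

# Sketch of the import fallback in awx/main/credential_plugins/dsv.py.
try:
    from delinea.secrets.vault import SecretsVault  # newer library, expected in awx_devel
except ImportError:
    from thycotic.secrets.vault import SecretsVault  # older library kept for compatibility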

View File

@@ -38,8 +38,8 @@ def test_orphan_unified_job_creation(instance, inventory):
@pytest.mark.django_db
@mock.patch('awx.main.tasks.system.inspect_execution_and_hop_nodes', lambda *args, **kwargs: None)
@mock.patch('awx.main.models.ha.get_cpu_effective_capacity', lambda cpu: 8)
@mock.patch('awx.main.models.ha.get_mem_effective_capacity', lambda mem: 62)
@mock.patch('awx.main.models.ha.get_cpu_effective_capacity', lambda cpu, is_control_node: 8)
@mock.patch('awx.main.models.ha.get_mem_effective_capacity', lambda mem, is_control_node: 62)
def test_job_capacity_and_with_inactive_node():
i = Instance.objects.create(hostname='test-1')
i.save_health_data('18.0.1', 2, 8000)

View File

@@ -36,7 +36,9 @@ def test_SYSTEM_TASK_ABS_MEM_conversion(value, converted_value, mem_capacity):
mock_settings.IS_K8S = True
assert convert_mem_str_to_bytes(value) == converted_value
assert get_corrected_memory(-1) == converted_value
assert get_mem_effective_capacity(-1) == mem_capacity
assert get_mem_effective_capacity(1, is_control_node=True) == mem_capacity
# SYSTEM_TASK_ABS_MEM should not affect memory and capacity for execution nodes
assert get_mem_effective_capacity(2147483648, is_control_node=False) == 20
@pytest.mark.parametrize(
@@ -58,4 +60,6 @@ def test_SYSTEM_TASK_ABS_CPU_conversion(value, converted_value, cpu_capacity):
mock_settings.SYSTEM_TASK_FORKS_CPU = 4
assert convert_cpu_str_to_decimal_cpu(value) == converted_value
assert get_corrected_cpu(-1) == converted_value
assert get_cpu_effective_capacity(-1) == cpu_capacity
assert get_cpu_effective_capacity(-1, is_control_node=True) == cpu_capacity
# SYSTEM_TASK_ABS_CPU should not affect cpu count and capacity for execution nodes
assert get_cpu_effective_capacity(2.0, is_control_node=False) == 8

View File

@@ -23,7 +23,7 @@ from django.core.exceptions import ObjectDoesNotExist, FieldDoesNotExist
from django.utils.dateparse import parse_datetime
from django.utils.translation import gettext_lazy as _
from django.utils.functional import cached_property
from django.db import connection, transaction, ProgrammingError
from django.db import connection, transaction, ProgrammingError, IntegrityError
from django.db.models.fields.related import ForeignObjectRel, ManyToManyField
from django.db.models.fields.related_descriptors import ForwardManyToOneDescriptor, ManyToManyDescriptor
from django.db.models.query import QuerySet
@@ -768,14 +768,13 @@ def get_corrected_cpu(cpu_count): # formerly get_cpu_capacity
return cpu_count # no correction
def get_cpu_effective_capacity(cpu_count):
def get_cpu_effective_capacity(cpu_count, is_control_node=False):
from django.conf import settings
cpu_count = get_corrected_cpu(cpu_count)
settings_forkcpu = getattr(settings, 'SYSTEM_TASK_FORKS_CPU', None)
env_forkcpu = os.getenv('SYSTEM_TASK_FORKS_CPU', None)
if is_control_node:
cpu_count = get_corrected_cpu(cpu_count)
if env_forkcpu:
forkcpu = int(env_forkcpu)
elif settings_forkcpu:
@@ -834,6 +833,7 @@ def get_corrected_memory(memory):
# Runner returns memory in bytes
# so we convert memory from settings to bytes as well.
if env_absmem is not None:
return convert_mem_str_to_bytes(env_absmem)
elif settings_absmem is not None:
@@ -842,14 +842,13 @@ def get_corrected_memory(memory):
return memory
def get_mem_effective_capacity(mem_bytes):
def get_mem_effective_capacity(mem_bytes, is_control_node=False):
from django.conf import settings
mem_bytes = get_corrected_memory(mem_bytes)
settings_mem_mb_per_fork = getattr(settings, 'SYSTEM_TASK_FORKS_MEM', None)
env_mem_mb_per_fork = os.getenv('SYSTEM_TASK_FORKS_MEM', None)
if is_control_node:
mem_bytes = get_corrected_memory(mem_bytes)
if env_mem_mb_per_fork:
mem_mb_per_fork = int(env_mem_mb_per_fork)
elif settings_mem_mb_per_fork:
@@ -1165,13 +1164,24 @@ def create_partition(tblname, start=None):
try:
with transaction.atomic():
with connection.cursor() as cursor:
cursor.execute(f"SELECT EXISTS (SELECT FROM information_schema.tables WHERE table_name = '{tblname}_{partition_label}');")
row = cursor.fetchone()
if row is not None:
for val in row: # should only have 1
if val is True:
logger.debug(f'Event partition table {tblname}_{partition_label} already exists')
return
cursor.execute(
f'CREATE TABLE IF NOT EXISTS {tblname}_{partition_label} '
f'PARTITION OF {tblname} '
f'FOR VALUES FROM (\'{start_timestamp}\') to (\'{end_timestamp}\');'
f'CREATE TABLE {tblname}_{partition_label} (LIKE {tblname} INCLUDING DEFAULTS INCLUDING CONSTRAINTS); '
f'ALTER TABLE {tblname} ATTACH PARTITION {tblname}_{partition_label} '
f'FOR VALUES FROM (\'{start_timestamp}\') TO (\'{end_timestamp}\');'
)
except ProgrammingError as e:
logger.debug(f'Caught known error due to existing partition: {e}')
except (ProgrammingError, IntegrityError) as e:
if 'already exists' in str(e):
logger.info(f'Caught known error due to partition creation race: {e}')
else:
raise
def cleanup_new_process(func):
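
The rewritten create_partition above first probes information_schema, then creates the table and attaches it as a partition in separate statements, and finally treats an "already exists" error raised by a concurrent creator as benign. A small sketch of that last classification step (the table name is made up for illustration):

from django.db import IntegrityError

def _is_benign_partition_race(exc):
    # Both ProgrammingError and IntegrityError raised by the losing side of a
    # concurrent CREATE/ATTACH carry "already exists" in their message.
    return 'already exists' in str(exc)

assert _is_benign_partition_race(IntegrityError('relation "main_jobevent_20231019_13" already exists'))
assert not _is_benign_partition_race(IntegrityError('null value in column "job_id"'))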

View File

@@ -470,7 +470,7 @@ CELERYBEAT_SCHEDULE = {
'receptor_reaper': {'task': 'awx.main.tasks.system.awx_receptor_workunit_reaper', 'schedule': timedelta(seconds=60)},
'send_subsystem_metrics': {'task': 'awx.main.analytics.analytics_tasks.send_subsystem_metrics', 'schedule': timedelta(seconds=20)},
'cleanup_images': {'task': 'awx.main.tasks.system.cleanup_images_and_files', 'schedule': timedelta(hours=3)},
'cleanup_host_metrics': {'task': 'awx.main.tasks.system.cleanup_host_metrics', 'schedule': timedelta(hours=3, minutes=30)},
'cleanup_host_metrics': {'task': 'awx.main.tasks.host_metrics.cleanup_host_metrics', 'schedule': timedelta(hours=3, minutes=30)},
'host_metric_summary_monthly': {'task': 'awx.main.tasks.host_metrics.host_metric_summary_monthly', 'schedule': timedelta(hours=4)},
}
@@ -1049,7 +1049,7 @@ UI_NEXT = True
# - 'unique_managed_hosts': Compliant = automated - deleted hosts (using /api/v2/host_metrics/)
SUBSCRIPTION_USAGE_MODEL = ''
# Host metrics cleanup - last time of the cleanup run (soft-deleting records)
# Host metrics cleanup - last time of the task/command run
CLEANUP_HOST_METRICS_LAST_TS = None
# Host metrics cleanup - minimal interval between two cleanups in days
CLEANUP_HOST_METRICS_INTERVAL = 30 # days

View File

@@ -87,7 +87,7 @@ def _update_user_orgs(backend, desired_org_state, orgs_to_create, user=None):
is_member_expression = org_opts.get(user_type, None)
remove_members = bool(org_opts.get('remove_{}'.format(user_type), remove))
has_role = _update_m2m_from_expression(user, is_member_expression, remove_members)
desired_org_state[organization_name][role_name] = has_role
desired_org_state[organization_name][role_name] = desired_org_state[organization_name].get(role_name, False) or has_role
def _update_user_teams(backend, desired_team_state, teams_to_create, user=None):
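
The one-line change above matters when a user's membership comes from both the organization map and the SAML attribute mapping: merging with `or` means a later pass that evaluates to False can no longer revert a role granted earlier. A tiny worked example of the merge (names are illustrative):

desired_org_state = {'o1_alias': {'admin_role': True}}  # granted by the SAML attr pass
has_role = False   # what the ORGANIZATION_MAP expression evaluates to for this role
org, role_name = 'o1_alias', 'admin_role'

desired_org_state[org][role_name] = desired_org_state[org].get(role_name, False) or has_role
assert desired_org_state[org][role_name] is True  # admin status preserved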

View File

@@ -637,3 +637,75 @@ class TestSAMLUserFlags:
}
assert expected == _check_flag(user, 'superuser', attributes, user_flags_settings)
@pytest.mark.django_db
def test__update_user_orgs_org_map_and_saml_attr():
"""
This combines the action of two other tests where an org membership is defined both by
the ORGANIZATION_MAP and the SOCIAL_AUTH_SAML_ORGANIZATION_ATTR at the same time
"""
# This data will make the user a member
class BackendClass:
s = {
'ORGANIZATION_MAP': {
'Default1': {
'remove': True,
'remove_admins': True,
'users': 'foobar',
'remove_users': True,
'organization_alias': 'o1_alias',
}
}
}
def setting(self, key):
return self.s[key]
backend = BackendClass()
setting = {
'saml_attr': 'memberOf',
'saml_admin_attr': 'admins',
'saml_auditor_attr': 'auditors',
'remove': True,
'remove_admins': True,
}
# This data from the server will make the user an admin of the organization
kwargs = {
'username': 'foobar',
'uid': 'idp:cmeyers@redhat.com',
'request': {u'SAMLResponse': [], u'RelayState': [u'idp']},
'is_new': False,
'response': {
'session_index': '_0728f0e0-b766-0135-75fa-02842b07c044',
'idp_name': u'idp',
'attributes': {
'admins': ['Default1'],
},
},
'social': None,
'strategy': None,
'new_association': False,
}
this_user = User.objects.create(username='foobar')
with override_settings(SOCIAL_AUTH_SAML_ORGANIZATION_ATTR=setting):
desired_org_state = {}
orgs_to_create = []
# this should add user as an admin of the org
_update_user_orgs_by_saml_attr(backend, desired_org_state, orgs_to_create, **kwargs)
assert desired_org_state['o1_alias']['admin_role'] is True
assert set(orgs_to_create) == set(['o1_alias'])
# this should add user as a member of the org without reverting the admin status
_update_user_orgs(backend, desired_org_state, orgs_to_create, this_user)
assert desired_org_state['o1_alias']['member_role'] is True
assert desired_org_state['o1_alias']['admin_role'] is True
assert set(orgs_to_create) == set(['o1_alias'])

1771
awx/ui/package-lock.json generated

File diff suppressed because it is too large

View File

@@ -33,12 +33,12 @@
"styled-components": "5.3.6"
},
"devDependencies": {
"@babel/core": "^7.16.10",
"@babel/eslint-parser": "^7.16.5",
"@babel/eslint-plugin": "^7.16.5",
"@babel/plugin-syntax-jsx": "7.16.7",
"@babel/polyfill": "^7.8.7",
"@babel/preset-react": "7.16.7",
"@babel/core": "^7.22.9",
"@babel/eslint-parser": "^7.22.9",
"@babel/eslint-plugin": "^7.22.10",
"@babel/plugin-syntax-jsx": "^7.22.5",
"@babel/polyfill": "^7.12.1",
"@babel/preset-react": "^7.22.5",
"@cypress/instrument-cra": "^1.4.0",
"@lingui/cli": "^3.7.1",
"@lingui/loader": "3.15.0",

View File

@@ -33,6 +33,7 @@ import Roles from './models/Roles';
import Root from './models/Root';
import Schedules from './models/Schedules';
import Settings from './models/Settings';
import SubscriptionUsage from './models/SubscriptionUsage';
import SystemJobs from './models/SystemJobs';
import SystemJobTemplates from './models/SystemJobTemplates';
import Teams from './models/Teams';
@@ -82,6 +83,7 @@ const RolesAPI = new Roles();
const RootAPI = new Root();
const SchedulesAPI = new Schedules();
const SettingsAPI = new Settings();
const SubscriptionUsageAPI = new SubscriptionUsage();
const SystemJobsAPI = new SystemJobs();
const SystemJobTemplatesAPI = new SystemJobTemplates();
const TeamsAPI = new Teams();
@@ -132,6 +134,7 @@ export {
RootAPI,
SchedulesAPI,
SettingsAPI,
SubscriptionUsageAPI,
SystemJobsAPI,
SystemJobTemplatesAPI,
TeamsAPI,

View File

@@ -0,0 +1,16 @@
import Base from '../Base';
class SubscriptionUsage extends Base {
constructor(http) {
super(http);
this.baseUrl = 'api/v2/host_metric_summary_monthly/';
}
readSubscriptionUsageChart(dateRange) {
return this.http.get(
`${this.baseUrl}?date__gte=${dateRange}&order_by=date&page_size=100`
);
}
}
export default SubscriptionUsage;

View File

@@ -11,6 +11,7 @@ import {
WorkflowJobsAPI,
WorkflowJobTemplatesAPI,
} from 'api';
import useToast, { AlertVariant } from 'hooks/useToast';
import AlertModal from '../AlertModal';
import ErrorDetail from '../ErrorDetail';
import LaunchPrompt from '../LaunchPrompt';
@@ -45,8 +46,22 @@ function LaunchButton({ resource, children }) {
const [isLaunching, setIsLaunching] = useState(false);
const [resourceCredentials, setResourceCredentials] = useState([]);
const [error, setError] = useState(null);
const { addToast, Toast, toastProps } = useToast();
const showToast = () => {
addToast({
id: resource.id,
title: t`A job has already been launched`,
variant: AlertVariant.info,
hasTimeout: true,
});
};
const handleLaunch = async () => {
if (isLaunching) {
showToast();
return;
}
setIsLaunching(true);
const readLaunch =
resource.type === 'workflow_job_template'
@@ -104,6 +119,11 @@ function LaunchButton({ resource, children }) {
};
const launchWithParams = async (params) => {
if (isLaunching) {
showToast();
return;
}
setIsLaunching(true);
try {
let jobPromise;
@@ -141,6 +161,10 @@ function LaunchButton({ resource, children }) {
let readRelaunch;
let relaunch;
if (isLaunching) {
showToast();
return;
}
setIsLaunching(true);
if (resource.type === 'inventory_update') {
// We'll need to handle the scenario where the src no longer exists
@@ -197,6 +221,7 @@ function LaunchButton({ resource, children }) {
handleRelaunch,
isLaunching,
})}
<Toast {...toastProps} />
{error && (
<AlertModal
isOpen={error}

View File

@@ -75,6 +75,7 @@ function SessionProvider({ children }) {
const [sessionCountdown, setSessionCountdown] = useState(0);
const [authRedirectTo, setAuthRedirectTo] = useState('/');
const [isUserBeingLoggedOut, setIsUserBeingLoggedOut] = useState(false);
const [isRedirectLinkReceived, setIsRedirectLinkReceived] = useState(false);
const {
request: fetchLoginRedirectOverride,
@@ -99,6 +100,7 @@ function SessionProvider({ children }) {
const logout = useCallback(async () => {
setIsUserBeingLoggedOut(true);
setIsRedirectLinkReceived(false);
if (!isSessionExpired.current) {
setAuthRedirectTo('/logout');
window.localStorage.setItem(SESSION_USER_ID, null);
@@ -112,6 +114,18 @@ function SessionProvider({ children }) {
return <Redirect to="/login" />;
}, [setSessionTimeout, setSessionCountdown]);
useEffect(() => {
const unlisten = history.listen((location, action) => {
if (action === 'POP') {
setIsRedirectLinkReceived(true);
}
});
return () => {
unlisten(); // ensure that the listener is removed when the component unmounts
};
}, [history]);
useEffect(() => {
if (!isAuthenticated(document.cookie)) {
return () => {};
@@ -176,6 +190,8 @@ function SessionProvider({ children }) {
logout,
sessionCountdown,
setAuthRedirectTo,
isRedirectLinkReceived,
setIsRedirectLinkReceived,
}),
[
authRedirectTo,
@@ -186,6 +202,8 @@ function SessionProvider({ children }) {
logout,
sessionCountdown,
setAuthRedirectTo,
isRedirectLinkReceived,
setIsRedirectLinkReceived,
]
);

View File

@@ -17,6 +17,7 @@ import Organizations from 'screens/Organization';
import Projects from 'screens/Project';
import Schedules from 'screens/Schedule';
import Settings from 'screens/Setting';
import SubscriptionUsage from 'screens/SubscriptionUsage/SubscriptionUsage';
import Teams from 'screens/Team';
import Templates from 'screens/Template';
import TopologyView from 'screens/TopologyView';
@@ -61,6 +62,11 @@ function getRouteConfig(userProfile = {}) {
path: '/host_metrics',
screen: HostMetrics,
},
{
title: <Trans>Subscription Usage</Trans>,
path: '/subscription_usage',
screen: SubscriptionUsage,
},
],
},
{
@@ -189,6 +195,7 @@ function getRouteConfig(userProfile = {}) {
'unique_managed_hosts'
) {
deleteRoute('host_metrics');
deleteRoute('subscription_usage');
}
if (userProfile?.isSuperUser || userProfile?.isSystemAuditor)
return routeConfig;
@@ -197,6 +204,7 @@ function getRouteConfig(userProfile = {}) {
deleteRoute('management_jobs');
deleteRoute('topology_view');
deleteRoute('instances');
deleteRoute('subscription_usage');
if (userProfile?.isOrgAdmin) return routeConfig;
if (!userProfile?.isNotificationAdmin) deleteRoute('notification_templates');

View File

@@ -31,6 +31,7 @@ describe('getRouteConfig', () => {
'/activity_stream',
'/workflow_approvals',
'/host_metrics',
'/subscription_usage',
'/templates',
'/credentials',
'/projects',
@@ -61,6 +62,7 @@ describe('getRouteConfig', () => {
'/activity_stream',
'/workflow_approvals',
'/host_metrics',
'/subscription_usage',
'/templates',
'/credentials',
'/projects',

View File

@@ -302,9 +302,9 @@ function HostsByProcessorTypeExample() {
const hostsByProcessorLimit = `intel_hosts`;
const hostsByProcessorSourceVars = `plugin: constructed
strict: true
groups:
intel_hosts: "GenuineIntel" in ansible_processor`;
strict: true
groups:
intel_hosts: "'GenuineIntel' in ansible_processor"`;
return (
<FormFieldGroupExpandable

View File

@@ -45,7 +45,7 @@ describe('<ConstructedInventoryHint />', () => {
);
expect(navigator.clipboard.writeText).toHaveBeenCalledWith(
expect.stringContaining(
'intel_hosts: "GenuineIntel" in ansible_processor'
`intel_hosts: \"'GenuineIntel' in ansible_processor\"`
)
);
});

View File

@@ -53,13 +53,9 @@ const getStdOutValue = (hostEvent) => {
const res = hostEvent?.event_data?.res;
let stdOut;
if (taskAction === 'debug' && res.result && res.result.stdout) {
if (taskAction === 'debug' && res?.result?.stdout) {
stdOut = res.result.stdout;
} else if (
taskAction === 'yum' &&
res.results &&
Array.isArray(res.results)
) {
} else if (taskAction === 'yum' && Array.isArray(res?.results)) {
stdOut = res.results.join('\n');
} else if (res?.stdout) {
stdOut = Array.isArray(res.stdout) ? res.stdout.join(' ') : res.stdout;

View File

@@ -45,7 +45,8 @@ const Login = styled(PFLogin)`
function AWXLogin({ alt, isAuthenticated }) {
const [userId, setUserId] = useState(null);
const { authRedirectTo, isSessionExpired } = useSession();
const { authRedirectTo, isSessionExpired, isRedirectLinkReceived } =
useSession();
const isNewUser = useRef(true);
const hasVerifiedUser = useRef(false);
@@ -179,7 +180,8 @@ function AWXLogin({ alt, isAuthenticated }) {
return <LoadingSpinner />;
}
if (userId && hasVerifiedUser.current) {
const redirect = isNewUser.current ? '/home' : authRedirectTo;
const redirect =
isNewUser.current && !isRedirectLinkReceived ? '/home' : authRedirectTo;
return <Redirect to={redirect} />;
}

View File

@@ -0,0 +1,319 @@
import React, { useEffect, useCallback } from 'react';
import { string, number, shape, arrayOf } from 'prop-types';
import * as d3 from 'd3';
import { t } from '@lingui/macro';
import { PageContextConsumer } from '@patternfly/react-core';
import UsageChartTooltip from './UsageChartTooltip';
function UsageChart({ id, data, height, pageContext }) {
const { isNavOpen } = pageContext;
// Methods
const draw = useCallback(() => {
const margin = { top: 15, right: 25, bottom: 105, left: 70 };
const getWidth = () => {
let width;
// This is in a try/catch due to an error from jest.
// Even though the d3.select returns a valid selector with
// style function, it says it is null in the test
try {
width =
parseInt(d3.select(`#${id}`).style('width'), 10) -
margin.left -
margin.right || 700;
} catch (error) {
width = 700;
}
return width;
};
// Clear our chart container element first
d3.selectAll(`#${id} > *`).remove();
const width = getWidth();
function transition(path) {
path.transition().duration(1000).attrTween('stroke-dasharray', tweenDash);
}
function tweenDash(...params) {
const l = params[2][params[1]].getTotalLength();
const i = d3.interpolateString(`0,${l}`, `${l},${l}`);
return (val) => i(val);
}
const x = d3.scaleTime().rangeRound([0, width]);
const y = d3.scaleLinear().range([height, 0]);
// [consumed, capacity]
const colors = d3.scaleOrdinal(['#06C', '#C9190B']);
const svg = d3
.select(`#${id}`)
.append('svg')
.attr('width', width + margin.left + margin.right)
.attr('height', height + margin.top + margin.bottom)
.attr('z', 100)
.append('g')
.attr('id', 'chart-container')
.attr('transform', `translate(${margin.left}, ${margin.top})`);
// Tooltip
const tooltip = new UsageChartTooltip({
svg: `#${id}`,
colors,
label: t`Hosts`,
});
const parseTime = d3.timeParse('%Y-%m-%d');
const formattedData = data?.reduce(
(formatted, { date, license_consumed, license_capacity }) => {
const MONTH = parseTime(date);
const CONSUMED = +license_consumed;
const CAPACITY = +license_capacity;
return formatted.concat({ MONTH, CONSUMED, CAPACITY });
},
[]
);
// Scale the range of the data
const largestY = formattedData?.reduce((a_max, b) => {
const b_max = Math.max(b.CONSUMED > b.CAPACITY ? b.CONSUMED : b.CAPACITY);
return a_max > b_max ? a_max : b_max;
}, 0);
x.domain(d3.extent(formattedData, (d) => d.MONTH));
y.domain([
0,
largestY > 4 ? largestY + Math.max(largestY / 10, 1) : 5,
]).nice();
const capacityLine = d3
.line()
.curve(d3.curveMonotoneX)
.x((d) => x(d.MONTH))
.y((d) => y(d.CAPACITY));
const consumedLine = d3
.line()
.curve(d3.curveMonotoneX)
.x((d) => x(d.MONTH))
.y((d) => y(d.CONSUMED));
// Add the Y Axis
svg
.append('g')
.attr('class', 'y-axis')
.call(
d3
.axisLeft(y)
.ticks(
largestY > 3
? Math.min(largestY + Math.max(largestY / 10, 1), 10)
: 5
)
.tickSize(-width)
.tickFormat(d3.format('d'))
)
.selectAll('line')
.attr('stroke', '#d7d7d7');
svg.selectAll('.y-axis .tick text').attr('x', -5).attr('font-size', '14');
// text label for the y axis
svg
.append('text')
.attr('transform', 'rotate(-90)')
.attr('y', 0 - margin.left)
.attr('x', 0 - height / 2)
.attr('dy', '1em')
.style('text-anchor', 'middle')
.text(t`Unique Hosts`);
// Add the X Axis
let ticks;
const maxTicks = Math.round(
formattedData.length / (formattedData.length / 2)
);
ticks = formattedData.map((d) => d.MONTH);
if (formattedData.length === 13) {
ticks = formattedData
.map((d, i) => (i % maxTicks === 0 ? d.MONTH : undefined))
.filter((item) => item);
}
svg.select('.domain').attr('stroke', '#d7d7d7');
svg
.append('g')
.attr('class', 'x-axis')
.attr('transform', `translate(0, ${height})`)
.call(
d3
.axisBottom(x)
.tickValues(ticks)
.tickSize(-height)
.tickFormat(d3.timeFormat('%m/%y'))
)
.selectAll('line')
.attr('stroke', '#d7d7d7');
svg
.selectAll('.x-axis .tick text')
.attr('x', -25)
.attr('font-size', '14')
.attr('transform', 'rotate(-65)');
// text label for the x axis
svg
.append('text')
.attr(
'transform',
`translate(${width / 2} , ${height + margin.top + 50})`
)
.style('text-anchor', 'middle')
.text(t`Month`);
const vertical = svg
.append('path')
.attr('class', 'mouse-line')
.style('stroke', 'black')
.style('stroke-width', '3px')
.style('stroke-dasharray', '3, 3')
.style('opacity', '0');
const handleMouseOver = (event, d) => {
tooltip.handleMouseOver(event, d);
// show vertical line
vertical.transition().style('opacity', '1');
};
const handleMouseMove = function mouseMove(event) {
const [pointerX] = d3.pointer(event);
vertical.attr('d', () => `M${pointerX},${height} ${pointerX},${0}`);
};
const handleMouseOut = () => {
// hide tooltip
tooltip.handleMouseOut();
// hide vertical line
vertical.transition().style('opacity', 0);
};
const dateFormat = d3.timeFormat('%m/%y');
// Add the consumed line path
svg
.append('path')
.data([formattedData])
.attr('class', 'line')
.style('fill', 'none')
.style('stroke', () => colors(1))
.attr('stroke-width', 2)
.attr('d', consumedLine)
.call(transition);
// create our consumed line circles
svg
.selectAll('dot')
.data(formattedData)
.enter()
.append('circle')
.attr('r', 3)
.style('stroke', () => colors(1))
.style('fill', () => colors(1))
.attr('cx', (d) => x(d.MONTH))
.attr('cy', (d) => y(d.CONSUMED))
.attr('id', (d) => `consumed-dot-${dateFormat(d.MONTH)}`)
.on('mouseover', (event, d) => handleMouseOver(event, d))
.on('mousemove', handleMouseMove)
.on('mouseout', handleMouseOut);
// Add the capacity line path
svg
.append('path')
.data([formattedData])
.attr('class', 'line')
.style('fill', 'none')
.style('stroke', () => colors(0))
.attr('stroke-width', 2)
.attr('d', capacityLine)
.call(transition);
// create our capacity line circles
svg
.selectAll('dot')
.data(formattedData)
.enter()
.append('circle')
.attr('r', 3)
.style('stroke', () => colors(0))
.style('fill', () => colors(0))
.attr('cx', (d) => x(d.MONTH))
.attr('cy', (d) => y(d.CAPACITY))
.attr('id', (d) => `capacity-dot-${dateFormat(d.MONTH)}`)
.on('mouseover', handleMouseOver)
.on('mousemove', handleMouseMove)
.on('mouseout', handleMouseOut);
// Create legend
const legend_keys = [t`Subscriptions consumed`, t`Subscription capacity`];
let totalWidth = width / 2 - 175;
const lineLegend = svg
.selectAll('.lineLegend')
.data(legend_keys)
.enter()
.append('g')
.attr('class', 'lineLegend')
.each(function formatLegend() {
const current = d3.select(this);
current.attr('transform', `translate(${totalWidth}, ${height + 90})`);
totalWidth += 200;
});
lineLegend
.append('text')
.text((d) => d)
.attr('font-size', '14')
.attr('transform', 'translate(15,9)'); // align texts with boxes
lineLegend
.append('rect')
.attr('fill', (d) => colors(d))
.attr('width', 10)
.attr('height', 10);
}, [data, height, id]);
useEffect(() => {
draw();
}, [draw, isNavOpen]);
useEffect(() => {
function handleResize() {
draw();
}
window.addEventListener('resize', handleResize);
handleResize();
return () => window.removeEventListener('resize', handleResize);
}, [draw]);
return <div id={id} />;
}
UsageChart.propTypes = {
id: string.isRequired,
data: arrayOf(shape({})).isRequired,
height: number.isRequired,
};
const withPageContext = (Component) =>
function contextComponent(props) {
return (
<PageContextConsumer>
{(pageContext) => <Component {...props} pageContext={pageContext} />}
</PageContextConsumer>
);
};
export default withPageContext(UsageChart);

View File

@@ -0,0 +1,177 @@
import * as d3 from 'd3';
import { t } from '@lingui/macro';
class UsageChartTooltip {
constructor(opts) {
this.label = opts.label;
this.svg = opts.svg;
this.colors = opts.colors;
this.draw();
}
draw() {
this.toolTipBase = d3.select(`${this.svg} > svg`).append('g');
this.toolTipBase.attr('id', 'chart-tooltip');
this.toolTipBase.attr('overflow', 'visible');
this.toolTipBase.style('opacity', 0);
this.toolTipBase.style('pointer-events', 'none');
this.toolTipBase.attr('transform', 'translate(100, 100)');
this.boxWidth = 200;
this.textWidthThreshold = 20;
this.toolTipPoint = this.toolTipBase
.append('rect')
.attr('transform', 'translate(10, -10) rotate(45)')
.attr('x', 0)
.attr('y', 0)
.attr('height', 20)
.attr('width', 20)
.attr('fill', '#393f44');
this.boundingBox = this.toolTipBase
.append('rect')
.attr('x', 10)
.attr('y', -41)
.attr('rx', 2)
.attr('height', 82)
.attr('width', this.boxWidth)
.attr('fill', '#393f44');
this.circleBlue = this.toolTipBase
.append('circle')
.attr('cx', 26)
.attr('cy', 0)
.attr('r', 7)
.attr('stroke', 'white')
.attr('fill', this.colors(1));
this.circleRed = this.toolTipBase
.append('circle')
.attr('cx', 26)
.attr('cy', 26)
.attr('r', 7)
.attr('stroke', 'white')
.attr('fill', this.colors(0));
this.consumedText = this.toolTipBase
.append('text')
.attr('x', 43)
.attr('y', 4)
.attr('font-size', 12)
.attr('fill', 'white')
.text(t`Subscriptions consumed`);
this.capacityText = this.toolTipBase
.append('text')
.attr('x', 43)
.attr('y', 28)
.attr('font-size', 12)
.attr('fill', 'white')
.text(t`Subscription capacity`);
this.icon = this.toolTipBase
.append('text')
.attr('fill', 'white')
.attr('stroke', 'white')
.attr('x', 24)
.attr('y', 30)
.attr('font-size', 12);
this.consumed = this.toolTipBase
.append('text')
.attr('fill', 'white')
.attr('font-size', 12)
.attr('x', 122)
.attr('y', 4)
.attr('id', 'consumed-count')
.text('0');
this.capacity = this.toolTipBase
.append('text')
.attr('fill', 'white')
.attr('font-size', 12)
.attr('x', 122)
.attr('y', 28)
.attr('id', 'capacity-count')
.text('0');
this.date = this.toolTipBase
.append('text')
.attr('fill', 'white')
.attr('stroke', 'white')
.attr('x', 20)
.attr('y', -21)
.attr('font-size', 12);
}
handleMouseOver = (event, data) => {
let consumed = 0;
let capacity = 0;
const [x, y] = d3.pointer(event);
const tooltipPointerX = x + 75;
const formatTooltipDate = d3.timeFormat('%m/%y');
if (!event) {
return;
}
const toolTipWidth = this.toolTipBase.node().getBoundingClientRect().width;
const chartWidth = d3
.select(`${this.svg}> svg`)
.node()
.getBoundingClientRect().width;
const overflow = 100 - (toolTipWidth / chartWidth) * 100;
const flipped = overflow < (tooltipPointerX / chartWidth) * 100;
if (data) {
consumed = data.CONSUMED || 0;
capacity = data.CAPACITY || 0;
this.date.text(formatTooltipDate(data.MONTH || null));
}
this.capacity.text(`${capacity}`);
this.consumed.text(`${consumed}`);
this.consumedTextWidth = this.consumed.node().getComputedTextLength();
this.capacityTextWidth = this.capacity.node().getComputedTextLength();
const maxTextPerc = (this.jobsWidth / this.boxWidth) * 100;
const threshold = 40;
const overage = maxTextPerc / threshold;
let adjustedWidth;
if (maxTextPerc > threshold) {
adjustedWidth = this.boxWidth * overage;
} else {
adjustedWidth = this.boxWidth;
}
this.boundingBox.attr('width', adjustedWidth);
this.toolTipBase.attr('transform', `translate(${tooltipPointerX}, ${y})`);
if (flipped) {
this.toolTipPoint.attr('transform', 'translate(-20, -10) rotate(45)');
this.boundingBox.attr('x', -adjustedWidth - 20);
this.circleBlue.attr('cx', -adjustedWidth);
this.circleRed.attr('cx', -adjustedWidth);
this.icon.attr('x', -adjustedWidth - 2);
this.consumedText.attr('x', -adjustedWidth + 17);
this.capacityText.attr('x', -adjustedWidth + 17);
this.consumed.attr('x', -this.consumedTextWidth - 20 - 12);
this.capacity.attr('x', -this.capacityTextWidth - 20 - 12);
this.date.attr('x', -adjustedWidth - 5);
} else {
this.toolTipPoint.attr('transform', 'translate(10, -10) rotate(45)');
this.boundingBox.attr('x', 10);
this.circleBlue.attr('cx', 26);
this.circleRed.attr('cx', 26);
this.icon.attr('x', 24);
this.consumedText.attr('x', 43);
this.capacityText.attr('x', 43);
this.consumed.attr('x', adjustedWidth - this.consumedTextWidth);
this.capacity.attr('x', adjustedWidth - this.capacityTextWidth);
this.date.attr('x', 20);
}
this.toolTipBase.style('opacity', 1);
this.toolTipBase.interrupt();
};
handleMouseOut = () => {
this.toolTipBase
.transition()
.delay(15)
.style('opacity', 0)
.style('pointer-events', 'none');
};
}
export default UsageChartTooltip;

View File

@@ -0,0 +1,53 @@
import React from 'react';
import styled from 'styled-components';
import { t, Trans } from '@lingui/macro';
import { Banner, Card, PageSection } from '@patternfly/react-core';
import { InfoCircleIcon } from '@patternfly/react-icons';
import { useConfig } from 'contexts/Config';
import useBrandName from 'hooks/useBrandName';
import ScreenHeader from 'components/ScreenHeader';
import SubscriptionUsageChart from './SubscriptionUsageChart';
const MainPageSection = styled(PageSection)`
padding-top: 24px;
padding-bottom: 0;
& .spacer {
margin-bottom: var(--pf-global--spacer--lg);
}
`;
function SubscriptionUsage() {
const config = useConfig();
const brandName = useBrandName();
return (
<>
{config?.ui_next && (
<Banner variant="info">
<Trans>
<p>
<InfoCircleIcon /> A tech preview of the new {brandName} user
interface can be found <a href="/ui_next/dashboard">here</a>.
</p>
</Trans>
</Banner>
)}
<ScreenHeader
streamType="all"
breadcrumbConfig={{ '/subscription_usage': t`Subscription Usage` }}
/>
<MainPageSection>
<div className="spacer">
<Card id="dashboard-main-container">
<SubscriptionUsageChart />
</Card>
</div>
</MainPageSection>
</>
);
}
export default SubscriptionUsage;

View File

@@ -0,0 +1,167 @@
import React, { useCallback, useEffect, useState } from 'react';
import styled from 'styled-components';
import { t } from '@lingui/macro';
import {
Card,
CardHeader,
CardActions,
CardBody,
CardTitle,
Flex,
FlexItem,
PageSection,
Select,
SelectVariant,
SelectOption,
Text,
} from '@patternfly/react-core';
import useRequest from 'hooks/useRequest';
import { SubscriptionUsageAPI } from 'api';
import { useUserProfile } from 'contexts/Config';
import ContentLoading from 'components/ContentLoading';
import UsageChart from './ChartComponents/UsageChart';
const GraphCardHeader = styled(CardHeader)`
margin-bottom: var(--pf-global--spacer--lg);
`;
const ChartCardTitle = styled(CardTitle)`
padding-right: 24px;
font-size: 20px;
font-weight: var(--pf-c-title--m-xl--FontWeight);
`;
const CardText = styled(Text)`
padding-right: 24px;
`;
const GraphCardActions = styled(CardActions)`
margin-left: initial;
padding-left: 0;
`;
function SubscriptionUsageChart() {
const [isPeriodDropdownOpen, setIsPeriodDropdownOpen] = useState(false);
const [periodSelection, setPeriodSelection] = useState('year');
const userProfile = useUserProfile();
const calculateDateRange = () => {
const today = new Date();
let date = '';
switch (periodSelection) {
case 'year':
date =
today.getMonth() < 10
? `${today.getFullYear() - 1}-0${today.getMonth() + 1}-01`
: `${today.getFullYear() - 1}-${today.getMonth() + 1}-01`;
break;
case 'two_years':
date =
today.getMonth() < 10
? `${today.getFullYear() - 2}-0${today.getMonth() + 1}-01`
: `${today.getFullYear() - 2}-${today.getMonth() + 1}-01`;
break;
case 'three_years':
date =
today.getMonth() < 10
? `${today.getFullYear() - 3}-0${today.getMonth() + 1}-01`
: `${today.getFullYear() - 3}-${today.getMonth() + 1}-01`;
break;
default:
date =
today.getMonth() < 10
? `${today.getFullYear() - 1}-0${today.getMonth() + 1}-01`
: `${today.getFullYear() - 1}-${today.getMonth() + 1}-01`;
break;
}
return date;
};
const {
isLoading,
result: subscriptionUsageChartData,
request: fetchSubscriptionUsageChart,
} = useRequest(
useCallback(async () => {
const data = await SubscriptionUsageAPI.readSubscriptionUsageChart(
calculateDateRange()
);
return data.data.results;
}, [periodSelection]),
[]
);
useEffect(() => {
fetchSubscriptionUsageChart();
}, [fetchSubscriptionUsageChart, periodSelection]);
if (isLoading) {
return (
<PageSection>
<Card>
<ContentLoading />
</Card>
</PageSection>
);
}
return (
<Card>
<Flex style={{ justifyContent: 'space-between' }}>
<FlexItem>
<ChartCardTitle>{t`Subscription Compliance`}</ChartCardTitle>
</FlexItem>
<FlexItem>
<CardText component="small">
{t`Last recalculation date:`}{' '}
{userProfile.systemConfig.HOST_METRIC_SUMMARY_TASK_LAST_TS.slice(
0,
10
)}
</CardText>
</FlexItem>
</Flex>
<GraphCardHeader>
<GraphCardActions>
<Select
variant={SelectVariant.single}
placeholderText={t`Select period`}
aria-label={t`Select period`}
typeAheadAriaLabel={t`Select period`}
className="periodSelect"
onToggle={setIsPeriodDropdownOpen}
onSelect={(event, selection) => {
setIsPeriodDropdownOpen(false);
setPeriodSelection(selection);
}}
selections={periodSelection}
isOpen={isPeriodDropdownOpen}
noResultsFoundText={t`No results found`}
ouiaId="subscription-usage-period-select"
>
<SelectOption key="year" value="year">
{t`Past year`}
</SelectOption>
<SelectOption key="two_years" value="two_years">
{t`Past two years`}
</SelectOption>
<SelectOption key="three_years" value="three_years">
{t`Past three years`}
</SelectOption>
</Select>
</GraphCardActions>
</GraphCardHeader>
<CardBody>
<UsageChart
period={periodSelection}
height={600}
id="d3-usage-line-chart-root"
data={subscriptionUsageChartData}
/>
</CardBody>
</Card>
);
}
export default SubscriptionUsageChart;

View File

@@ -2,16 +2,9 @@ export default function getDocsBaseUrl(config) {
let version = 'latest';
const licenseType = config?.license_info?.license_type;
if (licenseType && licenseType !== 'open') {
if (config?.version) {
if (parseFloat(config?.version.split('-')[0]) >= 4.3) {
version = parseFloat(config?.version.split('-')[0]);
} else {
version = config?.version.split('-')[0];
}
}
} else {
version = 'latest';
if (licenseType && licenseType !== 'open' && config?.version) {
version = parseFloat(config?.version.split('-')[0]).toFixed(1);
}
return `https://docs.ansible.com/automation-controller/${version}`;
}

View File

@@ -6,7 +6,7 @@ describe('getDocsBaseUrl', () => {
license_info: {
license_type: 'open',
},
version: '18.0.0',
version: '18.4.4',
});
expect(result).toEqual(
@@ -19,11 +19,11 @@ describe('getDocsBaseUrl', () => {
license_info: {
license_type: 'enterprise',
},
version: '4.0.0',
version: '18.4.4',
});
expect(result).toEqual(
'https://docs.ansible.com/automation-controller/4.0.0'
'https://docs.ansible.com/automation-controller/18.4'
);
});
@@ -32,17 +32,17 @@ describe('getDocsBaseUrl', () => {
license_info: {
license_type: 'enterprise',
},
version: '4.0.0-beta',
version: '7.0.0-beta',
});
expect(result).toEqual(
'https://docs.ansible.com/automation-controller/4.0.0'
'https://docs.ansible.com/automation-controller/7.0'
);
});
it('should return latest version if license info missing', () => {
const result = getDocsBaseUrl({
version: '18.0.0',
version: '18.4.4',
});
expect(result).toEqual(

View File

@@ -33,7 +33,6 @@ options:
image:
description:
- The fully qualified url of the container image.
required: True
type: str
description:
description:
@@ -79,7 +78,7 @@ def main():
argument_spec = dict(
name=dict(required=True),
new_name=dict(),
image=dict(required=True),
image=dict(),
description=dict(),
organization=dict(),
credential=dict(),

View File

@@ -273,6 +273,26 @@ def main():
# If the state was absent we can let the module delete it if needed, the module will handle exiting from this
module.delete_if_needed(existing_item)
# We need to clear out the name from the search fields so we can use name_or_id in the following searches
if 'name' in search_fields:
del search_fields['name']
# Create the data that gets sent for create and update
new_fields = {}
if execution_environment is not None:
if execution_environment == '':
new_fields['execution_environment'] = ''
else:
ee = module.get_one('execution_environments', name_or_id=execution_environment, **{'data': search_fields})
if ee is None:
ee2 = module.get_one('execution_environments', name_or_id=execution_environment)
if ee2 is None or ee2['organization'] is not None:
module.fail_json(msg='could not find execution_environment entry with name {0}'.format(execution_environment))
else:
new_fields['execution_environment'] = ee2['id']
else:
new_fields['execution_environment'] = ee['id']
association_fields = {}
if credentials is not None:
@@ -280,9 +300,9 @@ def main():
for item in credentials:
association_fields['credentials'].append(module.resolve_name_to_id('credentials', item))
# We need to clear out the name from the search fields so we can use name_or_id in the following searches
if 'name' in search_fields:
del search_fields['name']
# We need to clear out the organization from the search fields since the searches for labels and instance_groups don't support it and it won't be needed anymore
if 'organization' in search_fields:
del search_fields['organization']
if labels is not None:
association_fields['labels'] = []
@@ -302,8 +322,6 @@ def main():
else:
association_fields['instance_groups'].append(instance_group_id['id'])
# Create the data that gets sent for create and update
new_fields = {}
if rrule is not None:
new_fields['rrule'] = rrule
new_fields['name'] = new_name if new_name else (module.get_item_name(existing_item) if existing_item else name)
@@ -338,16 +356,6 @@ def main():
if timeout is not None:
new_fields['timeout'] = timeout
if execution_environment is not None:
if execution_environment == '':
new_fields['execution_environment'] = ''
else:
ee = module.get_one('execution_environments', name_or_id=execution_environment, **{'data': search_fields})
if ee is None:
module.fail_json(msg='could not find execution_environment entry with name {0}'.format(execution_environment))
else:
new_fields['execution_environment'] = ee['id']
# If the state was present and we can let the module build or update the existing item, this will return on its own
module.create_or_update_if_needed(
existing_item,

View File

@@ -89,7 +89,7 @@ def coerce_type(module, value):
if not HAS_YAML:
module.fail_json(msg="yaml is not installed, try 'pip install pyyaml'")
return yaml.safe_load(value)
elif value.lower in ('true', 'false', 't', 'f'):
elif value.lower() in ('true', 'false', 't', 'f'):
return {'t': True, 'f': False}[value[0].lower()]
try:
return int(value)
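
The fix above calls value.lower() instead of comparing the unbound method object (which is never equal to a string tuple), so boolean-looking strings now reach the mapping rather than falling through to numeric coercion. A minimal sketch of the corrected branch in isolation:

def coerce_bool_like(value):
    # Mirrors the fixed branch: map 'true'/'t'/'false'/'f' (any case) to booleans.
    if value.lower() in ('true', 'false', 't', 'f'):
        return {'t': True, 'f': False}[value[0].lower()]
    return value

assert coerce_bool_like('True') is True
assert coerce_bool_like('f') is False
assert coerce_bool_like('42') == '42'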

View File

@@ -517,68 +517,63 @@ EXAMPLES = '''
workflow_nodes:
- identifier: node101
unified_job_template:
name: example-project
name: example-inventory
inventory:
organization:
name: Default
type: inventory_source
related:
success_nodes: []
failure_nodes:
- identifier: node201
always_nodes: []
credentials: []
- identifier: node201
unified_job_template:
organization:
name: Default
name: job template 1
type: job_template
credentials: []
related:
success_nodes:
- identifier: node301
failure_nodes: []
always_nodes: []
credentials: []
- identifier: node202
- identifier: node102
unified_job_template:
organization:
name: Default
name: example-project
type: project
related:
success_nodes: []
failure_nodes: []
always_nodes: []
credentials: []
- identifier: node301
all_parents_must_converge: false
success_nodes:
- identifier: node201
- identifier: node201
unified_job_template:
organization:
name: Default
name: job template 2
name: example-job template
type: job_template
execution_environment:
name: My EE
inventory:
name: Test inventory
name: Demo Inventory
organization:
name: Default
related:
success_nodes:
- identifier: node401
failure_nodes:
- identifier: node301
always_nodes: []
credentials:
- name: cyberark
organization:
name: Default
instance_groups:
- name: SunCavanaugh Cloud
- name: default
labels:
- name: Custom Label
- name: Another Custom Label
organization:
name: Default
register: result
- all_parents_must_converge: false
identifier: node301
unified_job_template:
description: Approval node for example
timeout: 900
type: workflow_approval
name: Approval Node for Demo
related:
success_nodes:
- identifier: node401
- identifier: node401
unified_job_template:
name: Cleanup Activity Stream
type: system_job_template
'''

View File

@@ -49,8 +49,8 @@
- name: Cancel the command
ad_hoc_command_cancel:
command_id: "{{ command.id }}"
request_timeout: 60
register: results
ignore_errors: true
- assert:
that:

View File

@@ -108,8 +108,9 @@
- assert:
that:
- wait_results is successful
- 'wait_results.status == "successful"'
- 'wait_results.status in ["successful", "canceled"]'
fail_msg: "Ad hoc command stdout: {{ lookup('awx.awx.controller_api', 'ad_hoc_commands/' + command.id | string + '/stdout/?format=json') }}"
success_msg: "Ad hoc command finished with status {{ wait_results.status }}"
- name: Delete the Credential
credential:

View File

@@ -33,6 +33,7 @@
name: "localhost"
inventory: "Demo Inventory"
state: present
enabled: true
variables:
ansible_connection: local
register: result

View File

@@ -21,14 +21,14 @@
name: "{{ inv_name }}"
organization: Default
state: present
register: result
register: inv_result
- name: Create a Host
host:
name: "{{ host_name4 }}"
inventory: "{{ inv_name }}"
state: present
register: result
register: host_result
- name: Add Host to Group
group:
@@ -37,16 +37,18 @@
hosts:
- "{{ host_name4 }}"
preserve_existing_hosts: true
register: result
register: group_result
- assert:
that:
- "result is changed"
- inv_result is changed
- host_result is changed
- group_result is changed
- name: Create Group 1
group:
name: "{{ group_name1 }}"
inventory: "{{ result.id }}"
inventory: "{{ inv_result.id }}"
state: present
variables:
foo: bar
@@ -165,18 +167,6 @@
that:
- group1_host_count == "3"
- name: Delete Group 2
group:
name: "{{ group_name2 }}"
inventory: "{{ inv_name }}"
state: absent
register: result
# In this case, group 2 was last a child of group1 so deleting group1 deleted group2
- assert:
that:
- "result is not changed"
- name: Delete Group 3
group:
name: "{{ group_name3 }}"
@@ -200,6 +190,18 @@
that:
- "result is changed"
- name: Delete Group 2
group:
name: "{{ group_name2 }}"
inventory: "{{ inv_name }}"
state: absent
register: result
# In this case, group 2 was last a child of group1 so deleting group1 deleted group2
- assert:
that:
- "result is not changed"
- name: Check module fails with correct msg
group:
name: test-group

View File

@@ -11,6 +11,7 @@
- name: Cancel the job
job_cancel:
job_id: "{{ job.id }}"
request_timeout: 60
register: results
- assert:
@@ -23,10 +24,10 @@
fail_if_not_running: true
register: results
ignore_errors: true
- assert:
that:
- results is failed
# This test can be flaky, so we retry it a few times
until: results is failed and results.msg == 'Job is not running'
retries: 6
delay: 5
- name: Check module fails with correct msg
job_cancel:

View File

@@ -61,6 +61,10 @@
organization: Default
state: absent
register: result
until: result is changed # wait for the project update to settle
retries: 6
delay: 5
- assert:
that:

View File

@@ -225,6 +225,7 @@
schedule:
name: "{{ sched2 }}"
state: present
organization: Default
unified_job_template: "{{ jt1 }}"
rrule: "DTSTART:20191219T130551Z RRULE:FREQ=WEEKLY;INTERVAL=1;COUNT=1"
description: "This hopefully will work"

View File

@@ -1,4 +1,42 @@
---
- name: Initialize starting project vvv setting to false
awx.awx.settings:
name: "PROJECT_UPDATE_VVV"
value: false
- name: Change project vvv setting to true
awx.awx.settings:
name: "PROJECT_UPDATE_VVV"
value: true
register: result
- name: Changing setting to true should have changed the value
assert:
that:
- "result is changed"
- name: Change project vvv setting to true
awx.awx.settings:
name: "PROJECT_UPDATE_VVV"
value: true
register: result
- name: Changing setting to true again should not change the value
assert:
that:
- "result is not changed"
- name: Change project vvv setting back to false
awx.awx.settings:
name: "PROJECT_UPDATE_VVV"
value: false
register: result
- name: Changing setting back to false should have changed the value
assert:
that:
- "result is changed"
- name: Set the value of AWX_ISOLATION_SHOW_PATHS to a baseline
settings:
name: AWX_ISOLATION_SHOW_PATHS

View File

@@ -220,6 +220,7 @@
user:
controller_username: "{{ username }}-orgadmin"
controller_password: "{{ username }}-orgadmin"
controller_oauthtoken: false # Hack for CI where we use oauth in config file
username: "{{ username }}"
first_name: Joe
password: "{{ 65535 | random | to_uuid }}"

View File

@@ -169,6 +169,9 @@
name: "{{ jt1_name }}"
project: "{{ demo_project_name }}"
inventory: Demo Inventory
ask_inventory_on_launch: true
ask_credential_on_launch: true
ask_labels_on_launch: true
playbook: hello_world.yml
job_type: run
state: present
@@ -710,7 +713,7 @@
name: "{{ wfjt_name }}"
inventory: Demo Inventory
extra_vars: {'foo': 'bar', 'another-foo': {'barz': 'bar2'}}
schema:
workflow_nodes:
- identifier: node101
unified_job_template:
name: "{{ project_inv_source_result.id }}"
@@ -721,30 +724,52 @@
related:
failure_nodes:
- identifier: node201
- identifier: node102
unified_job_template:
organization:
name: "{{ org_name }}"
name: "{{ demo_project_name_2 }}"
type: project
related:
success_nodes:
- identifier: node201
- identifier: node201
unified_job_template:
organization:
name: Default
name: "{{ jt1_name }}"
type: job_template
credentials: []
inventory:
name: Demo Inventory
organization:
name: Default
related:
success_nodes:
- identifier: node401
failure_nodes:
- identifier: node301
- identifier: node202
unified_job_template:
organization:
name: "{{ org_name }}"
name: "{{ project_inv_source }}"
type: project
always_nodes: []
credentials:
- name: "{{ scm_cred_name }}"
organization:
name: Default
instance_groups:
- name: "{{ ig1 }}"
labels:
- name: "{{ lab1 }}"
organization:
name: "{{ org_name }}"
- all_parents_must_converge: false
identifier: node301
unified_job_template:
organization:
name: Default
name: "{{ jt2_name }}"
type: job_template
- identifier: Cleanup Job
description: Approval node for example
timeout: 900
type: workflow_approval
name: "{{ approval_node_name }}"
related:
success_nodes:
- identifier: node401
- identifier: node401
unified_job_template:
name: Cleanup Activity Stream
type: system_job_template

View File

@@ -18,7 +18,9 @@ documentation: https://github.com/ansible/awx/blob/devel/awx_collection/README.m
homepage: https://www.ansible.com/
issues: https://github.com/ansible/awx/issues?q=is%3Aissue+label%3Acomponent%3Aawx_collection
license:
- GPL-3.0-only
- GPL-3.0-or-later
# plugins/module_utils/tower_legacy.py
- BSD-2-Clause
name: {{ collection_package }}
namespace: {{ collection_namespace }}
readme: README.md

View File

docs/docsite/conf.py
View File

@@ -0,0 +1,90 @@
import sys
import os
import shlex
from datetime import datetime
from importlib import import_module
#sys.path.insert(0, os.path.abspath('./rst/rest_api/_swagger'))
project = u'Ansible AWX'
copyright = u'2023, Red Hat'
author = u'Red Hat'
pubdateshort = '2023-08-04'
pubdate = datetime.strptime(pubdateshort, '%Y-%m-%d').strftime('%B %d, %Y')
# The name for this set of Sphinx documents. If None, it defaults to
# "<project> v<release> documentation".
#html_title = None
html_title = 'Ansible AWX community documentation'
# A shorter title for the navigation bar. Default is the same as html_title.
#html_short_title = None
html_short_title = 'AWX community documentation'
htmlhelp_basename = 'AWX_docs'
# include the swagger extension to build rest api reference
#'swagger',
extensions = [
'sphinx.ext.autodoc',
'sphinx.ext.doctest',
'sphinx.ext.intersphinx',
'sphinx.ext.todo',
'sphinx.ext.coverage',
'sphinx.ext.ifconfig',
'sphinx_ansible_theme',
]
html_theme = 'sphinx_ansible_theme'
html_theme_path = ["_static"]
pygments_style = "ansible"
highlight_language = "YAML+Jinja"
source_suffix = '.rst'
master_doc = 'index'
version = 'latest'
shortversion = 'latest'
# The full version, including alpha/beta/rc tags.
release = 'AWX latest'
language = 'en'
locale_dirs = ['locale/'] # path is example but recommended.
gettext_compact = False # optional.
rst_epilog = """
.. |atqi| replace:: *AWX Quick Installation Guide*
.. |atqs| replace:: *AWX Quick Setup Guide*
.. |atir| replace:: *AWX Installation and Reference Guide*
.. |ata| replace:: *AWX Administration Guide*
.. |atu| replace:: *AWX User Guide*
.. |atumg| replace:: *AWX Upgrade and Migration Guide*
.. |atapi| replace:: *AWX API Guide*
.. |atrn| replace:: *AWX Release Notes*
.. |aa| replace:: Ansible Automation
.. |AA| replace:: Automation Analytics
.. |aap| replace:: Ansible Automation Platform
.. |ab| replace:: ansible-builder
.. |ap| replace:: Automation Platform
.. |at| replace:: automation controller
.. |At| replace:: Automation controller
.. |ah| replace:: Automation Hub
.. |EE| replace:: Execution Environment
.. |EEs| replace:: Execution Environments
.. |Ee| replace:: Execution environment
.. |Ees| replace:: Execution environments
.. |ee| replace:: execution environment
.. |ees| replace:: execution environments
.. |versionshortest| replace:: v%s
.. |pubdateshort| replace:: %s
.. |pubdate| replace:: %s
.. |rhel| replace:: Red Hat Enterprise Linux
.. |rhaa| replace:: Red Hat Ansible Automation
.. |rhaap| replace:: Red Hat Ansible Automation Platform
.. |RHAT| replace:: Red Hat Ansible Automation Platform controller
""" % (version, pubdateshort, pubdate)

View File

@@ -0,0 +1,7 @@
# This requirements file is used for AWX latest doc builds.
sphinx # Tooling to build HTML from RST source.
sphinx-ansible-theme # Ansible community theme for Sphinx doc builds.
docutils # Tooling for RST processing and the swagger extension.
Jinja2 # Requires investigation. Possibly inherited from previous repo with a custom theme.
PyYaml # Requires investigation. Possibly used as tooling for swagger API reference content.

View File

@@ -0,0 +1,74 @@
#
# This file is autogenerated by pip-compile with Python 3.11
# by the following command:
#
# pip-compile --allow-unsafe --output-file=docs/docsite/requirements.txt --strip-extras docs/docsite/requirements.in
#
alabaster==0.7.13
# via sphinx
ansible-pygments==0.1.1
# via sphinx-ansible-theme
babel==2.12.1
# via sphinx
certifi==2023.7.22
# via requests
charset-normalizer==3.2.0
# via requests
docutils==0.16
# via
# -r docs/docsite/requirements.in
# sphinx
# sphinx-rtd-theme
idna==3.4
# via requests
imagesize==1.4.1
# via sphinx
jinja2==3.0.3
# via
# -r docs/docsite/requirements.in
# sphinx
markupsafe==2.1.3
# via jinja2
packaging==23.1
# via sphinx
pygments==2.16.1
# via
# ansible-pygments
# sphinx
pyyaml==6.0.1
# via -r docs/docsite/requirements.in
requests==2.31.0
# via sphinx
snowballstemmer==2.2.0
# via sphinx
sphinx==5.1.1
# via
# -r docs/docsite/requirements.in
# sphinx-ansible-theme
# sphinx-rtd-theme
# sphinxcontrib-applehelp
# sphinxcontrib-devhelp
# sphinxcontrib-htmlhelp
# sphinxcontrib-jquery
# sphinxcontrib-qthelp
# sphinxcontrib-serializinghtml
sphinx-ansible-theme==0.9.1
# via -r docs/docsite/requirements.in
sphinx-rtd-theme==1.3.0
# via sphinx-ansible-theme
sphinxcontrib-applehelp==1.0.7
# via sphinx
sphinxcontrib-devhelp==1.0.5
# via sphinx
sphinxcontrib-htmlhelp==2.0.4
# via sphinx
sphinxcontrib-jquery==4.1
# via sphinx-rtd-theme
sphinxcontrib-jsmath==1.0.1
# via sphinx
sphinxcontrib-qthelp==1.0.6
# via sphinx
sphinxcontrib-serializinghtml==1.1.9
# via sphinx
urllib3==2.0.4
# via requests

View File

@@ -0,0 +1,31 @@
Changing the Default Timeout for Authentication
=================================================
.. index::
pair: troubleshooting; authentication timeout
pair: authentication timeout; changing the default
single: authentication token
single: authentication expiring
single: log
single: login timeout
single: timeout login
pair: timeout; session
The default length of time, in seconds, that your supplied token is valid can be changed in the System Settings screen of the AWX user interface:
1. Click **Settings** from the left navigation bar.
2. Click **Miscellaneous Authentication settings** under the System settings.
3. Click **Edit**.
4. Enter the timeout period in seconds in the **Idle Time Force Log Out** text field.
.. image:: ../common/images/configure-awx-system-timeout.png
5. Click **Save** to apply your changes.
.. note::
If you are accessing AWX directly and are having trouble staying authenticated, in that you have to keep logging in over and over, try clearing your web browser's cache. In situations like this, the authentication token has often been cached in the browser session and must be cleared.

View File

@@ -0,0 +1,198 @@
.. _ag_manage_utility:
The *awx-manage* Utility
-------------------------------
.. index::
single: awx-manage
The ``awx-manage`` utility is used to access detailed internal information of AWX. Commands for ``awx-manage`` should run as the ``awx`` or ``root`` user.
.. warning::
Running awx-manage commands via playbook is not recommended or supported.
Inventory Import
~~~~~~~~~~~~~~~~
.. index::
single: awx-manage; inventory import
``awx-manage`` is a mechanism by which an AWX administrator can import inventory directly into AWX, for those who cannot use Custom Inventory Scripts.
To use ``awx-manage`` properly, you must first create an inventory in AWX to use as the destination for the import.
For help with ``awx-manage``, run the following command: ``awx-manage inventory_import [--help]``
The ``inventory_import`` command synchronizes an AWX inventory object with a text-based inventory file, dynamic inventory script, or a directory of one or more of the above as supported by core Ansible.
When running this command, specify either an ``--inventory-id`` or ``--inventory-name``, and the path to the Ansible inventory source (``--source``).
::
awx-manage inventory_import --source=/ansible/inventory/ --inventory-id=1
By default, inventory data already stored in AWX blends with data from the external source. To use only the external data, specify ``--overwrite``. To specify that any existing hosts get variable data exclusively from the ``--source``, specify ``--overwrite_vars``. The default behavior adds any new variables from the external source, overwriting keys that already exist, but preserves any variables that were not sourced from the external data source.
::
awx-manage inventory_import --source=/ansible/inventory/ --inventory-id=1 --overwrite
.. include:: ../common/overwrite_var_note_2-4-0.rst
Cleanup of old data
~~~~~~~~~~~~~~~~~~~
.. index::
single: awx-manage, data cleanup
``awx-manage`` has a variety of commands used to clean old data from AWX. AWX administrators can use the Management Jobs interface in the user interface or use the command line.
- ``awx-manage cleanup_jobs [--help]``
This permanently deletes the job details and job output for jobs older than a specified number of days.
- ``awx-manage cleanup_activitystream [--help]``
This permanently deletes any :ref:`ug_activitystreams` data older than a specific number of days.
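For example, to remove records older than 90 days with each command (the 90-day window is illustrative; run the commands with ``--help`` to see the options available in your version):
::
   awx-manage cleanup_jobs --days=90
   awx-manage cleanup_activitystream --days=90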
Cluster management
~~~~~~~~~~~~~~~~~~~~
.. index::
single: awx-manage; cluster management
Refer to the :ref:`ag_clustering` section for details on the
``awx-manage provision_instance`` and ``awx-manage deprovision_instance``
commands.
.. note::
Do not run other ``awx-manage`` commands unless instructed by Ansible Support.
.. _ag_token_utility:
Token and session management
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. index::
single: awx-manage; token management
single: awx-manage; session management
AWX supports the following commands for OAuth2 token management:
.. contents::
:local:
``create_oauth2_token``
^^^^^^^^^^^^^^^^^^^^^^^^
Use this command to create OAuth2 tokens (specify actual username for ``example_user`` below):
::
$ awx-manage create_oauth2_token --user example_user
New OAuth2 token for example_user: j89ia8OO79te6IAZ97L7E8bMgXCON2
Make sure you provide a valid user when creating tokens. Otherwise, an error message indicates that you tried to issue the command without specifying a user, or that the supplied username does not exist.
.. _ag_manage_utility_revoke_tokens:
``revoke_oauth2_tokens``
^^^^^^^^^^^^^^^^^^^^^^^^^^^
Use this command to revoke OAuth2 tokens (both application tokens and personal access tokens (PAT)). By default, it revokes all application tokens (but not their associated refresh tokens), and revokes all personal access tokens. However, you can also specify a user for whom to revoke all tokens.
To revoke all existing OAuth2 tokens:
::
$ awx-manage revoke_oauth2_tokens
To revoke all OAuth2 tokens & their refresh tokens:
::
$ awx-manage revoke_oauth2_tokens --revoke_refresh
To revoke all OAuth2 tokens for the user with ``id=example_user`` (specify actual username for ``example_user`` below):
::
$ awx-manage revoke_oauth2_tokens --user example_user
To revoke all OAuth2 tokens and refresh token for the user with ``id=example_user``:
::
$ awx-manage revoke_oauth2_tokens --user example_user --revoke_refresh
``cleartokens``
^^^^^^^^^^^^^^^^^^^
Use this command to clear tokens which have already been revoked. Refer to `Django's Oauth Toolkit documentation on cleartokens`_ for more detail.
.. _`Django's Oauth Toolkit documentation on cleartokens`: https://django-oauth-toolkit.readthedocs.io/en/latest/management_commands.html
``expire_sessions``
^^^^^^^^^^^^^^^^^^^^^^^^
Use this command to terminate all sessions or all sessions for a specific user. Consider using this command when a user changes roles in an organization, is removed from assorted groups in LDAP/AD, or the administrator wants to ensure the user can no longer execute jobs due to membership in those groups.
::
$ awx-manage expire_sessions
This command terminates all sessions by default. The users associated with those sessions will be consequently logged out. To only expire the sessions of a specific user, you can pass their username using the ``--user`` flag (specify actual username for ``example_user`` below):
::
$ awx-manage expire_sessions --user example_user
``clearsessions``
^^^^^^^^^^^^^^^^^^^^^^^^
Use this command to delete all sessions that have expired. Refer to `Django's documentation on clearsessions`_ for more detail.
.. _`Django's documentation on clearsessions`: https://docs.djangoproject.com/en/2.1/topics/http/sessions/#clearing-the-session-store
For more information on OAuth2 token management in the AWX user interface, see the :ref:`ug_applications_auth` section of the |atu|.
Analytics gathering
~~~~~~~~~~~~~~~~~~~~~
.. index::
single: awx-manage; data collection
single: awx-manage; analytics gathering
Use this command to gather analytics on-demand outside of the predefined window (default is 4 hours):
::
$ awx-manage gather_analytics --ship
For customers with disconnected environments who want to collect usage information about unique hosts automated across a time period, use this command:
::
awx-manage host_metric --since YYYY-MM-DD --until YYYY-MM-DD --json
The parameters ``--since`` and ``--until`` specify date ranges and are optional, but at least one of them must be present. The ``--json`` flag specifies the output format and is optional.

View File

@@ -0,0 +1,222 @@
.. _ag_clustering:
Clustering
============
.. index::
pair: redundancy; instance groups
pair: redundancy; clustering
Clustering is sharing load between hosts. Each instance should be able to act as an entry point for UI and API access. This should enable AWX administrators to use load balancers in front of as many instances as they wish and maintain good data visibility.
.. note::
Load balancing is optional; it is entirely possible to have ingress on one or all instances as needed. The ``CSRF_TRUSTED_ORIGIN`` setting may be required if you are using AWX behind a load balancer. See :ref:`ki_csrf_trusted_origin_setting` for more detail.
Each instance should be able to join the AWX cluster and expand its ability to execute jobs. This is a simple system where jobs can and will run anywhere rather than being directed where to run. Also, clustered instances can be grouped into different pools/queues, called :ref:`ag_instance_groups`.
Setup Considerations
---------------------
.. index::
single: clustering; setup considerations
pair: clustering; PostgreSQL
This section covers initial setup of clusters only. For upgrading an existing cluster, refer to the |atumg|.
Important considerations to note in the new clustering environment:
- PostgreSQL is still a standalone instance and is not clustered. AWX does not manage replica configuration or database failover (if the user configures standby replicas).
- When spinning up a cluster, the database node should be a standalone server, and PostgreSQL should not be installed on one of AWX nodes.
- PgBouncer is not recommended for connection pooling with AWX. Currently, AWX relies heavily on ``pg_notify`` for sending messages across various components, and therefore, PgBouncer cannot readily be used in transaction pooling mode.
- The maximum supported instances in a cluster is 20.
- All instances should be reachable from all other instances and they should be able to reach the database. It is also important for the hosts to have a stable address and/or hostname (depending on how the AWX host is configured).
- All instances must be geographically collocated, with reliable low-latency connections between instances.
- For purposes of upgrading to a clustered environment, your primary instance must be part of the ``default`` group in the inventory *AND* it needs to be the first host listed in the ``default`` group.
- Manual projects must be manually synced to all instances by the customer, and updated on all instances at once.
- The ``inventory`` file for platform deployments should be saved/persisted. If new instances are to be provisioned, the passwords and configuration options, as well as host names, must be made available to the installer.
Scaling the Web and Task pods independently
--------------------------------------------
You can scale replicas up or down for each deployment by using the ``web_replicas`` or ``task_replicas`` keys, respectively. You can also scale all pods across both deployments by using ``replicas``. The logic behind these CRD keys is as follows:
- If you specify the ``replicas`` field, the key passed will scale both the ``web`` and ``task`` replicas to the same number.
- If ``web_replicas`` or ``task_replicas`` is ever passed, it will override the existing ``replicas`` field on the specific deployment with the new key value.
These new replicas can be constrained in a similar manner to previous single deployments by appending the particular deployment name in front of the constraint used. More about those new constraints can be found below in the :ref:`ag_assign_pods_to_nodes` section.
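For illustration, a sketch of the relevant keys in the AWX custom resource spec (the values are examples only):
::
   ---
   spec:
     ...
     # scale both deployments together
     replicas: 3
     # or scale each deployment independently
     web_replicas: 2
     task_replicas: 4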
.. _ag_assign_pods_to_nodes:
Assigning AWX pods to specific nodes
-------------------------------------
You can constrain the AWX pods created by the operator to run on a certain subset of nodes. ``node_selector`` and ``postgres_selector`` constrain the AWX pods to run only on the nodes that match all the specified key/value pairs. ``tolerations`` and ``postgres_tolerations`` allow the AWX pods to be scheduled onto nodes with matching taints. The ability to specify ``topologySpreadConstraints`` is also available through ``topology_spread_constraints``. If you want to use affinity rules for your AWX pod, you can use the ``affinity`` option.
If you want to constrain the web and task pods individually, you can do so by specifying the deployment type before the specific setting. For example, specifying ``task_tolerations`` will allow the AWX task pod to be scheduled onto nodes with matching taints.
+----------------------------------+------------------------------------------+----------+
| Name | Description | Default |
+----------------------------------+------------------------------------------+----------+
| postgres_image | Path of the image to pull | postgres |
+----------------------------------+------------------------------------------+----------+
| postgres_image_version | Image version to pull | 13 |
+----------------------------------+------------------------------------------+----------+
| node_selector | AWX pods' nodeSelector | '' |
+----------------------------------+------------------------------------------+----------+
| web_node_selector | AWX web pods' nodeSelector | '' |
+----------------------------------+------------------------------------------+----------+
| task_node_selector | AWX task pods' nodeSelector | '' |
+----------------------------------+------------------------------------------+----------+
| topology_spread_constraints | AWX pods' topologySpreadConstraints | '' |
+----------------------------------+------------------------------------------+----------+
| web_topology_spread_constraints | AWX web pods' topologySpreadConstraints | '' |
+----------------------------------+------------------------------------------+----------+
| task_topology_spread_constraints | AWX task pods' topologySpreadConstraints | '' |
+----------------------------------+------------------------------------------+----------+
| affinity | AWX pods' affinity rules | '' |
+----------------------------------+------------------------------------------+----------+
| web_affinity | AWX web pods' affinity rules | '' |
+----------------------------------+------------------------------------------+----------+
| task_affinity | AWX task pods' affinity rules | '' |
+----------------------------------+------------------------------------------+----------+
| tolerations | AWX pods' tolerations | '' |
+----------------------------------+------------------------------------------+----------+
| web_tolerations | AWX web pods' tolerations | '' |
+----------------------------------+------------------------------------------+----------+
| task_tolerations | AWX task pods' tolerations | '' |
+----------------------------------+------------------------------------------+----------+
| annotations | AWX pods' annotations | '' |
+----------------------------------+------------------------------------------+----------+
| postgres_selector | Postgres pods' nodeSelector | '' |
+----------------------------------+------------------------------------------+----------+
| postgres_tolerations | Postgres pods' tolerations | '' |
+----------------------------------+------------------------------------------+----------+
An example of such a customization could be:
::
---
spec:
...
node_selector: |
disktype: ssd
kubernetes.io/arch: amd64
kubernetes.io/os: linux
topology_spread_constraints: |
- maxSkew: 100
topologyKey: "topology.kubernetes.io/zone"
whenUnsatisfiable: "ScheduleAnyway"
labelSelector:
matchLabels:
app.kubernetes.io/name: "<resourcename>"
tolerations: |
- key: "dedicated"
operator: "Equal"
value: "AWX"
effect: "NoSchedule"
task_tolerations: |
- key: "dedicated"
operator: "Equal"
value: "AWX_task"
effect: "NoSchedule"
postgres_selector: |
disktype: ssd
kubernetes.io/arch: amd64
kubernetes.io/os: linux
postgres_tolerations: |
- key: "dedicated"
operator: "Equal"
value: "AWX"
effect: "NoSchedule"
affinity:
nodeAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 1
preference:
matchExpressions:
- key: another-node-label-key
operator: In
values:
- another-node-label-value
- another-node-label-value
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 100
podAffinityTerm:
labelSelector:
matchExpressions:
- key: security
operator: In
values:
- S2
topologyKey: topology.kubernetes.io/zone
Status and Monitoring via Browser API
--------------------------------------
AWX itself reports as much status as it can via the Browsable API at ``/api/v2/ping`` in order to provide validation of the health of the cluster, including:
- The instance servicing the HTTP request
- The timestamps of the last heartbeat of all other instances in the cluster
- Instance Groups and Instance membership in those groups
View more details about Instances and Instance Groups, including running jobs and membership information at ``/api/v2/instances/`` and ``/api/v2/instance_groups/``.
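For example, the ping endpoint can be queried with any HTTP client (the hostname below is a placeholder):
::
   $ curl -s https://awx.example.com/api/v2/ping/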
Instance Services and Failure Behavior
----------------------------------------
Each AWX instance is made up of several different services working collaboratively:
- HTTP Services - This includes the AWX application itself as well as external web services.
- Callback Receiver - Receives job events from running Ansible jobs.
- Dispatcher - The worker queue that processes and runs all jobs.
- Redis - This key value store is used as a queue for event data propagated from ansible-playbook to the application.
- Rsyslog - log processing service used to deliver logs to various external logging services.
AWX is configured in such a way that if any of these services or their components fail, then all services are restarted. If these fail sufficiently often in a short span of time, then the entire instance will be placed offline in an automated fashion in order to allow remediation without causing unexpected behavior.
Job Runtime Behavior
---------------------
The way jobs are run and reported to a 'normal' user of AWX does not change. On the system side, some differences are worth noting:
- When a job is submitted from the API interface, it gets pushed into the dispatcher queue. Each AWX instance will connect to and receive jobs from that queue using a particular scheduling algorithm. Any instance in the cluster is just as likely to receive the work and execute the task. If an instance fails while executing jobs, then the work is marked as permanently failed.
.. image:: ../common/images/clustering-visual.png
- Project updates run successfully on any instance that could potentially run a job. Projects will sync themselves to the correct version on the instance immediately prior to running the job. If the needed revision is already locally checked out and Galaxy or Collections updates are not needed, then a sync may not be performed.
- When the sync happens, it is recorded in the database as a project update with a ``launch_type = sync`` and ``job_type = run``. Project syncs will not change the status or version of the project; instead, they will update the source tree *only* on the instance where they run.
- If updates are needed from Galaxy or Collections, a sync is performed that downloads the required roles, consuming that much more space on ``/tmp``. In cases where you have a big project (around 10 GB), disk space on ``/tmp`` may be an issue.
Job Runs
^^^^^^^^^^^
By default, when a job is submitted to the AWX queue, it can be picked up by any of the workers. However, you can control where a particular job runs, such as restricting the instances on which a job runs.
In order to support temporarily taking an instance offline, there is an ``enabled`` property defined on each instance. When this property is disabled, no jobs will be assigned to that instance. Existing jobs will finish, but no new work will be assigned.
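For example, a sketch of toggling that property through the API, mirroring the instance endpoints mentioned earlier (the instance ID is a placeholder):
::
   HTTP PATCH /api/v2/instances/X/
   {
       "enabled": false
   }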

View File

@@ -0,0 +1,100 @@
.. _ag_configure_awx:
AWX Configuration
~~~~~~~~~~~~~~~~~~~
.. index::
single: configure AWX
.. _configure_awx_overview:
You can configure various AWX settings within the Settings screen in the following tabs:
.. image:: ../common/images/ug-settings-menu-screen.png
Each tab contains fields with a **Reset** button, allowing you to revert any value entered back to the default value. **Reset All** allows you to revert all the values to their factory default values.
**Save** applies changes you make, but it does not exit the edit dialog. To return to the Settings screen, click **Settings** from the left navigation bar or use the breadcrumbs at the top of the current view.
Authentication
=================
.. index::
single: social authentication
single: authentication
single: enterprise authentication
pair: configuration; authentication
.. include:: ./configure_awx_authentication.rst
.. _configure_awx_jobs:
Jobs
=========
.. index::
single: jobs
pair: configuration; jobs
The Jobs tab allows you to configure the types of modules that are allowed to be used by AWX's Ad Hoc Commands feature, set limits on the number of jobs that can be scheduled, define their output size, and other details pertaining to working with Jobs in AWX.
1. From the left navigation bar, click **Settings**, then select **Jobs settings** from the Settings screen.
2. Set the configurable options from the fields provided. Click the tooltip |help| icon next to a field for additional information or details about it. Refer to the :ref:`ug_galaxy` section for details about configuring Galaxy settings.
.. note::
The values for all the timeouts are in seconds.
.. image:: ../common/images/configure-awx-jobs.png
3. Click **Save** to apply the settings or **Cancel** to abandon the changes.
.. _configure_awx_system:
System
======
.. index::
pair: configuration; system
The System tab allows you to define the base URL for the AWX host, configure alerts, enable activity capturing, control visibility of users, enable certain AWX features and functionality through a license file, and configure logging aggregation options.
1. From the left navigation bar, click **Settings**.
2. The right side of the Settings window is a set of configurable System settings. Select from the following options:
- **Miscellaneous System settings**: enable activity streams, specify the default execution environment, define the base URL for the AWX host, enable AWX administration alerts, set user visibility, define analytics, specify usernames and passwords, and configure proxies.
- **Miscellaneous Authentication settings**: configure options associated with authentication methods (built-in or SSO), sessions (timeout, number of sessions logged in, tokens), and social authentication mapping.
- **Logging settings**: configure logging options based on the type you choose:
.. image:: ../common/images/configure-awx-system-logging-types.png
For more information about each of the logging aggregation types, refer to the :ref:`ag_logging` section of the |ata|.
3. Set the configurable options from the fields provided. Click the tooltip |help| icon next to a field for additional information or details about it. Below is an example of the System settings window.
.. |help| image:: ../common/images/tooltips-icon.png
.. image:: ../common/images/configure-awx-system.png
.. note::
The **Allow External Users to Create Oauth2 Tokens** setting is disabled by default. This ensures external users cannot *create* their own tokens. If you enable and then disable it, any tokens created by external users in the meantime will still exist and are not automatically revoked.
4. Click **Save** to apply the settings or **Cancel** to abandon the changes.
.. _configure_awx_ui:
User Interface
================
.. index::
pair: configuration; UI
pair: configuration; data collection
pair: configuration; custom logo
pair: configuration; custom login message
pair: logo; custom
pair: login message; custom
.. include:: ../common/logos_branding.rst

View File

@@ -0,0 +1,19 @@
Through the AWX user interface, you can set up a simplified login through various authentication types: GitHub, Google, LDAP, RADIUS, and SAML. After you create and register your developer application with the appropriate service, you can set up authorizations for them.
1. From the left navigation bar, click **Settings**.
2. The left side of the Settings window is a set of configurable Authentication settings. Select from the following options:
- :ref:`ag_auth_azure`
- :ref:`ag_auth_github`
- :ref:`ag_auth_google_oauth2`
- :ref:`LDAP settings <ag_auth_ldap>`
- :ref:`ag_auth_radius`
- :ref:`ag_auth_saml`
- :ref:`ag_auth_tacacs`
- :ref:`ag_auth_oidc`
Different authentication types require you to enter different information. Be sure to include all the information as required.
3. Click **Save** to apply the settings or **Cancel** to abandon the changes.

View File

@@ -0,0 +1,442 @@
.. _ag_ext_exe_env:
Container and Instance Groups
==================================
.. index::
pair: container; groups
pair: instance; groups
AWX allows you to execute jobs via Ansible playbook runs directly on a member of the cluster, or in a namespace of an OpenShift cluster with the necessary service account provisioned, called a Container Group. You can execute jobs in a container group only as-needed per playbook. For more information, see :ref:`ag_container_groups` towards the end of this section.
For |ees|, see :ref:`ug_execution_environments` in the |atu|.
.. _ag_instance_groups:
Instance Groups
------------------
Instances can be grouped into one or more Instance Groups. Instance groups can be assigned to one or more of the resources listed below.
- Organizations
- Inventories
- Job Templates
When a job associated with one of the resources executes, it will be assigned to the instance group associated with the resource. During the execution process, instance groups associated with Job Templates are checked before those associated with Inventories. Similarly, instance groups associated with Inventories are checked before those associated with Organizations. Thus, Instance Group assignments for the three resources form a hierarchy: Job Template **>** Inventory **>** Organization.
Here are some of the things to consider when working with instance groups:
- You may optionally define other groups and group instances in those groups. These groups should be prefixed with ``instance_group_``. Instances are required to be in the ``awx`` or ``execution_nodes`` group alongside other ``instance_group_`` groups. In a clustered setup, at least one instance **must** be present in the ``awx`` group, which will appear as ``controlplane`` in the API instance groups. See :ref:`ag_awx_group_policies` for example scenarios.
- A ``default`` API instance group is automatically created with all nodes capable of running jobs. Technically, it is like any other instance group but if a specific instance group is not associated with a specific resource, then job execution will always fall back to the ``default`` instance group. The ``default`` instance group always exists (it cannot be deleted nor renamed).
- Do not create a group named ``instance_group_default``.
- Do not name any instance the same as a group name.
.. _ag_awx_group_policies:
``awx`` group policies
^^^^^^^^^^^^^^^^^^^^^^^^^
.. index::
pair: policies; awx groups
Use the following criteria when defining nodes:
- nodes in the ``awx`` group can define ``node_type`` hostvar to be ``hybrid`` (default) or ``control``
- nodes in the ``execution_nodes`` group can define ``node_type`` hostvar to be ``execution`` (default) or ``hop``
You can define custom groups in the inventory file by naming groups with ``instance_group_*`` where ``*`` becomes the name of the group in the API. Or, you can create custom instance groups in the API after the install has finished.
The current behavior expects a member of an ``instance_group_*`` be part of ``awx`` or ``execution_nodes`` group. Consider this example scenario:
::
[awx]
126-addr.tatu.home ansible_host=192.168.111.126 node_type=control
[awx:vars]
peers=execution_nodes
[execution_nodes]
[instance_group_test]
110-addr.tatu.home ansible_host=192.168.111.110 receptor_listener_port=8928
As a result of running the installer, you will get the error below:
.. code-block:: bash
TASK [ansible.automation_platform_installer.check_config_static : Validate mesh topology] ***
fatal: [126-addr.tatu.home -> localhost]: FAILED! => {"msg": "The host '110-addr.tatu.home' is not present in either [awx] or [execution_nodes]"}
To fix this, you could move the box ``110-addr.tatu.home`` to the ``execution_nodes`` group.
::
[awx]
126-addr.tatu.home ansible_host=192.168.111.126 node_type=control
[awx:vars]
peers=execution_nodes
[execution_nodes]
110-addr.tatu.home ansible_host=192.168.111.110 receptor_listener_port=8928
[instance_group_test]
110-addr.tatu.home
This results in:
.. code-block:: bash
TASK [ansible.automation_platform_installer.check_config_static : Validate mesh topology] ***
ok: [126-addr.tatu.home -> localhost] => {"changed": false, "mesh": {"110-addr.tatu.home": {"node_type": "execution", "peers": [], "receptor_control_filename": "receptor.sock", "receptor_control_service_name": "control", "receptor_listener": true, "receptor_listener_port": 8928, "receptor_listener_protocol": "tcp", "receptor_log_level": "info"}, "126-addr.tatu.home": {"node_type": "control", "peers": ["110-addr.tatu.home"], "receptor_control_filename": "receptor.sock", "receptor_control_service_name": "control", "receptor_listener": false, "receptor_listener_port": 27199, "receptor_listener_protocol": "tcp", "receptor_log_level": "info"}}}
Upon upgrading from older versions of AWX, the legacy ``instance_group_`` member will most likely have the AWX code installed, which would cause that node to be placed in the ``awx`` group.
Configuring Instance Groups from the API
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. index::
pair: instance group; configure
pair: instance group; API
Instance groups can be created by POSTing to ``/api/v2/instance_groups`` as a system administrator.
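For example (the group name below is illustrative):
.. code-block:: bash
   HTTP POST /api/v2/instance_groups/
   {
       "name": "bigvm-group"
   }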
Once created, instances can be associated with an instance group with:
.. code-block:: bash
HTTP POST /api/v2/instance_groups/x/instances/ {'id': y}
An instance that is added to an instance group will automatically reconfigure itself to listen on the group's work queue. See the following section, :ref:`ag_instance_group_policies`, for more details.
.. _ag_instance_group_policies:
Instance group policies
^^^^^^^^^^^^^^^^^^^^^^^^^
.. index::
pair: policies; instance groups
pair: clustering; instance group policies
You can configure AWX instances to automatically join Instance Groups when they come online by defining a :term:`policy`. These policies are evaluated for every new instance that comes online.
Instance Group Policies are controlled by three optional fields on an ``Instance Group``:
- ``policy_instance_percentage``: This is a number between 0 - 100. It guarantees that this percentage of active AWX instances will be added to this Instance Group. As new instances come online, if the number of Instances in this group relative to the total number of instances is less than the given percentage, then new ones will be added until the percentage condition is satisfied.
- ``policy_instance_minimum``: This policy attempts to keep at least this many instances in the Instance Group. If the number of available instances is lower than this minimum, then all instances will be placed in this Instance Group.
- ``policy_instance_list``: This is a fixed list of instance names to always include in this Instance Group.
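For example, the three fields can be combined on a single instance group; the values and instance name below are illustrative:
.. code-block:: bash
   HTTP PATCH /api/v2/instance_groups/N/
   {
       "policy_instance_percentage": 50,
       "policy_instance_minimum": 2,
       "policy_instance_list": ["special-instance"]
   }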
The Instance Groups list view from the |at| User Interface provides a summary of the capacity levels for each instance group according to instance group policies:
|Instance Group policy example|
.. |Instance Group policy example| image:: ../common/images/instance-groups_list_view.png
See :ref:`ug_instance_groups_create` for further detail.
Notable policy considerations
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
- ``policy_instance_percentage`` and ``policy_instance_minimum`` both set minimum allocations. The rule that results in more instances assigned to the group will take effect. For example, if you have a ``policy_instance_percentage`` of 50% and a ``policy_instance_minimum`` of 2 and you start 6 instances, 3 of them would be assigned to the Instance Group. If you reduce the number of total instances in the cluster to 2, then both of them would be assigned to the Instance Group to satisfy ``policy_instance_minimum``. This way, you can set a lower bound on the amount of available resources.
- Policies do not actively prevent instances from being associated with multiple Instance Groups, but this can effectively be achieved by making the percentages add up to 100. If you have 4 instance groups, assign each a percentage value of 25 and the instances will be distributed among them with no overlap.
Manually pinning instances to specific groups
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. index::
pair: pinning; instance groups
pair: clustering; pinning
If you have a special instance which needs to be exclusively assigned to a specific Instance Group but don't want it to automatically join other groups via "percentage" or "minimum" policies:
1. Add the instance to one or more Instance Groups' ``policy_instance_list``
2. Update the instance's ``managed_by_policy`` property to be ``False``.
This will prevent the Instance from being automatically added to other groups based on percentage and minimum policy; it will only belong to the groups you've manually assigned it to:
.. code-block:: bash
HTTP PATCH /api/v2/instance_groups/N/
{
"policy_instance_list": ["special-instance"]
}
HTTP PATCH /api/v2/instances/X/
{
"managed_by_policy": False
}
.. _ag_instance_groups_job_runtime_behavior:
Job Runtime Behavior
^^^^^^^^^^^^^^^^^^^^^^
When you run a job associated with an instance group, some behaviors worth noting are:
- If a cluster is divided into separate instance groups, then the behavior is similar to the cluster as a whole. If two instances are assigned to a group then either one is just as likely to receive a job as any other in the same group.
- As AWX instances are brought online, it effectively expands the work capacity of the system. If those instances are also placed into instance groups, then they also expand that group's capacity. If an instance is performing work and it is a member of multiple groups, then capacity will be reduced from all groups for which it is a member. De-provisioning an instance will remove capacity from the cluster wherever that instance was assigned.
.. note::
Not all instances are required to be provisioned with an equal capacity.
.. _ag_instance_groups_control_where_job_runs:
Control Where a Job Runs
^^^^^^^^^^^^^^^^^^^^^^^^^
If any of the job template, inventory, or organization has instance groups associated with them, a job run from that job template will not be eligible for the default behavior. That means that if all of the instances inside of the instance groups associated with these 3 resources are out of capacity, the job will remain in the pending state until capacity becomes available.
The order of preference in determining which instance group to submit the job to is as follows:
1. job template
2. inventory
3. organization (by way of project)
If instance groups are associated with the job template, and all of these are at capacity, then the job will be submitted to instance groups specified on inventory, and then organization. Jobs should execute in those groups in preferential order as resources are available.
The global ``default`` group can still be associated with a resource, just like any of the custom instance groups defined in the playbook. This can be used to specify a preferred instance group on the job template or inventory, but still allow the job to be submitted to any instance if those are out of capacity.
As an example, by associating ``group_a`` with a Job Template and also associating the ``default`` group with its inventory, you allow the ``default`` group to be used as a fallback in case ``group_a`` gets out of capacity.
In addition, it is possible to not associate an instance group with one resource but designate another resource as the fallback. For example, not associating an instance group with a job template and having it fall back to the inventory and/or the organization's instance group.
This presents two other great use cases:
1. Associating instance groups with an inventory (without assigning the job template to an instance group) will allow the user to ensure that any playbook run against a specific inventory will run only on the group associated with it. This can be especially useful when only those instances have a direct link to the managed nodes.
2. An administrator can assign instance groups to organizations. This effectively allows the administrator to segment out the entire infrastructure and guarantee that each organization has capacity to run jobs without interfering with any other organization's ability to run jobs.
Likewise, an administrator could assign multiple groups to each organization as desired, as in the following scenario:
- There are three instance groups: A, B, and C. There are two organizations: Org1 and Org2.
- The administrator assigns group A to Org1, group B to Org2, and then assigns group C to both Org1 and Org2 as an overflow for any extra capacity that may be needed.
- The organization administrators are then free to assign inventory or job templates to whichever group they want (or just let them inherit the default order from the organization).
|Instance Group example|
.. |Instance Group example| image:: ../common/images/instance-groups-scenarios.png
Arranging resources in this way offers a lot of flexibility. Also, you can create instance groups with only one instance, thus allowing you to direct work towards a very specific Host in the AWX cluster.
.. _ag_instancegrp_cpacity:
Instance group capacity limits
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. index::
pair: instance groups; capacity
pair: instance groups; limits
pair: instance groups; forks
pair: instance groups; jobs
Sometimes there is external business logic which may drive the desire to limit the concurrency of jobs sent to an instance group, or the maximum number of forks to be consumed.
For traditional instances and instance groups, there could be a desire to allow two organizations to run jobs on the same underlying instances, but limit each organization's total number of concurrent jobs. This can be achieved by creating an instance group for each organization and assigning the value for ``max_concurrent_jobs``.
For container groups, AWX is generally not aware of the resource limits of the OpenShift cluster. There may be limits set on the number of pods on a namespace, or only resources available to schedule a certain number of pods at a time if no auto-scaling is in place. Again, in this case, we can adjust the value for ``max_concurrent_jobs``.
Another parameter available is ``max_forks``. This provides additional flexibility for capping the capacity consumed on an instance group or container group. This may be used if jobs with a wide variety of inventory sizes and "forks" values are being run. This way, you can limit an organization to run up to 10 jobs concurrently, but consume no more than 50 forks at a time.
::
max_concurrent_jobs: 10
max_forks: 50
If 10 jobs that use 5 forks each are run, an 11th job will wait until one of these finishes to run on that group (or be scheduled on a different group with capacity).
If 2 jobs are running with 20 forks each, then a 3rd job with a ``task_impact`` of 11 or more will wait until one of these finishes to run on that group (or be scheduled on a different group with capacity).
For container groups, using the ``max_forks`` value is useful given that all jobs are submitted using the same ``pod_spec`` with the same resource requests, irrespective of the "forks" value of the job. The default ``pod_spec`` sets requests and not limits, so the pods can "burst" above their requested value without being throttled or reaped. By setting the ``max_forks`` value, you can help prevent a scenario where too many jobs with large forks values get scheduled concurrently and cause the OpenShift nodes to be oversubscribed with multiple pods using more resources than their requested value.
To set the maximum values for the concurrent jobs and forks in an instance group, see :ref:`ug_instance_groups_create` in the |atu|.
.. _ag_instancegrp_deprovision:
Deprovision Instance Groups
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. index::
pair: groups; deprovisioning
Re-running the setup playbook does not automatically deprovision instances since clusters do not currently distinguish between an instance that was taken offline intentionally or due to failure. Instead, shut down all services on the AWX instance and then run the deprovisioning tool from any other instance:
#. Shut down the instance or stop the service with the command, ``automation-awx-service stop``.
#. Run the deprovision command ``$ awx-manage deprovision_instance --hostname=<name used in inventory file>`` from another instance to remove it from the AWX cluster registry.
Example: ``awx-manage deprovision_instance --hostname=hostB``
Similarly, deprovisioning instance groups in AWX does not automatically deprovision or remove instance groups, even though re-provisioning will often cause these to be unused. They may still show up in API endpoints and stats monitoring. These groups can be removed with the following command:
Example: ``awx-manage unregister_queue --queuename=<name>``
Removing an instance's membership from an instance group in the inventory file and re-running the setup playbook does not ensure the instance won't be added back to a group. To be sure that an instance will not be added back to a group, remove it via the API and also remove it from your inventory file, or you can stop defining instance groups in the inventory file altogether. You can also manage instance group topology through the |at| User Interface. For more information on managing instance groups in the UI, refer to :ref:`Instance Groups <ug_instance_groups>` in the |atu|.
.. _ag_container_groups:
Container Groups
-----------------
.. index::
single: container groups
pair: containers; instance groups
AWX supports :term:`Container Groups`, which allow you to execute jobs in AWX regardless of whether AWX is installed as a standalone, in a virtual environment, or in a container. Container groups act as a pool of resources within a virtual environment. You can create instance groups to point to an OpenShift container, which are job environments that are provisioned on-demand as a Pod that exists only for the duration of the playbook run. This is known as the ephemeral execution model and ensures a clean environment for every job run.
In some cases, it is desirable to have container groups be "always-on", which is configured through the creation of an instance.
.. note::
Container Groups upgraded from versions prior to |at| 4.0 will revert back to default and completely remove the old pod definition, clearing out all custom pod definitions in the migration.
Container groups are different from |ees| in that |ees| are container images and do not use a virtual environment. See :ref:`ug_execution_environments` in the |atu| for further detail.
Create a container group
^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. include:: ../common/get-creds-from-service-account.rst
To create a container group:
1. Use the AWX user interface to create an :ref:`ug_credentials_ocp_k8s` credential that will be used with your container group, see :ref:`ug_credentials_add` in the |atu| for detail.
2. Create a new container group by navigating to the Instance Groups configuration window by clicking **Instance Groups** from the left navigation bar.
3. Click the **Add** button and select **Create Container Group**.
|IG - create new CG|
.. |IG - create new CG| image:: ../common/images/instance-group-create-new-cg.png
4. Enter a name for your new container group and select the credential previously created to associate it to the container group.
.. _ag_customize_pod_spec:
Customize the Pod spec
^^^^^^^^^^^^^^^^^^^^^^^^
AWX provides a simple default Pod specification; however, you can provide a custom YAML (or JSON) document that overrides the default Pod spec. This field accepts any custom fields (for example, ``ImagePullSecrets``) that can be "serialized" as valid Pod JSON or YAML. A full list of options can be found in the `OpenShift documentation <https://docs.openshift.com/online/pro/architecture/core_concepts/pods_and_services.html>`_.
To customize the Pod spec, specify the namespace in the **Pod Spec Override** field by using the toggle to enable and expand the **Pod Spec Override** field and click **Save** when done.
|IG - CG customize pod|
.. |IG - CG customize pod| image:: ../common/images/instance-group-customize-cg-pod.png
You may provide additional customizations, if needed. Click **Expand** to view the entire customization window.
.. image:: ../common/images/instance-group-customize-cg-pod-expanded.png
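For illustration, a minimal override might only change the namespace and, optionally, the container image; the structure below is a sketch and the values are placeholders rather than the authoritative default:
::
   apiVersion: v1
   kind: Pod
   metadata:
     namespace: my-namespace
   spec:
     serviceAccountName: default
     containers:
       - name: worker
         image: quay.io/ansible/awx-ee:latest
         resources:
           requests:
             cpu: 250m
             memory: 100Mi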
.. note::
The image used at job launch time is determined by which |ee| is associated with the job. If a Container Registry credential is associated with the |ee|, then AWX will attempt to make a ``ImagePullSecret`` to pull the image. If you prefer not to give the service account permission to manage secrets, you must pre-create the ``ImagePullSecret`` and specify it on the pod spec, and omit any credential from the |ee| used.
Once the container group is successfully created, the **Details** tab of the newly created container group remains, which allows you to review and edit your container group information. This is the same menu that is opened if the Edit (|edit-button|) button is clicked from the **Instance Group** link. You can also edit **Instances** and review **Jobs** associated with this instance group.
.. |edit-button| image:: ../common/images/edit-button.png
|IG - example CG successfully created|
.. |IG - example CG successfully created| image:: ../common/images/instance-group-example-cg-successfully-created.png
Container groups and instance groups are labeled accordingly.
.. note::
Despite the fact that customers may have custom Pod specs, upgrades may be difficult if the default ``pod_spec`` changes. Most any manifest can be applied to any namespace, with the namespace specified separately; most likely you will only need to override the namespace. Similarly, pinning a default image for different releases of the platform to different versions of the default job runner container is tricky. If the default image is specified in the Pod spec, then upgrades do not pick up changes made to the default Pod spec.
Verify container group functions
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
To verify the deployment and termination of your container:
1. Create a mock inventory and associate the container group to it by populating the name of the container group in the **Instance Group** field. See :ref:`ug_inventories_add` in the |atu| for detail.
|Dummy inventory|
.. |Dummy inventory| image:: ../common/images/inventories-create-new-cg-test-inventory.png
2. Create "localhost" host in inventory with variables:
::
{'ansible_host': '127.0.0.1', 'ansible_connection': 'local'}
|Inventory with localhost|
.. |Inventory with localhost| image:: ../common/images/inventories-create-new-cg-test-localhost.png
3. Launch an ad hoc job against the localhost using the *ping* or *setup* module. Even though the **Machine Credential** field is required, it does not matter which one is selected for this simple test.
|Launch inventory with localhost|
.. |Launch inventory with localhost| image:: ../common/images/inventories-launch-adhoc-cg-test-localhost.png
.. image:: ../common/images/inventories-launch-adhoc-cg-test-localhost2.png
You can see in the job's details view that the container was reached successfully using one of the ad hoc jobs.
|Inventory with localhost ping success|
.. |Inventory with localhost ping success| image:: ../common/images/inventories-launch-adhoc-cg-test-localhost-success.png
If you have an OpenShift UI, you can see Pods appear and disappear as they deploy and terminate. Alternatively, you can use the CLI to perform a ``get pod`` operation on your namespace to watch these same events occurring in real-time.
View container group jobs
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
When you run a job associated with a container group, you can see the details of that job in the **Details** view and its associated container group and the execution environment that spun up.
|IG - instances jobs|
.. |IG - instances jobs| image:: ../common/images/instance-group-job-details-with-cgs.png
Kubernetes API failure conditions
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
When running a container group and the Kubernetes API responds that the resource quota has been exceeded, AWX keeps the job in pending state. Other failures result in the traceback of the **Error Details** field showing the failure reason, similar to the example here:
::
Error creating pod: pods is forbidden: User "system: serviceaccount: aap:example" cannot create resource "pods" in API group "" in the namespace "aap"
.. _ag_container_capacity:
Container capacity limits
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. index::
pair: container groups; capacity
pair: container groups; limits
Capacity limits and quotas for containers are defined via objects in the Kubernetes API:
- To set limits on all pods within a given namespace, use the ``LimitRange`` object. Refer to the OpenShift documentation for `Quotas and Limit Ranges <https://docs.openshift.com/online/pro/dev_guide/compute_resources.html#overview>`_.
- To set limits directly on the pod definition launched by AWX, see :ref:`ag_customize_pod_spec` and refer to the OpenShift documentation to set the options to `compute resources <https://docs.openshift.com/online/pro/dev_guide/compute_resources.html#dev-compute-resources>`_.
.. Note::
Container groups do not use the capacity algorithm that normal nodes use. You would need to explicitly set the number of forks at the job template level, for instance. If forks are configured in AWX, that setting will be passed along to the container.


@@ -0,0 +1,21 @@
.. _ag_custom_inventory_script:
Custom Inventory Scripts
--------------------------
.. index::
single: custom inventory scripts
single: inventory scripts; custom
Inventory scripts have been discontinued. For more information, see :ref:`ug_customscripts` in the |atu|.
If you use custom inventory scripts, migrate to sourcing these scripts from a project. See :ref:`ag_inv_import` in the subsequent chapter, and also refer to :ref:`ug_inventory_sources` in the |atu| for more detail.
If you are migrating to |ees|, see:
- :ref:`upgrade_venv`
- :ref:`mesh_topology_ee` in the |atumg| to validate your topology
If you already have a mesh topology set up and want to view node type, node health, and specific details about each node, see :ref:`ag_topology_viewer` later in this guide.


@@ -0,0 +1,13 @@
.. _ag_custom_rebranding:
***************************
Using Custom Logos in AWX
***************************
.. index::
single: custom logo
single: rebranding
pair: logo; custom
.. include:: ../common/logos_branding.rst


@@ -0,0 +1,577 @@
.. _ag_ent_auth:
Setting up Enterprise Authentication
==================================================
.. index::
single: enterprise authentication
single: authentication
This section describes setting up authentication for the following enterprise systems:
.. contents::
:local:
.. note::
For LDAP authentication, see :ref:`ag_auth_ldap`.
SAML, RADIUS, and TACACS+ users are categorized as 'Enterprise' users. The following rules apply to Enterprise users:
- Enterprise users can only be created via the first successful login attempt from the remote authentication backend.
- Enterprise users cannot be created or authenticated if a non-enterprise user with the same name has already been created in AWX.
- AWX passwords of enterprise users should always be empty and cannot be set by any user while an enterprise backend is enabled.
- If enterprise backends are disabled, an enterprise user can be converted to a normal AWX user by setting the password field. However, this operation is irreversible, as the converted AWX user can no longer be treated as an enterprise user.
.. _ag_auth_azure:
Azure AD settings
-------------------
.. index::
pair: authentication; Azure AD
To set up enterprise authentication for Microsoft Azure Active Directory (AD), you will need to obtain an OAuth2 key and secret by registering your organization-owned application from Azure at https://docs.microsoft.com/en-us/azure/active-directory/develop/quickstart-register-app. Each key and secret must belong to a unique application and cannot be shared or reused between different authentication backends. In order to register the application, you must supply it with your webpage URL, which is the Callback URL shown in the Settings Authentication screen.
1. Click **Settings** from the left navigation bar.
2. On the left side of the Settings window, click **Azure AD settings** from the list of Authentication options.
3. The **Azure AD OAuth2 Callback URL** field is already pre-populated and non-editable.
Once the application is registered, Azure displays the Application ID and Object ID.
4. Click **Edit** and copy and paste Azure's Application ID to the **Azure AD OAuth2 Key** field.
Following Azure AD's documentation for connecting your app to Microsoft Azure Active Directory, supply the key (shown at one time only) to the client for authentication.
5. Copy and paste the actual secret key created for your Azure AD application to the **Azure AD OAuth2 Secret** field of the Settings - Authentication screen.
6. For details on completing the mapping fields, see :ref:`ag_org_team_maps`.
7. Click **Save** when done.
8. To verify that the authentication was configured correctly, log out of AWX; the login screen will now display the Microsoft Azure logo, allowing you to log in with those credentials.
.. image:: ../common/images/configure-awx-auth-azure-logo.png
For application registering basics in Azure AD, refer to the `Azure AD Identity Platform (v2)`_ overview.
.. _`Azure AD Identity Platform (v2)`: https://docs.microsoft.com/en-us/azure/active-directory/develop/v2-overview
LDAP Authentication
---------------------
Refer to the :ref:`ag_auth_ldap` section.
.. _ag_auth_radius:
RADIUS settings
------------------
.. index::
pair: authentication; RADIUS Authentication Settings
AWX can be configured to centrally use RADIUS as a source for authentication information.
1. Click **Settings** from the left navigation bar.
2. On the left side of the Settings window, click **RADIUS settings** from the list of Authentication options.
3. Click **Edit** and enter the Host or IP of the Radius server in the **Radius Server** field. If this field is left blank, Radius authentication is disabled.
4. Enter the port and secret information in the next two fields.
5. Click **Save** when done.
.. _ag_auth_saml:
SAML settings
----------------
.. index::
pair: authentication; SAML Service Provider
SAML allows the exchange of authentication and authorization data between an Identity Provider (IdP - a system of servers that provide the Single Sign On service) and a Service Provider (in this case, AWX). AWX can be configured to talk with SAML in order to authenticate (create/login/logout) AWX users. User Team and Organization membership can be embedded in the SAML response to AWX.
.. image:: ../common/images/configure-awx-auth-saml-topology.png
The following instructions describe AWX as the service provider.
To set up SAML authentication:
1. Click **Settings** from the left navigation bar.
2. On the left side of the Settings window, click **SAML settings** from the list of Authentication options.
3. The **SAML Assertion Consume Service (ACS) URL** and **SAML Service Provider Metadata URL** fields are pre-populated and are non-editable. Contact the Identity Provider administrator and provide the information contained in these fields.
4. Click **Edit** and set the **SAML Service Provider Entity ID** to be the same as the **Base URL of the service** field that can be found in the Miscellaneous System settings screen by clicking **Settings** from the left navigation bar. Through the API, it can be viewed in the ``/api/v2/settings/system``, under the ``TOWER_URL_BASE`` variable. The Entity ID can be set to any one of the individual AWX cluster nodes, but it is good practice to set it to the URL of the Service Provider. Ensure that the Base URL matches the FQDN of the load balancer (if used).
.. note::
The Base URL is different for each node in a cluster. Commonly, a load balancer will sit in front of many AWX cluster nodes to provide a single entry point, the AWX Cluster FQDN. The SAML Service Provider must be able to establish an outbound connection and route to the AWX Cluster Node or the AWX Cluster FQDN set in the SAML Service Provider Entity ID.
In this example, the Service Provider is the AWX cluster, and therefore, the ID is set to the AWX Cluster FQDN.
.. image:: ../common/images/configure-awx-auth-saml-spentityid.png
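If you prefer to confirm the ``TOWER_URL_BASE`` value from the command line, the same settings endpoint can be queried (hostname and token are placeholders):
::
# Read the system settings and look for TOWER_URL_BASE
curl -s -H "Authorization: Bearer <oauth-token>" \
  https://awx.example.com/api/v2/settings/system/ | grep TOWER_URL_BASE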
5. Create a server certificate for the Ansible cluster. Typically, when an Ansible cluster is configured, the AWX nodes will be configured to handle HTTP traffic only and the load balancer will be an SSL termination point. In this case, an SSL certificate is required for the load balancer, and not for the individual AWX cluster nodes. SSL can be enabled or disabled per individual AWX node, but should be disabled when using an SSL-terminated load balancer. It is recommended to use a non-expiring self-signed certificate to avoid periodically updating certificates. This way, authentication will not fail in case someone forgets to update the certificate.
.. note::
The **SAML Service Provider Public Certificate** field should contain the entire certificate, including the "-----BEGIN CERTIFICATE-----" and "-----END CERTIFICATE-----".
If you are using a CA bundle with your certificate, include the entire bundle in this field.
.. image:: ../common/images/configure-awx-auth-saml-cert.png
As an example for public certs:
::
-----BEGIN CERTIFICATE-----
... cert text ...
-----END CERTIFICATE-----
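As a sketch of how such a long-lived self-signed certificate and key could be generated with ``openssl`` (file names and subject are placeholders):
::
# Generate a ~10-year self-signed certificate and matching private key
openssl req -x509 -nodes -newkey rsa:4096 \
  -keyout saml_sp.key -out saml_sp.crt \
  -days 3650 -subj "/CN=awx.example.com"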
6. Create an optional private key for AWX to use as a service provider (SP) and enter it in the **SAML Service Provider Private Key** field.
As an example for private keys:
::
-----BEGIN PRIVATE KEY-----
... key text ...
-----END PRIVATE KEY-----
7. Provide the IdP with some details about the AWX cluster during the SSO process in the **SAML Service Provider Organization Info** field.
::
{
"en-US": {
"url": "http://www.example.com",
"displayname": "Example",
"name": "example"
}
}
For example:
.. image:: ../common/images/configure-awx-auth-saml-org-info.png
.. note::
These fields are required in order to properly configure SAML within AWX.
8. Provide the IdP with the technical contact information in the **SAML Service Provider Technical Contact** field. Do not remove the contents of this field.
::
{
"givenName": "Some User",
"emailAddress": "suser@example.com"
}
For example:
.. image:: ../common/images/configure-awx-auth-saml-techcontact-info.png
9. Provide the IdP with the support contact information in the **SAML Service Provider Support Contact** field. Do not remove the contents of this field.
::
{
"givenName": "Some User",
"emailAddress": "suser@example.com"
}
For example:
.. image:: ../common/images/configure-awx-auth-saml-suppcontact-info.png
10. In the **SAML Enabled Identity Providers** field, provide information on how to connect to each Identity Provider listed. AWX expects the following SAML attributes in the example below:
::
Username(urn:oid:0.9.2342.19200300.100.1.1)
Email(urn:oid:0.9.2342.19200300.100.1.3)
FirstName(urn:oid:2.5.4.42)
LastName(urn:oid:2.5.4.4)
If these attributes are not known, map existing SAML attributes to last name, first name, email, and username.
Configure the required keys for each IdP:
- ``attr_user_permanent_id`` - the unique identifier for the user. It can be configured to match any of the attributes sent from the IdP. Usually, it is set to ``name_id`` if the ``SAML:nameid`` attribute is sent to the AWX node, but it can also be the username attribute or a custom unique identifier.
- ``entity_id`` - the Entity ID provided by the Identity Provider administrator. The admin creates a SAML profile for AWX and it generates a unique URL.
- ``url`` - the Single Sign On (SSO) URL to which AWX redirects the user when SSO is activated.
- ``x509_cert`` - the certificate provided by the IdP admin generated from the SAML profile created on the Identity Provider. Remove the ``--BEGIN CERTIFICATE--`` and ``--END CERTIFICATE--`` headers, then enter the cert as one non-breaking string.
Multiple SAML IdPs are supported. Some IdPs may provide user data using attribute names that differ from the default OIDs (https://github.com/omab/python-social-auth/blob/master/social/backends/saml.py). The SAML ``NameID`` is a special attribute used by some Identity Providers to tell the Service Provider (AWX cluster) what the unique user identifier is. If it is used, set the ``attr_user_permanent_id`` to ``name_id`` as shown in the example. Other attribute names may be overridden for each IdP as shown below.
::
{
"myidp": {
"entity_id": "https://idp.example.com",
"url": "https://myidp.example.com/sso",
"x509cert": ""
},
"onelogin": {
"entity_id": "https://app.onelogin.com/saml/metadata/123456",
"url": "https://example.onelogin.com/trust/saml2/http-post/sso/123456",
"x509cert": "",
"attr_user_permanent_id": "name_id",
"attr_first_name": "User.FirstName",
"attr_last_name": "User.LastName",
"attr_username": "User.email",
"attr_email": "User.email"
}
}
.. image:: ../common/images/configure-awx-auth-saml-idps.png
.. warning::
Do not create a SAML user that shares the same email with another user (including a non-SAML user). Doing so will result in the accounts being merged. Be aware that this same behavior exists for System Admin users, so a SAML login with the same email address as the System Admin user will log in with System Admin privileges. For future reference, you can remove (or add) Admin privileges based on SAML mappings, as described in subsequent steps.
.. note::
The IdP provides the email, last name, and first name using the well-known SAML URN. The IdP uses a custom SAML attribute to identify a user, which is an attribute that AWX is unable to read. Instead, AWX can understand the unique identifier name, which is the URN. Use the URN listed in the SAML "Name" attribute for the user attributes as shown in the example below.
.. image:: ../common/images/configure-awx-auth-saml-idps-urn.png
11. Optionally provide the **SAML Organization Map**. For further detail, see :ref:`ag_org_team_maps`.
12. AWX can be configured to look for particular attributes that contain Team and Organization membership to associate with users when they log into AWX. The attribute names are defined in the **SAML Organization Attribute Mapping** and the **SAML Team Attribute Mapping** fields.
**Example SAML Organization Attribute Mapping**
Below is an example SAML attribute that embeds user organization membership in the attribute *member-of*.
::
<saml2:AttributeStatement>
<saml2:Attribute FriendlyName="member-of" Name="member-of"
NameFormat="urn:oasis:names:tc:SAML:2.0:attrname-format:unspecified">
<saml2:AttributeValue>Engineering</saml2:AttributeValue>
<saml2:AttributeValue>IT</saml2:AttributeValue>
<saml2:AttributeValue>HR</saml2:AttributeValue>
<saml2:AttributeValue>Sales</saml2:AttributeValue>
</saml2:Attribute>
<saml2:Attribute FriendlyName="admin-of" Name="admin-of"
NameFormat="urn:oasis:names:tc:SAML:2.0:attrname-format:unspecified">
<saml2:AttributeValue>Engineering</saml2:AttributeValue>
</saml2:Attribute>
</saml2:AttributeStatement>
Below is the corresponding AWX configuration.
::
{
"saml_attr": "member-of",
"saml_admin_attr": "admin-of",
"remove": true,
"remove_admins": false
}
``saml_attr``: the SAML attribute name where the organization array can be found. Set ``remove`` to **True** to remove the user from all organizations before adding the user to the list of organizations in the SAML attribute. To keep the user in whatever organization(s) they are in while adding the user to the organization(s) in the SAML attribute, set ``remove`` to **False**.
``saml_admin_attr``: Similar to the ``saml_attr`` attribute, but instead of conveying organization membership, this attribute conveys admin organization permissions.
**Example SAML Team Attribute Mapping**
Below is another example of a SAML attribute that contains a Team membership in a list.
::
<saml:AttributeStatement>
<saml:Attribute
xmlns:x500="urn:oasis:names:tc:SAML:2.0:profiles:attribute:X500"
x500:Encoding="LDAP"
NameFormat="urn:oasis:names:tc:SAML:2.0:attrname-format:uri"
Name="urn:oid:1.3.6.1.4.1.5923.1.1.1.1"
FriendlyName="eduPersonAffiliation">
<saml:AttributeValue
xsi:type="xs:string">member</saml:AttributeValue>
<saml:AttributeValue
xsi:type="xs:string">staff</saml:AttributeValue>
</saml:Attribute>
</saml:AttributeStatement>
::
{
"saml_attr": "eduPersonAffiliation",
"remove": true,
"team_org_map": [
{
"team": "member",
"organization": "Default1"
},
{
"team": "staff",
"organization": "Default2"
}
]
}
- ``saml_attr``: The SAML attribute name where the team array can be found.
- ``remove``: Set ``remove`` to **True** to remove user from all Teams before adding the user to the list of Teams. To keep the user in whatever Team(s) they are in while adding the user to the Team(s) in the SAML attribute, set ``remove`` to **False**.
- ``team_org_map``: An array of dictionaries of the form ``{ "team": "<AWX Team Name>", "organization": "<AWX Org Name>" }`` that defines the mapping from AWX Team to AWX Organization. This is needed because a team with the same name can exist in multiple organizations in AWX. The organization to which a team listed in a SAML attribute belongs would be ambiguous without this mapping.
You could create an alias to override both Teams and Orgs in the **SAML Team Attribute Mapping**. This option becomes very handy in cases when the SAML backend sends out complex group names, like in the example below:
::
{
"remove": false,
"team_org_map": [
{
"team": "internal:unix:domain:admins",
"organization": "Default",
"team_alias": "Administrators"
},
{
"team": "Domain Users",
"organization_alias": "OrgAlias",
"organization": "Default"
}
],
"saml_attr": "member-of"
}
Once the user authenticates, AWX creates organization and team aliases, as expected.
13. Optionally provide team membership mapping in the **SAML Team Map** field. For further detail, see :ref:`ag_org_team_maps`.
14. Optionally provide security settings in the **SAML Security Config** field. This field is equivalent to the ``SOCIAL_AUTH_SAML_SECURITY_CONFIG`` field in the API. Refer to `OneLogin's SAML Python Toolkit`_ for further detail.
.. _`OneLogin's SAML Python Toolkit`: https://github.com/onelogin/python-saml#settings
AWX uses the ``python-social-auth`` library when users log in through SAML. This library relies on the ``python-saml`` library to make available the settings for the next two optional fields, **SAML Service Provider Extra Configuration Data** and **SAML IDP to EXTRA_DATA Attribute Mapping**.
15. The **SAML Service Provider Extra Configuration Data** field is equivalent to the ``SOCIAL_AUTH_SAML_SP_EXTRA`` in the API. Refer to the `python-saml library documentation`_ to learn about the valid service provider extra (``SP_EXTRA``) parameters.
.. _`python-saml library documentation`: https://github.com/onelogin/python-saml#settings
16. The **SAML IDP to EXTRA_DATA Attribute Mapping** field is equivalent to the ``SOCIAL_AUTH_SAML_EXTRA_DATA`` in the API. See Python's `SAML Advanced Settings`_ documentation for more information.
.. _`SAML Advanced Settings`: https://python-social-auth.readthedocs.io/en/latest/backends/saml.html#advanced-settings
.. _ag_auth_saml_user_flags_attr_map:
17. The **SAML User Flags Attribute Mapping** field allows you to map SAML roles and attributes to special user flags. The following attributes are valid in this field:
- ``is_superuser_role``: Specifies one or more SAML roles which will grant a user the superuser flag
- ``is_superuser_attr``: Specifies a SAML attribute which will grant a user the superuser flag
- ``is_superuser_value``: Specifies one or more values of ``is_superuser_attr`` required for the user to be a superuser
- ``remove_superusers``: Boolean indicating if the superuser flag should be removed for users or not. Defaults to ``true``. (See below for more details)
- ``is_system_auditor_role``: Specifies one or more SAML roles which will grant a user the system auditor flag
- ``is_system_auditor_attr``: Specifies a SAML attribute which will grant a user the system auditor flag
- ``is_system_auditor_value``: Specifies one or more values of ``is_system_auditor_attr`` required for the user to be a system auditor
- ``remove_system_auditors``: Boolean indicating if the system_auditor flag should be removed for users or not. Defaults to ``true``. (See below for more details)
The ``role`` and ``value`` fields are lists and use *or* logic. If you specify two roles, ``["Role 1", "Role 2"]``, and the SAML user has either role, the logic considers them to have the required role for the flag. The same applies to the ``value`` field: if you specify ``["Value 1", "Value 2"]`` and the SAML user has either value for their attribute, the logic considers their attribute value to have matched.
If ``role`` and ``attr`` are both specified for either ``superuser`` or ``system_auditor``, the settings for ``attr`` will take precedence over a ``role``. System Admin and System Auditor roles are evaluated at login for a SAML user. If you grant a SAML user one of these roles through the UI and not through the SAML settings, the roles will be removed on the user's next login unless the ``remove`` flag is set to false. The remove flag, if ``false``, will never allow the SAML adapter to remove the corresponding flag from a user. The following table describes how the logic works.
+-----------------------+-----------+-----------------------------+-------------+---------------+------------+
| Has one or more roles | Has Attr | Has one or more Attr Values | Remove Flag | Previous Flag | Is Flagged |
+=======================+===========+=============================+=============+===============+============+
| No | No | N/A | True | False | No |
+-----------------------+-----------+-----------------------------+-------------+---------------+------------+
| No | No | N/A | False | False | No |
+-----------------------+-----------+-----------------------------+-------------+---------------+------------+
| No | No | N/A | True | True | No |
+-----------------------+-----------+-----------------------------+-------------+---------------+------------+
| No | No | N/A | False | True | Yes |
+-----------------------+-----------+-----------------------------+-------------+---------------+------------+
| Yes | No | N/A | True | False | Yes |
+-----------------------+-----------+-----------------------------+-------------+---------------+------------+
| Yes | No | N/A | False | False | Yes |
+-----------------------+-----------+-----------------------------+-------------+---------------+------------+
| Yes | No | N/A | True | True | Yes |
+-----------------------+-----------+-----------------------------+-------------+---------------+------------+
| Yes | No | N/A | False | True | Yes |
+-----------------------+-----------+-----------------------------+-------------+---------------+------------+
| No | Yes | Yes | True | False | Yes |
+-----------------------+-----------+-----------------------------+-------------+---------------+------------+
| No | Yes | Yes | False | False | Yes |
+-----------------------+-----------+-----------------------------+-------------+---------------+------------+
| No | Yes | Yes | True | True | Yes |
+-----------------------+-----------+-----------------------------+-------------+---------------+------------+
| No | Yes | Yes | False | True | Yes |
+-----------------------+-----------+-----------------------------+-------------+---------------+------------+
| No | Yes | No | True | False | No |
+-----------------------+-----------+-----------------------------+-------------+---------------+------------+
| No | Yes | No | False | False | No |
+-----------------------+-----------+-----------------------------+-------------+---------------+------------+
| No | Yes | No | True | True | No |
+-----------------------+-----------+-----------------------------+-------------+---------------+------------+
| No | Yes | No | False | True | Yes |
+-----------------------+-----------+-----------------------------+-------------+---------------+------------+
| No | Yes | Unset | True | False | Yes |
+-----------------------+-----------+-----------------------------+-------------+---------------+------------+
| No | Yes | Unset | False | False | Yes |
+-----------------------+-----------+-----------------------------+-------------+---------------+------------+
| No | Yes | Unset | True | True | Yes |
+-----------------------+-----------+-----------------------------+-------------+---------------+------------+
| No | Yes | Unset | False | True | Yes |
+-----------------------+-----------+-----------------------------+-------------+---------------+------------+
| Yes | Yes | Yes | True | False | Yes |
+-----------------------+-----------+-----------------------------+-------------+---------------+------------+
| Yes | Yes | Yes | False | False | Yes |
+-----------------------+-----------+-----------------------------+-------------+---------------+------------+
| Yes | Yes | Yes | True | True | Yes |
+-----------------------+-----------+-----------------------------+-------------+---------------+------------+
| Yes | Yes | Yes | False | True | Yes |
+-----------------------+-----------+-----------------------------+-------------+---------------+------------+
| Yes | Yes | No | True | False | No |
+-----------------------+-----------+-----------------------------+-------------+---------------+------------+
| Yes | Yes | No | False | False | No |
+-----------------------+-----------+-----------------------------+-------------+---------------+------------+
| Yes | Yes | No | True | True | No |
+-----------------------+-----------+-----------------------------+-------------+---------------+------------+
| Yes | Yes | No | False | True | Yes |
+-----------------------+-----------+-----------------------------+-------------+---------------+------------+
| Yes | Yes | Unset | True | False | Yes |
+-----------------------+-----------+-----------------------------+-------------+---------------+------------+
| Yes | Yes | Unset | False | False | Yes |
+-----------------------+-----------+-----------------------------+-------------+---------------+------------+
| Yes | Yes | Unset | True | True | Yes |
+-----------------------+-----------+-----------------------------+-------------+---------------+------------+
| Yes | Yes | Unset | False | True | Yes |
+-----------------------+-----------+-----------------------------+-------------+---------------+------------+
Each time a SAML user authenticates to AWX, these checks will be performed and the user flags will be altered as needed. If ``System Administrator`` or ``System Auditor`` is set for a SAML user within the UI, the SAML adapter will override the UI setting based on the rules above. If you would prefer that the user flags for SAML users do not get removed when a SAML user logs in, you can set the ``remove_`` flag to ``false``. With the remove flag set to ``false``, a user flag set to ``true`` through either the UI, API or SAML adapter will not be removed. However, if a user does not have the flag, and the above rules determine the flag should be added, it will be added, even if the flag is ``false``.
Example::
{
"is_superuser_attr": "blueGroups",
"is_superuser_role": ["is_superuser"],
"is_superuser_value": ["cn=My-Sys-Admins,ou=memberlist,ou=mygroups,o=myco.com"],
"is_system_auditor_attr": "blueGroups",
"is_system_auditor_role": ["is_system_auditor"],
"is_system_auditor_value": ["cn=My-Auditors,ou=memberlist,ou=mygroups,o=myco.com"]
}
18. Click **Save** when done.
19. To verify that the authentication was configured correctly, load the auto-generated URL found in the **SAML Service Provider Metadata URL** field into a browser. It should return XML; otherwise, it is not configured correctly.
Alternatively, log out of AWX; the login screen will now display the SAML logo to indicate it as an alternate method of logging into AWX.
.. image:: ../common/images/configure-awx-auth-saml-logo.png
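You can also perform the metadata check from the command line; the exact path is whatever appears in the **SAML Service Provider Metadata URL** field (the hostname and path below are placeholders):
::
# A valid XML response indicates the SAML SP settings are configured correctly
curl -s https://awx.example.com/sso/metadata/saml/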
Transparent SAML Logins
^^^^^^^^^^^^^^^^^^^^^^^^
.. index::
pair: authentication; SAML
pair: SAML; transparent
For transparent logins to work, you must first get IdP-initiated logins to work. To achieve this:
1. Set the ``RelayState`` on the IdP to the key of the IdP definition in the ``SAML Enabled Identity Providers`` field as previously described. In the example given above, ``RelayState`` would need to be either ``myidp`` or ``onelogin``.
2. Once this is working, specify the redirect URL for non-logged-in users to somewhere other than the default AWX login page by using the **Login redirect override URL** field in the Miscellaneous Authentication settings window of the **Settings** menu, accessible from the left navigation bar. This should be set to ``/sso/login/saml/?idp=<name-of-your-idp>`` for transparent SAML login, as shown in the example.
.. image:: ../common/images/configure-awx-system-login-redirect-url.png
.. note::
The above is a sample of a typical IdP format, but may not be the correct format for your particular case. You may need to reach out to your IdP for the correct transparent redirect URL as that URL is not the same for all IdPs.
3. After transparent SAML login is configured, to log in using local credentials or a different SSO, go directly to ``https://<your-awx-server>/login``. This provides the standard AWX login page, including SSO authentication buttons, and allows you to log in with any configured method.
Enabling Logging for SAML
^^^^^^^^^^^^^^^^^^^^^^^^^^^
You can enable logging messages for the SAML adapter the same way you can enable logging for LDAP. Refer to the :ref:`ldap_logging` section.
.. _ag_auth_tacacs:
TACACS+ settings
-----------------
.. index::
pair: authentication; TACACS+ Authentication Settings
Terminal Access Controller Access-Control System Plus (TACACS+) is a protocol that handles remote authentication and related services for networked access control through a centralized server. In particular, TACACS+ provides authentication, authorization and accounting (AAA) services, and AWX can be configured to use it as a source for authentication.
.. note::
This feature is deprecated and will be removed in a future release.
1. Click **Settings** from the left navigation bar.
2. On the left side of the Settings window, click **TACACS+ settings** from the list of Authentication options.
3. Click **Edit** and enter information in the following fields:
- **TACACS+ Server**: Provide the hostname or IP address of the TACACS+ server with which to authenticate. If this field is left blank, TACACS+ authentication is disabled.
- **TACACS+ Port**: TACACS+ uses port 49 by default, which is already pre-populated.
- **TACACS+ Secret**: Secret key for TACACS+ authentication server.
- **TACACS+ Auth Session Timeout**: Session timeout value in seconds. The default is 5 seconds.
- **TACACS+ Authentication Protocol**: The protocol used by the TACACS+ client. Options are **ascii** or **pap**.
.. image:: ../common/images/configure-awx-auth-tacacs.png
4. Click **Save** when done.
.. _ag_auth_oidc:
Generic OIDC settings
----------------------
Similar to SAML, OpenID Connect (OIDC) uses the OAuth 2.0 framework. It allows third-party applications to verify the identity of the end user and obtain basic end-user information. The main difference between OIDC and SAML is that SAML has a service provider (SP)-to-IdP trust relationship, whereas OIDC establishes the trust with the channel (HTTPS) that is used to obtain the security token. To obtain the credentials needed to set up OIDC with AWX, refer to the documentation of the identity provider (IdP) of your choice that has OIDC support.
To configure OIDC in AWX:
1. Click **Settings** from the left navigation bar.
2. On the left side of the Settings window, click **Generic OIDC settings** from the list of Authentication options.
3. Click **Edit** and enter information in the following fields:
- **OIDC Key**: Client ID from your 3rd-party IdP.
- **OIDC Secret**: Client Secret from your IdP.
- **OIDC Provider URL**: URL for your OIDC provider.
- **Verify OIDC Provider Certificate**: Use the toggle to enable/disable the OIDC provider SSL certificate verification.
The example below shows specific values associated with GitHub as the generic IdP:
.. image:: ../common/images/configure-awx-auth-oidc.png
4. Click **Save** when done.
.. note::
There is currently no support for team and organization mappings for OIDC. The OIDC adapter performs authentication only, not authorization; it is only capable of verifying that users are who they say they are, not what they are allowed to do. Configuring generic OIDC creates a UserID appended with an ID/key to differentiate the same user ID originating from two different sources, which are therefore considered different users. One user will get an ID of just the username, and the second will get ``username-<random number>``.
5. To verify that the authentication was configured correctly, log out of AWX; the login screen will now display the OIDC logo to indicate it as an alternate method of logging into AWX.
.. image:: ../common/images/configure-awx-auth-oidc-logo.png


@@ -0,0 +1,54 @@
.. _ag_start:
=============================
Administering AWX Deployments
=============================
Learn how to administer AWX deployments through custom scripts, management jobs, and DevOps workflows.
This guide assumes at least basic understanding of the systems that you manage and maintain with AWX.
This guide applies to the latest version of AWX only.
The content in this guide is updated frequently and might contain functionality that is not available in previous versions.
Likewise content in this guide can be removed or replaced if it applies to functionality that is no longer available in the latest version.
**Join us online**
We talk about AWX documentation on Matrix at `#docs:ansible.im <https://matrix.to/#/#docs:ansible.im>`_ and on libera IRC at ``#ansible-docs`` if you ever want to join us and chat about the docs!
You can also find lots of AWX discussion and get answers to questions at `forum.ansible.com <https://forum.ansible.com/>`_.
.. toctree::
:maxdepth: 2
:numbered:
self
init_script
custom_inventory_script
scm-inv-source
multi-creds-assignment
management_jobs
clustering
containers_instance_groups
instances
topology_viewer
logfiles
logging
metrics
performance
secret_handling
security_best_practices
awx-manage
configure_awx
isolation_variables
oauth2_token_auth
social_auth
ent_auth
ldap_auth
authentication_timeout
kerberos_auth
session_limits
custom_rebranding
troubleshooting
tipsandtricks
.. monitoring


@@ -0,0 +1,29 @@
.. _ag_restart_awx:
Starting, Stopping, and Restarting AWX
----------------------------------------
To install AWX: https://github.com/ansible/awx-operator/tree/devel/docs/installation
.. these instructions will be ported over to here in the near future (TBD)
To migrate from an old AWX to a new AWX instance: https://github.com/ansible/awx-operator/blob/devel/docs/migration/migration.md
.. these instructions will be ported over to here in the near future (TBD)
To upgrade your AWX instance: https://github.com/ansible/awx-operator/blob/devel/docs/upgrade/upgrading.md
.. these instructions will be ported over to here in the near future (TBD)
To restart an AWX instance, you must first kill the container and restart it. Access the web-task container in the Operator to invoke the supervisord restart.
.. these instructions will need to be fleshed out (TBD)
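As a rough sketch of what that can look like on a Kubernetes or OpenShift deployment (the namespace, deployment, and container names are assumptions; adjust them for your installation):
::
# Find the AWX pods in the namespace where the Operator deployed AWX
kubectl get pods -n awx
# Open a shell in the task container and restart the services supervisord manages
kubectl exec -it deployment/awx-task -c awx-task -n awx -- supervisorctl restart all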
To uninstall your AWX instance: https://github.com/ansible/awx-operator/blob/devel/docs/uninstall/uninstall.md
.. these instructions will be ported over to here in the near future (TBD)


@@ -0,0 +1,185 @@
.. _ag_instances:
Managing Capacity With Instances
----------------------------------
.. index::
pair: topology;capacity
pair: mesh;capacity
pair: remove;capacity
pair: add;capacity
Scaling your mesh is only available on OpenShift deployments of AWX. You can add or remove nodes from your cluster dynamically through the **Instances** resource of the AWX User Interface, without running the installation script.
Prerequisites
~~~~~~~~~~~~~~
- The system that is going to run the ``ansible-playbook`` requires the collection ``ansible.receptor`` to be installed:
- If the machine has access to the internet:
::
ansible-galaxy install -r requirements.yml
Installing the receptor collection dependency from the ``requirements.yml`` file will consistently retrieve the receptor version specified there, as well as any other collection dependencies that may be needed in the future.
- If the machine does not have access to the internet, refer to `Downloading a collection from Automation Hub <https://docs.ansible.com/ansible/latest/galaxy/user_guide.html#downloading-a-collection-from-automation-hub>`_ to configure `Automation Hub <https://console.redhat.com/ansible/automation-hub>`_ in Ansible Galaxy locally.
- If you are using the default |ee| (provided with AWX) to run on remote execution nodes, you must add a pull secret in AWX that contains the credential for pulling the |ee| image. To do this, create a pull secret on the AWX namespace and configure the ``ee_pull_credentials_secret`` parameter in the Operator:
1. Create a secret:
::
oc create secret generic ee-pull-secret \
--from-literal=username=<username> \
--from-literal=password=<password> \
--from-literal=url=registry.redhat.io
2. Edit the AWX custom resource:
::
oc edit awx <instance name>
and add ``ee_pull_credentials_secret: ee-pull-secret`` to the spec (a sketch of the result follows this list):
::
spec.ee_pull_credentials_secret=ee-pull-secret
- To manage instances from the AWX user interface, you must have System Administrator or System Auditor permissions.
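For reference, a sketch of how the relevant part of the AWX custom resource might look after that edit (the resource name and namespace are placeholders):
::
apiVersion: awx.ansible.com/v1beta1
kind: AWX
metadata:
  name: awx
  namespace: awx
spec:
  ee_pull_credentials_secret: ee-pull-secret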
Manage instances
~~~~~~~~~~~~~~~~~~
Click **Instances** from the left side navigation menu to access the Instances list.
.. image:: ../common/images/instances_list_view.png
The Instances list displays all the current nodes in your topology, along with relevant details:
- **Host Name**
.. _node_statuses:
- **Status** indicates the state of the node:
- **Installed**: a node that has been successfully installed and configured, but has not yet passed the periodic health check
- **Ready**: a node that is available to run jobs or route traffic between nodes on the mesh. This replaces the previous “Healthy” node state used in the mesh topology
- **Provisioning**: a node that is in the process of being added to a current mesh, but is awaiting the job to install all of the packages (currently not yet supported and is subject to change in a future release)
- **Deprovisioning**: a node that is in the process of being removed from a current mesh and is finishing up jobs currently running on it
- **Unavailable**: a node that did not pass the most recent health check, indicating connectivity or receptor problems
- **Provisioning Failure**: a node that failed during provisioning (currently not yet supported and is subject to change in a future release)
- **De-provisioning Failure**: a node that failed during deprovisioning (currently not yet supported and is subject to change in a future release)
- **Node Type** specifies whether the node is a control, hybrid, hop, or execution node. See :term:`node` for further detail.
- **Capacity Adjustment** allows you to adjust the number of forks in your nodes
- **Used Capacity** indicates how much capacity has been used
- **Actions** allow you to enable or disable the instance to control whether jobs can be assigned to it
From this page, you can add, remove or run health checks on your nodes. Use the check boxes next to an instance to select it to remove or run a health check against. When a button is grayed-out, you do not have permission for that particular action. Contact your Administrator to grant you the required level of access. If you are able to remove an instance, you will receive a prompt for confirmation, like the one below:
.. image:: ../common/images/instances_delete_prompt.png
.. note::
You can still remove an instance even if it is active and jobs are running on it. AWX will attempt to wait for any jobs running on this node to complete before actually removing it.
Click **Remove** to confirm.
.. _health_check:
If you run a health check on an instance, a message displays at the top of the Details page indicating that the health check is in progress.
.. image:: ../common/images/instances_health_check.png
Click **Reload** to refresh the instance status.
.. note::
Health checks run asynchronously, and it may take up to a minute for the instance status to update, even with a refresh. The status may or may not change after the health check. At the bottom of the Details page, a timer/clock icon displays next to the last known health check date and time stamp if the health check task is currently running.
.. image:: ../common/images/instances_health_check_pending.png
The example health check shows the status updates with an error on node 'one':
.. image:: ../common/images/topology-viewer-instance-with-errors.png
Add an instance
~~~~~~~~~~~~~~~~
One of the ways to expand capacity is to create an instance, which serves as a node in your topology.
1. Click **Instances** from the left side navigation menu.
2. In the Instances list view, click the **Add** button and the Create new Instance window opens.
.. image:: ../common/images/instances_create_new.png
An instance has several attributes that may be configured:
- Enter a fully qualified domain name (ping-able DNS) or IP address for your instance in the **Host Name** field (required). This field is equivalent to ``hostname`` in the API.
- Optionally enter a **Description** for the instance
- The **Instance State** field is auto-populated, indicating that it is being installed, and cannot be modified
- The **Listener Port** is pre-populated with the most optimal port, however you can change the port to one that is more appropriate for your configuration. This field is equivalent to ``listener_port`` in the API.
- The **Instance Type** field is auto-populated and cannot be modified. Only execution nodes can be created at this time.
- Check the **Enable Instance** box to make it available for jobs to run on it
3. Once the attributes are configured, click **Save** to proceed.
Upon successful creation, the Details of the created instance opens.
.. image:: ../common/images/instances_create_details.png
.. note::
The following steps 4-8 are intended to be run from any computer that has SSH access to the newly created instance.
4. Click the download button next to the **Install Bundle** field to download the tarball that includes this new instance and the files relevant to install the node into the mesh.
.. image:: ../common/images/instances_install_bundle.png
5. Extract the downloaded ``tar.gz`` file in the location where you downloaded it. The install bundle contains yaml files, certificates, and keys that will be used in the installation process.
6. Before running the ``ansible-playbook`` command, edit the following fields in the ``inventory.yml`` file:
- ``ansible_user`` with the username running the installation
- ``ansible_ssh_private_key_file`` to contain the filename of the private key used to connect to the instance
::
---
all:
hosts:
remote-execution:
ansible_host: 18.206.206.34
ansible_user: <username> # user provided
ansible_ssh_private_key_file: ~/.ssh/id_rsa
The content of the ``inventory.yml`` file serves as a template and contains variables for roles that are applied during the installation and configuration of a receptor node in a mesh topology. You may modify some of the other fields, or replace the file in its entirety for advanced scenarios. Refer to `Role Variables <https://github.com/ansible/receptor-collection/blob/main/README.md>`_ for more information on each variable.
7. Save the file to continue.
8. Run the following command on the machine you want to use to update your mesh:
::
ansible-playbook -i inventory.yml install_receptor.yml
9. To view other instances within the same topology, click the **Peers** tab associated with the control node.
.. note::
You will only be able to view peers of the control plane nodes at this time, which are the execution nodes. Since you are limited to creating execution nodes in this release, you will be unable to create or view peers of execution nodes.
.. image:: ../common/images/instances_peers_tab.png
You may run a health check by selecting the node and clicking the **Run health check** button from its Details page.
10. To view a graphical representation of your updated topology, refer to the :ref:`ag_topology_viewer` section of this guide.


@@ -0,0 +1,12 @@
.. _ag_isolation_variables:
Isolation functionality and variables
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. index::
pair: troubleshooting; isolation
pair: isolation; functionality
pair: isolation; variables
.. include:: ../common/isolation_variables.rst


@@ -0,0 +1,117 @@
User Authentication with Kerberos
==================================
.. index::
pair: user authentication; Kerberos
pair: Kerberos; Active Directory (AD)
User authentication via Active Directory (AD), also referred to as authentication through Kerberos, is supported through AWX.
To get started, first set up the Kerberos packages in AWX so that you can successfully generate a Kerberos ticket. To install the packages, use the following steps:
::
yum install krb5-workstation
yum install krb5-devel
yum install krb5-libs
Once installed, edit the ``/etc/krb5.conf`` file, as follows, to provide the address of the AD, the domain, etc.:
::
[logging]
default = FILE:/var/log/krb5libs.log
kdc = FILE:/var/log/krb5kdc.log
admin_server = FILE:/var/log/kadmind.log
[libdefaults]
default_realm = WEBSITE.COM
dns_lookup_realm = false
dns_lookup_kdc = false
ticket_lifetime = 24h
renew_lifetime = 7d
forwardable = true
[realms]
WEBSITE.COM = {
kdc = WIN-SA2TXZOTVMV.website.com
admin_server = WIN-SA2TXZOTVMV.website.com
}
[domain_realm]
.website.com = WEBSITE.COM
website.com = WEBSITE.COM
After the configuration file has been updated, you should be able to successfully authenticate and get a valid ticket.
The following steps show how to authenticate and get a ticket:
::
[root@ip-172-31-26-180 ~]# kinit username
Password for username@WEBSITE.COM:
[root@ip-172-31-26-180 ~]#
Check if we got a valid ticket.
[root@ip-172-31-26-180 ~]# klist
Ticket cache: FILE:/tmp/krb5cc_0
Default principal: username@WEBSITE.COM
Valid starting Expires Service principal
01/25/16 11:42:56 01/25/16 21:42:53 krbtgt/WEBSITE.COM@WEBSITE.COM
renew until 02/01/16 11:42:56
[root@ip-172-31-26-180 ~]#
Once you have a valid ticket, you can check to ensure that everything is working as expected from the command line. To test this, make sure that your inventory looks like the following:
::
[windows]
win01.WEBSITE.COM
[windows:vars]
ansible_user = username@WEBSITE.COM
ansible_connection = winrm
ansible_port = 5986
You should also:
- Ensure that the hostname is the proper client hostname matching the entry in AD and is not the IP address.
- In the username declaration, ensure that the domain name (the text after ``@``) is properly entered with regard to upper- and lower-case letters, as Kerberos is case sensitive. For AWX, you should also ensure that the inventory looks the same.
.. note::
If you encounter a ``Server not found in Kerberos database`` error message, and your inventory is configured using FQDNs (**not IP addresses**), ensure that the service principal name is not missing or mis-configured.
Now, running a playbook should run as expected. You can test this by running the playbook as the ``awx`` user.
Once you have verified that playbooks work properly, integration with AWX is easy. Generate the Kerberos ticket as the ``awx`` user and AWX should automatically pick up the generated ticket for authentication.
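A rough sketch of that verification, using placeholder principal, inventory, and playbook names:
::
# Obtain a ticket as the awx user, then run a test playbook against the Windows hosts
sudo -u awx kinit username@WEBSITE.COM
sudo -u awx ansible-playbook -i inventory win_test.yml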
.. note::
The Python ``kerberos`` package must be installed. Ansible is designed to check if the ``kerberos`` package is installed and, if so, it uses Kerberos authentication.
AD and Kerberos Credentials
------------------------------
Active Directory only:
- If you are only planning to run playbooks against Windows machines with AD usernames and passwords as machine credentials, you can use "user@<domain>" format for the username and an associated password.
With Kerberos:
- If Kerberos is installed, you can create a machine credential with the username and password, using the "user@<domain>" format for the username.
Working with Kerberos Tickets
-------------------------------
Ansible defaults to automatically managing Kerberos tickets when both the username and password are specified in the machine credential for a host that is configured for kerberos. A new ticket is created in a temporary credential cache for each host, before each task executes (to minimize the chance of ticket expiration). The temporary credential caches are deleted after each task, and will not interfere with the default credential cache.
To disable automatic ticket management (e.g., to use an existing SSO ticket or call ``kinit`` manually to populate the default credential cache), set ``ansible_winrm_kinit_mode=manual`` via the inventory.
Automatic ticket management requires a standard kinit binary on the control host system path. To specify a different location or binary name, set the ``ansible_winrm_kinit_cmd`` inventory variable to the fully-qualified path to an MIT krbv5 kinit-compatible binary.
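Building on the earlier inventory example, these variables could be set as follows (the ``kinit`` path shown is illustrative):
::
[windows:vars]
ansible_winrm_kinit_mode = manual
ansible_winrm_kinit_cmd = /usr/local/bin/kinit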


@@ -0,0 +1,358 @@
.. _ag_auth_ldap:
Setting up LDAP Authentication
================================
.. index::
single: LDAP
pair: authentication; LDAP
.. note::
If the LDAP server you want to connect to has a certificate that is self-signed or signed by a corporate internal certificate authority (CA), the CA certificate must be added to the system's trusted CAs. Otherwise, connection to the LDAP server will result in an error that the certificate issuer is not recognized.
Administrators use LDAP as a source for account authentication information for AWX users. User authentication is provided, but not the synchronization of user permissions and credentials. Organization membership (as well as the organization admin) and team memberships can be synchronized.
When so configured, a user who logs in with an LDAP username and password automatically gets an AWX account created for them and they can be automatically placed into organizations as either regular users or organization administrators.
Users created via an LDAP login cannot change their username, first name, last name, or set a local password for themselves. This is also tunable to restrict editing of other field names.
To configure LDAP integration for AWX:
1. First, create a user in LDAP that has access to read the entire LDAP structure.
2. To test whether you can make successful queries to the LDAP server, use the ``ldapsearch`` command, a command line tool that can be installed on the AWX command line as well as on other Linux and OSX systems. Use the following command to query the LDAP server, where *josie* and *Josie4Cloud* are replaced by attributes that work for your setup:
::
ldapsearch -x -H ldap://win -D "CN=josie,CN=Users,DC=website,DC=com" -b "dc=website,dc=com" -w Josie4Cloud
Here ``CN=josie,CN=users,DC=website,DC=com`` is the Distinguished Name of the connecting user.
.. note::
The ``ldapsearch`` utility is not automatically pre-installed with AWX, however, you can install it from the ``openldap-clients`` package.
3. In the AWX User Interface, click **Settings** from the left navigation and click to select **LDAP settings** from the list of Authentication options.
You do not need multiple LDAP configurations per LDAP server, but you can configure multiple LDAP servers from this page; otherwise, leave the server at **Default**:
.. image:: ../common/images/configure-awx-auth-ldap-servers.png
|
The equivalent API endpoints will show ``AUTH_LDAP_*`` repeated: ``AUTH_LDAP_1_*``, ``AUTH_LDAP_2_*``, ..., ``AUTH_LDAP_5_*`` to denote server designations.
4. To enter or modify the LDAP server address to connect to, click **Edit** and enter in the **LDAP Server URI** field using the same format as the one prepopulated in the text field:
.. image:: ../common/images/configure-awx-auth-ldap-server-uri.png
.. note::
Multiple LDAP servers may be specified by separating each with spaces or commas. Click the |help| icon for details on proper syntax and rules.
.. |help| image:: ../common/images/tooltips-icon.png
5. Enter the password to use for the Binding user in the **LDAP Bind Password** text field. In this example, the password is 'passme':
.. image:: ../common/images/configure-awx-auth-ldap-bind-pwd.png
6. Click to select a group type from the **LDAP Group Type** drop-down menu list.
LDAP Group Types include:
- ``PosixGroupType``
- ``GroupOfNamesType``
- ``GroupOfUniqueNamesType``
- ``ActiveDirectoryGroupType``
- ``OrganizationalRoleGroupType``
- ``MemberDNGroupType``
- ``NISGroupType``
- ``NestedGroupOfNamesType``
- ``NestedGroupOfUniqueNamesType``
- ``NestedActiveDirectoryGroupType``
- ``NestedOrganizationalRoleGroupType``
- ``NestedMemberDNGroupType``
- ``PosixUIDGroupType``
These LDAP Group Types are supported by leveraging the underlying `django-auth-ldap library`_. To specify the parameters for the selected group type, see :ref:`Step 15 <ldap_grp_params>` below.
.. _`django-auth-ldap library`: https://django-auth-ldap.readthedocs.io/en/latest/groups.html#types-of-groups
7. The **LDAP Start TLS** is disabled by default. To enable TLS when the LDAP connection is not using SSL, click the toggle to **ON**.
.. image:: ../common/images/configure-awx-auth-ldap-start-tls.png
8. Enter the Distinguished Name in the **LDAP Bind DN** text field to specify the user that AWX uses to connect (Bind) to the LDAP server. Below uses the example, ``CN=josie,CN=users,DC=website,DC=com``:
.. image:: ../common/images/configure-awx-auth-ldap-bind-dn.png
9. If that name is stored in the key ``sAMAccountName``, the **LDAP User DN Template** populates with ``(sAMAccountName=%(user)s)``. Active Directory stores the username in ``sAMAccountName``. Similarly, for OpenLDAP, the key is ``uid``, so the line becomes ``(uid=%(user)s)``.
10. Enter the group distinguished name to allow users within that group to access AWX in the **LDAP Require Group** field, using the same format as the one shown in the text field, ``CN=awx Users,OU=Users,DC=website,DC=com``.
.. image:: ../common/images/configure-awx-auth-ldap-req-group.png
11. Enter the group distinguished name to prevent users within that group from accessing AWX in the **LDAP Deny Group** field, using the same format as the one shown in the text field. In this example, leave the field blank.
12. Enter where to search for users while authenticating in the **LDAP User Search** field using the same format as the one shown in the text field. In this example, use:
::
[
"OU=Users,DC=website,DC=com",
"SCOPE_SUBTREE",
"(cn=%(user)s)"
]
The first line specifies where to search for users in the LDAP tree. In the above example, the users are searched recursively starting from ``DC=website,DC=com``.
The second line specifies the scope where the users should be searched:
- SCOPE_BASE: This value is used to indicate searching only the entry at the base DN, resulting in only that entry being returned
- SCOPE_ONELEVEL: This value is used to indicate searching all entries one level under the base DN - but not including the base DN and not including any entries under that one level under the base DN.
- SCOPE_SUBTREE: This value is used to indicate searching of all entries at all levels under and including the specified base DN.
The third line specifies the key name where the user name is stored.
.. image:: ../common/images/configure-awx-authen-ldap-user-search.png
.. note::
For multiple search queries, the proper syntax is:
::
[
[
"OU=Users,DC=northamerica,DC=acme,DC=com",
"SCOPE_SUBTREE",
"(sAMAccountName=%(user)s)"
],
[
"OU=Users,DC=apac,DC=corp,DC=com",
"SCOPE_SUBTREE",
"(sAMAccountName=%(user)s)"
],
[
"OU=Users,DC=emea,DC=corp,DC=com",
"SCOPE_SUBTREE",
"(sAMAccountName=%(user)s)"
]
]
13. In the **LDAP Group Search** text field, specify which groups should be searched and how to search them. In this example, use:
::
[
"dc=example,dc=com",
"SCOPE_SUBTREE",
"(objectClass=group)"
]
- The first line specifies the BASE DN where the groups should be searched.
- The second line specifies the scope and is the same as that for the user directive.
- The third line specifies what the ``objectclass`` of a group object is in the LDAP you are using.
.. image:: ../common/images/configure-awx-authen-ldap-group-search.png
14. Enter the user attributes in the **LDAP User Attribute Map** text field. In this example, use:
::
{
"first_name": "givenName",
"last_name": "sn",
"email": "mail"
}
The above example retrieves users by last name from the key ``sn``. You can use the same LDAP query for the user to figure out what keys they are stored under.
.. image:: ../common/images/configure-awx-auth-ldap-user-attrb-map.png
.. _ldap_grp_params:
15. Depending on the selected **LDAP Group Type**, different parameters are available in the **LDAP Group Type Parameters** field to account for this. ``LDAP_GROUP_TYPE_PARAMS`` is a dictionary, which will be converted by AWX to kwargs and passed to the selected LDAP Group Type class. There are two common parameters used by any of the LDAP Group Types: ``name_attr`` and ``member_attr``, where ``name_attr`` defaults to ``cn`` and ``member_attr`` defaults to ``member``:
::
{"name_attr": "cn", "member_attr": "member"}
To determine what parameters a specific LDAP Group Type expects, refer to the `django_auth_ldap`_ documentation on each class's ``init`` parameters.
.. _`django_auth_ldap`: https://django-auth-ldap.readthedocs.io/en/latest/reference.html#django_auth_ldap.config.LDAPGroupType
16. Enter the user profile flags in the **LDAP User Flags by Group** text field. In this example, use the following syntax to set LDAP users as "Superusers" and "Auditors":
::
{
"is_superuser": "cn=superusers,ou=groups,dc=website,dc=com",
"is_system_auditor": "cn=auditors,ou=groups,dc=website,dc=com"
}
The above example flags users in the ``superusers`` group as superusers and users in the ``auditors`` group as system auditors in their profile.
.. image:: ../common/images/configure-awx-auth-ldap-user-flags.png
17. For details on completing the mapping fields, see :ref:`ag_ldap_org_team_maps`.
.. image:: ../common/images/configure-ldap-orgs-teams-mapping.png
18. Click **Save** when done.
With these values entered on this form, you can now make a successful authentication with LDAP.
.. note::
AWX does not actively sync users, but they are created during their initial login.
To improve performance associated with LDAP authentication, see :ref:`ldap_auth_perf_tips` at the end of this chapter.
.. _ag_ldap_org_team_maps:
LDAP Organization and Team Mapping
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. index::
single: organization mapping
single: LDAP mapping
pair: authentication; LDAP mapping
pair: authentication; organization mapping
pair: authentication; LDAP team mapping
pair: authentication; team mapping
single: team mapping
You can control which users are placed into which organizations based on LDAP attributes (a mapping between your organization admins/users and LDAP groups).
Keys are organization names. Organizations will be created if not present. Values are dictionaries defining the options for each organization's membership. For each organization, it is possible to specify what groups are automatically users of the organization and also what groups can administer the organization.
**admins**: None, True/False, string or list/tuple of strings.
- If **None**, organization admins will not be updated based on LDAP values.
- If **True**, all users in LDAP will automatically be added as admins of the organization.
- If **False**, no LDAP users will be automatically added as admins of the organization.
- If a string or list of strings, specifies the group DN(s); users who are members of any of the specified groups will be added as admins of the organization.
**remove_admins**: True/False. Defaults to **False**.
- When **True**, a user who is not a member of the given groups will be removed from the organization's administrative list.
**users**: None, True/False, string or list/tuple of strings. Same rules apply as for **admins**.
**remove_users**: True/False. Defaults to **False**. Same rules apply as **remove_admins**.
::
{
"LDAP Organization": {
"admins": "cn=engineering_admins,ou=groups,dc=example,dc=com",
"remove_admins": false,
"users": [
"cn=engineering,ou=groups,dc=example,dc=com",
"cn=sales,ou=groups,dc=example,dc=com",
"cn=it,ou=groups,dc=example,dc=com"
],
"remove_users": false
},
"LDAP Organization 2": {
"admins": [
"cn=Administrators,cn=Builtin,dc=example,dc=com"
],
"remove_admins": false,
"users": true,
"remove_users": false
}
}
Team mapping defines the mapping between team members (users) and LDAP groups. Keys are team names (teams are created if not present). Values are dictionaries of options for each team's membership, each of which can contain the following parameters:
**organization**: string. The name of the organization to which the team belongs. The team will be created if the combination of organization and team name does not exist. The organization will first be created if it does not exist.
**users**: None, True/False, string or list/tuple of strings.
- If **None**, team members will not be updated.
- If **True/False**, all LDAP users will be added/removed as team members.
- If a string or list of strings, specifies the group DN(s). The user will be added as a team member if they are a member of ANY of these groups.
**remove**: True/False. Defaults to **False**. When **True**, a user who is not a member of the given groups will be removed from the team.
::
{
"LDAP Engineering": {
"organization": "LDAP Organization",
"users": "cn=engineering,ou=groups,dc=example,dc=com",
"remove": true
},
"LDAP IT": {
"organization": "LDAP Organization",
"users": "cn=it,ou=groups,dc=example,dc=com",
"remove": true
},
"LDAP Sales": {
"organization": "LDAP Organization",
"users": "cn=sales,ou=groups,dc=example,dc=com",
"remove": true
}
}
.. _ldap_logging:
Enabling Logging for LDAP
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. index::
single: LDAP
pair: authentication; LDAP
To enable logging for LDAP, you must set the level to ``DEBUG`` in the Settings configuration window:
1. Click **Settings** from the left navigation pane and click to select **Logging settings** from the System list of options.
2. Click **Edit**.
3. Set the **Logging Aggregator Level Threshold** field to **Debug**.
.. image:: ../common/images/settings-system-logging-debug.png
4. Click **Save** to save your changes.
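If you manage AWX settings from a configuration file rather than the UI, the same change can be sketched as a settings override. The setting name below, ``LOG_AGGREGATOR_LEVEL``, is assumed to correspond to the **Logging Aggregator Level Threshold** field:

.. code-block:: python

    # Assumed settings-file equivalent of the UI change above, e.g. in
    # /etc/awx/conf.d/custom.py; LOG_AGGREGATOR_LEVEL is taken to be the setting
    # behind the "Logging Aggregator Level Threshold" field.
    LOG_AGGREGATOR_LEVEL = "DEBUG"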
Referrals
~~~~~~~~~~~
.. index::
pair: LDAP; referrals
pair: troubleshooting; LDAP referrals
Active Directory uses "referrals" when a queried object is not available in its database. This does not work properly with the Django LDAP client and, in most cases, it helps to disable referrals. Disable LDAP referrals by adding the following lines to your ``/etc/awx/conf.d/custom.py`` file:
.. code-block:: python

    # import the ldap module so that ldap.OPT_REFERRALS is available
    import ldap

    AUTH_LDAP_GLOBAL_OPTIONS = {
        ldap.OPT_REFERRALS: False,
    }
.. _ldap_auth_perf_tips:
LDAP authentication performance tips
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. index::
pair: best practices; ldap
When an LDAP user authenticates, by default, all of their user-related attributes are updated in the database on each log in. In some environments, this operation can be skipped for performance reasons. To avoid it, you can disable the option ``AUTH_LDAP_ALWAYS_UPDATE_USER``.
.. warning::
With this option set to **False**, an LDAP user's attributes will no longer be updated on login. Attributes are only set when the user is first created.
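As a minimal sketch, the option can be disabled from the same ``/etc/awx/conf.d/custom.py`` file used for the referrals workaround above:

.. code-block:: python

    # Skip per-login attribute synchronization; attributes are then only written
    # when the user record is first created.
    AUTH_LDAP_ALWAYS_UPDATE_USER = False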
**************
AWX Logfiles
**************
.. index::
single: logfiles
The AWX logfiles are streamed in real time on the console.