Compare commits

..

45 Commits

Author SHA1 Message Date
jessicamack
a689f87f1c add licenses 2023-12-06 12:31:14 -05:00
jessicamack
7501ad6836 add django-ansible-base
Signed-off-by: jessicamack <jmack@redhat.com>
2023-12-06 12:31:14 -05:00
Don Naro
dd00bbba42 separate tox calls in readthedocs config (#14673) 2023-12-06 17:17:12 +00:00
Rick Elrod
fe6bac6d9e [CI] Reduce GHA timeouts from 6h default (#14704)
* [CI] Reduce GHA timeouts from 6h default

The goal here is to never interfere with a real run (so most of the
timeout-minutes values seem rather high) but to avoid 6-hour runs
if something goes haywire and never ends.

Signed-off-by: Rick Elrod <rick@elrod.me>

* Do bash hackery instead

Signed-off-by: Rick Elrod <rick@elrod.me>

---------

Signed-off-by: Rick Elrod <rick@elrod.me>
2023-12-06 14:43:45 +00:00
jainnikhil30
87abbd4b10 Fix the bulk Job Launch Integration test in awx collection (#14702)
* fix the integration tests
2023-12-06 18:50:33 +05:30
lucas-benedito
fb04e5d9f6 Fixing wsrelay connection loop (#14692)
* Fixing wsrelay connection loop

* The loop was being interrupted when reaching the return statements, causing a race condition that would make nodes remain disconnected from their websockets
* Added log messages at the points where the loop previously returned, to improve logging of this state.

* Added logging for malformed payload

* Update awx/main/wsrelay.py

Co-authored-by: Rick Elrod <rick@elrod.me>

* Moved logmsg outside condition

---------

Co-authored-by: Lucas Benedito <lbenedit@redhat.com>
Co-authored-by: Rick Elrod <rick@elrod.me>
2023-12-04 09:33:05 -05:00
Hao Liu
478e2cb28d Fix awx collection publishing on galaxy (#14642)
The --location (-L) parameter prompts curl to submit a new request to the new location if the URL is a redirect.

After moving to Galaxy NG, without -L curl falsely returns 302 for any version.

Co-authored-by: John Barker <john@johnrbarker.com>
2023-11-29 20:28:22 +00:00
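A minimal Python sketch of the same check, assuming the requests library (a hypothetical helper, not part of the workflow, which uses curl as shown in the promote workflow diff below); following redirects is what lets the check see the real status of the artifact instead of the blanket 302 the old galaxy.ansible.com URLs return:

import requests

def galaxy_release_exists(namespace: str, version: str) -> bool:
    # Hypothetical check mirroring the curl -L --head call in the promote workflow.
    url = f"https://galaxy.ansible.com/download/{namespace}-awx-{version}.tar.gz"
    resp = requests.head(url, allow_redirects=True, timeout=30)
    # After following redirects, a successful final response suggests the tarball
    # is already published; a 404 suggests it is not.
    return resp.ok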
Chris Meyers
2ac304d289 allow pytest --migrations to succeed (#14663)
* allow pytest --migrations to succeed

* We actually subvert migrations from running in test via the pytest.ini
  --no-migrations option. This has led to bit rot for the sqlite
  migrations happy path. This changeset pays off that tech debt and
  allows for a sqlite migration happy path.
* This paves the way for programmatic invocation of individual migrations
  and weaving in the creation of resources (e.g., Instance, Job Template,
  etc.). With this, a developer can instantiate various database states,
  trigger a migration, assert the state of the db, and then have pytest
  roll back all of that (see the sketch after this commit entry).
* I will note that in practice, running these migrations is dog shit
  slow BUT this work also opens up the possibility of saving and
  re-using sqlite3 database files. Normally, caching is not THE answer
  and causes more harm than good. But in this case, our migrations are
  mostly write-once (I say mostly because this change set violates
  that :) so cache invalidation isn't a major issue.

* functional test for migrations on sqlite

* We commonly subvert running migrations in test land. Test land uses
  sqlite. By not constantly exercising this code path it atrophies. The
  smoke test here is to continuously exercise that code path.
* Add a CI test to run migration tests separately; they take ~2-3
  minutes each on my laptop.
* The smoke test also serves as an example of how to write migration
  tests.

* run migration tests in ci
2023-11-17 13:33:08 -05:00
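A minimal sketch of the workflow described above, assuming the django-test-migrations migrator fixture (the real smoke test added by this commit appears in the diff further down); the second migration name is a hypothetical placeholder:

import pytest

@pytest.mark.django_db
def test_example_migration(migrator):
    # Roll the schema to a known starting point (0180_... is real; see the smoke test below).
    old_state = migrator.apply_tested_migration(('main', '0180_add_hostmetric_fields'))
    Instance = old_state.apps.get_model('main', 'Instance')
    Instance.objects.create(hostname='node1', node_type='control')

    # Apply the migration under test (placeholder name) and assert on the resulting DB state.
    new_state = migrator.apply_tested_migration(('main', '0181_some_later_migration'))
    Instance = new_state.apps.get_model('main', 'Instance')
    assert Instance.objects.filter(hostname='node1').exists()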
Don Naro
3e5851f3af Upgrade doc requirements (#14669)
* upgrade when pip compiling doc reqs

* upgrade doc requirements
2023-11-16 13:04:21 -07:00
Alan Rominger
adb1b12074 Update RBAC docs, remove unused get_permissions (#14492)
* Update RBAC docs, remove unused get_permissions

* Add back in section for get_roles_on_resource
2023-11-16 11:29:33 -05:00
Alan Rominger
8fae20c48a Remove unused methods we attach to user model (#14668) 2023-11-16 11:21:21 -05:00
Hao Liu
ec364cc60e Make vault init more idempotent (#14664)
Currently, if you clean up the docker volume for vault and bring the docker-compose development environment back up with vault enabled, we will not initialize vault because the secret files still exist.

This change will attempt to initialize vault regardless and update the secret file if vault is already initialized.
2023-11-16 09:43:45 -06:00
TVo
1cfd51764e Added missing pointers to release notes (#14659)
* Replaced with larger graphic.

* Revert "Replaced with larger graphic."

This reverts commit 1214b00052.

* Added missing pointers to release notes.
2023-11-15 14:24:11 -07:00
Steffen Scheib
0b8fedfd04 Adding the possibility to decode base64 encoded strings from Delinea's DevOps Secrets Vault (DSV) (#14646)
Adding the possibility to decode base64 encoded strings from Delinea's DevOps Secrets Vault (DSV).
This is necessary because uploading files to DSV is not possible (and not meant to be), so files should be added base64 encoded.
The commit makes sure to remain backward compatible (no secret decoding), as a default is supplied.

This has been tested with DSV and works for secrets that are base64 encoded and secrets that are not base64 encoded (which is the default).

Signed-off-by: Steffen Scheib <sscheib@redhat.com>
2023-11-15 15:28:34 -05:00
Don Naro
72a8173462 issue #14653 heading does not render correctly (#14665) 2023-11-15 15:05:52 -05:00
Tong He
873b1fbe07 Set subscription type as developer for developer subscriptions. (#14584)
* Set subscription type as developer for developer subscriptions.

Signed-off-by: Tong He <the@redhat.com>

* Set subscription type as developer for developer subscription manifests.

Signed-off-by: Tong He <the@redhat.com>

* Remedy the wrong character used to assign the value.

Signed-off-by: Tong He <the@redhat.com>

* Reformat licensing.py by black.

Signed-off-by: Tong He <the@redhat.com>

---------

Signed-off-by: Tong He <the@redhat.com>
2023-11-15 10:33:57 +00:00
Alan Rominger
1f36e84b45 Correctly handle case where unpartitioned table does not exist (#14648) 2023-11-14 08:38:48 -05:00
TVo
8c4bff2b86 Replaced with larger graphic. (#14647) 2023-11-13 09:55:04 -06:00
lucas-benedito
14f636af84 Setting credential_type as required (#14651)
* Setting credential_type as required

* Added test for missing credential_type in credential module

* Corrected test assertion

---------

Co-authored-by: Lucas Benedito <lbenedit@redhat.com>
2023-11-13 09:54:32 -06:00
Don Naro
0057c8daf6 Docs: Include REST API reference content from swagger.json (#14607) 2023-11-11 08:33:41 -05:00
TVo
d8a28b3c06 Added alt text for settings-menu.rst (#14639)
* Re-do for PR #14595 to fix CI issues.

* Added alt text to settings-menu.rst

* Update docs/docsite/rst/common/settings-menu.rst

Co-authored-by: Don Naro <dnaro@redhat.com>

---------

Co-authored-by: Hao Liu <44379968+TheRealHaoLiu@users.noreply.github.com>
Co-authored-by: Don Naro <dnaro@redhat.com>
2023-11-08 15:17:57 -07:00
Ratan Gulati
40c2b700fe Fix: #14523 Add alt-text codeblock to Images for workflow_template.rst (#14604)
* add alt to images in workflow_templates.rst

Signed-off-by: Ratan Gulati <ratangulati.dev@gmail.com>

* add alt to images in workflow_templates.rst

Signed-off-by: Ratan Gulati <ratangulati.dev@gmail.com>

* Update workflow_templates.rst

* Revised proposed alt text for workflow_templates.rst

---------

Signed-off-by: Ratan Gulati <ratangulati.dev@gmail.com>
Co-authored-by: TVo <thavo@redhat.com>
2023-11-07 10:51:11 -07:00
Thanhnguyet Vo
71d548f9e5 Removed references to images that were deleted. 2023-11-07 08:55:27 -07:00
Thanhnguyet Vo
dd98963f86 Updated images - Workflow Templates chapter of Userguide. 2023-11-07 08:55:27 -07:00
TVo
4b467dfd8d Revised proposed alt text for insights.rst 2023-11-07 08:14:34 -07:00
BHANUTEJA
456b56778e Update insights.rst 2023-11-07 08:14:34 -07:00
BHANUTEJA
5b3cb20f92 Update insights.rst 2023-11-07 08:14:34 -07:00
TVo
d7086a3c88 Revised the proposed Alt text for main_menu.rst 2023-11-06 13:09:26 -07:00
Ratan Gulati
21e7ab078c Fix: #14511 Add alt-text codeblock to Images for Userguide: main_menu.rst
Signed-off-by: Ratan Gulati <ratangulati.dev@gmail.com>
2023-11-06 13:09:26 -07:00
Elijah DeLee
946ca0b3b8 fix wsrelay connection in ipv6 environments 2023-11-06 13:58:41 -05:00
TVo
b831dbd608 Removed mailing list from triage_replies.md 2023-11-03 14:30:30 -06:00
Thanhnguyet Vo
943e455f9d Re-do for PR #14595 to fix CI issues. 2023-11-03 08:35:22 -06:00
Seth Foster
53bc88abe2 Fix python_paths error in CI (#14622)
Remove outdated lines from pytest.ini

Was causing KeyError 'python_paths' in CI

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2023-11-03 09:36:21 -04:00
Rick Elrod
3b4d95633e [rsyslog] remove main_queue, add more action queue params (#14532)
* [rsyslog] remove main_queue, add more action queue params

Signed-off-by: Rick Elrod <rick@elrod.me>

* Remove now-unused LOG_AGGREGATOR_MAX_DISK_USAGE_GB, add LOG_AGGREGATOR_ACTION_QUEUE_SIZE

Signed-off-by: Rick Elrod <rick@elrod.me>

---------

Signed-off-by: Rick Elrod <rick@elrod.me>
2023-10-31 14:49:17 -04:00
Alan Rominger
93c329d9d5 Fix cancel bug - WorkflowManager cancel in transaction (#14608)
This fixes a bug where jobs within a workflow job were not canceled
  when the workflow job was canceled by the user.

The fix is to submit the cancel request as part of the transaction
  in which WorkflowManager commits its work. This requires sending the
  message without expecting a reply, so the control-with-reply cancel
  becomes a plain control function.
2023-10-30 15:30:18 -04:00
Hao Liu
f4c53aaf22 Update receptor-collection version to 2.0.2 (#14613) 2023-10-30 17:24:02 +00:00
Alan Rominger
333ef76cbd Send notifications for dependency failures (#14603)
* Send notifications for dependency failures

* Delete tests for deleted method

* Remove another test for removed method
2023-10-30 10:42:37 -04:00
Alan Rominger
fc0b58fd04 Fix bug that prevented dispatcher exit with downed DB (#14469)
* Separate handling of original SIGTERM and SIGINT
2023-10-26 14:34:25 -04:00
Andrii Zakurenyi
bef0a8b23a Fix DevOps Secrets Vault credential plugin to work with python-dsv-sdk>=1.0.4
Signed-off-by: Andrii Zakurenyi <andrii.zakurenyi@c.delinea.com>
2023-10-25 15:48:24 -04:00
lmo5
a5f33456b6 Fix missing service account secret in docker-compose-minikube role (#14596)
* Fix missing service account secret

Co-authored-by: Hao Liu <44379968+TheRealHaoLiu@users.noreply.github.com>
2023-10-25 19:27:21 +00:00
Surav Shrestha
21fb395912 fix typos in docs/development/minikube.md 2023-10-25 15:23:23 -04:00
jessicamack
44255f378d Fix extra_vars bug in ansible.controller.ad_hoc_command (#14585)
* convert to valid type for serializer

* check that extra_vars are in request

* remove doubled line

* add integration test for change

* move change to the ad_hoc_command module

Signed-off-by: jessicamack <jmack@redhat.com>

* fix imports

Signed-off-by: jessicamack <jmack@redhat.com>

---------

Signed-off-by: jessicamack <jmack@redhat.com>
2023-10-25 10:38:45 -04:00
Parikshit Adhikari
71a6d48612 Fix: typos inside /docs directory (#14594)
fix typos inside docs
2023-10-24 19:01:21 +00:00
nmiah1
b7e5f5d1e1 Typo in export.py example (#14598) 2023-10-24 18:33:38 +00:00
Alan Rominger
b6b167627c Fix Boolean values defaulting to False in collection (#14493)
* Fix Boolean values defaulting to False in collection

* Remove null values in other cases, fix null handling for WFJT nodes

* Only remove null values if it is a boolean field

* Reset changes to WFJT node field processing

* Use test content from sean-m-sullivan to fix lookups in assert
2023-10-24 14:29:16 -04:00
132 changed files with 1382 additions and 447 deletions

View File

@@ -43,10 +43,14 @@ runs:
- name: Update default AWX password
shell: bash
run: |
while [[ "$(curl -s -o /dev/null -w ''%{http_code}'' -k https://localhost:8043/api/v2/ping/)" != "200" ]]
do
echo "Waiting for AWX..."
sleep 5
SECONDS=0
while [[ "$(curl -s -o /dev/null -w ''%{http_code}'' -k https://localhost:8043/api/v2/ping/)" != "200" ]]; do
if [[ $SECONDS -gt 600 ]]; then
echo "Timing out, AWX never came up"
exit 1
fi
echo "Waiting for AWX..."
sleep 5
done
echo "AWX is up, updating the password..."
docker exec -i tools_awx_1 sh <<-EOSH

View File

@@ -7,8 +7,8 @@
## PRs/Issues
### Visit our mailing list
- Hello, this appears to be less of a bug report or feature request and more of a question. Could you please ask this on our mailing list? See https://github.com/ansible/awx/#get-involved for information for ways to connect with us.
### Visit the Forum or Matrix
- Hello, this appears to be less of a bug report or feature request and more of a question. Could you please ask this on either the [Ansible AWX channel on Matrix](https://matrix.to/#/#awx:ansible.com) or the [Ansible Community Forum](https://forum.ansible.com/tag/awx)?
### Denied Submission

View File

@@ -11,6 +11,7 @@ jobs:
common-tests:
name: ${{ matrix.tests.name }}
runs-on: ubuntu-latest
timeout-minutes: 60
permissions:
packages: write
contents: read
@@ -20,6 +21,8 @@ jobs:
tests:
- name: api-test
command: /start_tests.sh
- name: api-migrations
command: /start_tests.sh test_migrations
- name: api-lint
command: /var/lib/awx/venv/awx/bin/tox -e linters
- name: api-swagger
@@ -47,6 +50,7 @@ jobs:
dev-env:
runs-on: ubuntu-latest
timeout-minutes: 60
steps:
- uses: actions/checkout@v3
@@ -61,6 +65,7 @@ jobs:
awx-operator:
runs-on: ubuntu-latest
timeout-minutes: 60
steps:
- name: Checkout awx
uses: actions/checkout@v3
@@ -110,6 +115,7 @@ jobs:
collection-sanity:
name: awx_collection sanity
runs-on: ubuntu-latest
timeout-minutes: 30
strategy:
fail-fast: false
steps:
@@ -129,6 +135,7 @@ jobs:
collection-integration:
name: awx_collection integration
runs-on: ubuntu-latest
timeout-minutes: 60
strategy:
fail-fast: false
matrix:
@@ -180,6 +187,7 @@ jobs:
collection-integration-coverage-combine:
name: combine awx_collection integration coverage
runs-on: ubuntu-latest
timeout-minutes: 10
needs:
- collection-integration
strategy:

View File

@@ -12,6 +12,7 @@ jobs:
push:
if: endsWith(github.repository, '/awx') || startsWith(github.ref, 'refs/heads/release_')
runs-on: ubuntu-latest
timeout-minutes: 60
permissions:
packages: write
contents: read

View File

@@ -6,6 +6,7 @@ jobs:
docsite-build:
name: docsite test build
runs-on: ubuntu-latest
timeout-minutes: 30
steps:
- uses: actions/checkout@v3

View File

@@ -9,6 +9,7 @@ on:
jobs:
push:
runs-on: ubuntu-latest
timeout-minutes: 20
permissions:
packages: write
contents: read

View File

@@ -13,6 +13,7 @@ permissions:
jobs:
triage:
runs-on: ubuntu-latest
timeout-minutes: 20
name: Label Issue
steps:
@@ -26,6 +27,7 @@ jobs:
community:
runs-on: ubuntu-latest
timeout-minutes: 20
name: Label Issue - Community
steps:
- uses: actions/checkout@v3

View File

@@ -14,6 +14,7 @@ permissions:
jobs:
triage:
runs-on: ubuntu-latest
timeout-minutes: 20
name: Label PR
steps:
@@ -25,6 +26,7 @@ jobs:
community:
runs-on: ubuntu-latest
timeout-minutes: 20
name: Label PR - Community
steps:
- uses: actions/checkout@v3

View File

@@ -10,6 +10,7 @@ jobs:
if: github.repository_owner == 'ansible' && endsWith(github.repository, 'awx')
name: Scan PR description for semantic versioning keywords
runs-on: ubuntu-latest
timeout-minutes: 20
permissions:
packages: write
contents: read

View File

@@ -13,8 +13,9 @@ permissions:
jobs:
promote:
if: endsWith(github.repository, '/awx')
runs-on: ubuntu-latest
timeout-minutes: 90
steps:
- name: Checkout awx
uses: actions/checkout@v3
@@ -46,7 +47,7 @@ jobs:
COLLECTION_TEMPLATE_VERSION: true
run: |
make build_collection
if [ "$(curl --head -sw '%{http_code}' https://galaxy.ansible.com/download/${{ env.collection_namespace }}-awx-${{ github.event.release.tag_name }}.tar.gz | tail -1)" == "302" ] ; then \
if [ "$(curl -L --head -sw '%{http_code}' https://galaxy.ansible.com/download/${{ env.collection_namespace }}-awx-${{ github.event.release.tag_name }}.tar.gz | tail -1)" == "302" ] ; then \
echo "Galaxy release already done"; \
else \
ansible-galaxy collection publish \

View File

@@ -23,6 +23,7 @@ jobs:
stage:
if: endsWith(github.repository, '/awx')
runs-on: ubuntu-latest
timeout-minutes: 90
permissions:
packages: write
contents: write

View File

@@ -9,6 +9,7 @@ jobs:
name: Update Dependabot Prs
if: contains(github.event.pull_request.labels.*.name, 'dependencies') && contains(github.event.pull_request.labels.*.name, 'component:ui')
runs-on: ubuntu-latest
timeout-minutes: 20
steps:
- name: Checkout branch

View File

@@ -13,6 +13,7 @@ on:
jobs:
push:
runs-on: ubuntu-latest
timeout-minutes: 60
permissions:
packages: write
contents: read

View File

@@ -10,6 +10,7 @@ build:
3.11
commands:
- pip install --user tox
- python3 -m tox -e docs
- python3 -m tox -e docs --notest -v
- python3 -m tox -e docs --skip-pkg-install -q
- mkdir -p _readthedocs/html/
- mv docs/docsite/build/html/* _readthedocs/html/

View File

@@ -324,6 +324,12 @@ test:
cd awxkit && $(VENV_BASE)/awx/bin/tox -re py3
awx-manage check_migrations --dry-run --check -n 'missing_migration_file'
test_migrations:
if [ "$(VENV_BASE)" ]; then \
. $(VENV_BASE)/awx/bin/activate; \
fi; \
PYTHONDONTWRITEBYTECODE=1 py.test -p no:cacheprovider --migrations -m migration_test $(PYTEST_ARGS) $(TEST_DIRS)
## Runs AWX_DOCKER_CMD inside a new docker container.
docker-runner:
docker run -u $(shell id -u) --rm -v $(shell pwd):/awx_devel/:Z --workdir=/awx_devel $(DEVEL_IMAGE_NAME) $(AWX_DOCKER_CMD)

View File

@@ -1,4 +1,4 @@
---
collections:
- name: ansible.receptor
version: 2.0.0
version: 2.0.2

View File

@@ -128,6 +128,10 @@ logger = logging.getLogger('awx.api.views')
def unpartitioned_event_horizon(cls):
with connection.cursor() as cursor:
cursor.execute(f"SELECT 1 FROM INFORMATION_SCHEMA.TABLES WHERE table_name = '_unpartitioned_{cls._meta.db_table}';")
if not cursor.fetchone():
return 0
with connection.cursor() as cursor:
try:
cursor.execute(f'SELECT MAX(id) FROM _unpartitioned_{cls._meta.db_table}')

View File

@@ -79,7 +79,6 @@ __all__ = [
'get_user_queryset',
'check_user_access',
'check_user_access_with_errors',
'user_accessible_objects',
'consumer_access',
]
@@ -136,10 +135,6 @@ def register_access(model_class, access_class):
access_registry[model_class] = access_class
def user_accessible_objects(user, role_name):
return ResourceMixin._accessible_objects(User, user, role_name)
def get_user_queryset(user, model_class):
"""
Return a queryset for the given model_class containing only the instances

View File

@@ -694,16 +694,18 @@ register(
category_slug='logging',
)
register(
'LOG_AGGREGATOR_MAX_DISK_USAGE_GB',
'LOG_AGGREGATOR_ACTION_QUEUE_SIZE',
field_class=fields.IntegerField,
default=1,
default=131072,
min_value=1,
label=_('Maximum disk persistence for external log aggregation (in GB)'),
label=_('Maximum number of messages that can be stored in the log action queue'),
help_text=_(
'Amount of data to store (in gigabytes) during an outage of '
'the external log aggregator (defaults to 1). '
'Equivalent to the rsyslogd queue.maxdiskspace setting for main_queue. '
'Notably, this is used for the rsyslogd main queue (for input messages).'
'Defines how large the rsyslog action queue can grow in number of messages '
'stored. This can have an impact on memory utilization. When the queue '
'reaches 75% of this number, the queue will start writing to disk '
'(queue.highWatermark in rsyslog). When it reaches 90%, NOTICE, INFO, and '
'DEBUG messages will start to be discarded (queue.discardMark with '
'queue.discardSeverity=5).'
),
category=_('Logging'),
category_slug='logging',
@@ -718,8 +720,7 @@ register(
'Amount of data to store (in gigabytes) if an rsyslog action takes time '
'to process an incoming message (defaults to 1). '
'Equivalent to the rsyslogd queue.maxdiskspace setting on the action (e.g. omhttp). '
'Like LOG_AGGREGATOR_MAX_DISK_USAGE_GB, it stores files in the directory specified '
'by LOG_AGGREGATOR_MAX_DISK_USAGE_PATH.'
'It stores files in the directory specified by LOG_AGGREGATOR_MAX_DISK_USAGE_PATH.'
),
category=_('Logging'),
category_slug='logging',
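A worked example of the watermark arithmetic in the LOG_AGGREGATOR_ACTION_QUEUE_SIZE help text above, using the new default of 131072 messages; the derived values match the rsyslog settings in the test fixtures near the end of this diff:

# queue.size comes from LOG_AGGREGATOR_ACTION_QUEUE_SIZE
queue_size = 131072
high_watermark = int(queue_size * 0.75)  # 98304 -> queue.highWatermark: queue starts writing to disk
discard_mark = int(queue_size * 0.90)    # 117964 -> queue.discardMark: NOTICE/INFO/DEBUG start being discarded
print(high_watermark, discard_mark)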

View File

@@ -2,29 +2,29 @@ from .plugin import CredentialPlugin
from django.conf import settings
from django.utils.translation import gettext_lazy as _
try:
from delinea.secrets.vault import SecretsVault
except ImportError:
from thycotic.secrets.vault import SecretsVault
from delinea.secrets.vault import PasswordGrantAuthorizer, SecretsVault
from base64 import b64decode
dsv_inputs = {
'fields': [
{
'id': 'tenant',
'label': _('Tenant'),
'help_text': _('The tenant e.g. "ex" when the URL is https://ex.secretservercloud.com'),
'help_text': _('The tenant e.g. "ex" when the URL is https://ex.secretsvaultcloud.com'),
'type': 'string',
},
{
'id': 'tld',
'label': _('Top-level Domain (TLD)'),
'help_text': _('The TLD of the tenant e.g. "com" when the URL is https://ex.secretservercloud.com'),
'choices': ['ca', 'com', 'com.au', 'com.sg', 'eu'],
'help_text': _('The TLD of the tenant e.g. "com" when the URL is https://ex.secretsvaultcloud.com'),
'choices': ['ca', 'com', 'com.au', 'eu'],
'default': 'com',
},
{'id': 'client_id', 'label': _('Client ID'), 'type': 'string'},
{
'id': 'client_id',
'label': _('Client ID'),
'type': 'string',
},
{
'id': 'client_secret',
'label': _('Client Secret'),
@@ -45,8 +45,16 @@ dsv_inputs = {
'help_text': _('The field to extract from the secret'),
'type': 'string',
},
{
'id': 'secret_decoding',
'label': _('Should the secret be base64 decoded?'),
'help_text': _('Specify whether the secret should be base64 decoded, typically used for storing files, such as SSH keys'),
'choices': ['No Decoding', 'Decode Base64'],
'type': 'string',
'default': 'No Decoding',
},
],
'required': ['tenant', 'client_id', 'client_secret', 'path', 'secret_field'],
'required': ['tenant', 'client_id', 'client_secret', 'path', 'secret_field', 'secret_decoding'],
}
if settings.DEBUG:
@@ -55,12 +63,32 @@ if settings.DEBUG:
'id': 'url_template',
'label': _('URL template'),
'type': 'string',
'default': 'https://{}.secretsvaultcloud.{}/v1',
'default': 'https://{}.secretsvaultcloud.{}',
}
)
dsv_plugin = CredentialPlugin(
'Thycotic DevOps Secrets Vault',
dsv_inputs,
lambda **kwargs: SecretsVault(**{k: v for (k, v) in kwargs.items() if k in [field['id'] for field in dsv_inputs['fields']]}).get_secret(kwargs['path'])['data'][kwargs['secret_field']], # fmt: skip
)
def dsv_backend(**kwargs):
tenant_name = kwargs['tenant']
tenant_tld = kwargs.get('tld', 'com')
tenant_url_template = kwargs.get('url_template', 'https://{}.secretsvaultcloud.{}')
client_id = kwargs['client_id']
client_secret = kwargs['client_secret']
secret_path = kwargs['path']
secret_field = kwargs['secret_field']
# providing a default value to remain backward compatible for secrets that have not specified this option
secret_decoding = kwargs.get('secret_decoding', 'No Decoding')
tenant_url = tenant_url_template.format(tenant_name, tenant_tld.strip("."))
authorizer = PasswordGrantAuthorizer(tenant_url, client_id, client_secret)
dsv_secret = SecretsVault(tenant_url, authorizer).get_secret(secret_path)
# files can be uploaded base64 decoded to DSV and thus decoding it only, when asked for
if secret_decoding == 'Decode Base64':
return b64decode(dsv_secret['data'][secret_field]).decode()
return dsv_secret['data'][secret_field]
dsv_plugin = CredentialPlugin(name='Thycotic DevOps Secrets Vault', inputs=dsv_inputs, backend=dsv_backend)
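A hypothetical invocation of the dsv_backend defined above, with placeholder values, showing the new opt-in base64 decoding; omitting secret_decoding (or passing 'No Decoding') keeps the old behavior:

# All field ids come from dsv_inputs above; the values here are placeholders.
secret = dsv_backend(
    tenant='ex',
    tld='com',
    client_id='my-client-id',
    client_secret='my-client-secret',
    path='secrets/ssh-key',
    secret_field='private_key',
    secret_decoding='Decode Base64',
)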

View File

@@ -37,8 +37,11 @@ class Control(object):
def running(self, *args, **kwargs):
return self.control_with_reply('running', *args, **kwargs)
def cancel(self, task_ids, *args, **kwargs):
return self.control_with_reply('cancel', *args, extra_data={'task_ids': task_ids}, **kwargs)
def cancel(self, task_ids, with_reply=True):
if with_reply:
return self.control_with_reply('cancel', extra_data={'task_ids': task_ids})
else:
self.control({'control': 'cancel', 'task_ids': task_ids, 'reply_to': None}, extra_data={'task_ids': task_ids})
def schedule(self, *args, **kwargs):
return self.control_with_reply('schedule', *args, **kwargs)
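The task manager side (in the UnifiedJob cancel hunk later in this diff) calls the new signature roughly like the sketch below; hostname and task_id are placeholders, and the fire-and-forget form is what makes this safe to call inside an open transaction:

from awx.main.dispatch.control import Control

# with_reply=False skips control_with_reply, so no reply is awaited and the
# NOTIFY simply rides along as part of the surrounding transaction.
Control('dispatcher', hostname).cancel([task_id], with_reply=False)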

View File

@@ -89,8 +89,9 @@ class AWXConsumerBase(object):
if task_ids and not msg:
logger.info(f'Could not locate running tasks to cancel with ids={task_ids}')
with pg_bus_conn() as conn:
conn.notify(reply_queue, json.dumps(msg))
if reply_queue is not None:
with pg_bus_conn() as conn:
conn.notify(reply_queue, json.dumps(msg))
elif control == 'reload':
for worker in self.pool.workers:
worker.quit()

View File

@@ -9,6 +9,7 @@ from django.conf import settings
# AWX
import awx.main.fields
from awx.main.models import Host
from ._sqlite_helper import dbawaremigrations
def replaces():
@@ -131,9 +132,11 @@ class Migration(migrations.Migration):
help_text='If enabled, Tower will act as an Ansible Fact Cache Plugin; persisting facts at the end of a playbook run to the database and caching facts for use by Ansible.',
),
),
migrations.RunSQL(
dbawaremigrations.RunSQL(
sql="CREATE INDEX host_ansible_facts_default_gin ON {} USING gin(ansible_facts jsonb_path_ops);".format(Host._meta.db_table),
reverse_sql='DROP INDEX host_ansible_facts_default_gin;',
sqlite_sql=dbawaremigrations.RunSQL.noop,
sqlite_reverse_sql=dbawaremigrations.RunSQL.noop,
),
# SCM file-based inventories
migrations.AddField(

View File

@@ -3,24 +3,27 @@ from __future__ import unicode_literals
from django.db import migrations
from ._sqlite_helper import dbawaremigrations
tables_to_drop = [
'celery_taskmeta',
'celery_tasksetmeta',
'djcelery_crontabschedule',
'djcelery_intervalschedule',
'djcelery_periodictask',
'djcelery_periodictasks',
'djcelery_taskstate',
'djcelery_workerstate',
'djkombu_message',
'djkombu_queue',
]
postgres_sql = ([("DROP TABLE IF EXISTS {} CASCADE;".format(table))] for table in tables_to_drop)
sqlite_sql = ([("DROP TABLE IF EXISTS {};".format(table))] for table in tables_to_drop)
class Migration(migrations.Migration):
dependencies = [
('main', '0049_v330_validate_instance_capacity_adjustment'),
]
operations = [
migrations.RunSQL([("DROP TABLE IF EXISTS {} CASCADE;".format(table))])
for table in (
'celery_taskmeta',
'celery_tasksetmeta',
'djcelery_crontabschedule',
'djcelery_intervalschedule',
'djcelery_periodictask',
'djcelery_periodictasks',
'djcelery_taskstate',
'djcelery_workerstate',
'djkombu_message',
'djkombu_queue',
)
]
operations = [dbawaremigrations.RunSQL(p, sqlite_sql=s) for p, s in zip(postgres_sql, sqlite_sql)]

View File

@@ -2,6 +2,8 @@
from django.db import migrations, models, connection
from ._sqlite_helper import dbawaremigrations
def migrate_event_data(apps, schema_editor):
# see: https://github.com/ansible/awx/issues/6010
@@ -24,6 +26,11 @@ def migrate_event_data(apps, schema_editor):
cursor.execute(f'ALTER TABLE {tblname} ALTER COLUMN id TYPE bigint USING id::bigint;')
def migrate_event_data_sqlite(apps, schema_editor):
# TODO: cmeyers fill this in
return
class FakeAlterField(migrations.AlterField):
def database_forwards(self, *args):
# this is intentionally left blank, because we're
@@ -37,7 +44,7 @@ class Migration(migrations.Migration):
]
operations = [
migrations.RunPython(migrate_event_data),
dbawaremigrations.RunPython(migrate_event_data, sqlite_code=migrate_event_data_sqlite),
FakeAlterField(
model_name='adhoccommandevent',
name='id',

View File

@@ -1,5 +1,7 @@
from django.db import migrations, models, connection
from ._sqlite_helper import dbawaremigrations
def migrate_event_data(apps, schema_editor):
# see: https://github.com/ansible/awx/issues/9039
@@ -59,6 +61,10 @@ def migrate_event_data(apps, schema_editor):
cursor.execute('DROP INDEX IF EXISTS main_jobevent_job_id_idx')
def migrate_event_data_sqlite(apps, schema_editor):
return None
class FakeAddField(migrations.AddField):
def database_forwards(self, *args):
# this is intentionally left blank, because we're
@@ -72,7 +78,7 @@ class Migration(migrations.Migration):
]
operations = [
migrations.RunPython(migrate_event_data),
dbawaremigrations.RunPython(migrate_event_data, sqlite_code=migrate_event_data_sqlite),
FakeAddField(
model_name='jobevent',
name='job_created',

View File

@@ -3,6 +3,8 @@
import awx.main.models.notifications
from django.db import migrations, models
from ._sqlite_helper import dbawaremigrations
class Migration(migrations.Migration):
dependencies = [
@@ -104,11 +106,12 @@ class Migration(migrations.Migration):
name='deleted_actor',
field=models.JSONField(null=True),
),
migrations.RunSQL(
dbawaremigrations.RunSQL(
"""
ALTER TABLE main_activitystream RENAME setting TO setting_old;
ALTER TABLE main_activitystream ALTER COLUMN setting_old DROP NOT NULL;
""",
sqlite_sql="ALTER TABLE main_activitystream RENAME setting TO setting_old",
state_operations=[
migrations.RemoveField(
model_name='activitystream',
@@ -121,11 +124,12 @@ class Migration(migrations.Migration):
name='setting',
field=models.JSONField(blank=True, default=dict),
),
migrations.RunSQL(
dbawaremigrations.RunSQL(
"""
ALTER TABLE main_job RENAME survey_passwords TO survey_passwords_old;
ALTER TABLE main_job ALTER COLUMN survey_passwords_old DROP NOT NULL;
""",
sqlite_sql="ALTER TABLE main_job RENAME survey_passwords TO survey_passwords_old",
state_operations=[
migrations.RemoveField(
model_name='job',
@@ -138,11 +142,12 @@ class Migration(migrations.Migration):
name='survey_passwords',
field=models.JSONField(blank=True, default=dict, editable=False),
),
migrations.RunSQL(
dbawaremigrations.RunSQL(
"""
ALTER TABLE main_joblaunchconfig RENAME char_prompts TO char_prompts_old;
ALTER TABLE main_joblaunchconfig ALTER COLUMN char_prompts_old DROP NOT NULL;
""",
sqlite_sql="ALTER TABLE main_joblaunchconfig RENAME char_prompts TO char_prompts_old",
state_operations=[
migrations.RemoveField(
model_name='joblaunchconfig',
@@ -155,11 +160,12 @@ class Migration(migrations.Migration):
name='char_prompts',
field=models.JSONField(blank=True, default=dict),
),
migrations.RunSQL(
dbawaremigrations.RunSQL(
"""
ALTER TABLE main_joblaunchconfig RENAME survey_passwords TO survey_passwords_old;
ALTER TABLE main_joblaunchconfig ALTER COLUMN survey_passwords_old DROP NOT NULL;
""",
sqlite_sql="ALTER TABLE main_joblaunchconfig RENAME survey_passwords TO survey_passwords_old;",
state_operations=[
migrations.RemoveField(
model_name='joblaunchconfig',
@@ -172,11 +178,12 @@ class Migration(migrations.Migration):
name='survey_passwords',
field=models.JSONField(blank=True, default=dict, editable=False),
),
migrations.RunSQL(
dbawaremigrations.RunSQL(
"""
ALTER TABLE main_notification RENAME body TO body_old;
ALTER TABLE main_notification ALTER COLUMN body_old DROP NOT NULL;
""",
sqlite_sql="ALTER TABLE main_notification RENAME body TO body_old",
state_operations=[
migrations.RemoveField(
model_name='notification',
@@ -189,11 +196,12 @@ class Migration(migrations.Migration):
name='body',
field=models.JSONField(blank=True, default=dict),
),
migrations.RunSQL(
dbawaremigrations.RunSQL(
"""
ALTER TABLE main_unifiedjob RENAME job_env TO job_env_old;
ALTER TABLE main_unifiedjob ALTER COLUMN job_env_old DROP NOT NULL;
""",
sqlite_sql="ALTER TABLE main_unifiedjob RENAME job_env TO job_env_old",
state_operations=[
migrations.RemoveField(
model_name='unifiedjob',
@@ -206,11 +214,12 @@ class Migration(migrations.Migration):
name='job_env',
field=models.JSONField(blank=True, default=dict, editable=False),
),
migrations.RunSQL(
dbawaremigrations.RunSQL(
"""
ALTER TABLE main_workflowjob RENAME char_prompts TO char_prompts_old;
ALTER TABLE main_workflowjob ALTER COLUMN char_prompts_old DROP NOT NULL;
""",
sqlite_sql="ALTER TABLE main_workflowjob RENAME char_prompts TO char_prompts_old",
state_operations=[
migrations.RemoveField(
model_name='workflowjob',
@@ -223,11 +232,12 @@ class Migration(migrations.Migration):
name='char_prompts',
field=models.JSONField(blank=True, default=dict),
),
migrations.RunSQL(
dbawaremigrations.RunSQL(
"""
ALTER TABLE main_workflowjob RENAME survey_passwords TO survey_passwords_old;
ALTER TABLE main_workflowjob ALTER COLUMN survey_passwords_old DROP NOT NULL;
""",
sqlite_sql="ALTER TABLE main_workflowjob RENAME survey_passwords TO survey_passwords_old",
state_operations=[
migrations.RemoveField(
model_name='workflowjob',
@@ -240,11 +250,12 @@ class Migration(migrations.Migration):
name='survey_passwords',
field=models.JSONField(blank=True, default=dict, editable=False),
),
migrations.RunSQL(
dbawaremigrations.RunSQL(
"""
ALTER TABLE main_workflowjobnode RENAME char_prompts TO char_prompts_old;
ALTER TABLE main_workflowjobnode ALTER COLUMN char_prompts_old DROP NOT NULL;
""",
sqlite_sql="ALTER TABLE main_workflowjobnode RENAME char_prompts TO char_prompts_old",
state_operations=[
migrations.RemoveField(
model_name='workflowjobnode',
@@ -257,11 +268,12 @@ class Migration(migrations.Migration):
name='char_prompts',
field=models.JSONField(blank=True, default=dict),
),
migrations.RunSQL(
dbawaremigrations.RunSQL(
"""
ALTER TABLE main_workflowjobnode RENAME survey_passwords TO survey_passwords_old;
ALTER TABLE main_workflowjobnode ALTER COLUMN survey_passwords_old DROP NOT NULL;
""",
sqlite_sql="ALTER TABLE main_workflowjobnode RENAME survey_passwords TO survey_passwords_old",
state_operations=[
migrations.RemoveField(
model_name='workflowjobnode',

View File

@@ -3,6 +3,8 @@ from __future__ import unicode_literals
from django.db import migrations
from ._sqlite_helper import dbawaremigrations
def delete_taggit_contenttypes(apps, schema_editor):
ContentType = apps.get_model('contenttypes', 'ContentType')
@@ -20,8 +22,8 @@ class Migration(migrations.Migration):
]
operations = [
migrations.RunSQL("DROP TABLE IF EXISTS taggit_tag CASCADE;"),
migrations.RunSQL("DROP TABLE IF EXISTS taggit_taggeditem CASCADE;"),
dbawaremigrations.RunSQL("DROP TABLE IF EXISTS taggit_tag CASCADE;", sqlite_sql="DROP TABLE IF EXISTS taggit_tag;"),
dbawaremigrations.RunSQL("DROP TABLE IF EXISTS taggit_taggeditem CASCADE;", sqlite_sql="DROP TABLE IF EXISTS taggit_taggeditem;"),
migrations.RunPython(delete_taggit_contenttypes),
migrations.RunPython(delete_taggit_migration_records),
]

View File

@@ -0,0 +1,61 @@
from django.db import migrations
class RunSQL(migrations.operations.special.RunSQL):
"""
Bit of a hack here. Django actually wants this decision made in the router
and we can pass **hints.
"""
def __init__(self, *args, **kwargs):
if 'sqlite_sql' not in kwargs:
raise ValueError("sqlite_sql parameter required")
sqlite_sql = kwargs.pop('sqlite_sql')
self.sqlite_sql = sqlite_sql
self.sqlite_reverse_sql = kwargs.pop('sqlite_reverse_sql', None)
super().__init__(*args, **kwargs)
def database_forwards(self, app_label, schema_editor, from_state, to_state):
if not schema_editor.connection.vendor.startswith('postgres'):
self.sql = self.sqlite_sql or migrations.RunSQL.noop
super().database_forwards(app_label, schema_editor, from_state, to_state)
def database_backwards(self, app_label, schema_editor, from_state, to_state):
if not schema_editor.connection.vendor.startswith('postgres'):
self.reverse_sql = self.sqlite_reverse_sql or migrations.RunSQL.noop
super().database_backwards(app_label, schema_editor, from_state, to_state)
class RunPython(migrations.operations.special.RunPython):
"""
Bit of a hack here. Django actually wants this decision made in the router
and we can pass **hints.
"""
def __init__(self, *args, **kwargs):
if 'sqlite_code' not in kwargs:
raise ValueError("sqlite_code parameter required")
sqlite_code = kwargs.pop('sqlite_code')
self.sqlite_code = sqlite_code
self.sqlite_reverse_code = kwargs.pop('sqlite_reverse_code', None)
super().__init__(*args, **kwargs)
def database_forwards(self, app_label, schema_editor, from_state, to_state):
if not schema_editor.connection.vendor.startswith('postgres'):
self.code = self.sqlite_code or migrations.RunPython.noop
super().database_forwards(app_label, schema_editor, from_state, to_state)
def database_backwards(self, app_label, schema_editor, from_state, to_state):
if not schema_editor.connection.vendor.startswith('postgres'):
self.reverse_code = self.sqlite_reverse_code or migrations.RunPython.noop
super().database_backwards(app_label, schema_editor, from_state, to_state)
class _sqlitemigrations:
RunPython = RunPython
RunSQL = RunSQL
dbawaremigrations = _sqlitemigrations()
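A minimal sketch of how a migration module uses this helper, matching the pattern in the migration hunks above (the dependency and table name are placeholders): supply the Postgres SQL as usual plus an sqlite_sql fallback, or a noop when there is no sqlite equivalent.

from django.db import migrations

from ._sqlite_helper import dbawaremigrations


class Migration(migrations.Migration):
    dependencies = [('main', '0001_initial')]  # placeholder dependency

    operations = [
        dbawaremigrations.RunSQL(
            "DROP TABLE IF EXISTS some_table CASCADE;",    # runs on postgres
            sqlite_sql="DROP TABLE IF EXISTS some_table;",  # runs on sqlite
        ),
    ]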

View File

@@ -57,7 +57,6 @@ from awx.main.models.ha import ( # noqa
from awx.main.models.rbac import ( # noqa
Role,
batch_role_ancestor_rebuilding,
get_roles_on_resource,
role_summary_fields_generator,
ROLE_SINGLETON_SYSTEM_ADMINISTRATOR,
ROLE_SINGLETON_SYSTEM_AUDITOR,
@@ -91,13 +90,12 @@ from oauth2_provider.models import Grant, RefreshToken # noqa -- needed django-
# Add custom methods to User model for permissions checks.
from django.contrib.auth.models import User # noqa
from awx.main.access import get_user_queryset, check_user_access, check_user_access_with_errors, user_accessible_objects # noqa
from awx.main.access import get_user_queryset, check_user_access, check_user_access_with_errors # noqa
User.add_to_class('get_queryset', get_user_queryset)
User.add_to_class('can_access', check_user_access)
User.add_to_class('can_access_with_errors', check_user_access_with_errors)
User.add_to_class('accessible_objects', user_accessible_objects)
def convert_jsonfields():

View File

@@ -19,7 +19,7 @@ from django.utils.translation import gettext_lazy as _
# AWX
from awx.main.models.base import prevent_search
from awx.main.models.rbac import Role, RoleAncestorEntry, get_roles_on_resource
from awx.main.models.rbac import Role, RoleAncestorEntry
from awx.main.utils import parse_yaml_or_json, get_custom_venv_choices, get_licenser, polymorphic
from awx.main.utils.execution_environments import get_default_execution_environment
from awx.main.utils.encryption import decrypt_value, get_encryption_key, is_encrypted
@@ -54,10 +54,7 @@ class ResourceMixin(models.Model):
Use instead of `MyModel.objects` when you want to only consider
resources that a user has specific permissions for. For example:
MyModel.accessible_objects(user, 'read_role').filter(name__istartswith='bar');
NOTE: This should only be used for list type things. If you have a
specific resource you want to check permissions on, it is more
performant to resolve the resource in question then call
`myresource.get_permissions(user)`.
NOTE: This should only be used for list type things.
"""
return ResourceMixin._accessible_objects(cls, accessor, role_field)
@@ -86,15 +83,6 @@ class ResourceMixin(models.Model):
def _accessible_objects(cls, accessor, role_field):
return cls.objects.filter(pk__in=ResourceMixin._accessible_pk_qs(cls, accessor, role_field))
def get_permissions(self, accessor):
"""
Returns a string list of the roles a accessor has for a given resource.
An accessor can be either a User, Role, or an arbitrary resource that
contains one or more Roles associated with it.
"""
return get_roles_on_resource(self, accessor)
class SurveyJobTemplateMixin(models.Model):
class Meta:

View File

@@ -1439,6 +1439,11 @@ class UnifiedJob(
if not self.celery_task_id:
return
canceled = []
if not connection.get_autocommit():
# this condition is purpose-written for the task manager, when it cancels jobs in workflows
ControlDispatcher('dispatcher', self.controller_node).cancel([self.celery_task_id], with_reply=False)
return True # task manager itself needs to act under assumption that cancel was received
try:
# Use control and reply mechanism to cancel and obtain confirmation
timeout = 5

View File

@@ -270,6 +270,9 @@ class WorkflowManager(TaskBase):
job.status = 'failed'
job.save(update_fields=['status', 'job_explanation'])
job.websocket_emit_status('failed')
# NOTE: sending notification templates here is slightly worse performance
# this is not yet optimized in the same way as for the TaskManager
job.send_notification_templates('failed')
ScheduleWorkflowManager().schedule()
# TODO: should we emit a status on the socket here similar to tasks.py awx_periodic_scheduler() ?
@@ -430,6 +433,25 @@ class TaskManager(TaskBase):
self.tm_models = TaskManagerModels()
self.controlplane_ig = self.tm_models.instance_groups.controlplane_ig
def process_job_dep_failures(self, task):
"""If job depends on a job that has failed, mark as failed and handle misc stuff."""
for dep in task.dependent_jobs.all():
# if we detect a failed or error dependency, go ahead and fail this task.
if dep.status in ("error", "failed"):
task.status = 'failed'
logger.warning(f'Previous task failed task: {task.id} dep: {dep.id} task manager')
task.job_explanation = 'Previous Task Failed: {"job_type": "%s", "job_name": "%s", "job_id": "%s"}' % (
get_type_for_model(type(dep)),
dep.name,
dep.id,
)
task.save(update_fields=['status', 'job_explanation'])
task.websocket_emit_status('failed')
self.pre_start_failed.append(task.id)
return True
return False
def job_blocked_by(self, task):
# TODO: I'm not happy with this, I think blocking behavior should be decided outside of the dependency graph
# in the old task manager this was handled as a method on each task object outside of the graph and
@@ -441,20 +463,6 @@ class TaskManager(TaskBase):
for dep in task.dependent_jobs.all():
if dep.status in ACTIVE_STATES:
return dep
# if we detect a failed or error dependency, go ahead and fail this
# task. The errback on the dependency takes some time to trigger,
# and we don't want the task to enter running state if its
# dependency has failed or errored.
elif dep.status in ("error", "failed"):
task.status = 'failed'
task.job_explanation = 'Previous Task Failed: {"job_type": "%s", "job_name": "%s", "job_id": "%s"}' % (
get_type_for_model(type(dep)),
dep.name,
dep.id,
)
task.save(update_fields=['status', 'job_explanation'])
task.websocket_emit_status('failed')
return dep
return None
@@ -474,7 +482,6 @@ class TaskManager(TaskBase):
if self.start_task_limit == 0:
# schedule another run immediately after this task manager
ScheduleTaskManager().schedule()
from awx.main.tasks.system import handle_work_error, handle_work_success
task.status = 'waiting'
@@ -485,7 +492,7 @@ class TaskManager(TaskBase):
task.job_explanation += ' '
task.job_explanation += 'Task failed pre-start check.'
task.save()
# TODO: run error handler to fail sub-tasks and send notifications
self.pre_start_failed.append(task.id)
else:
if type(task) is WorkflowJob:
task.status = 'running'
@@ -507,19 +514,16 @@ class TaskManager(TaskBase):
# apply_async does a NOTIFY to the channel dispatcher is listening to
# postgres will treat this as part of the transaction, which is what we want
if task.status != 'failed' and type(task) is not WorkflowJob:
task_actual = {'type': get_type_for_model(type(task)), 'id': task.id}
task_cls = task._get_task_class()
task_cls.apply_async(
[task.pk],
opts,
queue=task.get_queue_name(),
uuid=task.celery_task_id,
callbacks=[{'task': handle_work_success.name, 'kwargs': {'task_actual': task_actual}}],
errbacks=[{'task': handle_work_error.name, 'kwargs': {'task_actual': task_actual}}],
)
# In exception cases, like a job failing pre-start checks, we send the websocket status message
# for jobs going into waiting, we omit this because of performance issues, as it should go to running quickly
# In exception cases, like a job failing pre-start checks, we send the websocket status message.
# For jobs going into waiting, we omit this because of performance issues, as it should go to running quickly
if task.status != 'waiting':
task.websocket_emit_status(task.status) # adds to on_commit
@@ -540,6 +544,11 @@ class TaskManager(TaskBase):
if self.timed_out():
logger.warning("Task manager has reached time out while processing pending jobs, exiting loop early")
break
has_failed = self.process_job_dep_failures(task)
if has_failed:
continue
blocked_by = self.job_blocked_by(task)
if blocked_by:
self.subsystem_metrics.inc(f"{self.prefix}_tasks_blocked", 1)
@@ -653,6 +662,11 @@ class TaskManager(TaskBase):
reap_job(j, 'failed')
def process_tasks(self):
# maintain a list of jobs that went to an early failure state,
# meaning the dispatcher never got these jobs,
# that means we have to handle notifications for those
self.pre_start_failed = []
running_tasks = [t for t in self.all_tasks if t.status in ['waiting', 'running']]
self.process_running_tasks(running_tasks)
self.subsystem_metrics.inc(f"{self.prefix}_running_processed", len(running_tasks))
@@ -662,6 +676,11 @@ class TaskManager(TaskBase):
self.process_pending_tasks(pending_tasks)
self.subsystem_metrics.inc(f"{self.prefix}_pending_processed", len(pending_tasks))
if self.pre_start_failed:
from awx.main.tasks.system import handle_failure_notifications
handle_failure_notifications.delay(self.pre_start_failed)
def timeout_approval_node(self, task):
if self.timed_out():
logger.warning("Task manager has reached time out while processing approval nodes, exiting loop early")

View File

@@ -74,6 +74,8 @@ from awx.main.utils.common import (
extract_ansible_vars,
get_awx_version,
create_partition,
ScheduleWorkflowManager,
ScheduleTaskManager,
)
from awx.conf.license import get_license
from awx.main.utils.handlers import SpecialInventoryHandler
@@ -450,6 +452,12 @@ class BaseTask(object):
instance.ansible_version = ansible_version_info
instance.save(update_fields=['ansible_version'])
# Run task manager appropriately for speculative dependencies
if instance.unifiedjob_blocked_jobs.exists():
ScheduleTaskManager().schedule()
if instance.spawned_by_workflow:
ScheduleWorkflowManager().schedule()
def should_use_fact_cache(self):
return False

View File

@@ -16,7 +16,9 @@ class SignalExit(Exception):
class SignalState:
def reset(self):
self.sigterm_flag = False
self.is_active = False
self.sigint_flag = False
self.is_active = False # for nested context managers
self.original_sigterm = None
self.original_sigint = None
self.raise_exception = False
@@ -24,23 +26,36 @@ class SignalState:
def __init__(self):
self.reset()
def set_flag(self, *args):
"""Method to pass into the python signal.signal method to receive signals"""
self.sigterm_flag = True
def raise_if_needed(self):
if self.raise_exception:
self.raise_exception = False # so it is not raised a second time in error handling
raise SignalExit()
def set_sigterm_flag(self, *args):
self.sigterm_flag = True
self.raise_if_needed()
def set_sigint_flag(self, *args):
self.sigint_flag = True
self.raise_if_needed()
def connect_signals(self):
self.original_sigterm = signal.getsignal(signal.SIGTERM)
self.original_sigint = signal.getsignal(signal.SIGINT)
signal.signal(signal.SIGTERM, self.set_flag)
signal.signal(signal.SIGINT, self.set_flag)
signal.signal(signal.SIGTERM, self.set_sigterm_flag)
signal.signal(signal.SIGINT, self.set_sigint_flag)
self.is_active = True
def restore_signals(self):
signal.signal(signal.SIGTERM, self.original_sigterm)
signal.signal(signal.SIGINT, self.original_sigint)
# if we got a signal while context manager was active, call parent methods.
if self.sigterm_flag:
if callable(self.original_sigterm):
self.original_sigterm()
if self.sigint_flag:
if callable(self.original_sigint):
self.original_sigint()
self.reset()
@@ -48,7 +63,7 @@ signal_state = SignalState()
def signal_callback():
return signal_state.sigterm_flag
return bool(signal_state.sigterm_flag or signal_state.sigint_flag)
def with_signal_handling(f):

View File

@@ -53,13 +53,7 @@ from awx.main.models import (
from awx.main.constants import ACTIVE_STATES
from awx.main.dispatch.publish import task
from awx.main.dispatch import get_task_queuename, reaper
from awx.main.utils.common import (
get_type_for_model,
ignore_inventory_computed_fields,
ignore_inventory_group_removal,
ScheduleWorkflowManager,
ScheduleTaskManager,
)
from awx.main.utils.common import ignore_inventory_computed_fields, ignore_inventory_group_removal
from awx.main.utils.reload import stop_local_services
from awx.main.utils.pglock import advisory_lock
@@ -765,63 +759,19 @@ def awx_periodic_scheduler():
emit_channel_notification('schedules-changed', dict(id=schedule.id, group_name="schedules"))
def schedule_manager_success_or_error(instance):
if instance.unifiedjob_blocked_jobs.exists():
ScheduleTaskManager().schedule()
if instance.spawned_by_workflow:
ScheduleWorkflowManager().schedule()
@task(queue=get_task_queuename)
def handle_work_success(task_actual):
try:
instance = UnifiedJob.get_instance_by_type(task_actual['type'], task_actual['id'])
except ObjectDoesNotExist:
logger.warning('Missing {} `{}` in success callback.'.format(task_actual['type'], task_actual['id']))
return
if not instance:
return
schedule_manager_success_or_error(instance)
@task(queue=get_task_queuename)
def handle_work_error(task_actual):
try:
instance = UnifiedJob.get_instance_by_type(task_actual['type'], task_actual['id'])
except ObjectDoesNotExist:
logger.warning('Missing {} `{}` in error callback.'.format(task_actual['type'], task_actual['id']))
return
if not instance:
return
subtasks = instance.get_jobs_fail_chain() # reverse of dependent_jobs mostly
logger.debug(f'Executing error task id {task_actual["id"]}, subtasks: {[subtask.id for subtask in subtasks]}')
deps_of_deps = {}
for subtask in subtasks:
if subtask.celery_task_id != instance.celery_task_id and not subtask.cancel_flag and not subtask.status in ('successful', 'failed'):
# If there are multiple in the dependency chain, A->B->C, and this was called for A, blame B for clarity
blame_job = deps_of_deps.get(subtask.id, instance)
subtask.status = 'failed'
subtask.failed = True
if not subtask.job_explanation:
subtask.job_explanation = 'Previous Task Failed: {"job_type": "%s", "job_name": "%s", "job_id": "%s"}' % (
get_type_for_model(type(blame_job)),
blame_job.name,
blame_job.id,
)
subtask.save()
subtask.websocket_emit_status("failed")
for sub_subtask in subtask.get_jobs_fail_chain():
deps_of_deps[sub_subtask.id] = subtask
# We only send 1 job complete message since all the job completion message
# handling does is trigger the scheduler. If we extend the functionality of
# what the job complete message handler does then we may want to send a
# completion event for each job here.
schedule_manager_success_or_error(instance)
def handle_failure_notifications(task_ids):
"""A task-ified version of the method that sends notifications."""
found_task_ids = set()
for instance in UnifiedJob.objects.filter(id__in=task_ids):
found_task_ids.add(instance.id)
try:
instance.send_notification_templates('failed')
except Exception:
logger.exception(f'Error preparing notifications for task {instance.id}')
deleted_tasks = set(task_ids) - found_task_ids
if deleted_tasks:
logger.warning(f'Could not send notifications for {deleted_tasks} because they were not found in the database')
@task(queue=get_task_queuename)

View File

@@ -0,0 +1,44 @@
import pytest
from django_test_migrations.plan import all_migrations, nodes_to_tuples
"""
Most tests that live in here can probably be deleted at some point. They are mainly
for a developer. When AWX versions that users upgrade from fall out of support, that
is when migration tests can be deleted. This is also a good time to squash. Squashing
will likely mess with the tests that live here.
The smoke test should be kept in here. The smoke test ensures that our migrations
continue to work when sqlite is the backing database (vs. the default DB of postgres).
"""
@pytest.mark.django_db
class TestMigrationSmoke:
def test_happy_path(self, migrator):
"""
This smoke test runs all the migrations.
Example of how to use django-test-migration to invoke particular migration(s)
while weaving in object creation and assertions.
Note that this is more than just an example. It is a smoke test because it runs ALL
the migrations. Our "normal" unit tests subvert the migrations running because it is slow.
"""
migration_nodes = all_migrations('default')
migration_tuples = nodes_to_tuples(migration_nodes)
final_migration = migration_tuples[-1]
migrator.apply_initial_migration(('main', None))
# I just picked a newish migration at the time of writing this.
# If someone from the future finds themselves here because they are squashing migrations
# it is fine to change the 0180_... below to some other newish migration
intermediate_state = migrator.apply_tested_migration(('main', '0180_add_hostmetric_fields'))
Instance = intermediate_state.apps.get_model('main', 'Instance')
# Create any old object in the database
Instance.objects.create(hostname='foobar', node_type='control')
final_state = migrator.apply_tested_migration(final_migration)
Instance = final_state.apps.get_model('main', 'Instance')
assert Instance.objects.filter(hostname='foobar').count() == 1

View File

@@ -122,25 +122,6 @@ def test_team_org_resource_role(ext_auth, organization, rando, org_admin, team):
] == [True for i in range(2)]
@pytest.mark.django_db
def test_user_accessible_objects(user, organization):
"""
We cannot directly use accessible_objects for User model because
both editing and read permissions are obligated to complex business logic
"""
admin = user('admin', False)
u = user('john', False)
access = UserAccess(admin)
assert access.get_queryset().count() == 1 # can only see himself
organization.member_role.members.add(u)
organization.member_role.members.add(admin)
assert access.get_queryset().count() == 2
organization.member_role.members.remove(u)
assert access.get_queryset().count() == 1
@pytest.mark.django_db
def test_org_admin_create_sys_auditor(org_admin):
access = UserAccess(org_admin)

View File

@@ -5,8 +5,8 @@ import tempfile
import shutil
from awx.main.tasks.jobs import RunJob
from awx.main.tasks.system import execution_node_health_check, _cleanup_images_and_files, handle_work_error
from awx.main.models import Instance, Job, InventoryUpdate, ProjectUpdate
from awx.main.tasks.system import execution_node_health_check, _cleanup_images_and_files
from awx.main.models import Instance, Job
@pytest.fixture
@@ -73,17 +73,3 @@ def test_does_not_run_reaped_job(mocker, mock_me):
job.refresh_from_db()
assert job.status == 'failed'
mock_run.assert_not_called()
@pytest.mark.django_db
def test_handle_work_error_nested(project, inventory_source):
pu = ProjectUpdate.objects.create(status='failed', project=project, celery_task_id='1234')
iu = InventoryUpdate.objects.create(status='pending', inventory_source=inventory_source, source='scm')
job = Job.objects.create(status='pending')
iu.dependent_jobs.add(pu)
job.dependent_jobs.add(pu, iu)
handle_work_error({'type': 'project_update', 'id': pu.id})
iu.refresh_from_db()
job.refresh_from_db()
assert iu.job_explanation == f'Previous Task Failed: {{"job_type": "project_update", "job_name": "", "job_id": "{pu.id}"}}'
assert job.job_explanation == f'Previous Task Failed: {{"job_type": "inventory_update", "job_name": "", "job_id": "{iu.id}"}}'

View File

@@ -47,7 +47,7 @@ data_loggly = {
'\n'.join(
[
'template(name="awx" type="string" string="%rawmsg-after-pri%")\nmodule(load="omhttp")',
'action(type="omhttp" server="logs-01.loggly.com" serverport="80" usehttps="off" allowunsignedcerts="off" skipverifyhost="off" action.resumeRetryCount="-1" template="awx" action.resumeInterval="5" queue.spoolDirectory="/var/lib/awx" queue.filename="awx-external-logger-action-queue" queue.maxdiskspace="1g" queue.type="LinkedList" queue.saveOnShutdown="on" errorfile="/var/log/tower/rsyslog.err" restpath="inputs/1fd38090-2af1-4e1e-8d80-492899da0f71/tag/http/")', # noqa
'action(type="omhttp" server="logs-01.loggly.com" serverport="80" usehttps="off" allowunsignedcerts="off" skipverifyhost="off" action.resumeRetryCount="-1" template="awx" action.resumeInterval="5" queue.spoolDirectory="/var/lib/awx" queue.filename="awx-external-logger-action-queue" queue.maxDiskSpace="1g" queue.maxFileSize="100m" queue.type="LinkedList" queue.saveOnShutdown="on" queue.syncqueuefiles="on" queue.checkpointInterval="1000" queue.size="131072" queue.highwaterMark="98304" queue.discardMark="117964" queue.discardSeverity="5" errorfile="/var/log/tower/rsyslog.err" restpath="inputs/1fd38090-2af1-4e1e-8d80-492899da0f71/tag/http/")', # noqa
]
),
),
@@ -61,7 +61,7 @@ data_loggly = {
'\n'.join(
[
'template(name="awx" type="string" string="%rawmsg-after-pri%")',
'action(type="omfwd" target="localhost" port="9000" protocol="udp" action.resumeRetryCount="-1" action.resumeInterval="5" template="awx")', # noqa
'action(type="omfwd" target="localhost" port="9000" protocol="udp" action.resumeRetryCount="-1" action.resumeInterval="5" template="awx" queue.spoolDirectory="/var/lib/awx" queue.filename="awx-external-logger-action-queue" queue.maxDiskSpace="1g" queue.maxFileSize="100m" queue.type="LinkedList" queue.saveOnShutdown="on" queue.syncqueuefiles="on" queue.checkpointInterval="1000" queue.size="131072" queue.highwaterMark="98304" queue.discardMark="117964" queue.discardSeverity="5")', # noqa
]
),
),
@@ -75,7 +75,7 @@ data_loggly = {
'\n'.join(
[
'template(name="awx" type="string" string="%rawmsg-after-pri%")',
'action(type="omfwd" target="localhost" port="9000" protocol="tcp" action.resumeRetryCount="-1" action.resumeInterval="5" template="awx")', # noqa
'action(type="omfwd" target="localhost" port="9000" protocol="tcp" action.resumeRetryCount="-1" action.resumeInterval="5" template="awx" queue.spoolDirectory="/var/lib/awx" queue.filename="awx-external-logger-action-queue" queue.maxDiskSpace="1g" queue.maxFileSize="100m" queue.type="LinkedList" queue.saveOnShutdown="on" queue.syncqueuefiles="on" queue.checkpointInterval="1000" queue.size="131072" queue.highwaterMark="98304" queue.discardMark="117964" queue.discardSeverity="5")', # noqa
]
),
),
@@ -89,7 +89,7 @@ data_loggly = {
'\n'.join(
[
'template(name="awx" type="string" string="%rawmsg-after-pri%")\nmodule(load="omhttp")',
'action(type="omhttp" server="yoursplunk" serverport="443" usehttps="on" allowunsignedcerts="off" skipverifyhost="off" action.resumeRetryCount="-1" template="awx" action.resumeInterval="5" queue.spoolDirectory="/var/lib/awx" queue.filename="awx-external-logger-action-queue" queue.maxdiskspace="1g" queue.type="LinkedList" queue.saveOnShutdown="on" errorfile="/var/log/tower/rsyslog.err" restpath="services/collector/event")', # noqa
'action(type="omhttp" server="yoursplunk" serverport="443" usehttps="on" allowunsignedcerts="off" skipverifyhost="off" action.resumeRetryCount="-1" template="awx" action.resumeInterval="5" queue.spoolDirectory="/var/lib/awx" queue.filename="awx-external-logger-action-queue" queue.maxDiskSpace="1g" queue.maxFileSize="100m" queue.type="LinkedList" queue.saveOnShutdown="on" queue.syncqueuefiles="on" queue.checkpointInterval="1000" queue.size="131072" queue.highwaterMark="98304" queue.discardMark="117964" queue.discardSeverity="5" errorfile="/var/log/tower/rsyslog.err" restpath="services/collector/event")', # noqa
]
),
),
@@ -103,7 +103,7 @@ data_loggly = {
'\n'.join(
[
'template(name="awx" type="string" string="%rawmsg-after-pri%")\nmodule(load="omhttp")',
'action(type="omhttp" server="yoursplunk" serverport="80" usehttps="off" allowunsignedcerts="off" skipverifyhost="off" action.resumeRetryCount="-1" template="awx" action.resumeInterval="5" queue.spoolDirectory="/var/lib/awx" queue.filename="awx-external-logger-action-queue" queue.maxdiskspace="1g" queue.type="LinkedList" queue.saveOnShutdown="on" errorfile="/var/log/tower/rsyslog.err" restpath="services/collector/event")', # noqa
'action(type="omhttp" server="yoursplunk" serverport="80" usehttps="off" allowunsignedcerts="off" skipverifyhost="off" action.resumeRetryCount="-1" template="awx" action.resumeInterval="5" queue.spoolDirectory="/var/lib/awx" queue.filename="awx-external-logger-action-queue" queue.maxDiskSpace="1g" queue.maxFileSize="100m" queue.type="LinkedList" queue.saveOnShutdown="on" queue.syncqueuefiles="on" queue.checkpointInterval="1000" queue.size="131072" queue.highwaterMark="98304" queue.discardMark="117964" queue.discardSeverity="5" errorfile="/var/log/tower/rsyslog.err" restpath="services/collector/event")', # noqa
]
),
),
@@ -117,7 +117,7 @@ data_loggly = {
'\n'.join(
[
'template(name="awx" type="string" string="%rawmsg-after-pri%")\nmodule(load="omhttp")',
'action(type="omhttp" server="yoursplunk" serverport="8088" usehttps="on" allowunsignedcerts="off" skipverifyhost="off" action.resumeRetryCount="-1" template="awx" action.resumeInterval="5" queue.spoolDirectory="/var/lib/awx" queue.filename="awx-external-logger-action-queue" queue.maxdiskspace="1g" queue.type="LinkedList" queue.saveOnShutdown="on" errorfile="/var/log/tower/rsyslog.err" restpath="services/collector/event")', # noqa
'action(type="omhttp" server="yoursplunk" serverport="8088" usehttps="on" allowunsignedcerts="off" skipverifyhost="off" action.resumeRetryCount="-1" template="awx" action.resumeInterval="5" queue.spoolDirectory="/var/lib/awx" queue.filename="awx-external-logger-action-queue" queue.maxDiskSpace="1g" queue.maxFileSize="100m" queue.type="LinkedList" queue.saveOnShutdown="on" queue.syncqueuefiles="on" queue.checkpointInterval="1000" queue.size="131072" queue.highwaterMark="98304" queue.discardMark="117964" queue.discardSeverity="5" errorfile="/var/log/tower/rsyslog.err" restpath="services/collector/event")', # noqa
]
),
),
@@ -131,7 +131,7 @@ data_loggly = {
'\n'.join(
[
'template(name="awx" type="string" string="%rawmsg-after-pri%")\nmodule(load="omhttp")',
'action(type="omhttp" server="yoursplunk" serverport="8088" usehttps="on" allowunsignedcerts="off" skipverifyhost="off" action.resumeRetryCount="-1" template="awx" action.resumeInterval="5" queue.spoolDirectory="/var/lib/awx" queue.filename="awx-external-logger-action-queue" queue.maxdiskspace="1g" queue.type="LinkedList" queue.saveOnShutdown="on" errorfile="/var/log/tower/rsyslog.err" restpath="services/collector/event")', # noqa
'action(type="omhttp" server="yoursplunk" serverport="8088" usehttps="on" allowunsignedcerts="off" skipverifyhost="off" action.resumeRetryCount="-1" template="awx" action.resumeInterval="5" queue.spoolDirectory="/var/lib/awx" queue.filename="awx-external-logger-action-queue" queue.maxDiskSpace="1g" queue.maxFileSize="100m" queue.type="LinkedList" queue.saveOnShutdown="on" queue.syncqueuefiles="on" queue.checkpointInterval="1000" queue.size="131072" queue.highwaterMark="98304" queue.discardMark="117964" queue.discardSeverity="5" errorfile="/var/log/tower/rsyslog.err" restpath="services/collector/event")', # noqa
]
),
),
@@ -145,7 +145,7 @@ data_loggly = {
'\n'.join(
[
'template(name="awx" type="string" string="%rawmsg-after-pri%")\nmodule(load="omhttp")',
'action(type="omhttp" server="yoursplunk.org" serverport="8088" usehttps="on" allowunsignedcerts="off" skipverifyhost="off" action.resumeRetryCount="-1" template="awx" action.resumeInterval="5" queue.spoolDirectory="/var/lib/awx" queue.filename="awx-external-logger-action-queue" queue.maxdiskspace="1g" queue.type="LinkedList" queue.saveOnShutdown="on" errorfile="/var/log/tower/rsyslog.err" restpath="services/collector/event")', # noqa
'action(type="omhttp" server="yoursplunk.org" serverport="8088" usehttps="on" allowunsignedcerts="off" skipverifyhost="off" action.resumeRetryCount="-1" template="awx" action.resumeInterval="5" queue.spoolDirectory="/var/lib/awx" queue.filename="awx-external-logger-action-queue" queue.maxDiskSpace="1g" queue.maxFileSize="100m" queue.type="LinkedList" queue.saveOnShutdown="on" queue.syncqueuefiles="on" queue.checkpointInterval="1000" queue.size="131072" queue.highwaterMark="98304" queue.discardMark="117964" queue.discardSeverity="5" errorfile="/var/log/tower/rsyslog.err" restpath="services/collector/event")', # noqa
]
),
),
@@ -159,7 +159,7 @@ data_loggly = {
'\n'.join(
[
'template(name="awx" type="string" string="%rawmsg-after-pri%")\nmodule(load="omhttp")',
'action(type="omhttp" server="yoursplunk.org" serverport="8088" usehttps="off" allowunsignedcerts="off" skipverifyhost="off" action.resumeRetryCount="-1" template="awx" action.resumeInterval="5" queue.spoolDirectory="/var/lib/awx" queue.filename="awx-external-logger-action-queue" queue.maxdiskspace="1g" queue.type="LinkedList" queue.saveOnShutdown="on" errorfile="/var/log/tower/rsyslog.err" restpath="services/collector/event")', # noqa
'action(type="omhttp" server="yoursplunk.org" serverport="8088" usehttps="off" allowunsignedcerts="off" skipverifyhost="off" action.resumeRetryCount="-1" template="awx" action.resumeInterval="5" queue.spoolDirectory="/var/lib/awx" queue.filename="awx-external-logger-action-queue" queue.maxDiskSpace="1g" queue.maxFileSize="100m" queue.type="LinkedList" queue.saveOnShutdown="on" queue.syncqueuefiles="on" queue.checkpointInterval="1000" queue.size="131072" queue.highwaterMark="98304" queue.discardMark="117964" queue.discardSeverity="5" errorfile="/var/log/tower/rsyslog.err" restpath="services/collector/event")', # noqa
]
),
),
@@ -173,7 +173,7 @@ data_loggly = {
'\n'.join(
[
'template(name="awx" type="string" string="%rawmsg-after-pri%")\nmodule(load="omhttp")',
'action(type="omhttp" server="endpoint5.collection.us2.sumologic.com" serverport="443" usehttps="on" allowunsignedcerts="off" skipverifyhost="off" action.resumeRetryCount="-1" template="awx" action.resumeInterval="5" queue.spoolDirectory="/var/lib/awx" queue.filename="awx-external-logger-action-queue" queue.maxdiskspace="1g" queue.type="LinkedList" queue.saveOnShutdown="on" errorfile="/var/log/tower/rsyslog.err" restpath="receiver/v1/http/ZaVnC4dhaV0qoiETY0MrM3wwLoDgO1jFgjOxE6-39qokkj3LGtOroZ8wNaN2M6DtgYrJZsmSi4-36_Up5TbbN_8hosYonLKHSSOSKY845LuLZBCBwStrHQ==")', # noqa
'action(type="omhttp" server="endpoint5.collection.us2.sumologic.com" serverport="443" usehttps="on" allowunsignedcerts="off" skipverifyhost="off" action.resumeRetryCount="-1" template="awx" action.resumeInterval="5" queue.spoolDirectory="/var/lib/awx" queue.filename="awx-external-logger-action-queue" queue.maxDiskSpace="1g" queue.maxFileSize="100m" queue.type="LinkedList" queue.saveOnShutdown="on" queue.syncqueuefiles="on" queue.checkpointInterval="1000" queue.size="131072" queue.highwaterMark="98304" queue.discardMark="117964" queue.discardSeverity="5" errorfile="/var/log/tower/rsyslog.err" restpath="receiver/v1/http/ZaVnC4dhaV0qoiETY0MrM3wwLoDgO1jFgjOxE6-39qokkj3LGtOroZ8wNaN2M6DtgYrJZsmSi4-36_Up5TbbN_8hosYonLKHSSOSKY845LuLZBCBwStrHQ==")', # noqa
]
),
),

View File

@@ -1,8 +1,43 @@
import signal
import functools
from awx.main.tasks.signals import signal_state, signal_callback, with_signal_handling
def pytest_sigint():
pytest_sigint.called_count += 1
def pytest_sigterm():
pytest_sigterm.called_count += 1
def tmp_signals_for_test(func):
"""
When our internal signal handlers finish their own work, they call the
original signal handlers.
Normally that would crash the test runner, because those original handlers
shut down the process.
This decorator therefore temporarily replaces the existing signal handlers
with no-op handlers so that the tests do not crash.
"""
@functools.wraps(func)
def wrapper():
original_sigterm = signal.getsignal(signal.SIGTERM)
original_sigint = signal.getsignal(signal.SIGINT)
signal.signal(signal.SIGTERM, pytest_sigterm)
signal.signal(signal.SIGINT, pytest_sigint)
pytest_sigterm.called_count = 0
pytest_sigint.called_count = 0
func()
signal.signal(signal.SIGTERM, original_sigterm)
signal.signal(signal.SIGINT, original_sigint)
return wrapper
@tmp_signals_for_test
def test_outer_inner_signal_handling():
"""
Even if the flag is set in the outer context, its value should persist in the inner context
@@ -15,17 +50,22 @@ def test_outer_inner_signal_handling():
@with_signal_handling
def f1():
assert signal_callback() is False
signal_state.set_flag()
signal_state.set_sigterm_flag()
assert signal_callback()
f2()
original_sigterm = signal.getsignal(signal.SIGTERM)
assert signal_callback() is False
assert pytest_sigterm.called_count == 0
assert pytest_sigint.called_count == 0
f1()
assert signal_callback() is False
assert signal.getsignal(signal.SIGTERM) is original_sigterm
assert pytest_sigterm.called_count == 1
assert pytest_sigint.called_count == 0
@tmp_signals_for_test
def test_inner_outer_signal_handling():
"""
Even if the flag is set in the inner context, its value should persist in the outer context
@@ -34,7 +74,7 @@ def test_inner_outer_signal_handling():
@with_signal_handling
def f2():
assert signal_callback() is False
signal_state.set_flag()
signal_state.set_sigint_flag()
assert signal_callback()
@with_signal_handling
@@ -45,6 +85,10 @@ def test_inner_outer_signal_handling():
original_sigterm = signal.getsignal(signal.SIGTERM)
assert signal_callback() is False
assert pytest_sigterm.called_count == 0
assert pytest_sigint.called_count == 0
f1()
assert signal_callback() is False
assert signal.getsignal(signal.SIGTERM) is original_sigterm
assert pytest_sigterm.called_count == 0
assert pytest_sigint.called_count == 1

View File

@@ -143,13 +143,6 @@ def test_send_notifications_job_id(mocker):
assert UnifiedJob.objects.get.called_with(id=1)
def test_work_success_callback_missing_job():
task_data = {'type': 'project_update', 'id': 9999}
with mock.patch('django.db.models.query.QuerySet.get') as get_mock:
get_mock.side_effect = ProjectUpdate.DoesNotExist()
assert system.handle_work_success(task_data) is None
@mock.patch('awx.main.models.UnifiedJob.objects.get')
@mock.patch('awx.main.models.Notification.objects.filter')
def test_send_notifications_list(mock_notifications_filter, mock_job_get, mocker):

View File

@@ -17,11 +17,26 @@ def construct_rsyslog_conf_template(settings=settings):
port = getattr(settings, 'LOG_AGGREGATOR_PORT', '')
protocol = getattr(settings, 'LOG_AGGREGATOR_PROTOCOL', '')
timeout = getattr(settings, 'LOG_AGGREGATOR_TCP_TIMEOUT', 5)
max_disk_space_main_queue = getattr(settings, 'LOG_AGGREGATOR_MAX_DISK_USAGE_GB', 1)
action_queue_size = getattr(settings, 'LOG_AGGREGATOR_ACTION_QUEUE_SIZE', 131072)
max_disk_space_action_queue = getattr(settings, 'LOG_AGGREGATOR_ACTION_MAX_DISK_USAGE_GB', 1)
spool_directory = getattr(settings, 'LOG_AGGREGATOR_MAX_DISK_USAGE_PATH', '/var/lib/awx').rstrip('/')
error_log_file = getattr(settings, 'LOG_AGGREGATOR_RSYSLOGD_ERROR_LOG_FILE', '')
queue_options = [
f'queue.spoolDirectory="{spool_directory}"',
'queue.filename="awx-external-logger-action-queue"',
f'queue.maxDiskSpace="{max_disk_space_action_queue}g"', # overall disk space for all queue files
'queue.maxFileSize="100m"', # individual file size
'queue.type="LinkedList"',
'queue.saveOnShutdown="on"',
'queue.syncqueuefiles="on"', # (f)sync when checkpoint occurs
'queue.checkpointInterval="1000"', # Update disk queue every 1000 messages
f'queue.size="{action_queue_size}"', # max number of messages in queue
f'queue.highwaterMark="{int(action_queue_size * 0.75)}"', # 75% of queue.size
f'queue.discardMark="{int(action_queue_size * 0.9)}"', # 90% of queue.size
'queue.discardSeverity="5"', # Only discard notice, info, debug if we must discard anything
]
if not os.access(spool_directory, os.W_OK):
spool_directory = '/var/lib/awx'
@@ -33,7 +48,6 @@ def construct_rsyslog_conf_template(settings=settings):
'$WorkDirectory /var/lib/awx/rsyslog',
f'$MaxMessageSize {max_bytes}',
'$IncludeConfig /var/lib/awx/rsyslog/conf.d/*.conf',
f'main_queue(queue.spoolDirectory="{spool_directory}" queue.maxdiskspace="{max_disk_space_main_queue}g" queue.type="Disk" queue.filename="awx-external-logger-backlog")', # noqa
'module(load="imuxsock" SysSock.Use="off")',
'input(type="imuxsock" Socket="' + settings.LOGGING['handlers']['external_logger']['address'] + '" unlink="on" RateLimit.Burst="0")',
'template(name="awx" type="string" string="%rawmsg-after-pri%")',
@@ -79,12 +93,7 @@ def construct_rsyslog_conf_template(settings=settings):
'action.resumeRetryCount="-1"',
'template="awx"',
f'action.resumeInterval="{timeout}"',
f'queue.spoolDirectory="{spool_directory}"',
'queue.filename="awx-external-logger-action-queue"',
f'queue.maxdiskspace="{max_disk_space_action_queue}g"',
'queue.type="LinkedList"',
'queue.saveOnShutdown="on"',
]
] + queue_options
if error_log_file:
params.append(f'errorfile="{error_log_file}"')
if parsed.path:
@@ -112,9 +121,18 @@ def construct_rsyslog_conf_template(settings=settings):
params = ' '.join(params)
parts.extend(['module(load="omhttp")', f'action({params})'])
elif protocol and host and port:
parts.append(
f'action(type="omfwd" target="{host}" port="{port}" protocol="{protocol}" action.resumeRetryCount="-1" action.resumeInterval="{timeout}" template="awx")' # noqa
)
params = [
'type="omfwd"',
f'target="{host}"',
f'port="{port}"',
f'protocol="{protocol}"',
'action.resumeRetryCount="-1"',
f'action.resumeInterval="{timeout}"',
'template="awx"',
] + queue_options
params = ' '.join(params)
parts.append(f'action({params})')
else:
parts.append('action(type="omfile" file="/dev/null")') # rsyslog needs *at least* one valid action to start
tmpl = '\n'.join(parts)
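
The thresholds baked into the expected rsyslog strings above all follow from a single setting. As a minimal sketch (assuming the default `LOG_AGGREGATOR_ACTION_QUEUE_SIZE` of 131072 used in the fixtures), the derived values can be reproduced directly:

```python
# Sketch: how the action-queue thresholds are derived from one setting.
# 131072 is the default LOG_AGGREGATOR_ACTION_QUEUE_SIZE used in the fixtures above.
action_queue_size = 131072

high_water_mark = int(action_queue_size * 0.75)  # queue starts spooling to disk at 75%
discard_mark = int(action_queue_size * 0.9)      # low-severity messages are discarded at 90%

assert high_water_mark == 98304
assert discard_mark == 117964
print(f'queue.highwaterMark="{high_water_mark}" queue.discardMark="{discard_mark}"')
```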

View File

@@ -199,6 +199,8 @@ class Licenser(object):
license['support_level'] = attr.get('value')
elif attr.get('name') == 'usage':
license['usage'] = attr.get('value')
elif attr.get('name') == 'ph_product_name' and attr.get('value') == 'RHEL Developer':
license['license_type'] = 'developer'
if not license:
logger.error("No valid subscriptions found in manifest")
@@ -322,7 +324,9 @@ class Licenser(object):
def generate_license_options_from_entitlements(self, json):
from dateutil.parser import parse
ValidSub = collections.namedtuple('ValidSub', 'sku name support_level end_date trial quantity pool_id satellite subscription_id account_number usage')
ValidSub = collections.namedtuple(
'ValidSub', 'sku name support_level end_date trial developer_license quantity pool_id satellite subscription_id account_number usage'
)
valid_subs = []
for sub in json:
satellite = sub.get('satellite')
@@ -350,6 +354,7 @@ class Licenser(object):
sku = sub['productId']
trial = sku.startswith('S') # i.e.,, SER/SVC
developer_license = False
support_level = ''
usage = ''
pool_id = sub['id']
@@ -364,9 +369,24 @@ class Licenser(object):
support_level = attr.get('value')
elif attr.get('name') == 'usage':
usage = attr.get('value')
elif attr.get('name') == 'ph_product_name' and attr.get('value') == 'RHEL Developer':
developer_license = True
valid_subs.append(
ValidSub(sku, sub['productName'], support_level, end_date, trial, quantity, pool_id, satellite, subscription_id, account_number, usage)
ValidSub(
sku,
sub['productName'],
support_level,
end_date,
trial,
developer_license,
quantity,
pool_id,
satellite,
subscription_id,
account_number,
usage,
)
)
if valid_subs:
@@ -381,6 +401,8 @@ class Licenser(object):
if sub.trial:
license._attrs['trial'] = True
license._attrs['license_type'] = 'trial'
if sub.developer_license:
license._attrs['license_type'] = 'developer'
license._attrs['instance_count'] = min(MAX_INSTANCES, license._attrs['instance_count'])
human_instances = license._attrs['instance_count']
if human_instances == MAX_INSTANCES:

View File

@@ -3,6 +3,8 @@ import logging
import asyncio
from typing import Dict
import ipaddress
import aiohttp
from aiohttp import client_exceptions
import aioredis
@@ -71,7 +73,16 @@ class WebsocketRelayConnection:
if not self.channel_layer:
self.channel_layer = get_channel_layer()
uri = f"{self.protocol}://{self.remote_host}:{self.remote_port}/websocket/relay/"
# Figure out whether remote_host is an IP address; IPv6 addresses must be wrapped in brackets in the URI
uri_hostname = self.remote_host
try:
# Throws ValueError if self.remote_host is a hostname like example.com, not an IPv4 or IPv6 ip address
if isinstance(ipaddress.ip_address(uri_hostname), ipaddress.IPv6Address):
uri_hostname = f"[{uri_hostname}]"
except ValueError:
pass
uri = f"{self.protocol}://{uri_hostname}:{self.remote_port}/websocket/relay/"
timeout = aiohttp.ClientTimeout(total=10)
secret_val = WebsocketSecretAuthHelper.construct_secret()
@@ -216,7 +227,8 @@ class WebSocketRelayManager(object):
continue
try:
if not notif.payload or notif.channel != "web_ws_heartbeat":
return
logger.warning(f"Unexpected channel or missing payload. {notif.channel}, {notif.payload}")
continue
try:
payload = json.loads(notif.payload)
@@ -224,13 +236,15 @@ class WebSocketRelayManager(object):
logmsg = "Failed to decode message from pg_notify channel `web_ws_heartbeat`"
if logger.isEnabledFor(logging.DEBUG):
logmsg = "{} {}".format(logmsg, payload)
logger.warning(logmsg)
return
logger.warning(logmsg)
continue
# Skip if the message comes from the same host we are running on
# In this case, we'll be sharing a redis, no need to relay.
if payload.get("hostname") == self.local_hostname:
return
hostname = payload.get("hostname")
logger.debug("Received a heartbeat request for {hostname}. Skipping as we use redis for local host.")
continue
action = payload.get("action")
@@ -239,7 +253,7 @@ class WebSocketRelayManager(object):
ip = payload.get("ip") or hostname # try back to hostname if ip isn't supplied
if ip is None:
logger.warning(f"Received invalid {action} ws_heartbeat, missing hostname and ip: {payload}")
return
continue
logger.debug(f"Web host {hostname} ({ip}) {action} heartbeat received.")
if action == "online":

View File

@@ -336,6 +336,7 @@ INSTALLED_APPS = [
'awx.ui',
'awx.sso',
'solo',
'ansible_base',
]
INTERNAL_IPS = ('127.0.0.1',)
@@ -796,7 +797,7 @@ LOG_AGGREGATOR_ENABLED = False
LOG_AGGREGATOR_TCP_TIMEOUT = 5
LOG_AGGREGATOR_VERIFY_CERT = True
LOG_AGGREGATOR_LEVEL = 'INFO'
LOG_AGGREGATOR_MAX_DISK_USAGE_GB = 1 # Main queue
LOG_AGGREGATOR_ACTION_QUEUE_SIZE = 131072
LOG_AGGREGATOR_ACTION_MAX_DISK_USAGE_GB = 1 # Action queue
LOG_AGGREGATOR_MAX_DISK_USAGE_PATH = '/var/lib/awx'
LOG_AGGREGATOR_RSYSLOGD_DEBUG = False

View File

@@ -29,7 +29,7 @@ SettingsAPI.readCategory.mockResolvedValue({
LOG_AGGREGATOR_TCP_TIMEOUT: 5,
LOG_AGGREGATOR_VERIFY_CERT: true,
LOG_AGGREGATOR_LEVEL: 'INFO',
LOG_AGGREGATOR_MAX_DISK_USAGE_GB: 1,
LOG_AGGREGATOR_ACTION_QUEUE_SIZE: 131072,
LOG_AGGREGATOR_ACTION_MAX_DISK_USAGE_GB: 1,
LOG_AGGREGATOR_MAX_DISK_USAGE_PATH: '/var/lib/awx',
LOG_AGGREGATOR_RSYSLOGD_DEBUG: false,

View File

@@ -31,7 +31,7 @@ const mockSettings = {
LOG_AGGREGATOR_TCP_TIMEOUT: 123,
LOG_AGGREGATOR_VERIFY_CERT: true,
LOG_AGGREGATOR_LEVEL: 'ERROR',
LOG_AGGREGATOR_MAX_DISK_USAGE_GB: 1,
LOG_AGGREGATOR_ACTION_QUEUE_SIZE: 131072,
LOG_AGGREGATOR_ACTION_MAX_DISK_USAGE_GB: 1,
LOG_AGGREGATOR_MAX_DISK_USAGE_PATH: '/var/lib/awx',
LOG_AGGREGATOR_RSYSLOGD_DEBUG: false,

View File

@@ -659,21 +659,21 @@
]
]
},
"LOG_AGGREGATOR_MAX_DISK_USAGE_GB": {
"LOG_AGGREGATOR_ACTION_QUEUE_SIZE": {
"type": "integer",
"required": false,
"label": "Maximum disk persistence for external log aggregation (in GB)",
"help_text": "Amount of data to store (in gigabytes) during an outage of the external log aggregator (defaults to 1). Equivalent to the rsyslogd queue.maxdiskspace setting for main_queue. Notably, this is used for the rsyslogd main queue (for input messages).",
"label": "Maximum number of messages that can be stored in the log action queue",
"help_text": "Defines how large the rsyslog action queue can grow in number of messages stored. This can have an impact on memory utilization. When the queue reaches 75% of this number, the queue will start writing to disk (queue.highWatermark in rsyslog). When it reaches 90%, NOTICE, INFO, and DEBUG messages will start to be discarded (queue.discardMark with queue.discardSeverity=5).",
"min_value": 1,
"category": "Logging",
"category_slug": "logging",
"default": 1
"default": 131072
},
"LOG_AGGREGATOR_ACTION_MAX_DISK_USAGE_GB": {
"type": "integer",
"required": false,
"label": "Maximum disk persistence for rsyslogd action queuing (in GB)",
"help_text": "Amount of data to store (in gigabytes) if an rsyslog action takes time to process an incoming message (defaults to 1). Equivalent to the rsyslogd queue.maxdiskspace setting on the action (e.g. omhttp). Like LOG_AGGREGATOR_MAX_DISK_USAGE_GB, it stores files in the directory specified by LOG_AGGREGATOR_MAX_DISK_USAGE_PATH.",
"help_text": "Amount of data to store (in gigabytes) if an rsyslog action takes time to process an incoming message (defaults to 1). Equivalent to the rsyslogd queue.maxdiskspace setting on the action (e.g. omhttp). It stores files in the directory specified by LOG_AGGREGATOR_MAX_DISK_USAGE_PATH.",
"min_value": 1,
"category": "Logging",
"category_slug": "logging",
@@ -5016,10 +5016,10 @@
]
]
},
"LOG_AGGREGATOR_MAX_DISK_USAGE_GB": {
"LOG_AGGREGATOR_ACTION_QUEUE_SIZE": {
"type": "integer",
"label": "Maximum disk persistence for external log aggregation (in GB)",
"help_text": "Amount of data to store (in gigabytes) during an outage of the external log aggregator (defaults to 1). Equivalent to the rsyslogd queue.maxdiskspace setting for main_queue. Notably, this is used for the rsyslogd main queue (for input messages).",
"label": "Maximum number of messages that can be stored in the log action queue",
"help_text": "Defines how large the rsyslog action queue can grow in number of messages stored. This can have an impact on memory utilization. When the queue reaches 75% of this number, the queue will start writing to disk (queue.highWatermark in rsyslog). When it reaches 90%, NOTICE, INFO, and DEBUG messages will start to be discarded (queue.discardMark with queue.discardSeverity=5).",
"min_value": 1,
"category": "Logging",
"category_slug": "logging",
@@ -5028,7 +5028,7 @@
"LOG_AGGREGATOR_ACTION_MAX_DISK_USAGE_GB": {
"type": "integer",
"label": "Maximum disk persistence for rsyslogd action queuing (in GB)",
"help_text": "Amount of data to store (in gigabytes) if an rsyslog action takes time to process an incoming message (defaults to 1). Equivalent to the rsyslogd queue.maxdiskspace setting on the action (e.g. omhttp). Like LOG_AGGREGATOR_MAX_DISK_USAGE_GB, it stores files in the directory specified by LOG_AGGREGATOR_MAX_DISK_USAGE_PATH.",
"help_text": "Amount of data to store (in gigabytes) if an rsyslog action takes time to process an incoming message (defaults to 1). Equivalent to the rsyslogd queue.maxdiskspace setting on the action (e.g. omhttp). It stores files in the directory specified by LOG_AGGREGATOR_MAX_DISK_USAGE_PATH.",
"min_value": 1,
"category": "Logging",
"category_slug": "logging",

View File

@@ -70,7 +70,7 @@
"LOG_AGGREGATOR_TCP_TIMEOUT": 5,
"LOG_AGGREGATOR_VERIFY_CERT": true,
"LOG_AGGREGATOR_LEVEL": "INFO",
"LOG_AGGREGATOR_MAX_DISK_USAGE_GB": 1,
"LOG_AGGREGATOR_ACTION_QUEUE_SIZE": 131072,
"LOG_AGGREGATOR_ACTION_MAX_DISK_USAGE_GB": 1,
"LOG_AGGREGATOR_MAX_DISK_USAGE_PATH": "/var/lib/awx",
"LOG_AGGREGATOR_RSYSLOGD_DEBUG": false,
@@ -548,4 +548,4 @@
"adj_list": []
}
}
}
}

View File

@@ -15,7 +15,7 @@
"LOG_AGGREGATOR_TCP_TIMEOUT": 5,
"LOG_AGGREGATOR_VERIFY_CERT": true,
"LOG_AGGREGATOR_LEVEL": "INFO",
"LOG_AGGREGATOR_MAX_DISK_USAGE_GB": 1,
"LOG_AGGREGATOR_ACTION_QUEUE_SIZE": 131072,
"LOG_AGGREGATOR_ACTION_MAX_DISK_USAGE_GB": 1,
"LOG_AGGREGATOR_MAX_DISK_USAGE_PATH": "/var/lib/awx",
"LOG_AGGREGATOR_RSYSLOGD_DEBUG": false,

View File

@@ -980,6 +980,15 @@ class ControllerAPIModule(ControllerModule):
def create_or_update_if_needed(
self, existing_item, new_item, endpoint=None, item_type='unknown', on_create=None, on_update=None, auto_exit=True, associations=None
):
# Remove None values for parameters declared as booleans in the argument spec.
# This prevents boolean fields from being given a false value when they were not provided.
for key in list(new_item.keys()):
if key in self.argument_spec:
param_spec = self.argument_spec[key]
if 'type' in param_spec and param_spec['type'] == 'bool':
if new_item[key] is None:
new_item.pop(key)
if existing_item:
return self.update_if_needed(existing_item, new_item, on_update=on_update, auto_exit=auto_exit, associations=associations)
else:

View File

@@ -118,6 +118,7 @@ status:
'''
from ..module_utils.controller_api import ControllerAPIModule
import json
def main():
@@ -161,7 +162,11 @@ def main():
}
for arg in ['job_type', 'limit', 'forks', 'verbosity', 'extra_vars', 'become_enabled', 'diff_mode']:
if module.params.get(arg):
post_data[arg] = module.params.get(arg)
# extra_vars can receive a dict or a string; if it is a dict, convert it to a string
if arg == 'extra_vars' and type(module.params.get(arg)) is not str:
post_data[arg] = json.dumps(module.params.get(arg))
else:
post_data[arg] = module.params.get(arg)
# Attempt to look up the related items the user specified (these will fail the module if not found)
post_data['inventory'] = module.resolve_name_to_id('inventories', inventory)

View File

@@ -58,6 +58,7 @@ options:
Insights, Machine, Microsoft Azure Key Vault, Microsoft Azure Resource Manager, Network, OpenShift or Kubernetes API
Bearer Token, OpenStack, Red Hat Ansible Automation Platform, Red Hat Satellite 6, Red Hat Virtualization, Source Control,
Thycotic DevOps Secrets Vault, Thycotic Secret Server, Vault, VMware vCenter, or a custom credential type
required: True
type: str
inputs:
description:
@@ -214,7 +215,7 @@ def main():
copy_from=dict(),
description=dict(),
organization=dict(),
credential_type=dict(),
credential_type=dict(required=True),
inputs=dict(type='dict', no_log=True),
update_secrets=dict(type='bool', default=True, no_log=False),
user=dict(),

View File

@@ -115,7 +115,7 @@ EXAMPLES = '''
- name: Export a job template named "My Template" and all Credentials
export:
job_templates: "My Template"
credential: 'all'
credentials: 'all'
- name: Export a list of inventories
export:

View File

@@ -72,6 +72,21 @@
- "result is changed"
- "result.status == 'successful'"
- name: Launch an Ad Hoc Command with extra_vars
ad_hoc_command:
inventory: "Demo Inventory"
credential: "{{ ssh_cred_name }}"
module_name: "ping"
extra_vars:
var1: "test var"
wait: true
register: result
- assert:
that:
- "result is changed"
- "result.status == 'successful'"
- name: Launch an Ad Hoc Command with Execution Environment specified
ad_hoc_command:
inventory: "Demo Inventory"

View File

@@ -60,7 +60,7 @@
- result['job_info']['skip_tags'] == "skipbaz"
- result['job_info']['limit'] == "localhost"
- result['job_info']['job_tags'] == "Hello World"
- result['job_info']['inventory'] == {{ inventory_id }}
- result['job_info']['inventory'] == inventory_id | int
- "result['job_info']['extra_vars'] == '{\"animal\": \"bear\", \"food\": \"carrot\"}'"
# cleanup

View File

@@ -71,6 +71,19 @@
that:
- "result is changed"
- name: Delete a credential without credential_type
credential:
name: "{{ ssh_cred_name1 }}"
organization: Default
state: absent
register: result
ignore_errors: true
- assert:
that:
- "result is failed"
- name: Create an Org-specific credential with an ID with exists
credential:
name: "{{ ssh_cred_name1 }}"

View File

@@ -42,6 +42,16 @@
that:
- "result is not changed"
- name: Modify the host as a no-op
host:
name: "{{ host_name }}"
inventory: "{{ inv_name }}"
register: result
- assert:
that:
- "result is not changed"
- name: Delete a Host
host:
name: "{{ host_name }}"
@@ -68,6 +78,15 @@
that:
- "result is changed"
- name: Use lookup to check that host was enabled
ansible.builtin.set_fact:
host_enabled_test: "{{ lookup('awx.awx.controller_api', 'hosts/' ~ result.id ~ '/').enabled }}"
- name: Newly created host should have API default value for enabled
assert:
that:
- host_enabled_test
- name: Delete a Host
host:
name: "{{ result.id }}"

View File

@@ -16,7 +16,7 @@
- assert:
that:
- "{{ matching_jobs.count }} == 1"
- matching_jobs.count == 1
- name: List failed jobs (which don't exist)
job_list:
@@ -26,7 +26,7 @@
- assert:
that:
- "{{ successful_jobs.count }} == 0"
- successful_jobs.count == 0
- name: Get ALL result pages!
job_list:

View File

@@ -99,7 +99,7 @@
that:
- wait_results is failed
- 'wait_results.status == "canceled"'
- "wait_results.msg == 'Job with id {{ job.id }} failed' or 'Job with id={{ job.id }} failed, error: Job failed.'"
- "'Job with id ~ job.id failed' or 'Job with id= ~ job.id failed, error: Job failed.' is in wait_results.msg"
# workflow wait test
- name: Generate a random string for test

View File

@@ -132,7 +132,7 @@
- name: Get the ID of the first user created and verify that it is correct
assert:
that: "{{ query(plugin_name, 'users', query_params={ 'username' : user_creation_results['results'][0]['item'] }, return_ids=True)[0] }} == {{ user_creation_results['results'][0]['id'] }}"
that: "query(plugin_name, 'users', query_params={ 'username' : user_creation_results['results'][0]['item'] }, return_ids=True)[0] == user_creation_results['results'][0]['id'] | string"
- name: Try to get an ID of someone who does not exist
set_fact:

View File

@@ -11,7 +11,7 @@
- name: Call ruleset with no rules
set_fact:
complex_rule: "{{ query(ruleset_plugin_name, '2022-04-30 10:30:45') }}"
complex_rule: "{{ query(ruleset_plugin_name | string, '2022-04-30 10:30:45') }}"
ignore_errors: True
register: results

View File

@@ -36,7 +36,7 @@
- assert:
that:
- result is failed
- "'Unable to create schedule {{ sched1 }}' in result.msg"
- "'Unable to create schedule '~ sched1 in result.msg"
- name: Create with options that the JT does not support
schedule:
@@ -62,7 +62,7 @@
- assert:
that:
- result is failed
- "'Unable to create schedule {{ sched1 }}' in result.msg"
- "'Unable to create schedule '~ sched1 in result.msg"
- name: Build a real schedule
schedule:
@@ -76,6 +76,15 @@
that:
- result is changed
- name: Use lookup to check that schedules was enabled
ansible.builtin.set_fact:
schedules_enabled_test: "{{ lookup('awx.awx.controller_api', 'schedules/' ~ result.id ~ '/').enabled }}"
- name: Newly created schedules should have API default value for enabled
assert:
that:
- schedules_enabled_test
- name: Build a real schedule with exists
schedule:
name: "{{ sched1 }}"

View File

@@ -9,7 +9,7 @@
- name: Test too many params (failure from validation of terms)
debug:
msg: "{{ query(plugin_name, 'none', 'weekly', start_date='2020-4-16 03:45:07') }}"
msg: "{{ query(plugin_name | string, 'none', 'weekly', start_date='2020-4-16 03:45:07') }}"
ignore_errors: true
register: result

View File

@@ -57,7 +57,7 @@
- assert:
that:
- result is failed
- "'Monitoring of Workflow Job - {{ wfjt_name1 }} aborted due to timeout' in result.msg"
- "'Monitoring of Workflow Job - '~ wfjt_name1 ~ ' aborted due to timeout' in result.msg"
- name: Kick off a workflow and wait for it
workflow_launch:

View File

@@ -4,9 +4,9 @@ Bulk API endpoints allows to perform bulk operations in single web request. Ther
- /api/v2/bulk/job_launch
- /api/v2/bulk/host_create
Making individual API calls in rapid succession or at high concurrency can overwhelm AWX's ability to serve web requests. When the application's ability to serve is exausted, clients often receive 504 timeout errors.
Making individual API calls in rapid succession or at high concurrency can overwhelm AWX's ability to serve web requests. When the application's ability to serve is exhausted, clients often receive 504 timeout errors.
Allowing the client combine actions into fewer requests allows for launching more jobs or adding more hosts with fewer requests and less time without exauhsting Controller's ability to serve requests, making excessive and repetitive database queries, or using excessive database connections (each web request opens a seperate database connection).
Allowing the client to combine actions into fewer requests allows for launching more jobs or adding more hosts with fewer requests and in less time, without exhausting the Controller's ability to serve requests, making excessive and repetitive database queries, or using excessive database connections (each web request opens a separate database connection).
## Bulk Job Launch
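
A hedged sketch of what a client-side bulk launch can look like (the endpoint path comes from the list above; the host, token, template IDs, and payload shape shown here are illustrative assumptions, not taken from this document):

```python
import requests

AWX_URL = "https://awx.example.com"            # assumption: your AWX/Controller host
HEADERS = {"Authorization": "Bearer <token>"}  # assumption: an OAuth2 token with launch permissions

# One request launching several jobs, instead of one request per job.
payload = {
    "jobs": [
        {"unified_job_template": 7},  # assumed job template IDs
        {"unified_job_template": 8},
    ]
}

resp = requests.post(f"{AWX_URL}/api/v2/bulk/job_launch/", json=payload, headers=HEADERS, timeout=30)
resp.raise_for_status()
print(resp.json())
```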

View File

@@ -104,7 +104,7 @@ Given settings.AWX_CONTROL_NODE_TASK_IMPACT is 1:
This setting allows you to determine how much impact controlling jobs has. This
can be helpful if you notice symptoms of your control plane exceeding desired
CPU or memory usage, as it effectivly throttles how many jobs can be run
CPU or memory usage, as it effectively throttles how many jobs can be run
concurrently by your control plane. This is usually a concern with container
groups, which at this time effectively have infinite capacity, so it is easy to
end up with too many jobs running concurrently, overwhelming the control plane
@@ -130,10 +130,10 @@ be `18`:
By default, only Instances have capacity and we only track capacity consumed per instance. With the max_forks and max_concurrent_jobs fields now available on Instance Groups, we additionally can limit how many jobs or forks are allowed to be concurrently consumed across an entire Instance Group or Container Group.
This is especially useful for Container Groups where previously, there was no limit to how many jobs we would submit to a Container Group, which made it impossible to "overflow" job loads from one Container Group to another container group, which may be on a different Kubenetes cluster or namespace.
This is especially useful for Container Groups where previously, there was no limit to how many jobs we would submit to a Container Group, which made it impossible to "overflow" job loads from one Container Group to another container group, which may be on a different Kubernetes cluster or namespace.
One way to calculate what max_concurrent_jobs is desirable to set on a Container Group is to consider the pod_spec for that container group. In the pod_spec we indicate the resource requests and limits for the automation job pod. If you pod_spec indicates that a pod with 100MB of memory will be provisioned, and you know your Kubernetes cluster has 1 worker node with 8GB of RAM, you know that the maximum number of jobs that you would ideally start would be around 81 jobs, calculated by taking (8GB memory on node * 1024 MB) // 100 MB memory/job pod which with floor division comes out to 81.
Alternatively, instead of considering the number of job pods and the resources requested, we can consider the memory consumption of the forks in the jobs. We normally consider that 100MB of memory will be used by each fork of ansible. Therefore we also know that our 8 GB worker node should also only run 81 forks of ansible at a time -- which depending on the forks and inventory settings of the job templates, could be consumed by anywhere from 1 job to 81 jobs. So we can also set max_forks = 81. This way, either 39 jobs with 1 fork can run (task impact is always forks + 1), or 2 jobs with forks set to 39 can run.
While this feature is most useful for Container Groups where there is no other way to limit job execution, this feature is avialable for use on any instance group. This can be useful if for other business reasons you want to set a InstanceGroup wide limit on concurrent jobs. For example, if you have a job template that you only want 10 copies of running at a time -- you could create a dedicated instance group for that job template and set max_concurrent_jobs to 10.
While this feature is most useful for Container Groups, where there is no other way to limit job execution, it is available for use on any instance group. This can be useful if, for other business reasons, you want to set an InstanceGroup-wide limit on concurrent jobs. For example, if you have a job template that you only want 10 copies of running at a time, you could create a dedicated instance group for that job template and set max_concurrent_jobs to 10.
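
The sizing arithmetic above can be written out as a short sketch, using the same illustrative numbers (one 8 GB worker node, 100 MB requested per job pod or consumed per ansible fork):

```python
# Worked version of the Container Group sizing example above.
node_memory_mb = 8 * 1024      # one 8 GB Kubernetes worker node
memory_per_unit_mb = 100       # per job pod (from the pod_spec) or per ansible fork

max_concurrent_jobs = node_memory_mb // memory_per_unit_mb   # 81 job pods
max_forks = node_memory_mb // memory_per_unit_mb             # 81 forks

# Task impact is forks + 1, so max_forks = 81 allows, for example:
#   39 jobs with forks=1  -> 39 * (1 + 1) = 78 <= 81
#    2 jobs with forks=39 ->  2 * (39 + 1) = 80 <= 81
assert max_concurrent_jobs == 81 and max_forks == 81
```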

View File

@@ -45,7 +45,7 @@ of the awx-operator repo. If not, continue to the next section.
```
# in awx-operator repo on the branch you want to use
$ export IMAGE_TAG_BASE=quay.io/<username>/awx-operator
$ export VERSION=<cusom-tag>
$ export VERSION=<custom-tag>
$ make docker-build
$ docker push ${IMAGE_TAG_BASE}:${VERSION}
```
@@ -118,7 +118,7 @@ To access via the web browser, run the following command:
$ minikube service awx-service --url
```
To retreive your admin password
To retrieve your admin password
```
$ kubectl get secrets awx-admin-password -o json | jq '.data.password' | xargs | base64 -d
```

View File

@@ -5,7 +5,7 @@ import shlex
from datetime import datetime
from importlib import import_module
#sys.path.insert(0, os.path.abspath('./rst/rest_api/_swagger'))
sys.path.insert(0, os.path.abspath('./rst/rest_api/_swagger'))
project = u'Ansible AWX'
copyright = u'2023, Red Hat'
@@ -35,6 +35,7 @@ extensions = [
'sphinx.ext.coverage',
'sphinx.ext.ifconfig',
'sphinx_ansible_theme',
'swagger',
]
html_theme = 'sphinx_ansible_theme'

View File

@@ -8,13 +8,13 @@ alabaster==0.7.13
# via sphinx
ansible-pygments==0.1.1
# via sphinx-ansible-theme
babel==2.12.1
babel==2.13.1
# via sphinx
certifi==2023.7.22
# via requests
charset-normalizer==3.2.0
charset-normalizer==3.3.2
# via requests
docutils==0.16
docutils==0.18.1
# via
# -r docs/docsite/requirements.in
# sphinx
@@ -23,13 +23,13 @@ idna==3.4
# via requests
imagesize==1.4.1
# via sphinx
jinja2==3.0.3
jinja2==3.1.2
# via
# -r docs/docsite/requirements.in
# sphinx
markupsafe==2.1.3
# via jinja2
packaging==23.1
packaging==23.2
# via sphinx
pygments==2.16.1
# via
@@ -41,7 +41,7 @@ requests==2.31.0
# via sphinx
snowballstemmer==2.2.0
# via sphinx
sphinx==5.1.1
sphinx==7.2.6
# via
# -r docs/docsite/requirements.in
# sphinx-ansible-theme
@@ -52,7 +52,7 @@ sphinx==5.1.1
# sphinxcontrib-jquery
# sphinxcontrib-qthelp
# sphinxcontrib-serializinghtml
sphinx-ansible-theme==0.9.1
sphinx-ansible-theme==0.10.2
# via -r docs/docsite/requirements.in
sphinx-rtd-theme==1.3.0
# via sphinx-ansible-theme
@@ -70,5 +70,5 @@ sphinxcontrib-qthelp==1.0.6
# via sphinx
sphinxcontrib-serializinghtml==1.1.9
# via sphinx
urllib3==2.0.4
urllib3==2.1.0
# via requests

View File

@@ -23,6 +23,7 @@ The default length of time, in seconds, that your supplied token is valid can be
4. Enter the timeout period in seconds in the **Idle Time Force Log Out** text field.
.. image:: ../common/images/configure-awx-system-timeout.png
:alt: Miscellaneous Authentication settings showing the Idle Time Force Logout field where you can adjust the token validity period.
5. Click **Save** to apply your changes.

View File

@@ -1,4 +1,3 @@
.. _ag_clustering:
Clustering
@@ -11,7 +10,7 @@ Clustering
Clustering is sharing load between hosts. Each instance should be able to act as an entry point for UI and API access. This should enable AWX administrators to use load balancers in front of as many instances as they wish and maintain good data visibility.
.. note::
Load balancing is optional and is entirely possible to have ingress on one or all instances as needed. The ``CSRF_TRUSTED_ORIGIN`` setting may be required if you are using AWX behind a load balancer. See :ref:`ki_csrf_trusted_origin_setting` for more detail.
Load balancing is optional and is entirely possible to have ingress on one or all instances as needed. The ``CSRF_TRUSTED_ORIGIN`` setting may be required if you are using AWX behind a load balancer. See :ref:`ki_csrf_trusted_origin_setting` for more detail.
Each instance should be able to join AWX cluster and expand its ability to execute jobs. This is a simple system where jobs can and will run anywhere rather than be directed on where to run. Also, clustered instances can be grouped into different pools/queues, called :ref:`ag_instance_groups`.
@@ -107,61 +106,61 @@ Example of customization could be:
::
---
spec:
...
node_selector: |
disktype: ssd
kubernetes.io/arch: amd64
kubernetes.io/os: linux
topology_spread_constraints: |
- maxSkew: 100
topologyKey: "topology.kubernetes.io/zone"
whenUnsatisfiable: "ScheduleAnyway"
labelSelector:
matchLabels:
app.kubernetes.io/name: "<resourcename>"
tolerations: |
- key: "dedicated"
operator: "Equal"
value: "AWX"
effect: "NoSchedule"
task_tolerations: |
- key: "dedicated"
operator: "Equal"
value: "AWX_task"
effect: "NoSchedule"
postgres_selector: |
disktype: ssd
kubernetes.io/arch: amd64
kubernetes.io/os: linux
postgres_tolerations: |
- key: "dedicated"
operator: "Equal"
value: "AWX"
effect: "NoSchedule"
affinity:
nodeAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 1
preference:
matchExpressions:
- key: another-node-label-key
operator: In
values:
- another-node-label-value
- another-node-label-value
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 100
podAffinityTerm:
labelSelector:
matchExpressions:
- key: security
operator: In
values:
- S2
topologyKey: topology.kubernetes.io/zone
---
spec:
...
node_selector: |
disktype: ssd
kubernetes.io/arch: amd64
kubernetes.io/os: linux
topology_spread_constraints: |
- maxSkew: 100
topologyKey: "topology.kubernetes.io/zone"
whenUnsatisfiable: "ScheduleAnyway"
labelSelector:
matchLabels:
app.kubernetes.io/name: "<resourcename>"
tolerations: |
- key: "dedicated"
operator: "Equal"
value: "AWX"
effect: "NoSchedule"
task_tolerations: |
- key: "dedicated"
operator: "Equal"
value: "AWX_task"
effect: "NoSchedule"
postgres_selector: |
disktype: ssd
kubernetes.io/arch: amd64
kubernetes.io/os: linux
postgres_tolerations: |
- key: "dedicated"
operator: "Equal"
value: "AWX"
effect: "NoSchedule"
affinity:
nodeAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 1
preference:
matchExpressions:
- key: another-node-label-key
operator: In
values:
- another-node-label-value
- another-node-label-value
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 100
podAffinityTerm:
labelSelector:
matchExpressions:
- key: security
operator: In
values:
- S2
topologyKey: topology.kubernetes.io/zone
Status and Monitoring via Browser API
@@ -204,6 +203,7 @@ The way jobs are run and reported to a 'normal' user of AWX does not change. On
- When a job is submitted from the API interface it gets pushed into the dispatcher queue. Each AWX instance will connect to and receive jobs from that queue using a particular scheduling algorithm. Any instance in the cluster is just as likely to receive the work and execute the task. If a instance fails while executing jobs, then the work is marked as permanently failed.
.. image:: ../common/images/clustering-visual.png
:alt: An illustration depicting job distribution in an AWX cluster.
- Project updates run successfully on any instance that could potentially run a job. Projects will sync themselves to the correct version on the instance immediately prior to running the job. If the needed revision is already locally checked out and Galaxy or Collections updates are not needed, then a sync may not be performed.
@@ -218,5 +218,3 @@ Job Runs
By default, when a job is submitted to the AWX queue, it can be picked up by any of the workers. However, you can control where a particular job runs, such as restricting the instances from which a job runs on.
In order to support temporarily taking an instance offline, there is a property enabled defined on each instance. When this property is disabled, no jobs will be assigned to that instance. Existing jobs will finish, but no new work will be assigned.

View File

@@ -11,6 +11,7 @@ AWX Configuration
You can configure various AWX settings within the Settings screen in the following tabs:
.. image:: ../common/images/ug-settings-menu-screen.png
:alt: Screenshot of the AWX settings menu screen.
Each tab contains fields with a **Reset** button, allowing you to revert any value entered back to the default value. **Reset All** allows you to revert all the values to their factory default values.
@@ -47,6 +48,7 @@ The Jobs tab allows you to configure the types of modules that are allowed to be
The values for all the timeouts are in seconds.
.. image:: ../common/images/configure-awx-jobs.png
:alt: Screenshot of the AWX job configuration settings.
3. Click **Save** to apply the settings or **Cancel** to abandon the changes.
@@ -69,6 +71,7 @@ The System tab allows you to define the base URL for the AWX host, configure ale
- **Logging settings**: configure logging options based on the type you choose:
.. image:: ../common/images/configure-awx-system-logging-types.png
:alt: Logging settings shown with the list of options for Logging Aggregator Types.
For more information about each of the logging aggregation types, refer to the :ref:`ag_logging` section of the |ata|.
@@ -78,6 +81,7 @@ The System tab allows you to define the base URL for the AWX host, configure ale
.. |help| image:: ../common/images/tooltips-icon.png
.. image:: ../common/images/configure-awx-system.png
:alt: Miscellaneous System settings window showing all possible configurable options.
.. note::

View File

@@ -140,6 +140,7 @@ The Instance Groups list view from the |at| User Interface provides a summary of
|Instance Group policy example|
.. |Instance Group policy example| image:: ../common/images/instance-groups_list_view.png
:alt: Instance Group list view with example instance group and container groups.
See :ref:`ug_instance_groups_create` for further detail.
@@ -231,6 +232,7 @@ Likewise, an administrator could assign multiple groups to each organization as
|Instance Group example|
.. |Instance Group example| image:: ../common/images/instance-groups-scenarios.png
:alt: Illustration showing grouping scenarios.
Arranging resources in this way offers a lot of flexibility. Also, you can create instance groups with only one instance, thus allowing you to direct work towards a very specific Host in the AWX cluster.
@@ -327,6 +329,7 @@ To create a container group:
|IG - create new CG|
.. |IG - create new CG| image:: ../common/images/instance-group-create-new-cg.png
:alt: Create new container group form.
4. Enter a name for your new container group and select the credential previously created to associate it to the container group.
@@ -342,10 +345,12 @@ To customize the Pod spec, specify the namespace in the **Pod Spec Override** fi
|IG - CG customize pod|
.. |IG - CG customize pod| image:: ../common/images/instance-group-customize-cg-pod.png
:alt: Create new container group form with the option to custom the pod spec.
You may provide additional customizations, if needed. Click **Expand** to view the entire customization window.
.. image:: ../common/images/instance-group-customize-cg-pod-expanded.png
:alt: The expanded view for customizing the pod spec.
.. note::
@@ -354,10 +359,12 @@ You may provide additional customizations, if needed. Click **Expand** to view t
Once the container group is successfully created, the **Details** tab of the newly created container group remains, which allows you to review and edit your container group information. This is the same menu that is opened if the Edit (|edit-button|) button is clicked from the **Instance Group** link. You can also edit **Instances** and review **Jobs** associated with this instance group.
.. |edit-button| image:: ../common/images/edit-button.png
:alt: Edit button.
|IG - example CG successfully created|
.. |IG - example CG successfully created| image:: ../common/images/instance-group-example-cg-successfully-created.png
:alt: Example of the successfully created instance group as shown in the Jobs tab of the Instance groups window.
Container groups and instance groups are labeled accordingly.
@@ -375,6 +382,7 @@ To verify the deployment and termination of your container:
|Dummy inventory|
.. |Dummy inventory| image:: ../common/images/inventories-create-new-cg-test-inventory.png
:alt: Example of creating a new container group test inventory.
2. Create "localhost" host in inventory with variables:
@@ -385,6 +393,7 @@ To verify the deployment and termination of your container:
|Inventory with localhost|
.. |Inventory with localhost| image:: ../common/images/inventories-create-new-cg-test-localhost.png
:alt: The new container group test inventory showing the populated variables.
3. Launch an ad hoc job against the localhost using the *ping* or *setup* module. Even though the **Machine Credential** field is required, it does not matter which one is selected for this simple test.
@@ -393,13 +402,14 @@ To verify the deployment and termination of your container:
.. |Launch inventory with localhost| image:: ../common/images/inventories-launch-adhoc-cg-test-localhost.png
.. image:: ../common/images/inventories-launch-adhoc-cg-test-localhost2.png
:alt: Launching a Ping adhoc command on the newly created inventory with localhost.
You can see in the jobs detail view the container was reached successfully using one of ad hoc jobs.
|Inventory with localhost ping success|
.. |Inventory with localhost ping success| image:: ../common/images/inventories-launch-adhoc-cg-test-localhost-success.png
:alt: Jobs output view showing a successfully ran adhoc job.
If you have an OpenShift UI, you can see Pods appear and disappear as they deploy and terminate. Alternatively, you can use the CLI to perform a ``get pod`` operation on your namespace to watch these same events occurring in real-time.
@@ -412,6 +422,7 @@ When you run a job associated with a container group, you can see the details of
|IG - instances jobs|
.. |IG - instances jobs| image:: ../common/images/instance-group-job-details-with-cgs.png
:alt: Example Job details window showing the associated execution environment and container group.
Kubernetes API failure conditions

View File

@@ -55,6 +55,7 @@ To set up enterprise authentication for Microsoft Azure Active Directory (AD), y
8. To verify that the authentication was configured correctly, logout of AWX and the login screen will now display the Microsoft Azure logo to allow logging in with those credentials.
.. image:: ../common/images/configure-awx-auth-azure-logo.png
:alt: AWX login screen displaying the Microsoft Azure logo for authentication.
For application registering basics in Azure AD, refer to the `Azure AD Identity Platform (v2)`_ overview.
@@ -102,6 +103,7 @@ SAML settings
SAML allows the exchange of authentication and authorization data between an Identity Provider (IdP - a system of servers that provide the Single Sign On service) and a Service Provider (in this case, AWX). AWX can be configured to talk with SAML in order to authenticate (create/login/logout) AWX users. User Team and Organization membership can be embedded in the SAML response to AWX.
.. image:: ../common/images/configure-awx-auth-saml-topology.png
:alt: Diagram depicting SAML topology for AWX.
The following instructions describe AWX as the service provider.
@@ -122,6 +124,7 @@ To setup SAML authentication:
In this example, the Service Provider is the AWX cluster, and therefore, the ID is set to the AWX Cluster FQDN.
.. image:: ../common/images/configure-awx-auth-saml-spentityid.png
:alt: Configuring SAML Service Provider Entity ID in AWX.
5. Create a server certificate for the Ansible cluster. Typically when an Ansible cluster is configured, AWX nodes will be configured to handle HTTP traffic only and the load balancer will be an SSL Termination Point. In this case, an SSL certificate is required for the load balancer, and not for the individual AWX Cluster Nodes. SSL can either be enabled or disabled per individual AWX node, but should be disabled when using an SSL terminated load balancer. It is recommended to use a non-expiring self signed certificate to avoid periodically updating certificates. This way, authentication will not fail in case someone forgets to update the certificate.
@@ -132,6 +135,7 @@ In this example, the Service Provider is the AWX cluster, and therefore, the ID
If you are using a CA bundle with your certificate, include the entire bundle in this field.
.. image:: ../common/images/configure-awx-auth-saml-cert.png
:alt: Configuring SAML Service Provider Public Certificate in AWX.
As an example for public certs:
@@ -167,6 +171,7 @@ As an example for private keys:
For example:
.. image:: ../common/images/configure-awx-auth-saml-org-info.png
:alt: Configuring SAML Organization information in AWX.
.. note::
These fields are required in order to properly configure SAML within AWX.
@@ -183,6 +188,7 @@ For example:
For example:
.. image:: ../common/images/configure-awx-auth-saml-techcontact-info.png
:alt: Configuring SAML Technical Contact information in AWX.
9. Provide the IdP with the support contact information in the **SAML Service Provider Support Contact** field. Do not remove the contents of this field.
@@ -196,6 +202,7 @@ For example:
For example:
.. image:: ../common/images/configure-awx-auth-saml-suppcontact-info.png
:alt: Configuring SAML Support Contact information in AWX.
10. In the **SAML Enabled Identity Providers** field, provide information on how to connect to each Identity Provider listed. AWX expects the following SAML attributes in the example below:
@@ -238,6 +245,7 @@ Configure the required keys for each IDp:
}
.. image:: ../common/images/configure-awx-auth-saml-idps.png
:alt: Configuring SAML Identity Providers (IdPs) in AWX.
.. warning::
@@ -249,6 +257,7 @@ Configure the required keys for each IDp:
The IdP provides the email, last name and firstname using the well known SAML urn. The IdP uses a custom SAML attribute to identify a user, which is an attribute that AWX is unable to read. Instead, AWX can understand the unique identifier name, which is the URN. Use the URN listed in the SAML “Name” attribute for the user attributes as shown in the example below.
.. image:: ../common/images/configure-awx-auth-saml-idps-urn.png
:alt: Configuring SAML Identity Providers (IdPs) in AWX using URNs.
11. Optionally provide the **SAML Organization Map**. For further detail, see :ref:`ag_org_team_maps`.
@@ -479,6 +488,7 @@ Example::
Alternatively, logout of AWX and the login screen will now display the SAML logo to indicate it as a alternate method of logging into AWX.
.. image:: ../common/images/configure-awx-auth-saml-logo.png
:alt: AWX login screen displaying the SAML logo for authentication.
Transparent SAML Logins
@@ -495,6 +505,7 @@ For transparent logins to work, you must first get IdP-initiated logins to work.
2. Once this is working, specify the redirect URL for non-logged-in users to somewhere other than the default AWX login page by using the **Login redirect override URL** field in the Miscellaneous Authentication settings window of the **Settings** menu, accessible from the left navigation bar. This should be set to ``/sso/login/saml/?idp=<name-of-your-idp>`` for transparent SAML login, as shown in the example.
.. image:: ../common/images/configure-awx-system-login-redirect-url.png
:alt: Configuring the login redirect URL in AWX Miscellaneous Authentication Settings.
.. note::
@@ -537,6 +548,7 @@ Terminal Access Controller Access-Control System Plus (TACACS+) is a protocol th
- **TACACS+ Authentication Protocol**: The protocol used by TACACS+ client. Options are **ascii** or **pap**.
.. image:: ../common/images/configure-awx-auth-tacacs.png
:alt: TACACS+ configuration details in AWX settings.
4. Click **Save** when done.
@@ -563,6 +575,7 @@ To configure OIDC in AWX:
The example below shows specific values associated with GitHub as the generic IdP:
.. image:: ../common/images/configure-awx-auth-oidc.png
:alt: OpenID Connect (OIDC) configuration details in AWX settings.
4. Click **Save** when done.
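For reference, the four OIDC fields correspond to settings that can also be inspected or set through the API; the client ID, secret, and endpoint below are placeholders, and the setting names should be confirmed for your AWX version::

    {
      "SOCIAL_AUTH_OIDC_KEY": "my-client-id",
      "SOCIAL_AUTH_OIDC_SECRET": "my-client-secret",
      "SOCIAL_AUTH_OIDC_OIDC_ENDPOINT": "https://idp.example.com/realms/awx",
      "SOCIAL_AUTH_OIDC_VERIFY_SSL": true
    }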
@@ -574,4 +587,4 @@ The example below shows specific values associated to GitHub as the generic IdP:
5. To verify that the authentication was configured correctly, log out of AWX; the login screen now displays the OIDC logo to indicate it as an alternate method of logging into AWX.
.. image:: ../common/images/configure-awx-auth-oidc-logo.png
:alt: AWX login screen displaying the OpenID Connect (OIDC) logo for authentication.


@@ -1,4 +1,3 @@
.. _ag_instances:
Managing Capacity With Instances
@@ -58,6 +57,7 @@ Manage instances
Click **Instances** from the left side navigation menu to access the Instances list.
.. image:: ../common/images/instances_list_view.png
:alt: List view of instances in AWX
The Instances list displays all the current nodes in your topology, along with relevant details:
@@ -83,6 +83,7 @@ The Instances list displays all the current nodes in your topology, along with r
From this page, you can add or remove nodes, or run health checks on them. Use the check boxes next to an instance to select it for removal or for a health check. When a button is grayed out, you do not have permission for that particular action; contact your administrator to grant you the required level of access. If you are able to remove an instance, you will receive a confirmation prompt like the one below:
.. image:: ../common/images/instances_delete_prompt.png
:alt: Prompt for deleting instances in AWX.
.. note::
@@ -95,6 +96,7 @@ Click **Remove** to confirm.
When running a health check on an instance, a message at the top of the Details page indicates that the health check is in progress.
.. image:: ../common/images/instances_health_check.png
:alt: Health check for instances in AWX
Click **Reload** to refresh the instance status.
@@ -103,10 +105,12 @@ Click **Reload** to refresh the instance status.
Health checks are run asynchronously, and it may take up to a minute for the instance status to update, even with a refresh. The status may or may not change after the health check. While a health check task is running, a timer/clock icon displays at the bottom of the Details page next to the date and time stamp of the last known health check.
.. image:: ../common/images/instances_health_check_pending.png
:alt: Health check for instance still in pending state.
The following example shows a health check in which the status updates with an error on node 'one':
.. image:: ../common/images/topology-viewer-instance-with-errors.png
:alt: Health check showing an error in one of the instances.
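Health checks can also be requested outside the UI; the sketch below assumes API access, and the hostname, instance ID, and token are placeholders::

    # Find the instance ID, then request a health check for it.
    curl -k -H "Authorization: Bearer $AWX_TOKEN" \
        "https://awx.example.com/api/v2/instances/?hostname=node-one.example.com"
    curl -k -X POST -H "Authorization: Bearer $AWX_TOKEN" \
        https://awx.example.com/api/v2/instances/42/health_check/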
Add an instance
@@ -119,6 +123,7 @@ One of the ways to expand capacity is to create an instance, which serves as a n
2. In the Instances list view, click the **Add** button to open the Create new Instance window.
.. image:: ../common/images/instances_create_new.png
:alt: Create a new instance form.
An instance has several attributes that may be configured:
@@ -134,6 +139,7 @@ An instance has several attributes that may be configured:
Upon successful creation, the Details page of the created instance opens.
.. image:: ../common/images/instances_create_details.png
:alt: Details of the newly created instance.
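As a rough sketch, a comparable execution node can also be created by posting to the instances API; the hostname, listener port, and token are placeholders, and the accepted fields depend on your AWX version::

    curl -k -X POST https://awx.example.com/api/v2/instances/ \
        -H "Authorization: Bearer $AWX_TOKEN" \
        -H "Content-Type: application/json" \
        -d '{"hostname": "newnode.example.com", "node_type": "execution", "listener_port": 27199}'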
.. note::
@@ -142,6 +148,7 @@ Upon successful creation, the Details of the created instance opens.
4. Click the download button next to the **Install Bundle** field to download the tarball that contains this new instance and the files needed to install the node into the mesh.
.. image:: ../common/images/instances_install_bundle.png
:alt: Instance details showing the Download button in the Install Bundle field of the Details tab.
5. Extract the downloaded ``tar.gz`` file in the location where you downloaded it. The install bundle contains YAML files, certificates, and keys that will be used in the installation process.
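As a sketch of what this step typically looks like on the machine that will run the installation (the bundle file name is a placeholder, and the playbook name may differ between releases)::

    # Unpack the install bundle and change into it.
    tar -xzf newnode.example.com_install_bundle.tar.gz
    cd newnode.example.com_install_bundle
    # Install the collections the bundle depends on, then run the included
    # playbook against the generated inventory file.
    ansible-galaxy collection install -r requirements.yml
    ansible-playbook -i inventory.yml install_receptor.yml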
@@ -179,6 +186,7 @@ The content of the ``inventory.yml`` file serves as a template and contains vari
.. image:: ../common/images/instances_peers_tab.png
:alt: "Peers" tab showing two peers.
You may run a health check by selecting the node and clicking the **Run health check** button from its Details page.

(The remaining changes in this diff are updated binary image files, shown only as before/after sizes; some files were omitted because the diff is too large.)