Compare commits

40 Commits

| SHA1 |
|---|
| fb04e5d9f6 |
| 478e2cb28d |
| 2ac304d289 |
| 3e5851f3af |
| adb1b12074 |
| 8fae20c48a |
| ec364cc60e |
| 1cfd51764e |
| 0b8fedfd04 |
| 72a8173462 |
| 873b1fbe07 |
| 1f36e84b45 |
| 8c4bff2b86 |
| 14f636af84 |
| 0057c8daf6 |
| d8a28b3c06 |
| 40c2b700fe |
| 71d548f9e5 |
| dd98963f86 |
| 4b467dfd8d |
| 456b56778e |
| 5b3cb20f92 |
| d7086a3c88 |
| 21e7ab078c |
| 946ca0b3b8 |
| b831dbd608 |
| 943e455f9d |
| 53bc88abe2 |
| 3b4d95633e |
| 93c329d9d5 |
| f4c53aaf22 |
| 333ef76cbd |
| fc0b58fd04 |
| bef0a8b23a |
| a5f33456b6 |
| 21fb395912 |
| 44255f378d |
| 71a6d48612 |
| b7e5f5d1e1 |
| b6b167627c |
**.github/triage_replies.md** (4 changes, vendored)

```diff
@@ -7,8 +7,8 @@
 ## PRs/Issues
 
-### Visit our mailing list
-- Hello, this appears to be less of a bug report or feature request and more of a question. Could you please ask this on our mailing list? See https://github.com/ansible/awx/#get-involved for information for ways to connect with us.
+### Visit the Forum or Matrix
+- Hello, this appears to be less of a bug report or feature request and more of a question. Could you please ask this on either the [Ansible AWX channel on Matrix](https://matrix.to/#/#awx:ansible.com) or the [Ansible Community Forum](https://forum.ansible.com/tag/awx)?
 
 ### Denied Submission
```
**.github/workflows/ci.yml** (2 additions, vendored)

```diff
@@ -20,6 +20,8 @@ jobs:
       tests:
         - name: api-test
           command: /start_tests.sh
+        - name: api-migrations
+          command: /start_tests.sh test_migrations
         - name: api-lint
           command: /var/lib/awx/venv/awx/bin/tox -e linters
         - name: api-swagger
```
**.github/workflows/promote.yml** (4 changes, vendored)

```diff
@@ -13,7 +13,7 @@ permissions:
 
 jobs:
   promote:
-    if: endsWith(github.repository, '/awx')
+    if: endsWith(github.repository, '/awx')
     runs-on: ubuntu-latest
    steps:
       - name: Checkout awx
@@ -46,7 +46,7 @@ jobs:
           COLLECTION_TEMPLATE_VERSION: true
         run: |
           make build_collection
-          if [ "$(curl --head -sw '%{http_code}' https://galaxy.ansible.com/download/${{ env.collection_namespace }}-awx-${{ github.event.release.tag_name }}.tar.gz | tail -1)" == "302" ] ; then \
+          if [ "$(curl -L --head -sw '%{http_code}' https://galaxy.ansible.com/download/${{ env.collection_namespace }}-awx-${{ github.event.release.tag_name }}.tar.gz | tail -1)" == "302" ] ; then \
             echo "Galaxy release already done"; \
           else \
             ansible-galaxy collection publish \
```
**Makefile** (6 additions)

```diff
@@ -324,6 +324,12 @@ test:
 	cd awxkit && $(VENV_BASE)/awx/bin/tox -re py3
 	awx-manage check_migrations --dry-run --check -n 'missing_migration_file'
 
+test_migrations:
+	if [ "$(VENV_BASE)" ]; then \
+		. $(VENV_BASE)/awx/bin/activate; \
+	fi; \
+	PYTHONDONTWRITEBYTECODE=1 py.test -p no:cacheprovider --migrations -m migration_test $(PYTEST_ARGS) $(TEST_DIRS)
+
 ## Runs AWX_DOCKER_CMD inside a new docker container.
 docker-runner:
 	docker run -u $(shell id -u) --rm -v $(shell pwd):/awx_devel/:Z --workdir=/awx_devel $(DEVEL_IMAGE_NAME) $(AWX_DOCKER_CMD)
```
```diff
@@ -1,4 +1,4 @@
 ---
 collections:
   - name: ansible.receptor
-    version: 2.0.0
+    version: 2.0.2
```
```diff
@@ -128,6 +128,10 @@ logger = logging.getLogger('awx.api.views')
 
 
 def unpartitioned_event_horizon(cls):
+    with connection.cursor() as cursor:
+        cursor.execute(f"SELECT 1 FROM INFORMATION_SCHEMA.TABLES WHERE table_name = '_unpartitioned_{cls._meta.db_table}';")
+        if not cursor.fetchone():
+            return 0
     with connection.cursor() as cursor:
         try:
             cursor.execute(f'SELECT MAX(id) FROM _unpartitioned_{cls._meta.db_table}')
```
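The added guard can be read in isolation: before computing `MAX(id)` on a legacy `_unpartitioned_*` table, the function now checks INFORMATION_SCHEMA so installs that never had that table short-circuit to 0. A minimal standalone sketch of that existence check (the table name below is illustrative):

```python
from django.db import connection


def table_exists(table_name):
    """Return True if the named table exists, per INFORMATION_SCHEMA."""
    with connection.cursor() as cursor:
        cursor.execute("SELECT 1 FROM INFORMATION_SCHEMA.TABLES WHERE table_name = %s;", [table_name])
        return cursor.fetchone() is not None


# e.g. table_exists('_unpartitioned_main_jobevent') is False on a fresh install,
# so unpartitioned_event_horizon() can return 0 without ever querying MAX(id).
```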
```diff
@@ -79,7 +79,6 @@ __all__ = [
     'get_user_queryset',
     'check_user_access',
     'check_user_access_with_errors',
-    'user_accessible_objects',
     'consumer_access',
 ]
 
@@ -136,10 +135,6 @@ def register_access(model_class, access_class):
     access_registry[model_class] = access_class
 
 
-def user_accessible_objects(user, role_name):
-    return ResourceMixin._accessible_objects(User, user, role_name)
-
-
 def get_user_queryset(user, model_class):
     """
     Return a queryset for the given model_class containing only the instances
```
```diff
@@ -694,16 +694,18 @@ register(
     category_slug='logging',
 )
 register(
-    'LOG_AGGREGATOR_MAX_DISK_USAGE_GB',
+    'LOG_AGGREGATOR_ACTION_QUEUE_SIZE',
     field_class=fields.IntegerField,
-    default=1,
+    default=131072,
     min_value=1,
-    label=_('Maximum disk persistence for external log aggregation (in GB)'),
+    label=_('Maximum number of messages that can be stored in the log action queue'),
     help_text=_(
-        'Amount of data to store (in gigabytes) during an outage of '
-        'the external log aggregator (defaults to 1). '
-        'Equivalent to the rsyslogd queue.maxdiskspace setting for main_queue. '
-        'Notably, this is used for the rsyslogd main queue (for input messages).'
+        'Defines how large the rsyslog action queue can grow in number of messages '
+        'stored. This can have an impact on memory utilization. When the queue '
+        'reaches 75% of this number, the queue will start writing to disk '
+        '(queue.highWatermark in rsyslog). When it reaches 90%, NOTICE, INFO, and '
+        'DEBUG messages will start to be discarded (queue.discardMark with '
+        'queue.discardSeverity=5).'
     ),
     category=_('Logging'),
     category_slug='logging',
@@ -718,8 +720,7 @@ register(
         'Amount of data to store (in gigabytes) if an rsyslog action takes time '
         'to process an incoming message (defaults to 1). '
         'Equivalent to the rsyslogd queue.maxdiskspace setting on the action (e.g. omhttp). '
-        'Like LOG_AGGREGATOR_MAX_DISK_USAGE_GB, it stores files in the directory specified '
-        'by LOG_AGGREGATOR_MAX_DISK_USAGE_PATH.'
+        'It stores files in the directory specified by LOG_AGGREGATOR_MAX_DISK_USAGE_PATH.'
     ),
     category=_('Logging'),
     category_slug='logging',
```
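The 75%/90% thresholds in the new help text map directly onto the rsyslog queue parameters asserted in the settings tests further down. A quick arithmetic sketch (plain Python, not AWX code) for the default queue size:

```python
queue_size = 131072  # default LOG_AGGREGATOR_ACTION_QUEUE_SIZE

high_watermark = int(queue_size * 0.75)  # queue.highWatermark: queue starts spooling to disk
discard_mark = int(queue_size * 0.90)    # queue.discardMark: NOTICE/INFO/DEBUG dropped (discardSeverity=5)

assert high_watermark == 98304   # matches queue.highwaterMark in the rsyslog test data below
assert discard_mark == 117964    # matches queue.discardMark in the rsyslog test data below
```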
```diff
@@ -2,29 +2,29 @@ from .plugin import CredentialPlugin
 
 from django.conf import settings
 from django.utils.translation import gettext_lazy as _
 
-try:
-    from delinea.secrets.vault import SecretsVault
-except ImportError:
-    from thycotic.secrets.vault import SecretsVault
+from delinea.secrets.vault import PasswordGrantAuthorizer, SecretsVault
+from base64 import b64decode
 
 dsv_inputs = {
     'fields': [
         {
             'id': 'tenant',
             'label': _('Tenant'),
-            'help_text': _('The tenant e.g. "ex" when the URL is https://ex.secretservercloud.com'),
+            'help_text': _('The tenant e.g. "ex" when the URL is https://ex.secretsvaultcloud.com'),
             'type': 'string',
         },
         {
             'id': 'tld',
             'label': _('Top-level Domain (TLD)'),
-            'help_text': _('The TLD of the tenant e.g. "com" when the URL is https://ex.secretservercloud.com'),
-            'choices': ['ca', 'com', 'com.au', 'com.sg', 'eu'],
+            'help_text': _('The TLD of the tenant e.g. "com" when the URL is https://ex.secretsvaultcloud.com'),
+            'choices': ['ca', 'com', 'com.au', 'eu'],
             'default': 'com',
         },
-        {'id': 'client_id', 'label': _('Client ID'), 'type': 'string'},
+        {
+            'id': 'client_id',
+            'label': _('Client ID'),
+            'type': 'string',
+        },
         {
             'id': 'client_secret',
             'label': _('Client Secret'),
@@ -45,8 +45,16 @@ dsv_inputs = {
             'help_text': _('The field to extract from the secret'),
             'type': 'string',
         },
+        {
+            'id': 'secret_decoding',
+            'label': _('Should the secret be base64 decoded?'),
+            'help_text': _('Specify whether the secret should be base64 decoded, typically used for storing files, such as SSH keys'),
+            'choices': ['No Decoding', 'Decode Base64'],
+            'type': 'string',
+            'default': 'No Decoding',
+        },
     ],
-    'required': ['tenant', 'client_id', 'client_secret', 'path', 'secret_field'],
+    'required': ['tenant', 'client_id', 'client_secret', 'path', 'secret_field', 'secret_decoding'],
 }
 
 if settings.DEBUG:
@@ -55,12 +63,32 @@ if settings.DEBUG:
             'id': 'url_template',
             'label': _('URL template'),
             'type': 'string',
-            'default': 'https://{}.secretsvaultcloud.{}/v1',
+            'default': 'https://{}.secretsvaultcloud.{}',
         }
     )
 
-dsv_plugin = CredentialPlugin(
-    'Thycotic DevOps Secrets Vault',
-    dsv_inputs,
-    lambda **kwargs: SecretsVault(**{k: v for (k, v) in kwargs.items() if k in [field['id'] for field in dsv_inputs['fields']]}).get_secret(kwargs['path'])['data'][kwargs['secret_field']],  # fmt: skip
-)
+
+def dsv_backend(**kwargs):
+    tenant_name = kwargs['tenant']
+    tenant_tld = kwargs.get('tld', 'com')
+    tenant_url_template = kwargs.get('url_template', 'https://{}.secretsvaultcloud.{}')
+    client_id = kwargs['client_id']
+    client_secret = kwargs['client_secret']
+    secret_path = kwargs['path']
+    secret_field = kwargs['secret_field']
+    # providing a default value to remain backward compatible for secrets that have not specified this option
+    secret_decoding = kwargs.get('secret_decoding', 'No Decoding')
+
+    tenant_url = tenant_url_template.format(tenant_name, tenant_tld.strip("."))
+
+    authorizer = PasswordGrantAuthorizer(tenant_url, client_id, client_secret)
+    dsv_secret = SecretsVault(tenant_url, authorizer).get_secret(secret_path)
+
+    # files can be uploaded base64 decoded to DSV and thus decoding it only, when asked for
+    if secret_decoding == 'Decode Base64':
+        return b64decode(dsv_secret['data'][secret_field]).decode()
+
+    return dsv_secret['data'][secret_field]
+
+
+dsv_plugin = CredentialPlugin(name='Thycotic DevOps Secrets Vault', inputs=dsv_inputs, backend=dsv_backend)
```
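For orientation, a hypothetical call to the new backend (every credential value below is a placeholder, not a real tenant or secret):

```python
# Returns the 'private_key' field of the secret at /devops/ssh from
# https://ex.secretsvaultcloud.com, base64-decoding it on the way out.
key_material = dsv_backend(
    tenant='ex',
    tld='com',
    client_id='placeholder-client-id',
    client_secret='placeholder-client-secret',
    path='/devops/ssh',
    secret_field='private_key',
    secret_decoding='Decode Base64',
)
```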
```diff
@@ -37,8 +37,11 @@ class Control(object):
     def running(self, *args, **kwargs):
         return self.control_with_reply('running', *args, **kwargs)
 
-    def cancel(self, task_ids, *args, **kwargs):
-        return self.control_with_reply('cancel', *args, extra_data={'task_ids': task_ids}, **kwargs)
+    def cancel(self, task_ids, with_reply=True):
+        if with_reply:
+            return self.control_with_reply('cancel', extra_data={'task_ids': task_ids})
+        else:
+            self.control({'control': 'cancel', 'task_ids': task_ids, 'reply_to': None}, extra_data={'task_ids': task_ids})
 
     def schedule(self, *args, **kwargs):
         return self.control_with_reply('schedule', *args, **kwargs)
```
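A sketch of the two modes the new signature enables (assuming `Control` is the dispatcher control class shown above; the import path, queue name, and task id are illustrative):

```python
from awx.main.dispatch.control import Control  # import path assumed from this branch

ctl = Control('dispatcher')
ctl.cancel(['3f6b9a4e-task-uuid'])                    # blocks waiting for a confirmation reply
ctl.cancel(['3f6b9a4e-task-uuid'], with_reply=False)  # fire-and-forget; usable inside a transaction
```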
```diff
@@ -89,8 +89,9 @@ class AWXConsumerBase(object):
             if task_ids and not msg:
                 logger.info(f'Could not locate running tasks to cancel with ids={task_ids}')
 
-            with pg_bus_conn() as conn:
-                conn.notify(reply_queue, json.dumps(msg))
+            if reply_queue is not None:
+                with pg_bus_conn() as conn:
+                    conn.notify(reply_queue, json.dumps(msg))
         elif control == 'reload':
             for worker in self.pool.workers:
                 worker.quit()
```
```diff
@@ -9,6 +9,7 @@ from django.conf import settings
 # AWX
 import awx.main.fields
 from awx.main.models import Host
+from ._sqlite_helper import dbawaremigrations
 
 
 def replaces():
@@ -131,9 +132,11 @@ class Migration(migrations.Migration):
                 help_text='If enabled, Tower will act as an Ansible Fact Cache Plugin; persisting facts at the end of a playbook run to the database and caching facts for use by Ansible.',
             ),
         ),
-        migrations.RunSQL(
+        dbawaremigrations.RunSQL(
             sql="CREATE INDEX host_ansible_facts_default_gin ON {} USING gin(ansible_facts jsonb_path_ops);".format(Host._meta.db_table),
             reverse_sql='DROP INDEX host_ansible_facts_default_gin;',
+            sqlite_sql=dbawaremigrations.RunSQL.noop,
+            sqlite_reverse_sql=dbawaremigrations.RunSQL.noop,
         ),
         # SCM file-based inventories
         migrations.AddField(
```
```diff
@@ -3,24 +3,27 @@ from __future__ import unicode_literals
 
 from django.db import migrations
 
+from ._sqlite_helper import dbawaremigrations
+
+tables_to_drop = [
+    'celery_taskmeta',
+    'celery_tasksetmeta',
+    'djcelery_crontabschedule',
+    'djcelery_intervalschedule',
+    'djcelery_periodictask',
+    'djcelery_periodictasks',
+    'djcelery_taskstate',
+    'djcelery_workerstate',
+    'djkombu_message',
+    'djkombu_queue',
+]
+postgres_sql = ([("DROP TABLE IF EXISTS {} CASCADE;".format(table))] for table in tables_to_drop)
+sqlite_sql = ([("DROP TABLE IF EXISTS {};".format(table))] for table in tables_to_drop)
+
 
 class Migration(migrations.Migration):
     dependencies = [
         ('main', '0049_v330_validate_instance_capacity_adjustment'),
     ]
 
-    operations = [
-        migrations.RunSQL([("DROP TABLE IF EXISTS {} CASCADE;".format(table))])
-        for table in (
-            'celery_taskmeta',
-            'celery_tasksetmeta',
-            'djcelery_crontabschedule',
-            'djcelery_intervalschedule',
-            'djcelery_periodictask',
-            'djcelery_periodictasks',
-            'djcelery_taskstate',
-            'djcelery_workerstate',
-            'djkombu_message',
-            'djkombu_queue',
-        )
-    ]
+    operations = [dbawaremigrations.RunSQL(p, sqlite_sql=s) for p, s in zip(postgres_sql, sqlite_sql)]
```
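The refactor zips two parallel generator expressions so each table gets both a Postgres statement (with CASCADE) and a SQLite statement (SQLite has no CASCADE). The pattern in isolation, with an abbreviated table list:

```python
tables = ['celery_taskmeta', 'djkombu_queue']  # abbreviated for illustration
pg = (["DROP TABLE IF EXISTS {} CASCADE;".format(t)] for t in tables)
lite = (["DROP TABLE IF EXISTS {};".format(t)] for t in tables)

for p, s in zip(pg, lite):
    print(p[0], '|', s[0])
# DROP TABLE IF EXISTS celery_taskmeta CASCADE; | DROP TABLE IF EXISTS celery_taskmeta;
# DROP TABLE IF EXISTS djkombu_queue CASCADE; | DROP TABLE IF EXISTS djkombu_queue;
```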
```diff
@@ -2,6 +2,8 @@
 
 from django.db import migrations, models, connection
 
+from ._sqlite_helper import dbawaremigrations
+
 
 def migrate_event_data(apps, schema_editor):
     # see: https://github.com/ansible/awx/issues/6010
@@ -24,6 +26,11 @@ def migrate_event_data(apps, schema_editor):
             cursor.execute(f'ALTER TABLE {tblname} ALTER COLUMN id TYPE bigint USING id::bigint;')
 
 
+def migrate_event_data_sqlite(apps, schema_editor):
+    # TODO: cmeyers fill this in
+    return
+
+
 class FakeAlterField(migrations.AlterField):
     def database_forwards(self, *args):
         # this is intentionally left blank, because we're
@@ -37,7 +44,7 @@ class Migration(migrations.Migration):
     ]
 
     operations = [
-        migrations.RunPython(migrate_event_data),
+        dbawaremigrations.RunPython(migrate_event_data, sqlite_code=migrate_event_data_sqlite),
         FakeAlterField(
             model_name='adhoccommandevent',
             name='id',
```
```diff
@@ -1,5 +1,7 @@
 from django.db import migrations, models, connection
 
+from ._sqlite_helper import dbawaremigrations
+
 
 def migrate_event_data(apps, schema_editor):
     # see: https://github.com/ansible/awx/issues/9039
@@ -59,6 +61,10 @@ def migrate_event_data(apps, schema_editor):
         cursor.execute('DROP INDEX IF EXISTS main_jobevent_job_id_idx')
 
 
+def migrate_event_data_sqlite(apps, schema_editor):
+    return None
+
+
 class FakeAddField(migrations.AddField):
     def database_forwards(self, *args):
         # this is intentionally left blank, because we're
@@ -72,7 +78,7 @@ class Migration(migrations.Migration):
     ]
 
     operations = [
-        migrations.RunPython(migrate_event_data),
+        dbawaremigrations.RunPython(migrate_event_data, sqlite_code=migrate_event_data_sqlite),
         FakeAddField(
             model_name='jobevent',
             name='job_created',
```
```diff
@@ -3,6 +3,8 @@
 import awx.main.models.notifications
 from django.db import migrations, models
 
+from ._sqlite_helper import dbawaremigrations
+
 
 class Migration(migrations.Migration):
     dependencies = [
@@ -104,11 +106,12 @@ class Migration(migrations.Migration):
             name='deleted_actor',
             field=models.JSONField(null=True),
         ),
-        migrations.RunSQL(
+        dbawaremigrations.RunSQL(
             """
             ALTER TABLE main_activitystream RENAME setting TO setting_old;
             ALTER TABLE main_activitystream ALTER COLUMN setting_old DROP NOT NULL;
             """,
+            sqlite_sql="ALTER TABLE main_activitystream RENAME setting TO setting_old",
             state_operations=[
                 migrations.RemoveField(
                     model_name='activitystream',
@@ -121,11 +124,12 @@ class Migration(migrations.Migration):
             name='setting',
             field=models.JSONField(blank=True, default=dict),
         ),
-        migrations.RunSQL(
+        dbawaremigrations.RunSQL(
             """
             ALTER TABLE main_job RENAME survey_passwords TO survey_passwords_old;
             ALTER TABLE main_job ALTER COLUMN survey_passwords_old DROP NOT NULL;
             """,
+            sqlite_sql="ALTER TABLE main_job RENAME survey_passwords TO survey_passwords_old",
             state_operations=[
                 migrations.RemoveField(
                     model_name='job',
@@ -138,11 +142,12 @@ class Migration(migrations.Migration):
             name='survey_passwords',
             field=models.JSONField(blank=True, default=dict, editable=False),
         ),
-        migrations.RunSQL(
+        dbawaremigrations.RunSQL(
             """
             ALTER TABLE main_joblaunchconfig RENAME char_prompts TO char_prompts_old;
             ALTER TABLE main_joblaunchconfig ALTER COLUMN char_prompts_old DROP NOT NULL;
             """,
+            sqlite_sql="ALTER TABLE main_joblaunchconfig RENAME char_prompts TO char_prompts_old",
             state_operations=[
                 migrations.RemoveField(
                     model_name='joblaunchconfig',
@@ -155,11 +160,12 @@ class Migration(migrations.Migration):
             name='char_prompts',
             field=models.JSONField(blank=True, default=dict),
         ),
-        migrations.RunSQL(
+        dbawaremigrations.RunSQL(
             """
             ALTER TABLE main_joblaunchconfig RENAME survey_passwords TO survey_passwords_old;
             ALTER TABLE main_joblaunchconfig ALTER COLUMN survey_passwords_old DROP NOT NULL;
             """,
+            sqlite_sql="ALTER TABLE main_joblaunchconfig RENAME survey_passwords TO survey_passwords_old;",
             state_operations=[
                 migrations.RemoveField(
                     model_name='joblaunchconfig',
@@ -172,11 +178,12 @@ class Migration(migrations.Migration):
             name='survey_passwords',
             field=models.JSONField(blank=True, default=dict, editable=False),
         ),
-        migrations.RunSQL(
+        dbawaremigrations.RunSQL(
             """
             ALTER TABLE main_notification RENAME body TO body_old;
             ALTER TABLE main_notification ALTER COLUMN body_old DROP NOT NULL;
             """,
+            sqlite_sql="ALTER TABLE main_notification RENAME body TO body_old",
             state_operations=[
                 migrations.RemoveField(
                     model_name='notification',
@@ -189,11 +196,12 @@ class Migration(migrations.Migration):
             name='body',
             field=models.JSONField(blank=True, default=dict),
         ),
-        migrations.RunSQL(
+        dbawaremigrations.RunSQL(
             """
             ALTER TABLE main_unifiedjob RENAME job_env TO job_env_old;
             ALTER TABLE main_unifiedjob ALTER COLUMN job_env_old DROP NOT NULL;
             """,
+            sqlite_sql="ALTER TABLE main_unifiedjob RENAME job_env TO job_env_old",
             state_operations=[
                 migrations.RemoveField(
                     model_name='unifiedjob',
@@ -206,11 +214,12 @@ class Migration(migrations.Migration):
             name='job_env',
             field=models.JSONField(blank=True, default=dict, editable=False),
         ),
-        migrations.RunSQL(
+        dbawaremigrations.RunSQL(
             """
             ALTER TABLE main_workflowjob RENAME char_prompts TO char_prompts_old;
             ALTER TABLE main_workflowjob ALTER COLUMN char_prompts_old DROP NOT NULL;
             """,
+            sqlite_sql="ALTER TABLE main_workflowjob RENAME char_prompts TO char_prompts_old",
             state_operations=[
                 migrations.RemoveField(
                     model_name='workflowjob',
@@ -223,11 +232,12 @@ class Migration(migrations.Migration):
             name='char_prompts',
             field=models.JSONField(blank=True, default=dict),
         ),
-        migrations.RunSQL(
+        dbawaremigrations.RunSQL(
             """
             ALTER TABLE main_workflowjob RENAME survey_passwords TO survey_passwords_old;
             ALTER TABLE main_workflowjob ALTER COLUMN survey_passwords_old DROP NOT NULL;
             """,
+            sqlite_sql="ALTER TABLE main_workflowjob RENAME survey_passwords TO survey_passwords_old",
             state_operations=[
                 migrations.RemoveField(
                     model_name='workflowjob',
@@ -240,11 +250,12 @@ class Migration(migrations.Migration):
             name='survey_passwords',
             field=models.JSONField(blank=True, default=dict, editable=False),
         ),
-        migrations.RunSQL(
+        dbawaremigrations.RunSQL(
             """
             ALTER TABLE main_workflowjobnode RENAME char_prompts TO char_prompts_old;
             ALTER TABLE main_workflowjobnode ALTER COLUMN char_prompts_old DROP NOT NULL;
             """,
+            sqlite_sql="ALTER TABLE main_workflowjobnode RENAME char_prompts TO char_prompts_old",
             state_operations=[
                 migrations.RemoveField(
                     model_name='workflowjobnode',
@@ -257,11 +268,12 @@ class Migration(migrations.Migration):
             name='char_prompts',
             field=models.JSONField(blank=True, default=dict),
         ),
-        migrations.RunSQL(
+        dbawaremigrations.RunSQL(
             """
             ALTER TABLE main_workflowjobnode RENAME survey_passwords TO survey_passwords_old;
             ALTER TABLE main_workflowjobnode ALTER COLUMN survey_passwords_old DROP NOT NULL;
             """,
+            sqlite_sql="ALTER TABLE main_workflowjobnode RENAME survey_passwords TO survey_passwords_old",
             state_operations=[
                 migrations.RemoveField(
                     model_name='workflowjobnode',
```
```diff
@@ -3,6 +3,8 @@ from __future__ import unicode_literals
 
 from django.db import migrations
 
+from ._sqlite_helper import dbawaremigrations
+
 
 def delete_taggit_contenttypes(apps, schema_editor):
     ContentType = apps.get_model('contenttypes', 'ContentType')
@@ -20,8 +22,8 @@ class Migration(migrations.Migration):
     ]
 
     operations = [
-        migrations.RunSQL("DROP TABLE IF EXISTS taggit_tag CASCADE;"),
-        migrations.RunSQL("DROP TABLE IF EXISTS taggit_taggeditem CASCADE;"),
+        dbawaremigrations.RunSQL("DROP TABLE IF EXISTS taggit_tag CASCADE;", sqlite_sql="DROP TABLE IF EXISTS taggit_tag;"),
+        dbawaremigrations.RunSQL("DROP TABLE IF EXISTS taggit_taggeditem CASCADE;", sqlite_sql="DROP TABLE IF EXISTS taggit_taggeditem;"),
         migrations.RunPython(delete_taggit_contenttypes),
         migrations.RunPython(delete_taggit_migration_records),
     ]
```
**awx/main/migrations/_sqlite_helper.py** (new file, 61 lines)

```python
from django.db import migrations


class RunSQL(migrations.operations.special.RunSQL):
    """
    Bit of a hack here. Django actually wants this decision made in the router
    and we can pass **hints.
    """

    def __init__(self, *args, **kwargs):
        if 'sqlite_sql' not in kwargs:
            raise ValueError("sqlite_sql parameter required")
        sqlite_sql = kwargs.pop('sqlite_sql')

        self.sqlite_sql = sqlite_sql
        self.sqlite_reverse_sql = kwargs.pop('sqlite_reverse_sql', None)
        super().__init__(*args, **kwargs)

    def database_forwards(self, app_label, schema_editor, from_state, to_state):
        if not schema_editor.connection.vendor.startswith('postgres'):
            self.sql = self.sqlite_sql or migrations.RunSQL.noop
        super().database_forwards(app_label, schema_editor, from_state, to_state)

    def database_backwards(self, app_label, schema_editor, from_state, to_state):
        if not schema_editor.connection.vendor.startswith('postgres'):
            self.reverse_sql = self.sqlite_reverse_sql or migrations.RunSQL.noop
        super().database_backwards(app_label, schema_editor, from_state, to_state)


class RunPython(migrations.operations.special.RunPython):
    """
    Bit of a hack here. Django actually wants this decision made in the router
    and we can pass **hints.
    """

    def __init__(self, *args, **kwargs):
        if 'sqlite_code' not in kwargs:
            raise ValueError("sqlite_code parameter required")
        sqlite_code = kwargs.pop('sqlite_code')

        self.sqlite_code = sqlite_code
        self.sqlite_reverse_code = kwargs.pop('sqlite_reverse_code', None)
        super().__init__(*args, **kwargs)

    def database_forwards(self, app_label, schema_editor, from_state, to_state):
        if not schema_editor.connection.vendor.startswith('postgres'):
            self.code = self.sqlite_code or migrations.RunPython.noop
        super().database_forwards(app_label, schema_editor, from_state, to_state)

    def database_backwards(self, app_label, schema_editor, from_state, to_state):
        if not schema_editor.connection.vendor.startswith('postgres'):
            self.reverse_code = self.sqlite_reverse_code or migrations.RunPython.noop
        super().database_backwards(app_label, schema_editor, from_state, to_state)


class _sqlitemigrations:
    RunPython = RunPython
    RunSQL = RunSQL


dbawaremigrations = _sqlitemigrations()
```
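A minimal sketch of a hypothetical migration using the helper, following the same pattern as the taggit migration above: the first positional argument is the Postgres SQL, and `sqlite_sql` supplies the variant that runs when the connection vendor is not Postgres:

```python
from django.db import migrations

from ._sqlite_helper import dbawaremigrations


class Migration(migrations.Migration):
    dependencies = [('main', '0001_initial')]  # placeholder dependency

    operations = [
        dbawaremigrations.RunSQL(
            "DROP TABLE IF EXISTS example_table CASCADE;",      # Postgres path
            sqlite_sql="DROP TABLE IF EXISTS example_table;",   # SQLite has no CASCADE
        ),
    ]
```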
```diff
@@ -57,7 +57,6 @@ from awx.main.models.ha import (  # noqa
 from awx.main.models.rbac import (  # noqa
     Role,
     batch_role_ancestor_rebuilding,
-    get_roles_on_resource,
     role_summary_fields_generator,
     ROLE_SINGLETON_SYSTEM_ADMINISTRATOR,
     ROLE_SINGLETON_SYSTEM_AUDITOR,
@@ -91,13 +90,12 @@ from oauth2_provider.models import Grant, RefreshToken  # noqa -- needed django-
 
 # Add custom methods to User model for permissions checks.
 from django.contrib.auth.models import User  # noqa
-from awx.main.access import get_user_queryset, check_user_access, check_user_access_with_errors, user_accessible_objects  # noqa
+from awx.main.access import get_user_queryset, check_user_access, check_user_access_with_errors  # noqa
 
 
 User.add_to_class('get_queryset', get_user_queryset)
 User.add_to_class('can_access', check_user_access)
 User.add_to_class('can_access_with_errors', check_user_access_with_errors)
-User.add_to_class('accessible_objects', user_accessible_objects)
 
 
 def convert_jsonfields():
```
```diff
@@ -19,7 +19,7 @@ from django.utils.translation import gettext_lazy as _
 
 # AWX
 from awx.main.models.base import prevent_search
-from awx.main.models.rbac import Role, RoleAncestorEntry, get_roles_on_resource
+from awx.main.models.rbac import Role, RoleAncestorEntry
 from awx.main.utils import parse_yaml_or_json, get_custom_venv_choices, get_licenser, polymorphic
 from awx.main.utils.execution_environments import get_default_execution_environment
 from awx.main.utils.encryption import decrypt_value, get_encryption_key, is_encrypted
@@ -54,10 +54,7 @@ class ResourceMixin(models.Model):
         Use instead of `MyModel.objects` when you want to only consider
         resources that a user has specific permissions for. For example:
         MyModel.accessible_objects(user, 'read_role').filter(name__istartswith='bar');
-        NOTE: This should only be used for list type things. If you have a
-        specific resource you want to check permissions on, it is more
-        performant to resolve the resource in question then call
-        `myresource.get_permissions(user)`.
+        NOTE: This should only be used for list type things.
         """
         return ResourceMixin._accessible_objects(cls, accessor, role_field)
 
@@ -86,15 +83,6 @@ class ResourceMixin(models.Model):
     def _accessible_objects(cls, accessor, role_field):
         return cls.objects.filter(pk__in=ResourceMixin._accessible_pk_qs(cls, accessor, role_field))
 
-    def get_permissions(self, accessor):
-        """
-        Returns a string list of the roles a accessor has for a given resource.
-        An accessor can be either a User, Role, or an arbitrary resource that
-        contains one or more Roles associated with it.
-        """
-
-        return get_roles_on_resource(self, accessor)
-
 
 class SurveyJobTemplateMixin(models.Model):
     class Meta:
```
```diff
@@ -1439,6 +1439,11 @@ class UnifiedJob(
         if not self.celery_task_id:
             return
         canceled = []
+        if not connection.get_autocommit():
+            # this condition is purpose-written for the task manager, when it cancels jobs in workflows
+            ControlDispatcher('dispatcher', self.controller_node).cancel([self.celery_task_id], with_reply=False)
+            return True  # task manager itself needs to act under assumption that cancel was received
+
         try:
             # Use control and reply mechanism to cancel and obtain confirmation
             timeout = 5
```
```diff
@@ -270,6 +270,9 @@ class WorkflowManager(TaskBase):
             job.status = 'failed'
             job.save(update_fields=['status', 'job_explanation'])
             job.websocket_emit_status('failed')
+            # NOTE: sending notification templates here is slightly worse performance
+            # this is not yet optimized in the same way as for the TaskManager
+            job.send_notification_templates('failed')
             ScheduleWorkflowManager().schedule()
 
     # TODO: should we emit a status on the socket here similar to tasks.py awx_periodic_scheduler() ?
@@ -430,6 +433,25 @@ class TaskManager(TaskBase):
         self.tm_models = TaskManagerModels()
         self.controlplane_ig = self.tm_models.instance_groups.controlplane_ig
 
+    def process_job_dep_failures(self, task):
+        """If job depends on a job that has failed, mark as failed and handle misc stuff."""
+        for dep in task.dependent_jobs.all():
+            # if we detect a failed or error dependency, go ahead and fail this task.
+            if dep.status in ("error", "failed"):
+                task.status = 'failed'
+                logger.warning(f'Previous task failed task: {task.id} dep: {dep.id} task manager')
+                task.job_explanation = 'Previous Task Failed: {"job_type": "%s", "job_name": "%s", "job_id": "%s"}' % (
+                    get_type_for_model(type(dep)),
+                    dep.name,
+                    dep.id,
+                )
+                task.save(update_fields=['status', 'job_explanation'])
+                task.websocket_emit_status('failed')
+                self.pre_start_failed.append(task.id)
+                return True
+
+        return False
+
     def job_blocked_by(self, task):
         # TODO: I'm not happy with this, I think blocking behavior should be decided outside of the dependency graph
         # in the old task manager this was handled as a method on each task object outside of the graph and
@@ -441,20 +463,6 @@ class TaskManager(TaskBase):
         for dep in task.dependent_jobs.all():
             if dep.status in ACTIVE_STATES:
                 return dep
-            # if we detect a failed or error dependency, go ahead and fail this
-            # task. The errback on the dependency takes some time to trigger,
-            # and we don't want the task to enter running state if its
-            # dependency has failed or errored.
-            elif dep.status in ("error", "failed"):
-                task.status = 'failed'
-                task.job_explanation = 'Previous Task Failed: {"job_type": "%s", "job_name": "%s", "job_id": "%s"}' % (
-                    get_type_for_model(type(dep)),
-                    dep.name,
-                    dep.id,
-                )
-                task.save(update_fields=['status', 'job_explanation'])
-                task.websocket_emit_status('failed')
-                return dep
 
         return None
 
@@ -474,7 +482,6 @@ class TaskManager(TaskBase):
         if self.start_task_limit == 0:
             # schedule another run immediately after this task manager
             ScheduleTaskManager().schedule()
-        from awx.main.tasks.system import handle_work_error, handle_work_success
 
         task.status = 'waiting'
 
@@ -485,7 +492,7 @@ class TaskManager(TaskBase):
                 task.job_explanation += ' '
             task.job_explanation += 'Task failed pre-start check.'
             task.save()
-            # TODO: run error handler to fail sub-tasks and send notifications
+            self.pre_start_failed.append(task.id)
         else:
             if type(task) is WorkflowJob:
                 task.status = 'running'
@@ -507,19 +514,16 @@ class TaskManager(TaskBase):
             # apply_async does a NOTIFY to the channel dispatcher is listening to
             # postgres will treat this as part of the transaction, which is what we want
             if task.status != 'failed' and type(task) is not WorkflowJob:
-                task_actual = {'type': get_type_for_model(type(task)), 'id': task.id}
                 task_cls = task._get_task_class()
                 task_cls.apply_async(
                     [task.pk],
                     opts,
                     queue=task.get_queue_name(),
                     uuid=task.celery_task_id,
-                    callbacks=[{'task': handle_work_success.name, 'kwargs': {'task_actual': task_actual}}],
-                    errbacks=[{'task': handle_work_error.name, 'kwargs': {'task_actual': task_actual}}],
                 )
 
-            # In exception cases, like a job failing pre-start checks, we send the websocket status message
-            # for jobs going into waiting, we omit this because of performance issues, as it should go to running quickly
+            # In exception cases, like a job failing pre-start checks, we send the websocket status message.
+            # For jobs going into waiting, we omit this because of performance issues, as it should go to running quickly
             if task.status != 'waiting':
                 task.websocket_emit_status(task.status)  # adds to on_commit
 
@@ -540,6 +544,11 @@ class TaskManager(TaskBase):
             if self.timed_out():
                 logger.warning("Task manager has reached time out while processing pending jobs, exiting loop early")
                 break
+
+            has_failed = self.process_job_dep_failures(task)
+            if has_failed:
+                continue
+
             blocked_by = self.job_blocked_by(task)
             if blocked_by:
                 self.subsystem_metrics.inc(f"{self.prefix}_tasks_blocked", 1)
@@ -653,6 +662,11 @@ class TaskManager(TaskBase):
                 reap_job(j, 'failed')
 
     def process_tasks(self):
+        # maintain a list of jobs that went to an early failure state,
+        # meaning the dispatcher never got these jobs,
+        # that means we have to handle notifications for those
+        self.pre_start_failed = []
+
         running_tasks = [t for t in self.all_tasks if t.status in ['waiting', 'running']]
         self.process_running_tasks(running_tasks)
         self.subsystem_metrics.inc(f"{self.prefix}_running_processed", len(running_tasks))
@@ -662,6 +676,11 @@ class TaskManager(TaskBase):
         self.process_pending_tasks(pending_tasks)
         self.subsystem_metrics.inc(f"{self.prefix}_pending_processed", len(pending_tasks))
 
+        if self.pre_start_failed:
+            from awx.main.tasks.system import handle_failure_notifications
+
+            handle_failure_notifications.delay(self.pre_start_failed)
+
     def timeout_approval_node(self, task):
         if self.timed_out():
             logger.warning("Task manager has reached time out while processing approval nodes, exiting loop early")
```
```diff
@@ -74,6 +74,8 @@ from awx.main.utils.common import (
     extract_ansible_vars,
     get_awx_version,
     create_partition,
+    ScheduleWorkflowManager,
+    ScheduleTaskManager,
 )
 from awx.conf.license import get_license
 from awx.main.utils.handlers import SpecialInventoryHandler
@@ -450,6 +452,12 @@ class BaseTask(object):
             instance.ansible_version = ansible_version_info
             instance.save(update_fields=['ansible_version'])
 
+        # Run task manager appropriately for speculative dependencies
+        if instance.unifiedjob_blocked_jobs.exists():
+            ScheduleTaskManager().schedule()
+        if instance.spawned_by_workflow:
+            ScheduleWorkflowManager().schedule()
+
     def should_use_fact_cache(self):
         return False
```
```diff
@@ -16,7 +16,9 @@ class SignalExit(Exception):
 class SignalState:
     def reset(self):
         self.sigterm_flag = False
-        self.is_active = False
+        self.sigint_flag = False
+
+        self.is_active = False  # for nested context managers
         self.original_sigterm = None
         self.original_sigint = None
         self.raise_exception = False
@@ -24,23 +26,36 @@ class SignalState:
     def __init__(self):
         self.reset()
 
-    def set_flag(self, *args):
-        """Method to pass into the python signal.signal method to receive signals"""
-        self.sigterm_flag = True
+    def raise_if_needed(self):
+        if self.raise_exception:
+            self.raise_exception = False  # so it is not raised a second time in error handling
+            raise SignalExit()
+
+    def set_sigterm_flag(self, *args):
+        self.sigterm_flag = True
+        self.raise_if_needed()
+
+    def set_sigint_flag(self, *args):
+        self.sigint_flag = True
+        self.raise_if_needed()
 
     def connect_signals(self):
         self.original_sigterm = signal.getsignal(signal.SIGTERM)
         self.original_sigint = signal.getsignal(signal.SIGINT)
-        signal.signal(signal.SIGTERM, self.set_flag)
-        signal.signal(signal.SIGINT, self.set_flag)
+        signal.signal(signal.SIGTERM, self.set_sigterm_flag)
+        signal.signal(signal.SIGINT, self.set_sigint_flag)
         self.is_active = True
 
     def restore_signals(self):
         signal.signal(signal.SIGTERM, self.original_sigterm)
         signal.signal(signal.SIGINT, self.original_sigint)
+        # if we got a signal while context manager was active, call parent methods.
+        if self.sigterm_flag:
+            if callable(self.original_sigterm):
+                self.original_sigterm()
+        if self.sigint_flag:
+            if callable(self.original_sigint):
+                self.original_sigint()
         self.reset()
 
 
@@ -48,7 +63,7 @@ signal_state = SignalState()
 
 
 def signal_callback():
-    return signal_state.sigterm_flag
+    return bool(signal_state.sigterm_flag or signal_state.sigint_flag)
 
 
 def with_signal_handling(f):
```
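A minimal usage sketch (not AWX code) of the pattern these changes support: a loop wrapped with `with_signal_handling` polls `signal_callback()` so it can wind down cleanly when either SIGTERM or SIGINT arrives:

```python
from awx.main.tasks.signals import signal_callback, with_signal_handling


@with_signal_handling
def drain(items):
    processed = []
    for item in items:
        if signal_callback():  # True once SIGTERM or SIGINT has been flagged
            break              # stop cleanly instead of dying mid-item
        processed.append(item)
    return processed
```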
```diff
@@ -53,13 +53,7 @@ from awx.main.models import (
 from awx.main.constants import ACTIVE_STATES
 from awx.main.dispatch.publish import task
 from awx.main.dispatch import get_task_queuename, reaper
-from awx.main.utils.common import (
-    get_type_for_model,
-    ignore_inventory_computed_fields,
-    ignore_inventory_group_removal,
-    ScheduleWorkflowManager,
-    ScheduleTaskManager,
-)
+from awx.main.utils.common import ignore_inventory_computed_fields, ignore_inventory_group_removal
 
 from awx.main.utils.reload import stop_local_services
 from awx.main.utils.pglock import advisory_lock
@@ -765,63 +759,19 @@ def awx_periodic_scheduler():
         emit_channel_notification('schedules-changed', dict(id=schedule.id, group_name="schedules"))
 
 
-def schedule_manager_success_or_error(instance):
-    if instance.unifiedjob_blocked_jobs.exists():
-        ScheduleTaskManager().schedule()
-    if instance.spawned_by_workflow:
-        ScheduleWorkflowManager().schedule()
-
-
-@task(queue=get_task_queuename)
-def handle_work_success(task_actual):
-    try:
-        instance = UnifiedJob.get_instance_by_type(task_actual['type'], task_actual['id'])
-    except ObjectDoesNotExist:
-        logger.warning('Missing {} `{}` in success callback.'.format(task_actual['type'], task_actual['id']))
-        return
-    if not instance:
-        return
-    schedule_manager_success_or_error(instance)
-
-
-@task(queue=get_task_queuename)
-def handle_work_error(task_actual):
-    try:
-        instance = UnifiedJob.get_instance_by_type(task_actual['type'], task_actual['id'])
-    except ObjectDoesNotExist:
-        logger.warning('Missing {} `{}` in error callback.'.format(task_actual['type'], task_actual['id']))
-        return
-    if not instance:
-        return
-
-    subtasks = instance.get_jobs_fail_chain()  # reverse of dependent_jobs mostly
-    logger.debug(f'Executing error task id {task_actual["id"]}, subtasks: {[subtask.id for subtask in subtasks]}')
-
-    deps_of_deps = {}
-
-    for subtask in subtasks:
-        if subtask.celery_task_id != instance.celery_task_id and not subtask.cancel_flag and not subtask.status in ('successful', 'failed'):
-            # If there are multiple in the dependency chain, A->B->C, and this was called for A, blame B for clarity
-            blame_job = deps_of_deps.get(subtask.id, instance)
-            subtask.status = 'failed'
-            subtask.failed = True
-            if not subtask.job_explanation:
-                subtask.job_explanation = 'Previous Task Failed: {"job_type": "%s", "job_name": "%s", "job_id": "%s"}' % (
-                    get_type_for_model(type(blame_job)),
-                    blame_job.name,
-                    blame_job.id,
-                )
-            subtask.save()
-            subtask.websocket_emit_status("failed")
-
-            for sub_subtask in subtask.get_jobs_fail_chain():
-                deps_of_deps[sub_subtask.id] = subtask
-
-    # We only send 1 job complete message since all the job completion message
-    # handling does is trigger the scheduler. If we extend the functionality of
-    # what the job complete message handler does then we may want to send a
-    # completion event for each job here.
-    schedule_manager_success_or_error(instance)
+def handle_failure_notifications(task_ids):
+    """A task-ified version of the method that sends notifications."""
+    found_task_ids = set()
+    for instance in UnifiedJob.objects.filter(id__in=task_ids):
+        found_task_ids.add(instance.id)
+        try:
+            instance.send_notification_templates('failed')
+        except Exception:
+            logger.exception(f'Error preparing notifications for task {instance.id}')
+    deleted_tasks = set(task_ids) - found_task_ids
+    if deleted_tasks:
+        logger.warning(f'Could not send notifications for {deleted_tasks} because they were not found in the database')
 
 
 @task(queue=get_task_queuename)
```
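The handoff shown in the task manager hunk above replaces the removed per-task `callbacks`/`errbacks` chaining: notification work is now dispatched once per scheduler pass. A sketch of that call (the ids are illustrative):

```python
from awx.main.tasks.system import handle_failure_notifications

# queue a background task that sends 'failed' notifications for these job ids
handle_failure_notifications.delay([42, 43])
```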
**awx/main/tests/functional/test_migrations.py** (new file, 44 lines)

```python
import pytest

from django_test_migrations.plan import all_migrations, nodes_to_tuples

"""
Most tests that live in here can probably be deleted at some point. They are mainly
for a developer. When AWX versions that users upgrade from fall out of support, that
is when migration tests can be deleted. This is also a good time to squash. Squashing
will likely mess with the tests that live here.

The smoke test should be kept in here. The smoke test ensures that our migrations
continue to work when sqlite is the backing database (vs. the default DB of postgres).
"""


@pytest.mark.django_db
class TestMigrationSmoke:
    def test_happy_path(self, migrator):
        """
        This smoke test runs all the migrations.

        Example of how to use django-test-migration to invoke particular migration(s)
        while weaving in object creation and assertions.

        Note that this is more than just an example. It is a smoke test because it runs ALL
        the migrations. Our "normal" unit tests subvert the migrations running because it is slow.
        """
        migration_nodes = all_migrations('default')
        migration_tuples = nodes_to_tuples(migration_nodes)
        final_migration = migration_tuples[-1]

        migrator.apply_initial_migration(('main', None))
        # I just picked a newish migration at the time of writing this.
        # If someone from the future finds themselves here because they are squashing migrations,
        # it is fine to change the 0180_... below to some other newish migration.
        intermediate_state = migrator.apply_tested_migration(('main', '0180_add_hostmetric_fields'))

        Instance = intermediate_state.apps.get_model('main', 'Instance')
        # Create any old object in the database
        Instance.objects.create(hostname='foobar', node_type='control')

        final_state = migrator.apply_tested_migration(final_migration)
        Instance = final_state.apps.get_model('main', 'Instance')
        assert Instance.objects.filter(hostname='foobar').count() == 1
```
```diff
@@ -122,25 +122,6 @@ def test_team_org_resource_role(ext_auth, organization, rando, org_admin, team):
     ] == [True for i in range(2)]
 
 
-@pytest.mark.django_db
-def test_user_accessible_objects(user, organization):
-    """
-    We cannot directly use accessible_objects for User model because
-    both editing and read permissions are obligated to complex business logic
-    """
-    admin = user('admin', False)
-    u = user('john', False)
-    access = UserAccess(admin)
-    assert access.get_queryset().count() == 1  # can only see himself
-
-    organization.member_role.members.add(u)
-    organization.member_role.members.add(admin)
-    assert access.get_queryset().count() == 2
-
-    organization.member_role.members.remove(u)
-    assert access.get_queryset().count() == 1
-
-
 @pytest.mark.django_db
 def test_org_admin_create_sys_auditor(org_admin):
     access = UserAccess(org_admin)
```
```diff
@@ -5,8 +5,8 @@ import tempfile
 import shutil
 
 from awx.main.tasks.jobs import RunJob
-from awx.main.tasks.system import execution_node_health_check, _cleanup_images_and_files, handle_work_error
-from awx.main.models import Instance, Job, InventoryUpdate, ProjectUpdate
+from awx.main.tasks.system import execution_node_health_check, _cleanup_images_and_files
+from awx.main.models import Instance, Job
 
 
 @pytest.fixture
@@ -73,17 +73,3 @@ def test_does_not_run_reaped_job(mocker, mock_me):
     job.refresh_from_db()
     assert job.status == 'failed'
     mock_run.assert_not_called()
-
-
-@pytest.mark.django_db
-def test_handle_work_error_nested(project, inventory_source):
-    pu = ProjectUpdate.objects.create(status='failed', project=project, celery_task_id='1234')
-    iu = InventoryUpdate.objects.create(status='pending', inventory_source=inventory_source, source='scm')
-    job = Job.objects.create(status='pending')
-    iu.dependent_jobs.add(pu)
-    job.dependent_jobs.add(pu, iu)
-    handle_work_error({'type': 'project_update', 'id': pu.id})
-    iu.refresh_from_db()
-    job.refresh_from_db()
-    assert iu.job_explanation == f'Previous Task Failed: {{"job_type": "project_update", "job_name": "", "job_id": "{pu.id}"}}'
-    assert job.job_explanation == f'Previous Task Failed: {{"job_type": "inventory_update", "job_name": "", "job_id": "{iu.id}"}}'
```
@@ -47,7 +47,7 @@ data_loggly = {
|
||||
'\n'.join(
|
||||
[
|
||||
'template(name="awx" type="string" string="%rawmsg-after-pri%")\nmodule(load="omhttp")',
|
||||
'action(type="omhttp" server="logs-01.loggly.com" serverport="80" usehttps="off" allowunsignedcerts="off" skipverifyhost="off" action.resumeRetryCount="-1" template="awx" action.resumeInterval="5" queue.spoolDirectory="/var/lib/awx" queue.filename="awx-external-logger-action-queue" queue.maxdiskspace="1g" queue.type="LinkedList" queue.saveOnShutdown="on" errorfile="/var/log/tower/rsyslog.err" restpath="inputs/1fd38090-2af1-4e1e-8d80-492899da0f71/tag/http/")', # noqa
|
||||
'action(type="omhttp" server="logs-01.loggly.com" serverport="80" usehttps="off" allowunsignedcerts="off" skipverifyhost="off" action.resumeRetryCount="-1" template="awx" action.resumeInterval="5" queue.spoolDirectory="/var/lib/awx" queue.filename="awx-external-logger-action-queue" queue.maxDiskSpace="1g" queue.maxFileSize="100m" queue.type="LinkedList" queue.saveOnShutdown="on" queue.syncqueuefiles="on" queue.checkpointInterval="1000" queue.size="131072" queue.highwaterMark="98304" queue.discardMark="117964" queue.discardSeverity="5" errorfile="/var/log/tower/rsyslog.err" restpath="inputs/1fd38090-2af1-4e1e-8d80-492899da0f71/tag/http/")', # noqa
|
||||
]
|
||||
),
|
||||
),
|
||||
@@ -61,7 +61,7 @@ data_loggly = {
|
||||
'\n'.join(
|
||||
[
|
||||
'template(name="awx" type="string" string="%rawmsg-after-pri%")',
|
||||
'action(type="omfwd" target="localhost" port="9000" protocol="udp" action.resumeRetryCount="-1" action.resumeInterval="5" template="awx")', # noqa
|
||||
'action(type="omfwd" target="localhost" port="9000" protocol="udp" action.resumeRetryCount="-1" action.resumeInterval="5" template="awx" queue.spoolDirectory="/var/lib/awx" queue.filename="awx-external-logger-action-queue" queue.maxDiskSpace="1g" queue.maxFileSize="100m" queue.type="LinkedList" queue.saveOnShutdown="on" queue.syncqueuefiles="on" queue.checkpointInterval="1000" queue.size="131072" queue.highwaterMark="98304" queue.discardMark="117964" queue.discardSeverity="5")', # noqa
|
||||
]
|
||||
),
|
||||
),
|
||||
@@ -75,7 +75,7 @@ data_loggly = {
|
||||
'\n'.join(
|
||||
[
|
||||
'template(name="awx" type="string" string="%rawmsg-after-pri%")',
|
||||
'action(type="omfwd" target="localhost" port="9000" protocol="tcp" action.resumeRetryCount="-1" action.resumeInterval="5" template="awx")', # noqa
|
||||
'action(type="omfwd" target="localhost" port="9000" protocol="tcp" action.resumeRetryCount="-1" action.resumeInterval="5" template="awx" queue.spoolDirectory="/var/lib/awx" queue.filename="awx-external-logger-action-queue" queue.maxDiskSpace="1g" queue.maxFileSize="100m" queue.type="LinkedList" queue.saveOnShutdown="on" queue.syncqueuefiles="on" queue.checkpointInterval="1000" queue.size="131072" queue.highwaterMark="98304" queue.discardMark="117964" queue.discardSeverity="5")', # noqa
|
||||
]
|
||||
),
|
||||
),
|
||||
@@ -89,7 +89,7 @@ data_loggly = {
|
||||
'\n'.join(
|
||||
[
|
||||
'template(name="awx" type="string" string="%rawmsg-after-pri%")\nmodule(load="omhttp")',
|
||||
'action(type="omhttp" server="yoursplunk" serverport="443" usehttps="on" allowunsignedcerts="off" skipverifyhost="off" action.resumeRetryCount="-1" template="awx" action.resumeInterval="5" queue.spoolDirectory="/var/lib/awx" queue.filename="awx-external-logger-action-queue" queue.maxdiskspace="1g" queue.type="LinkedList" queue.saveOnShutdown="on" errorfile="/var/log/tower/rsyslog.err" restpath="services/collector/event")', # noqa
|
||||
'action(type="omhttp" server="yoursplunk" serverport="443" usehttps="on" allowunsignedcerts="off" skipverifyhost="off" action.resumeRetryCount="-1" template="awx" action.resumeInterval="5" queue.spoolDirectory="/var/lib/awx" queue.filename="awx-external-logger-action-queue" queue.maxDiskSpace="1g" queue.maxFileSize="100m" queue.type="LinkedList" queue.saveOnShutdown="on" queue.syncqueuefiles="on" queue.checkpointInterval="1000" queue.size="131072" queue.highwaterMark="98304" queue.discardMark="117964" queue.discardSeverity="5" errorfile="/var/log/tower/rsyslog.err" restpath="services/collector/event")', # noqa
|
||||
]
|
||||
),
|
||||
),
|
||||
@@ -103,7 +103,7 @@ data_loggly = {
|
||||
'\n'.join(
|
||||
[
|
||||
'template(name="awx" type="string" string="%rawmsg-after-pri%")\nmodule(load="omhttp")',
|
||||
'action(type="omhttp" server="yoursplunk" serverport="80" usehttps="off" allowunsignedcerts="off" skipverifyhost="off" action.resumeRetryCount="-1" template="awx" action.resumeInterval="5" queue.spoolDirectory="/var/lib/awx" queue.filename="awx-external-logger-action-queue" queue.maxdiskspace="1g" queue.type="LinkedList" queue.saveOnShutdown="on" errorfile="/var/log/tower/rsyslog.err" restpath="services/collector/event")', # noqa
|
||||
'action(type="omhttp" server="yoursplunk" serverport="80" usehttps="off" allowunsignedcerts="off" skipverifyhost="off" action.resumeRetryCount="-1" template="awx" action.resumeInterval="5" queue.spoolDirectory="/var/lib/awx" queue.filename="awx-external-logger-action-queue" queue.maxDiskSpace="1g" queue.maxFileSize="100m" queue.type="LinkedList" queue.saveOnShutdown="on" queue.syncqueuefiles="on" queue.checkpointInterval="1000" queue.size="131072" queue.highwaterMark="98304" queue.discardMark="117964" queue.discardSeverity="5" errorfile="/var/log/tower/rsyslog.err" restpath="services/collector/event")', # noqa
|
||||
]
|
||||
),
|
||||
),
|
||||
@@ -117,7 +117,7 @@ data_loggly = {
|
||||
'\n'.join(
|
||||
[
|
||||
'template(name="awx" type="string" string="%rawmsg-after-pri%")\nmodule(load="omhttp")',
|
||||
'action(type="omhttp" server="yoursplunk" serverport="8088" usehttps="on" allowunsignedcerts="off" skipverifyhost="off" action.resumeRetryCount="-1" template="awx" action.resumeInterval="5" queue.spoolDirectory="/var/lib/awx" queue.filename="awx-external-logger-action-queue" queue.maxdiskspace="1g" queue.type="LinkedList" queue.saveOnShutdown="on" errorfile="/var/log/tower/rsyslog.err" restpath="services/collector/event")', # noqa
|
||||
'action(type="omhttp" server="yoursplunk" serverport="8088" usehttps="on" allowunsignedcerts="off" skipverifyhost="off" action.resumeRetryCount="-1" template="awx" action.resumeInterval="5" queue.spoolDirectory="/var/lib/awx" queue.filename="awx-external-logger-action-queue" queue.maxDiskSpace="1g" queue.maxFileSize="100m" queue.type="LinkedList" queue.saveOnShutdown="on" queue.syncqueuefiles="on" queue.checkpointInterval="1000" queue.size="131072" queue.highwaterMark="98304" queue.discardMark="117964" queue.discardSeverity="5" errorfile="/var/log/tower/rsyslog.err" restpath="services/collector/event")', # noqa
|
||||
]
|
||||
),
|
||||
),
|
||||
@@ -131,7 +131,7 @@ data_loggly = {
|
||||
'\n'.join(
|
||||
[
|
||||
'template(name="awx" type="string" string="%rawmsg-after-pri%")\nmodule(load="omhttp")',
|
||||
'action(type="omhttp" server="yoursplunk" serverport="8088" usehttps="on" allowunsignedcerts="off" skipverifyhost="off" action.resumeRetryCount="-1" template="awx" action.resumeInterval="5" queue.spoolDirectory="/var/lib/awx" queue.filename="awx-external-logger-action-queue" queue.maxdiskspace="1g" queue.type="LinkedList" queue.saveOnShutdown="on" errorfile="/var/log/tower/rsyslog.err" restpath="services/collector/event")', # noqa
|
||||
'action(type="omhttp" server="yoursplunk" serverport="8088" usehttps="on" allowunsignedcerts="off" skipverifyhost="off" action.resumeRetryCount="-1" template="awx" action.resumeInterval="5" queue.spoolDirectory="/var/lib/awx" queue.filename="awx-external-logger-action-queue" queue.maxDiskSpace="1g" queue.maxFileSize="100m" queue.type="LinkedList" queue.saveOnShutdown="on" queue.syncqueuefiles="on" queue.checkpointInterval="1000" queue.size="131072" queue.highwaterMark="98304" queue.discardMark="117964" queue.discardSeverity="5" errorfile="/var/log/tower/rsyslog.err" restpath="services/collector/event")', # noqa
|
||||
]
|
||||
),
|
||||
),
|
||||
@@ -145,7 +145,7 @@ data_loggly = {
        '\n'.join(
            [
                'template(name="awx" type="string" string="%rawmsg-after-pri%")\nmodule(load="omhttp")',
                'action(type="omhttp" server="yoursplunk.org" serverport="8088" usehttps="on" allowunsignedcerts="off" skipverifyhost="off" action.resumeRetryCount="-1" template="awx" action.resumeInterval="5" queue.spoolDirectory="/var/lib/awx" queue.filename="awx-external-logger-action-queue" queue.maxdiskspace="1g" queue.type="LinkedList" queue.saveOnShutdown="on" errorfile="/var/log/tower/rsyslog.err" restpath="services/collector/event")', # noqa
                'action(type="omhttp" server="yoursplunk.org" serverport="8088" usehttps="on" allowunsignedcerts="off" skipverifyhost="off" action.resumeRetryCount="-1" template="awx" action.resumeInterval="5" queue.spoolDirectory="/var/lib/awx" queue.filename="awx-external-logger-action-queue" queue.maxDiskSpace="1g" queue.maxFileSize="100m" queue.type="LinkedList" queue.saveOnShutdown="on" queue.syncqueuefiles="on" queue.checkpointInterval="1000" queue.size="131072" queue.highwaterMark="98304" queue.discardMark="117964" queue.discardSeverity="5" errorfile="/var/log/tower/rsyslog.err" restpath="services/collector/event")', # noqa
            ]
        ),
    ),
@@ -159,7 +159,7 @@ data_loggly = {
        '\n'.join(
            [
                'template(name="awx" type="string" string="%rawmsg-after-pri%")\nmodule(load="omhttp")',
                'action(type="omhttp" server="yoursplunk.org" serverport="8088" usehttps="off" allowunsignedcerts="off" skipverifyhost="off" action.resumeRetryCount="-1" template="awx" action.resumeInterval="5" queue.spoolDirectory="/var/lib/awx" queue.filename="awx-external-logger-action-queue" queue.maxdiskspace="1g" queue.type="LinkedList" queue.saveOnShutdown="on" errorfile="/var/log/tower/rsyslog.err" restpath="services/collector/event")', # noqa
                'action(type="omhttp" server="yoursplunk.org" serverport="8088" usehttps="off" allowunsignedcerts="off" skipverifyhost="off" action.resumeRetryCount="-1" template="awx" action.resumeInterval="5" queue.spoolDirectory="/var/lib/awx" queue.filename="awx-external-logger-action-queue" queue.maxDiskSpace="1g" queue.maxFileSize="100m" queue.type="LinkedList" queue.saveOnShutdown="on" queue.syncqueuefiles="on" queue.checkpointInterval="1000" queue.size="131072" queue.highwaterMark="98304" queue.discardMark="117964" queue.discardSeverity="5" errorfile="/var/log/tower/rsyslog.err" restpath="services/collector/event")', # noqa
            ]
        ),
    ),
@@ -173,7 +173,7 @@ data_loggly = {
        '\n'.join(
            [
                'template(name="awx" type="string" string="%rawmsg-after-pri%")\nmodule(load="omhttp")',
                'action(type="omhttp" server="endpoint5.collection.us2.sumologic.com" serverport="443" usehttps="on" allowunsignedcerts="off" skipverifyhost="off" action.resumeRetryCount="-1" template="awx" action.resumeInterval="5" queue.spoolDirectory="/var/lib/awx" queue.filename="awx-external-logger-action-queue" queue.maxdiskspace="1g" queue.type="LinkedList" queue.saveOnShutdown="on" errorfile="/var/log/tower/rsyslog.err" restpath="receiver/v1/http/ZaVnC4dhaV0qoiETY0MrM3wwLoDgO1jFgjOxE6-39qokkj3LGtOroZ8wNaN2M6DtgYrJZsmSi4-36_Up5TbbN_8hosYonLKHSSOSKY845LuLZBCBwStrHQ==")', # noqa
                'action(type="omhttp" server="endpoint5.collection.us2.sumologic.com" serverport="443" usehttps="on" allowunsignedcerts="off" skipverifyhost="off" action.resumeRetryCount="-1" template="awx" action.resumeInterval="5" queue.spoolDirectory="/var/lib/awx" queue.filename="awx-external-logger-action-queue" queue.maxDiskSpace="1g" queue.maxFileSize="100m" queue.type="LinkedList" queue.saveOnShutdown="on" queue.syncqueuefiles="on" queue.checkpointInterval="1000" queue.size="131072" queue.highwaterMark="98304" queue.discardMark="117964" queue.discardSeverity="5" errorfile="/var/log/tower/rsyslog.err" restpath="receiver/v1/http/ZaVnC4dhaV0qoiETY0MrM3wwLoDgO1jFgjOxE6-39qokkj3LGtOroZ8wNaN2M6DtgYrJZsmSi4-36_Up5TbbN_8hosYonLKHSSOSKY845LuLZBCBwStrHQ==")', # noqa
            ]
        ),
    ),

@@ -1,8 +1,43 @@
import signal
import functools

from awx.main.tasks.signals import signal_state, signal_callback, with_signal_handling


def pytest_sigint():
    pytest_sigint.called_count += 1


def pytest_sigterm():
    pytest_sigterm.called_count += 1


def tmp_signals_for_test(func):
    """
    When we run our internal signal handlers, it will call the original signal
    handlers when its own work is finished.
    This would crash the test runners normally, because those methods will
    shut down the process.
    So this is a decorator to safely replace existing signal handlers
    with new signal handlers that do nothing so that tests do not crash.
    """

    @functools.wraps(func)
    def wrapper():
        original_sigterm = signal.getsignal(signal.SIGTERM)
        original_sigint = signal.getsignal(signal.SIGINT)
        signal.signal(signal.SIGTERM, pytest_sigterm)
        signal.signal(signal.SIGINT, pytest_sigint)
        pytest_sigterm.called_count = 0
        pytest_sigint.called_count = 0
        func()
        signal.signal(signal.SIGTERM, original_sigterm)
        signal.signal(signal.SIGINT, original_sigint)

    return wrapper


@tmp_signals_for_test
def test_outer_inner_signal_handling():
    """
    Even if the flag is set in the outer context, its value should persist in the inner context
@@ -15,17 +50,22 @@ def test_outer_inner_signal_handling():
    @with_signal_handling
    def f1():
        assert signal_callback() is False
        signal_state.set_flag()
        signal_state.set_sigterm_flag()
        assert signal_callback()
        f2()

    original_sigterm = signal.getsignal(signal.SIGTERM)
    assert signal_callback() is False
    assert pytest_sigterm.called_count == 0
    assert pytest_sigint.called_count == 0
    f1()
    assert signal_callback() is False
    assert signal.getsignal(signal.SIGTERM) is original_sigterm
    assert pytest_sigterm.called_count == 1
    assert pytest_sigint.called_count == 0


@tmp_signals_for_test
def test_inner_outer_signal_handling():
    """
    Even if the flag is set in the inner context, its value should persist in the outer context
@@ -34,7 +74,7 @@ def test_inner_outer_signal_handling():
    @with_signal_handling
    def f2():
        assert signal_callback() is False
        signal_state.set_flag()
        signal_state.set_sigint_flag()
        assert signal_callback()

    @with_signal_handling
@@ -45,6 +85,10 @@ def test_inner_outer_signal_handling():

    original_sigterm = signal.getsignal(signal.SIGTERM)
    assert signal_callback() is False
    assert pytest_sigterm.called_count == 0
    assert pytest_sigint.called_count == 0
    f1()
    assert signal_callback() is False
    assert signal.getsignal(signal.SIGTERM) is original_sigterm
    assert pytest_sigterm.called_count == 0
    assert pytest_sigint.called_count == 1

@@ -143,13 +143,6 @@ def test_send_notifications_job_id(mocker):
    assert UnifiedJob.objects.get.called_with(id=1)


def test_work_success_callback_missing_job():
    task_data = {'type': 'project_update', 'id': 9999}
    with mock.patch('django.db.models.query.QuerySet.get') as get_mock:
        get_mock.side_effect = ProjectUpdate.DoesNotExist()
        assert system.handle_work_success(task_data) is None


@mock.patch('awx.main.models.UnifiedJob.objects.get')
@mock.patch('awx.main.models.Notification.objects.filter')
def test_send_notifications_list(mock_notifications_filter, mock_job_get, mocker):

@@ -17,11 +17,26 @@ def construct_rsyslog_conf_template(settings=settings):
    port = getattr(settings, 'LOG_AGGREGATOR_PORT', '')
    protocol = getattr(settings, 'LOG_AGGREGATOR_PROTOCOL', '')
    timeout = getattr(settings, 'LOG_AGGREGATOR_TCP_TIMEOUT', 5)
    max_disk_space_main_queue = getattr(settings, 'LOG_AGGREGATOR_MAX_DISK_USAGE_GB', 1)
    action_queue_size = getattr(settings, 'LOG_AGGREGATOR_ACTION_QUEUE_SIZE', 131072)
    max_disk_space_action_queue = getattr(settings, 'LOG_AGGREGATOR_ACTION_MAX_DISK_USAGE_GB', 1)
    spool_directory = getattr(settings, 'LOG_AGGREGATOR_MAX_DISK_USAGE_PATH', '/var/lib/awx').rstrip('/')
    error_log_file = getattr(settings, 'LOG_AGGREGATOR_RSYSLOGD_ERROR_LOG_FILE', '')

    queue_options = [
        f'queue.spoolDirectory="{spool_directory}"',
        'queue.filename="awx-external-logger-action-queue"',
        f'queue.maxDiskSpace="{max_disk_space_action_queue}g"',  # overall disk space for all queue files
        'queue.maxFileSize="100m"',  # individual file size
        'queue.type="LinkedList"',
        'queue.saveOnShutdown="on"',
        'queue.syncqueuefiles="on"',  # (f)sync when checkpoint occurs
        'queue.checkpointInterval="1000"',  # Update disk queue every 1000 messages
        f'queue.size="{action_queue_size}"',  # max number of messages in queue
        f'queue.highwaterMark="{int(action_queue_size * 0.75)}"',  # 75% of queue.size
        f'queue.discardMark="{int(action_queue_size * 0.9)}"',  # 90% of queue.size
        'queue.discardSeverity="5"',  # Only discard notice, info, debug if we must discard anything
    ]

    if not os.access(spool_directory, os.W_OK):
        spool_directory = '/var/lib/awx'

@@ -33,7 +48,6 @@ def construct_rsyslog_conf_template(settings=settings):
        '$WorkDirectory /var/lib/awx/rsyslog',
        f'$MaxMessageSize {max_bytes}',
        '$IncludeConfig /var/lib/awx/rsyslog/conf.d/*.conf',
        f'main_queue(queue.spoolDirectory="{spool_directory}" queue.maxdiskspace="{max_disk_space_main_queue}g" queue.type="Disk" queue.filename="awx-external-logger-backlog")', # noqa
        'module(load="imuxsock" SysSock.Use="off")',
        'input(type="imuxsock" Socket="' + settings.LOGGING['handlers']['external_logger']['address'] + '" unlink="on" RateLimit.Burst="0")',
        'template(name="awx" type="string" string="%rawmsg-after-pri%")',
@@ -79,12 +93,7 @@ def construct_rsyslog_conf_template(settings=settings):
                'action.resumeRetryCount="-1"',
                'template="awx"',
                f'action.resumeInterval="{timeout}"',
                f'queue.spoolDirectory="{spool_directory}"',
                'queue.filename="awx-external-logger-action-queue"',
                f'queue.maxdiskspace="{max_disk_space_action_queue}g"',
                'queue.type="LinkedList"',
                'queue.saveOnShutdown="on"',
            ]
            ] + queue_options
            if error_log_file:
                params.append(f'errorfile="{error_log_file}"')
            if parsed.path:
@@ -112,9 +121,18 @@ def construct_rsyslog_conf_template(settings=settings):
            params = ' '.join(params)
            parts.extend(['module(load="omhttp")', f'action({params})'])
        elif protocol and host and port:
            parts.append(
                f'action(type="omfwd" target="{host}" port="{port}" protocol="{protocol}" action.resumeRetryCount="-1" action.resumeInterval="{timeout}" template="awx")' # noqa
            )
            params = [
                'type="omfwd"',
                f'target="{host}"',
                f'port="{port}"',
                f'protocol="{protocol}"',
                'action.resumeRetryCount="-1"',
                f'action.resumeInterval="{timeout}"',
                'template="awx"',
            ] + queue_options
            params = ' '.join(params)
            parts.append(f'action({params})')

        else:
            parts.append('action(type="omfile" file="/dev/null")')  # rsyslog needs *at least* one valid action to start
        tmpl = '\n'.join(parts)
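As a quick sanity check of the watermark arithmetic above (a standalone sketch, not part of the diff): with the default `LOG_AGGREGATOR_ACTION_QUEUE_SIZE` of 131072, the derived values match the `queue.highwaterMark="98304"` and `queue.discardMark="117964"` strings expected by the tests earlier in this comparison:

```python
# Worked check of the action-queue watermark math used in queue_options.
action_queue_size = 131072                  # default LOG_AGGREGATOR_ACTION_QUEUE_SIZE
high_water = int(action_queue_size * 0.75)  # spill to disk at 75% of queue.size
discard = int(action_queue_size * 0.9)      # discard low-severity messages at 90%

assert high_water == 98304
assert discard == 117964
```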
@@ -199,6 +199,8 @@ class Licenser(object):
                    license['support_level'] = attr.get('value')
                elif attr.get('name') == 'usage':
                    license['usage'] = attr.get('value')
                elif attr.get('name') == 'ph_product_name' and attr.get('value') == 'RHEL Developer':
                    license['license_type'] = 'developer'

        if not license:
            logger.error("No valid subscriptions found in manifest")
@@ -322,7 +324,9 @@ class Licenser(object):
    def generate_license_options_from_entitlements(self, json):
        from dateutil.parser import parse

        ValidSub = collections.namedtuple('ValidSub', 'sku name support_level end_date trial quantity pool_id satellite subscription_id account_number usage')
        ValidSub = collections.namedtuple(
            'ValidSub', 'sku name support_level end_date trial developer_license quantity pool_id satellite subscription_id account_number usage'
        )
        valid_subs = []
        for sub in json:
            satellite = sub.get('satellite')
@@ -350,6 +354,7 @@ class Licenser(object):

            sku = sub['productId']
            trial = sku.startswith('S')  # i.e., SER/SVC
            developer_license = False
            support_level = ''
            usage = ''
            pool_id = sub['id']
@@ -364,9 +369,24 @@ class Licenser(object):
                    support_level = attr.get('value')
                elif attr.get('name') == 'usage':
                    usage = attr.get('value')
                elif attr.get('name') == 'ph_product_name' and attr.get('value') == 'RHEL Developer':
                    developer_license = True

            valid_subs.append(
                ValidSub(sku, sub['productName'], support_level, end_date, trial, quantity, pool_id, satellite, subscription_id, account_number, usage)
                ValidSub(
                    sku,
                    sub['productName'],
                    support_level,
                    end_date,
                    trial,
                    developer_license,
                    quantity,
                    pool_id,
                    satellite,
                    subscription_id,
                    account_number,
                    usage,
                )
            )

        if valid_subs:
@@ -381,6 +401,8 @@ class Licenser(object):
            if sub.trial:
                license._attrs['trial'] = True
                license._attrs['license_type'] = 'trial'
            if sub.developer_license:
                license._attrs['license_type'] = 'developer'
            license._attrs['instance_count'] = min(MAX_INSTANCES, license._attrs['instance_count'])
            human_instances = license._attrs['instance_count']
            if human_instances == MAX_INSTANCES:

@@ -3,6 +3,8 @@ import logging
import asyncio
from typing import Dict

import ipaddress

import aiohttp
from aiohttp import client_exceptions
import aioredis
@@ -71,7 +73,16 @@ class WebsocketRelayConnection:
        if not self.channel_layer:
            self.channel_layer = get_channel_layer()

        uri = f"{self.protocol}://{self.remote_host}:{self.remote_port}/websocket/relay/"
        # Figure out if what we have is an IP address; IPv6 addresses must have brackets added for the uri
        uri_hostname = self.remote_host
        try:
            # Throws ValueError if self.remote_host is a hostname like example.com, not an IPv4 or IPv6 ip address
            if isinstance(ipaddress.ip_address(uri_hostname), ipaddress.IPv6Address):
                uri_hostname = f"[{uri_hostname}]"
        except ValueError:
            pass

        uri = f"{self.protocol}://{uri_hostname}:{self.remote_port}/websocket/relay/"
        timeout = aiohttp.ClientTimeout(total=10)

        secret_val = WebsocketSecretAuthHelper.construct_secret()
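The bracketing rule above can be exercised in isolation; a minimal sketch (the `uri_host` helper is hypothetical, written only to illustrate the same `ipaddress` check):

```python
import ipaddress


def uri_host(host: str) -> str:
    """Wrap literal IPv6 addresses in brackets; pass hostnames and IPv4 through."""
    try:
        # Raises ValueError for hostnames such as example.com.
        if isinstance(ipaddress.ip_address(host), ipaddress.IPv6Address):
            return f"[{host}]"
    except ValueError:
        pass
    return host


assert uri_host("2001:db8::1") == "[2001:db8::1]"
assert uri_host("10.0.0.1") == "10.0.0.1"
assert uri_host("example.com") == "example.com"
```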

@@ -216,7 +227,8 @@ class WebSocketRelayManager(object):
                continue
            try:
                if not notif.payload or notif.channel != "web_ws_heartbeat":
                    return
                    logger.warning(f"Unexpected channel or missing payload. {notif.channel}, {notif.payload}")
                    continue

                try:
                    payload = json.loads(notif.payload)
@@ -224,13 +236,15 @@ class WebSocketRelayManager(object):
                    logmsg = "Failed to decode message from pg_notify channel `web_ws_heartbeat`"
                    if logger.isEnabledFor(logging.DEBUG):
                        logmsg = "{} {}".format(logmsg, payload)
                    logger.warning(logmsg)
                    return
                    logger.warning(logmsg)
                    continue

                # Skip if the message comes from the same host we are running on
                # In this case, we'll be sharing a redis, no need to relay.
                if payload.get("hostname") == self.local_hostname:
                    return
                hostname = payload.get("hostname")
                logger.debug(f"Received a heartbeat request for {hostname}. Skipping as we use redis for local host.")
                continue

                action = payload.get("action")

@@ -239,7 +253,7 @@ class WebSocketRelayManager(object):
                    ip = payload.get("ip") or hostname  # fall back to hostname if ip isn't supplied
                    if ip is None:
                        logger.warning(f"Received invalid {action} ws_heartbeat, missing hostname and ip: {payload}")
                        return
                        continue
                    logger.debug(f"Web host {hostname} ({ip}) {action} heartbeat received.")

                    if action == "online":

@@ -796,7 +796,7 @@ LOG_AGGREGATOR_ENABLED = False
LOG_AGGREGATOR_TCP_TIMEOUT = 5
LOG_AGGREGATOR_VERIFY_CERT = True
LOG_AGGREGATOR_LEVEL = 'INFO'
LOG_AGGREGATOR_MAX_DISK_USAGE_GB = 1  # Main queue
LOG_AGGREGATOR_ACTION_QUEUE_SIZE = 131072
LOG_AGGREGATOR_ACTION_MAX_DISK_USAGE_GB = 1  # Action queue
LOG_AGGREGATOR_MAX_DISK_USAGE_PATH = '/var/lib/awx'
LOG_AGGREGATOR_RSYSLOGD_DEBUG = False

@@ -29,7 +29,7 @@ SettingsAPI.readCategory.mockResolvedValue({
  LOG_AGGREGATOR_TCP_TIMEOUT: 5,
  LOG_AGGREGATOR_VERIFY_CERT: true,
  LOG_AGGREGATOR_LEVEL: 'INFO',
  LOG_AGGREGATOR_MAX_DISK_USAGE_GB: 1,
  LOG_AGGREGATOR_ACTION_QUEUE_SIZE: 131072,
  LOG_AGGREGATOR_ACTION_MAX_DISK_USAGE_GB: 1,
  LOG_AGGREGATOR_MAX_DISK_USAGE_PATH: '/var/lib/awx',
  LOG_AGGREGATOR_RSYSLOGD_DEBUG: false,

@@ -31,7 +31,7 @@ const mockSettings = {
  LOG_AGGREGATOR_TCP_TIMEOUT: 123,
  LOG_AGGREGATOR_VERIFY_CERT: true,
  LOG_AGGREGATOR_LEVEL: 'ERROR',
  LOG_AGGREGATOR_MAX_DISK_USAGE_GB: 1,
  LOG_AGGREGATOR_ACTION_QUEUE_SIZE: 131072,
  LOG_AGGREGATOR_ACTION_MAX_DISK_USAGE_GB: 1,
  LOG_AGGREGATOR_MAX_DISK_USAGE_PATH: '/var/lib/awx',
  LOG_AGGREGATOR_RSYSLOGD_DEBUG: false,

@@ -659,21 +659,21 @@
            ]
        ]
    },
    "LOG_AGGREGATOR_MAX_DISK_USAGE_GB": {
    "LOG_AGGREGATOR_ACTION_QUEUE_SIZE": {
        "type": "integer",
        "required": false,
        "label": "Maximum disk persistence for external log aggregation (in GB)",
        "help_text": "Amount of data to store (in gigabytes) during an outage of the external log aggregator (defaults to 1). Equivalent to the rsyslogd queue.maxdiskspace setting for main_queue. Notably, this is used for the rsyslogd main queue (for input messages).",
        "label": "Maximum number of messages that can be stored in the log action queue",
        "help_text": "Defines how large the rsyslog action queue can grow in number of messages stored. This can have an impact on memory utilization. When the queue reaches 75% of this number, the queue will start writing to disk (queue.highWatermark in rsyslog). When it reaches 90%, NOTICE, INFO, and DEBUG messages will start to be discarded (queue.discardMark with queue.discardSeverity=5).",
        "min_value": 1,
        "category": "Logging",
        "category_slug": "logging",
        "default": 1
        "default": 131072
    },
    "LOG_AGGREGATOR_ACTION_MAX_DISK_USAGE_GB": {
        "type": "integer",
        "required": false,
        "label": "Maximum disk persistence for rsyslogd action queuing (in GB)",
        "help_text": "Amount of data to store (in gigabytes) if an rsyslog action takes time to process an incoming message (defaults to 1). Equivalent to the rsyslogd queue.maxdiskspace setting on the action (e.g. omhttp). Like LOG_AGGREGATOR_MAX_DISK_USAGE_GB, it stores files in the directory specified by LOG_AGGREGATOR_MAX_DISK_USAGE_PATH.",
        "help_text": "Amount of data to store (in gigabytes) if an rsyslog action takes time to process an incoming message (defaults to 1). Equivalent to the rsyslogd queue.maxdiskspace setting on the action (e.g. omhttp). It stores files in the directory specified by LOG_AGGREGATOR_MAX_DISK_USAGE_PATH.",
        "min_value": 1,
        "category": "Logging",
        "category_slug": "logging",
@@ -5016,10 +5016,10 @@
            ]
        ]
    },
    "LOG_AGGREGATOR_MAX_DISK_USAGE_GB": {
    "LOG_AGGREGATOR_ACTION_QUEUE_SIZE": {
        "type": "integer",
        "label": "Maximum disk persistence for external log aggregation (in GB)",
        "help_text": "Amount of data to store (in gigabytes) during an outage of the external log aggregator (defaults to 1). Equivalent to the rsyslogd queue.maxdiskspace setting for main_queue. Notably, this is used for the rsyslogd main queue (for input messages).",
        "label": "Maximum number of messages that can be stored in the log action queue",
        "help_text": "Defines how large the rsyslog action queue can grow in number of messages stored. This can have an impact on memory utilization. When the queue reaches 75% of this number, the queue will start writing to disk (queue.highWatermark in rsyslog). When it reaches 90%, NOTICE, INFO, and DEBUG messages will start to be discarded (queue.discardMark with queue.discardSeverity=5).",
        "min_value": 1,
        "category": "Logging",
        "category_slug": "logging",
@@ -5028,7 +5028,7 @@
    "LOG_AGGREGATOR_ACTION_MAX_DISK_USAGE_GB": {
        "type": "integer",
        "label": "Maximum disk persistence for rsyslogd action queuing (in GB)",
        "help_text": "Amount of data to store (in gigabytes) if an rsyslog action takes time to process an incoming message (defaults to 1). Equivalent to the rsyslogd queue.maxdiskspace setting on the action (e.g. omhttp). Like LOG_AGGREGATOR_MAX_DISK_USAGE_GB, it stores files in the directory specified by LOG_AGGREGATOR_MAX_DISK_USAGE_PATH.",
        "help_text": "Amount of data to store (in gigabytes) if an rsyslog action takes time to process an incoming message (defaults to 1). Equivalent to the rsyslogd queue.maxdiskspace setting on the action (e.g. omhttp). It stores files in the directory specified by LOG_AGGREGATOR_MAX_DISK_USAGE_PATH.",
        "min_value": 1,
        "category": "Logging",
        "category_slug": "logging",

@@ -70,7 +70,7 @@
    "LOG_AGGREGATOR_TCP_TIMEOUT": 5,
    "LOG_AGGREGATOR_VERIFY_CERT": true,
    "LOG_AGGREGATOR_LEVEL": "INFO",
    "LOG_AGGREGATOR_MAX_DISK_USAGE_GB": 1,
    "LOG_AGGREGATOR_ACTION_QUEUE_SIZE": 131072,
    "LOG_AGGREGATOR_ACTION_MAX_DISK_USAGE_GB": 1,
    "LOG_AGGREGATOR_MAX_DISK_USAGE_PATH": "/var/lib/awx",
    "LOG_AGGREGATOR_RSYSLOGD_DEBUG": false,
@@ -548,4 +548,4 @@
            "adj_list": []
        }
    }
}
}

@@ -15,7 +15,7 @@
    "LOG_AGGREGATOR_TCP_TIMEOUT": 5,
    "LOG_AGGREGATOR_VERIFY_CERT": true,
    "LOG_AGGREGATOR_LEVEL": "INFO",
    "LOG_AGGREGATOR_MAX_DISK_USAGE_GB": 1,
    "LOG_AGGREGATOR_ACTION_QUEUE_SIZE": 131072,
    "LOG_AGGREGATOR_ACTION_MAX_DISK_USAGE_GB": 1,
    "LOG_AGGREGATOR_MAX_DISK_USAGE_PATH": "/var/lib/awx",
    "LOG_AGGREGATOR_RSYSLOGD_DEBUG": false,

@@ -980,6 +980,15 @@ class ControllerAPIModule(ControllerModule):
    def create_or_update_if_needed(
        self, existing_item, new_item, endpoint=None, item_type='unknown', on_create=None, on_update=None, auto_exit=True, associations=None
    ):
        # Remove boolean parameter values that were not provided (None);
        # this is needed so that boolean fields will not get a false value when not provided
        for key in list(new_item.keys()):
            if key in self.argument_spec:
                param_spec = self.argument_spec[key]
                if 'type' in param_spec and param_spec['type'] == 'bool':
                    if new_item[key] is None:
                        new_item.pop(key)

        if existing_item:
            return self.update_if_needed(existing_item, new_item, on_update=on_update, auto_exit=auto_exit, associations=associations)
        else:

@@ -118,6 +118,7 @@ status:
'''

from ..module_utils.controller_api import ControllerAPIModule
import json


def main():
@@ -161,7 +162,11 @@ def main():
    }
    for arg in ['job_type', 'limit', 'forks', 'verbosity', 'extra_vars', 'become_enabled', 'diff_mode']:
        if module.params.get(arg):
            post_data[arg] = module.params.get(arg)
            # extra_vars can receive a dict or a string; if it is a dict, convert it to a string
            if arg == 'extra_vars' and type(module.params.get(arg)) is not str:
                post_data[arg] = json.dumps(module.params.get(arg))
            else:
                post_data[arg] = module.params.get(arg)

    # Attempt to look up the related items the user specified (these will fail the module if not found)
    post_data['inventory'] = module.resolve_name_to_id('inventories', inventory)

@@ -58,6 +58,7 @@ options:
      Insights, Machine, Microsoft Azure Key Vault, Microsoft Azure Resource Manager, Network, OpenShift or Kubernetes API
      Bearer Token, OpenStack, Red Hat Ansible Automation Platform, Red Hat Satellite 6, Red Hat Virtualization, Source Control,
      Thycotic DevOps Secrets Vault, Thycotic Secret Server, Vault, VMware vCenter, or a custom credential type
    required: True
    type: str
  inputs:
    description:
@@ -214,7 +215,7 @@ def main():
        copy_from=dict(),
        description=dict(),
        organization=dict(),
        credential_type=dict(),
        credential_type=dict(required=True),
        inputs=dict(type='dict', no_log=True),
        update_secrets=dict(type='bool', default=True, no_log=False),
        user=dict(),

@@ -115,7 +115,7 @@ EXAMPLES = '''
- name: Export a job template named "My Template" and all Credentials
  export:
    job_templates: "My Template"
    credential: 'all'
    credentials: 'all'

- name: Export a list of inventories
  export:

@@ -72,6 +72,21 @@
    that:
      - "result is changed"
      - "result.status == 'successful'"

- name: Launch an Ad Hoc Command with extra_vars
  ad_hoc_command:
    inventory: "Demo Inventory"
    credential: "{{ ssh_cred_name }}"
    module_name: "ping"
    extra_vars:
      var1: "test var"
    wait: true
  register: result

- assert:
    that:
      - "result is changed"
      - "result.status == 'successful'"

- name: Launch an Ad Hoc Command with Execution Environment specified
  ad_hoc_command:
    inventory: "Demo Inventory"

@@ -71,6 +71,19 @@
    that:
      - "result is changed"

- name: Delete a credential without credential_type
  credential:
    name: "{{ ssh_cred_name1 }}"
    organization: Default
    state: absent
  register: result
  ignore_errors: true

- assert:
    that:
      - "result is failed"


- name: Create an Org-specific credential with an ID with exists
  credential:
    name: "{{ ssh_cred_name1 }}"

@@ -42,6 +42,16 @@
    that:
      - "result is not changed"

- name: Modify the host as a no-op
  host:
    name: "{{ host_name }}"
    inventory: "{{ inv_name }}"
  register: result

- assert:
    that:
      - "result is not changed"

- name: Delete a Host
  host:
    name: "{{ host_name }}"
@@ -68,6 +78,15 @@
    that:
      - "result is changed"

- name: Use lookup to check that host was enabled
  ansible.builtin.set_fact:
    host_enabled_test: "{{ lookup('awx.awx.controller_api', 'hosts/' + (result.id | string) + '/').enabled }}"

- name: Newly created host should have API default value for enabled
  assert:
    that:
      - host_enabled_test

- name: Delete a Host
  host:
    name: "{{ result.id }}"

@@ -76,6 +76,15 @@
    that:
      - result is changed

- name: Use lookup to check that the schedule was enabled
  ansible.builtin.set_fact:
    schedules_enabled_test: "{{ lookup('awx.awx.controller_api', 'schedules/' + (result.id | string) + '/').enabled }}"

- name: Newly created schedule should have API default value for enabled
  assert:
    that:
      - schedules_enabled_test

- name: Build a real schedule with exists
  schedule:
    name: "{{ sched1 }}"

@@ -4,9 +4,9 @@ Bulk API endpoints allows to perform bulk operations in single web request. Ther
- /api/v2/bulk/job_launch
- /api/v2/bulk/host_create

Making individual API calls in rapid succession or at high concurrency can overwhelm AWX's ability to serve web requests. When the application's ability to serve is exausted, clients often receive 504 timeout errors.
Making individual API calls in rapid succession or at high concurrency can overwhelm AWX's ability to serve web requests. When the application's ability to serve is exhausted, clients often receive 504 timeout errors.

Allowing the client combine actions into fewer requests allows for launching more jobs or adding more hosts with fewer requests and less time without exauhsting Controller's ability to serve requests, making excessive and repetitive database queries, or using excessive database connections (each web request opens a seperate database connection).
Allowing the client to combine actions into fewer requests allows for launching more jobs or adding more hosts with fewer requests and less time without exhausting Controller's ability to serve requests, making excessive and repetitive database queries, or using excessive database connections (each web request opens a separate database connection).
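For illustration, a single bulk request can stand in for many individual launches. A minimal sketch using the `requests` library; the host, token, and `unified_job_template` IDs are placeholders, and the payload shape (a list of jobs keyed by `unified_job_template`) is an assumption based on the endpoint's upstream examples:

```python
import requests

# Placeholders: substitute your AWX host, an OAuth2 token, and real template IDs.
resp = requests.post(
    "https://awx.example.com/api/v2/bulk/job_launch/",
    headers={"Authorization": "Bearer <oauth-token>"},
    json={"jobs": [{"unified_job_template": 7}, {"unified_job_template": 8}]},
)
resp.raise_for_status()  # one web request (and one DB connection) for both launches
```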

## Bulk Job Launch

@@ -104,7 +104,7 @@ Given settings.AWX_CONTROL_NODE_TASK_IMPACT is 1:

This setting allows you to determine how much impact controlling jobs has. This
can be helpful if you notice symptoms of your control plane exceeding desired
CPU or memory usage, as it effectivly throttles how many jobs can be run
CPU or memory usage, as it effectively throttles how many jobs can be run
concurrently by your control plane. This is usually a concern with container
groups, which at this time effectively have infinite capacity, so it is easy to
end up with too many jobs running concurrently, overwhelming the control plane

@@ -130,10 +130,10 @@ be `18`:

By default, only Instances have capacity and we only track capacity consumed per instance. With the max_forks and max_concurrent_jobs fields now available on Instance Groups, we additionally can limit how many jobs or forks are allowed to be concurrently consumed across an entire Instance Group or Container Group.

This is especially useful for Container Groups where previously, there was no limit to how many jobs we would submit to a Container Group, which made it impossible to "overflow" job loads from one Container Group to another container group, which may be on a different Kubenetes cluster or namespace.
This is especially useful for Container Groups where previously, there was no limit to how many jobs we would submit to a Container Group, which made it impossible to "overflow" job loads from one Container Group to another container group, which may be on a different Kubernetes cluster or namespace.

One way to calculate what max_concurrent_jobs is desirable to set on a Container Group is to consider the pod_spec for that container group. In the pod_spec we indicate the resource requests and limits for the automation job pod. If your pod_spec indicates that a pod with 100MB of memory will be provisioned, and you know your Kubernetes cluster has 1 worker node with 8GB of RAM, you know that the maximum number of jobs that you would ideally start would be around 81 jobs, calculated by taking (8GB memory on node * 1024 MB) // 100 MB memory/job pod which with floor division comes out to 81.
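A quick check of that floor-division arithmetic in plain Python:

```python
node_memory_mb = 8 * 1024  # 8 GB worker node
pod_memory_mb = 100        # per-job pod memory request from the pod_spec

max_jobs = node_memory_mb // pod_memory_mb
assert max_jobs == 81      # 8192 // 100 == 81
```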

Alternatively, instead of considering the number of job pods and the resources requested, we can consider the memory consumption of the forks in the jobs. We normally consider that 100MB of memory will be used by each fork of ansible. Therefore we also know that our 8 GB worker node should also only run 81 forks of ansible at a time -- which depending on the forks and inventory settings of the job templates, could be consumed by anywhere from 1 job to 81 jobs. So we can also set max_forks = 81. This way, either 39 jobs with 1 fork can run (task impact is always forks + 1), or 2 jobs with forks set to 39 can run.

While this feature is most useful for Container Groups where there is no other way to limit job execution, this feature is avialable for use on any instance group. This can be useful if for other business reasons you want to set a InstanceGroup wide limit on concurrent jobs. For example, if you have a job template that you only want 10 copies of running at a time -- you could create a dedicated instance group for that job template and set max_concurrent_jobs to 10.
While this feature is most useful for Container Groups where there is no other way to limit job execution, this feature is available for use on any instance group. This can be useful if for other business reasons you want to set an InstanceGroup-wide limit on concurrent jobs. For example, if you have a job template that you only want 10 copies of running at a time -- you could create a dedicated instance group for that job template and set max_concurrent_jobs to 10.

@@ -45,7 +45,7 @@ of the awx-operator repo. If not, continue to the next section.
```
# in awx-operator repo on the branch you want to use
$ export IMAGE_TAG_BASE=quay.io/<username>/awx-operator
$ export VERSION=<cusom-tag>
$ export VERSION=<custom-tag>
$ make docker-build
$ docker push ${IMAGE_TAG_BASE}:${VERSION}
```
@@ -118,7 +118,7 @@ To access via the web browser, run the following command:
$ minikube service awx-service --url
```

To retreive your admin password
To retrieve your admin password
```
$ kubectl get secrets awx-admin-password -o json | jq '.data.password' | xargs | base64 -d
```

@@ -5,7 +5,7 @@ import shlex
from datetime import datetime
from importlib import import_module

#sys.path.insert(0, os.path.abspath('./rst/rest_api/_swagger'))
sys.path.insert(0, os.path.abspath('./rst/rest_api/_swagger'))

project = u'Ansible AWX'
copyright = u'2023, Red Hat'
@@ -35,6 +35,7 @@ extensions = [
    'sphinx.ext.coverage',
    'sphinx.ext.ifconfig',
    'sphinx_ansible_theme',
    'swagger',
]

html_theme = 'sphinx_ansible_theme'

@@ -8,13 +8,13 @@ alabaster==0.7.13
    # via sphinx
ansible-pygments==0.1.1
    # via sphinx-ansible-theme
babel==2.12.1
babel==2.13.1
    # via sphinx
certifi==2023.7.22
    # via requests
charset-normalizer==3.2.0
charset-normalizer==3.3.2
    # via requests
docutils==0.16
docutils==0.18.1
    # via
    #   -r docs/docsite/requirements.in
    #   sphinx
@@ -23,13 +23,13 @@ idna==3.4
    # via requests
imagesize==1.4.1
    # via sphinx
jinja2==3.0.3
jinja2==3.1.2
    # via
    #   -r docs/docsite/requirements.in
    #   sphinx
markupsafe==2.1.3
    # via jinja2
packaging==23.1
packaging==23.2
    # via sphinx
pygments==2.16.1
    # via
@@ -41,7 +41,7 @@ requests==2.31.0
    # via sphinx
snowballstemmer==2.2.0
    # via sphinx
sphinx==5.1.1
sphinx==7.2.6
    # via
    #   -r docs/docsite/requirements.in
    #   sphinx-ansible-theme
@@ -52,7 +52,7 @@ sphinx==5.1.1
    #   sphinxcontrib-jquery
    #   sphinxcontrib-qthelp
    #   sphinxcontrib-serializinghtml
sphinx-ansible-theme==0.9.1
sphinx-ansible-theme==0.10.2
    # via -r docs/docsite/requirements.in
sphinx-rtd-theme==1.3.0
    # via sphinx-ansible-theme
@@ -70,5 +70,5 @@ sphinxcontrib-qthelp==1.0.6
    # via sphinx
sphinxcontrib-serializinghtml==1.1.9
    # via sphinx
urllib3==2.0.4
urllib3==2.1.0
    # via requests

@@ -23,6 +23,7 @@ The default length of time, in seconds, that your supplied token is valid can be
4. Enter the timeout period in seconds in the **Idle Time Force Log Out** text field.

.. image:: ../common/images/configure-awx-system-timeout.png
   :alt: Miscellaneous Authentication settings showing the Idle Time Force Logout field where you can adjust the token validity period.

4. Click **Save** to apply your changes.


@@ -1,4 +1,3 @@

.. _ag_clustering:

Clustering
@@ -11,7 +10,7 @@ Clustering

Clustering is sharing load between hosts. Each instance should be able to act as an entry point for UI and API access. This should enable AWX administrators to use load balancers in front of as many instances as they wish and maintain good data visibility.

.. note::
    Load balancing is optional and is entirely possible to have ingress on one or all instances as needed. The ``CSRF_TRUSTED_ORIGIN`` setting may be required if you are using AWX behind a load balancer. See :ref:`ki_csrf_trusted_origin_setting` for more detail.
    Load balancing is optional and is entirely possible to have ingress on one or all instances as needed. The ``CSRF_TRUSTED_ORIGIN`` setting may be required if you are using AWX behind a load balancer. See :ref:`ki_csrf_trusted_origin_setting` for more detail.

Each instance should be able to join the AWX cluster and expand its ability to execute jobs. This is a simple system where jobs can and will run anywhere rather than be directed on where to run. Also, clustered instances can be grouped into different pools/queues, called :ref:`ag_instance_groups`.

@@ -107,61 +106,61 @@ Example of customization could be:

::

    ---
    spec:
      ...
      node_selector: |
        disktype: ssd
        kubernetes.io/arch: amd64
        kubernetes.io/os: linux
      topology_spread_constraints: |
        - maxSkew: 100
          topologyKey: "topology.kubernetes.io/zone"
          whenUnsatisfiable: "ScheduleAnyway"
          labelSelector:
            matchLabels:
              app.kubernetes.io/name: "<resourcename>"
      tolerations: |
        - key: "dedicated"
          operator: "Equal"
          value: "AWX"
          effect: "NoSchedule"
      task_tolerations: |
        - key: "dedicated"
          operator: "Equal"
          value: "AWX_task"
          effect: "NoSchedule"
      postgres_selector: |
        disktype: ssd
        kubernetes.io/arch: amd64
        kubernetes.io/os: linux
      postgres_tolerations: |
        - key: "dedicated"
          operator: "Equal"
          value: "AWX"
          effect: "NoSchedule"
      affinity:
        nodeAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 1
            preference:
              matchExpressions:
              - key: another-node-label-key
                operator: In
                values:
                - another-node-label-value
                - another-node-label-value
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchExpressions:
                - key: security
                  operator: In
                  values:
                  - S2
              topologyKey: topology.kubernetes.io/zone
    ---
    spec:
      ...
      node_selector: |
        disktype: ssd
        kubernetes.io/arch: amd64
        kubernetes.io/os: linux
      topology_spread_constraints: |
        - maxSkew: 100
          topologyKey: "topology.kubernetes.io/zone"
          whenUnsatisfiable: "ScheduleAnyway"
          labelSelector:
            matchLabels:
              app.kubernetes.io/name: "<resourcename>"
      tolerations: |
        - key: "dedicated"
          operator: "Equal"
          value: "AWX"
          effect: "NoSchedule"
      task_tolerations: |
        - key: "dedicated"
          operator: "Equal"
          value: "AWX_task"
          effect: "NoSchedule"
      postgres_selector: |
        disktype: ssd
        kubernetes.io/arch: amd64
        kubernetes.io/os: linux
      postgres_tolerations: |
        - key: "dedicated"
          operator: "Equal"
          value: "AWX"
          effect: "NoSchedule"
      affinity:
        nodeAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 1
            preference:
              matchExpressions:
              - key: another-node-label-key
                operator: In
                values:
                - another-node-label-value
                - another-node-label-value
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchExpressions:
                - key: security
                  operator: In
                  values:
                  - S2
              topologyKey: topology.kubernetes.io/zone


Status and Monitoring via Browser API
@@ -204,6 +203,7 @@ The way jobs are run and reported to a 'normal' user of AWX does not change. On
- When a job is submitted from the API interface it gets pushed into the dispatcher queue. Each AWX instance will connect to and receive jobs from that queue using a particular scheduling algorithm. Any instance in the cluster is just as likely to receive the work and execute the task. If an instance fails while executing jobs, then the work is marked as permanently failed.

.. image:: ../common/images/clustering-visual.png
   :alt: An illustration depicting job distribution in an AWX cluster.

- Project updates run successfully on any instance that could potentially run a job. Projects will sync themselves to the correct version on the instance immediately prior to running the job. If the needed revision is already locally checked out and Galaxy or Collections updates are not needed, then a sync may not be performed.

@@ -218,5 +218,3 @@ Job Runs

By default, when a job is submitted to the AWX queue, it can be picked up by any of the workers. However, you can control where a particular job runs, such as restricting which instances a job runs on.

In order to support temporarily taking an instance offline, there is an ``enabled`` property defined on each instance. When this property is disabled, no jobs will be assigned to that instance. Existing jobs will finish, but no new work will be assigned.
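For example, flipping that property through the API takes an instance out of rotation; a minimal sketch with the ``requests`` library, where the host, token, and instance ID ``3`` are all placeholders::

    import requests

    # Placeholders: substitute your AWX host, an admin OAuth2 token, and a real instance ID.
    resp = requests.patch(
        "https://awx.example.com/api/v2/instances/3/",
        headers={"Authorization": "Bearer <oauth-token>"},
        json={"enabled": False},  # existing jobs finish; no new work is assigned
    )
    resp.raise_for_status()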
@@ -11,6 +11,7 @@ AWX Configuration
You can configure various AWX settings within the Settings screen in the following tabs:

.. image:: ../common/images/ug-settings-menu-screen.png
   :alt: Screenshot of the AWX settings menu screen.

Each tab contains fields with a **Reset** button, allowing you to revert any value entered back to the default value. **Reset All** allows you to revert all the values to their factory default values.

@@ -47,6 +48,7 @@ The Jobs tab allows you to configure the types of modules that are allowed to be
The values for all the timeouts are in seconds.

.. image:: ../common/images/configure-awx-jobs.png
   :alt: Screenshot of the AWX job configuration settings.

3. Click **Save** to apply the settings or **Cancel** to abandon the changes.

@@ -69,6 +71,7 @@ The System tab allows you to define the base URL for the AWX host, configure ale
- **Logging settings**: configure logging options based on the type you choose:

.. image:: ../common/images/configure-awx-system-logging-types.png
   :alt: Logging settings shown with the list of options for Logging Aggregator Types.

For more information about each of the logging aggregation types, refer to the :ref:`ag_logging` section of the |ata|.

@@ -78,6 +81,7 @@ The System tab allows you to define the base URL for the AWX host, configure ale
.. |help| image:: ../common/images/tooltips-icon.png

.. image:: ../common/images/configure-awx-system.png
   :alt: Miscellaneous System settings window showing all possible configurable options.

.. note::


@@ -140,6 +140,7 @@ The Instance Groups list view from the |at| User Interface provides a summary of
|Instance Group policy example|

.. |Instance Group policy example| image:: ../common/images/instance-groups_list_view.png
   :alt: Instance Group list view with example instance group and container groups.

See :ref:`ug_instance_groups_create` for further detail.

@@ -231,6 +232,7 @@ Likewise, an administrator could assign multiple groups to each organization as
|Instance Group example|

.. |Instance Group example| image:: ../common/images/instance-groups-scenarios.png
   :alt: Illustration showing grouping scenarios.

Arranging resources in this way offers a lot of flexibility. Also, you can create instance groups with only one instance, thus allowing you to direct work towards a very specific Host in the AWX cluster.

@@ -327,6 +329,7 @@ To create a container group:
|IG - create new CG|

.. |IG - create new CG| image:: ../common/images/instance-group-create-new-cg.png
   :alt: Create new container group form.

4. Enter a name for your new container group and select the credential previously created to associate it to the container group.

@@ -342,10 +345,12 @@ To customize the Pod spec, specify the namespace in the **Pod Spec Override** fi
|IG - CG customize pod|

.. |IG - CG customize pod| image:: ../common/images/instance-group-customize-cg-pod.png
   :alt: Create new container group form with the option to customize the pod spec.

You may provide additional customizations, if needed. Click **Expand** to view the entire customization window.

.. image:: ../common/images/instance-group-customize-cg-pod-expanded.png
   :alt: The expanded view for customizing the pod spec.

.. note::

@@ -354,10 +359,12 @@ You may provide additional customizations, if needed. Click **Expand** to view t
Once the container group is successfully created, the **Details** tab of the newly created container group remains, which allows you to review and edit your container group information. This is the same menu that is opened if the Edit (|edit-button|) button is clicked from the **Instance Group** link. You can also edit **Instances** and review **Jobs** associated with this instance group.

.. |edit-button| image:: ../common/images/edit-button.png
   :alt: Edit button.

|IG - example CG successfully created|

.. |IG - example CG successfully created| image:: ../common/images/instance-group-example-cg-successfully-created.png
   :alt: Example of the successfully created instance group as shown in the Jobs tab of the Instance groups window.

Container groups and instance groups are labeled accordingly.

@@ -375,6 +382,7 @@ To verify the deployment and termination of your container:
|Dummy inventory|

.. |Dummy inventory| image:: ../common/images/inventories-create-new-cg-test-inventory.png
   :alt: Example of creating a new container group test inventory.

2. Create a "localhost" host in the inventory with variables:

@@ -385,6 +393,7 @@ To verify the deployment and termination of your container:
|Inventory with localhost|

.. |Inventory with localhost| image:: ../common/images/inventories-create-new-cg-test-localhost.png
   :alt: The new container group test inventory showing the populated variables.

3. Launch an ad hoc job against the localhost using the *ping* or *setup* module. Even though the **Machine Credential** field is required, it does not matter which one is selected for this simple test.

@@ -393,13 +402,14 @@ To verify the deployment and termination of your container:

.. |Launch inventory with localhost| image:: ../common/images/inventories-launch-adhoc-cg-test-localhost.png

.. image:: ../common/images/inventories-launch-adhoc-cg-test-localhost2.png
   :alt: Launching a Ping adhoc command on the newly created inventory with localhost.

You can see in the jobs detail view that the container was reached successfully using one of the ad hoc jobs.

|Inventory with localhost ping success|

.. |Inventory with localhost ping success| image:: ../common/images/inventories-launch-adhoc-cg-test-localhost-success.png
   :alt: Jobs output view showing a successfully run adhoc job.

If you have an OpenShift UI, you can see Pods appear and disappear as they deploy and terminate. Alternatively, you can use the CLI to perform a ``get pod`` operation on your namespace to watch these same events occurring in real-time.

@@ -412,6 +422,7 @@ When you run a job associated with a container group, you can see the details of
|IG - instances jobs|

.. |IG - instances jobs| image:: ../common/images/instance-group-job-details-with-cgs.png
   :alt: Example Job details window showing the associated execution environment and container group.


Kubernetes API failure conditions

@@ -55,6 +55,7 @@ To set up enterprise authentication for Microsoft Azure Active Directory (AD), y
8. To verify that the authentication was configured correctly, log out of AWX; the login screen will now display the Microsoft Azure logo to allow logging in with those credentials.

.. image:: ../common/images/configure-awx-auth-azure-logo.png
   :alt: AWX login screen displaying the Microsoft Azure logo for authentication.


For application registration basics in Azure AD, refer to the `Azure AD Identity Platform (v2)`_ overview.
@@ -102,6 +103,7 @@ SAML settings
SAML allows the exchange of authentication and authorization data between an Identity Provider (IdP - a system of servers that provide the Single Sign On service) and a Service Provider (in this case, AWX). AWX can be configured to talk with SAML in order to authenticate (create/login/logout) AWX users. User Team and Organization membership can be embedded in the SAML response to AWX.

.. image:: ../common/images/configure-awx-auth-saml-topology.png
   :alt: Diagram depicting SAML topology for AWX.

The following instructions describe AWX as the service provider.

@@ -122,6 +124,7 @@ To setup SAML authentication:
In this example, the Service Provider is the AWX cluster, and therefore, the ID is set to the AWX Cluster FQDN.

.. image:: ../common/images/configure-awx-auth-saml-spentityid.png
   :alt: Configuring SAML Service Provider Entity ID in AWX.

5. Create a server certificate for the Ansible cluster. Typically when an Ansible cluster is configured, AWX nodes will be configured to handle HTTP traffic only and the load balancer will be an SSL Termination Point. In this case, an SSL certificate is required for the load balancer, and not for the individual AWX Cluster Nodes. SSL can either be enabled or disabled per individual AWX node, but should be disabled when using an SSL terminated load balancer. It is recommended to use a non-expiring self signed certificate to avoid periodically updating certificates. This way, authentication will not fail in case someone forgets to update the certificate.

@@ -132,6 +135,7 @@ In this example, the Service Provider is the AWX cluster, and therefore, the ID
If you are using a CA bundle with your certificate, include the entire bundle in this field.

.. image:: ../common/images/configure-awx-auth-saml-cert.png
   :alt: Configuring SAML Service Provider Public Certificate in AWX.

As an example for public certs:

@@ -167,6 +171,7 @@ As an example for private keys:
For example:

.. image:: ../common/images/configure-awx-auth-saml-org-info.png
   :alt: Configuring SAML Organization information in AWX.

.. note::
    These fields are required in order to properly configure SAML within AWX.
@@ -183,6 +188,7 @@ For example:
For example:

.. image:: ../common/images/configure-awx-auth-saml-techcontact-info.png
   :alt: Configuring SAML Technical Contact information in AWX.

9. Provide the IdP with the support contact information in the **SAML Service Provider Support Contact** field. Do not remove the contents of this field.

@@ -196,6 +202,7 @@ For example:
For example:

.. image:: ../common/images/configure-awx-auth-saml-suppcontact-info.png
   :alt: Configuring SAML Support Contact information in AWX.

10. In the **SAML Enabled Identity Providers** field, provide information on how to connect to each Identity Provider listed. AWX expects the following SAML attributes in the example below:

@@ -238,6 +245,7 @@ Configure the required keys for each IDp:
   }

.. image:: ../common/images/configure-awx-auth-saml-idps.png
   :alt: Configuring SAML Identity Providers (IdPs) in AWX.

.. warning::

@@ -249,6 +257,7 @@ Configure the required keys for each IDp:
The IdP provides the email, last name and first name using the well-known SAML URN. The IdP uses a custom SAML attribute to identify a user, which is an attribute that AWX is unable to read. Instead, AWX can understand the unique identifier name, which is the URN. Use the URN listed in the SAML “Name” attribute for the user attributes as shown in the example below.

.. image:: ../common/images/configure-awx-auth-saml-idps-urn.png
   :alt: Configuring SAML Identity Providers (IdPs) in AWX using URNs.

11. Optionally provide the **SAML Organization Map**. For further detail, see :ref:`ag_org_team_maps`.

@@ -479,6 +488,7 @@ Example::
Alternatively, log out of AWX and the login screen will now display the SAML logo to indicate it as an alternate method of logging into AWX.

.. image:: ../common/images/configure-awx-auth-saml-logo.png
   :alt: AWX login screen displaying the SAML logo for authentication.


Transparent SAML Logins
@@ -495,6 +505,7 @@ For transparent logins to work, you must first get IdP-initiated logins to work.
2. Once this is working, specify the redirect URL for non-logged-in users to somewhere other than the default AWX login page by using the **Login redirect override URL** field in the Miscellaneous Authentication settings window of the **Settings** menu, accessible from the left navigation bar. This should be set to ``/sso/login/saml/?idp=<name-of-your-idp>`` for transparent SAML login, as shown in the example.

.. image:: ../common/images/configure-awx-system-login-redirect-url.png
   :alt: Configuring the login redirect URL in AWX Miscellaneous Authentication Settings.

.. note::

@@ -537,6 +548,7 @@ Terminal Access Controller Access-Control System Plus (TACACS+) is a protocol th
- **TACACS+ Authentication Protocol**: The protocol used by the TACACS+ client. Options are **ascii** or **pap**.

.. image:: ../common/images/configure-awx-auth-tacacs.png
   :alt: TACACS+ configuration details in AWX settings.

4. Click **Save** when done.

@@ -563,6 +575,7 @@ To configure OIDC in AWX:
The example below shows specific values associated to GitHub as the generic IdP:

.. image:: ../common/images/configure-awx-auth-oidc.png
   :alt: OpenID Connect (OIDC) configuration details in AWX settings.

4. Click **Save** when done.

@@ -574,4 +587,4 @@ The example below shows specific values associated to GitHub as the generic IdP:
5. To verify that the authentication was configured correctly, log out of AWX and the login screen will now display the OIDC logo to indicate it as an alternate method of logging into AWX.

.. image:: ../common/images/configure-awx-auth-oidc-logo.png
   :alt: AWX login screen displaying the OpenID Connect (OIDC) logo for authentication.

@@ -1,4 +1,3 @@

.. _ag_instances:

Managing Capacity With Instances
@@ -58,6 +57,7 @@ Manage instances
Click **Instances** from the left side navigation menu to access the Instances list.

.. image:: ../common/images/instances_list_view.png
   :alt: List view of instances in AWX

The Instances list displays all the current nodes in your topology, along with relevant details:

@@ -83,6 +83,7 @@ The Instances list displays all the current nodes in your topology, along with r

From this page, you can add, remove, or run health checks on your nodes. Use the check boxes next to an instance to select it for removal or to run a health check against it. When a button is grayed-out, you do not have permission for that particular action. Contact your Administrator to grant you the required level of access. If you are able to remove an instance, you will receive a prompt for confirmation, like the one below:

.. image:: ../common/images/instances_delete_prompt.png
   :alt: Prompt for deleting instances in AWX.

.. note::

@@ -95,6 +96,7 @@ Click **Remove** to confirm.

If you run a health check on an instance, a message at the top of its Details page indicates that the health check is in progress.

.. image:: ../common/images/instances_health_check.png
   :alt: Health check for instances in AWX

Click **Reload** to refresh the instance status.
@@ -103,10 +105,12 @@ Click **Reload** to refresh the instance status.

Health checks run asynchronously, and it may take up to a minute for the instance status to update, even with a refresh. The status may or may not change after the health check. At the bottom of the Details page, a timer/clock icon displays next to the last known health check date and time stamp while the health check task is running.

.. image:: ../common/images/instances_health_check_pending.png
   :alt: Health check for an instance still in the pending state.

The example health check below shows the status updating with an error on node 'one':

.. image:: ../common/images/topology-viewer-instance-with-errors.png
   :alt: Health check showing an error in one of the instances.
Add an instance

@@ -119,6 +123,7 @@ One of the ways to expand capacity is to create an instance, which serves as a n

2. In the Instances list view, click the **Add** button and the Create new Instance window opens.

.. image:: ../common/images/instances_create_new.png
   :alt: Create a new instance form.

An instance has several attributes that may be configured:

@@ -134,6 +139,7 @@ An instance has several attributes that may be configured:

Upon successful creation, the Details page of the created instance opens.

.. image:: ../common/images/instances_create_details.png
   :alt: Details of the newly created instance.

.. note::
@@ -142,6 +148,7 @@ Upon successful creation, the Details page of the created instance opens.

4. Click the download button next to the **Install Bundle** field to download the tarball that includes this new instance and the files needed to install the node into the mesh.

.. image:: ../common/images/instances_install_bundle.png
   :alt: Instance details showing the Download button in the Install Bundle field of the Details tab.

5. Extract the downloaded ``tar.gz`` file. The install bundle contains yaml files, certificates, and keys that will be used in the installation process.

@@ -179,6 +186,7 @@ The content of the ``inventory.yml`` file serves as a template and contains vari

.. image:: ../common/images/instances_peers_tab.png
   :alt: "Peers" tab showing two peers.

You may run a health check by selecting the node and clicking the **Run health check** button from its Details page.
@@ -3,6 +3,7 @@ AWX supports the use of a custom logo and login message. You can add a custom lo

.. image:: ../common/images/configure-awx-ui.png
   :alt: Edit User Interface Settings form.

For the custom logo to look its best, use a ``.png`` file with a transparent background. GIF, PNG, and JPEG formats are supported.
@@ -14,14 +15,17 @@ adding it to the **Custom Login Info** text field.

For example, if you uploaded a specific logo, and added the following text:

.. image:: ../common/images/configure-awx-ui-logo-filled.png
   :alt: Edit User Interface Settings form populated with custom text and logo.

The AWX login dialog would look like this:

.. image:: ../common/images/configure-awx-ui-angry-spud-login.png
   :alt: AWX login screen with custom text and logo.

Selecting ``Revert`` will result in the appearance of the standard |at| logo.

.. image:: ../common/images/login-form.png
   :alt: AWX login screen with default AWX logo.
@@ -8,5 +8,6 @@

To enter the Settings window for AWX, click **Settings** from the left navigation bar. This page allows you to modify your AWX configuration, such as settings associated with authentication, jobs, system, and user interface.

.. image:: ../common/images/ug-settings-menu-screen.png
   :alt: The main settings window for AWX.

For more information on configuring these settings, refer to the :ref:`ag_configure_awx` section of the |ata|.
@@ -8,11 +8,20 @@ Release Notes

   pair: release notes; v23.0.0
   pair: release notes; v23.1.0
   pair: release notes; v23.2.0
   pair: release notes; v23.3.0
   pair: release notes; v23.3.1
   pair: release notes; v23.4.0

For versions older than 23.0.0, refer to `AWX Release Notes <https://github.com/ansible/awx/releases>`_.

- See `What's Changed for 23.4.0 <https://github.com/ansible/awx/releases/tag/23.4.0>`_.
- See `What's Changed for 23.3.1 <https://github.com/ansible/awx/releases/tag/23.3.1>`_.
- See `What's Changed for 23.3.0 <https://github.com/ansible/awx/releases/tag/23.3.0>`_.
- See `What's Changed for 23.2.0 <https://github.com/ansible/awx/releases/tag/23.2.0>`_.
- See `What's Changed for 23.1.0 <https://github.com/ansible/awx/releases/tag/23.1.0>`_.
13
docs/docsite/rst/rest_api/_swagger/download-json.py
Normal file
@@ -0,0 +1,13 @@

# Fetch the published AWX swagger.json and save it where the REST API docs expect it.
import requests

url = "https://awx-public-ci-files.s3.amazonaws.com/community-docs/swagger.json"
swagger_json = "./docs/docsite/rst/rest_api/_swagger/swagger.json"

response = requests.get(url)

if response.status_code == 200:
    # Write the raw bytes so the file is stored exactly as published.
    with open(swagger_json, 'wb') as file:
        file.write(response.content)
    print(f"JSON file downloaded to {swagger_json}")
else:
    print(f"Request failed with status code: {response.status_code}")
@@ -1,5 +1,3 @@

:orphan:

.. _api_reference:

AWX API Reference Guide

@@ -48,7 +46,7 @@ The API Reference Manual provides in-depth documentation for the AWX REST API, i

   <script>
     window.onload = function() {
       $('head').append('<link rel="stylesheet" href="../_static/swagger-ui.css" type="text/css"></link>');
       $('head').append('<link rel="stylesheet" href="../_static/tower.css" type="text/css"></link>');
       $('head').append('<link rel="stylesheet" href="../_static/awx-rest-api.css" type="text/css"></link>');
       $('#swagger-ui').on('click', function(e) {
         // By default, swagger-ui has a show/hide toggle for headers, and
         // there's no way to turn it off; this code intercepts the click event
@@ -31,7 +31,7 @@ You can also find lots of AWX discussion and get answers to questions at `forum.

   access_resources
   read_only_fields
   authentication
.. api_ref
   api_ref

.. intro
.. auth_token
@@ -29,16 +29,19 @@ To create a new credential for use with Insights:

5. In the **Organization** field, optionally enter the name of the organization with which the credential is associated, or click the |search| button and select it from the pop-up window.

.. |search| image:: ../common/images/search-button.png
   :alt: Search button to select the organization from a pop-up window

6. In the **Credential Type** field, enter **Insights** or select it from the drop-down list.

.. image:: ../common/images/credential-types-popup-window-insights.png
   :alt: Dropdown menu for selecting the "Insights" Credential Type

7. Enter a valid Insights credential in the **Username** and **Password** fields.

|Credentials - create with demo insights credentials|

.. |Credentials - create with demo insights credentials| image:: ../common/images/insights-create-with-demo-credentials.png
   :alt: Create new credential form showing example Insights credential

8. Click **Save** when done.
@@ -71,19 +74,21 @@ To create a new Insights project:

|Insights - create demo insights project form|

.. |Insights - create demo insights project form| image:: ../common/images/insights-create-project-insights-form.png
   :alt: Form for creating a new Insights project with specific Insights-related details

6. Click **Save** when done.

All SCM/Project syncs occur automatically the first time you save a new project. However, if you want them to be updated to what is current in Insights, manually update the SCM-based project by clicking the |update| button under the project's available Actions.

.. |update| image:: ../common/images/update-button.png
   :alt: Update button to manually refresh the SCM-based project

This process syncs your Insights project with your Insights account. Notice that the status dot beside the name of the project updates once the sync has run.

|Insights - demo insights project success|

.. |Insights - demo insights project success| image:: ../common/images/insights-create-project-insights-succeed.png
   :alt: Projects list showing successfully created sample Insights project

Create Insights Inventory
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
@@ -118,11 +123,13 @@ Remediation of an Insights inventory allows AWX to run Insights playbooks with a

|Insights - maintenance plan template filled|

.. |Insights - maintenance plan template filled| image:: ../common/images/insights-create-new-job-template-maintenance-plan-filled.png
   :alt: Form for creating a maintenance plan template for Insights remediation.

3. Click **Save** when done.

4. Click the |launch| icon to launch the job template.

.. |launch| image:: ../common/images/launch-button.png
   :alt: Launch icon.

Once complete, the job results display in the Job Details page.
@@ -14,17 +14,20 @@ The User Interface offers a friendly graphical framework for your IT orchestrati

The new AWX User Interface is available for tech preview and is subject to change in a future release. To preview the new UI, click the **Enable Preview of New User Interface** toggle to **On** from the Miscellaneous System option of the Settings menu.

.. image:: ../common/images/configure-awx-system-misc-preview-newui.png
   :alt: Enabling preview of new user interface in the Miscellaneous System option of the Settings menu.

After saving, log out and log back in to access the new UI from the preview banner. To return to the current UI, click the link on the top banner where indicated.

.. image:: ../common/images/ug-dashboard-preview-banner.png
   :alt: Tech preview banner to view the new user interface.

Across the top-right side of the interface, you can access your user profile, the About page, view related documentation, and log out. Right below these options, you can view the activity stream for that user by clicking on the Activity Stream |activitystream| button.

.. |activitystream| image:: ../common/images/activitystream.png
   :alt: Activity stream icon.

.. image:: ../common/images/ug-dashboard-top-nav.png
   :alt: Main screen with arrow showing where the activity stream icon resides on the Dashboard.
@@ -57,7 +60,7 @@ Dashboard view

The **Dashboard** view begins with a summary of your hosts, inventories, and projects. Each of these is linked to the corresponding objects for easy access.

.. image:: ../common/images/ug-dashboard-topsummary.png
   :alt: Dashboard showing a summary of your hosts, inventories, and projects; and job run statuses.

On the main Dashboard screen, a summary appears listing your current **Job Status**. The **Job Status** graph displays the number of successful and failed jobs over a specified time period. You can choose to limit the job types that are viewed, and to change the time horizon of the graph.
@@ -66,11 +69,12 @@ Also available for view are summaries of **Recent Jobs** and **Recent Templates*

The **Recent Jobs** section displays which jobs were most recently run, their status, and the time at which they ran.

.. image:: ../common/images/ug-dashboard-recent-jobs.png
   :alt: Summary of the most recently used jobs

The **Recent Templates** section of this display shows a summary of the most recently used templates. You can also access this summary by clicking **Templates** from the left navigation bar.

.. image:: ../common/images/ug-dashboard-recent-templates.png
   :alt: Summary of the most recently used templates

.. note::
@@ -83,6 +87,7 @@ Jobs view

Access the **Jobs** view by clicking **Jobs** from the left navigation bar. This view shows all the jobs that have run, including projects, templates, management jobs, SCM updates, playbook runs, etc.

.. image:: ../common/images/ug-dashboard-jobs-view.png
   :alt: Jobs view showing jobs that have run, including projects, templates, management jobs, SCM updates, playbook runs, etc.
Schedules view

@@ -92,6 +97,7 @@ Access the Schedules view by clicking **Schedules** from the left navigation bar

.. image:: ../common/images/ug-dashboard-schedule-view.png
   :alt: Schedules view showing all the scheduled jobs that are configured.

.. _ug_activitystreams:
@@ -110,15 +116,18 @@ Most screens have an Activity Stream (|activitystream|) button. Clicking this br

|Users - Activity Stream|

.. |Users - Activity Stream| image:: ../common/images/users-activity-stream.png
   :alt: Summary of the recent activity on Activity Stream dashboard.

An Activity Stream shows all changes for a particular object. For each change, the Activity Stream shows the time of the event, the user who initiated the event, and the action. The information displayed varies depending on the type of event. Clicking on the Examine (|examine|) button shows the event log for the change.

.. |examine| image:: ../common/images/examine-button.png
   :alt: Examine button

|event log|

.. |event log| image:: ../common/images/activity-stream-event-log.png
   :alt: Example of an event log of an Activity Stream instance.

The Activity Stream can be filtered by the initiating user (or the system, if it was system initiated), and by any related object,
@@ -15,18 +15,16 @@ A :term:`workflow job template` links together a sequence of disparate resources

The **Templates** menu opens a list of the workflow and job templates that are currently available. The default view is collapsed (Compact), showing the template name, template type, and the statuses of the jobs that ran using that template, but you can click **Expanded** to view more information. This list is sorted alphabetically by name, but you can sort by other criteria, or search by various fields and attributes of a template. From this screen, you can launch (|launch|), edit (|edit|), and copy (|copy|) a workflow job template.

.. |delete| image:: ../common/images/delete-button.png

Only workflow templates have the Workflow Visualizer icon (|wf-viz-icon|) as a shortcut for accessing the workflow editor.

.. |wf-viz-icon| image:: ../common/images/wf-viz-icon.png
   :alt: Workflow visualizer icon

|Wf templates - home with example wf template|

.. |Wf templates - home with example wf template| image:: ../common/images/wf-templates-home-with-example-wf-template.png
   :alt: Job templates list view with example of workflow template and arrow pointing to the Workflow visualizer icon.

.. note::
@@ -41,10 +39,12 @@ To create a new workflow job template:

1. Click the |add options template| button then select **Workflow Template** from the menu list.

.. |add options template| image:: ../common/images/add-options-template.png
   :alt: Create new template Add drop-down options.

|Wf templates - create new wf template|

.. |Wf templates - create new wf template| image:: ../common/images/wf-templates-create-new-wf-template.png
   :alt: Create new workflow template form.

2. Enter the appropriate details into the following fields:
@@ -53,6 +53,9 @@ To create a new workflow job template:

If a field has the **Prompt on launch** checkbox selected, then when you launch the workflow template (or when the workflow template is used within another workflow template), you will be prompted for the value of that field. Most prompted values will override any values set in the workflow job template; exceptions are noted below.

.. |delete| image:: ../common/images/delete-button.png
   :alt: Delete button.

.. list-table::
   :widths: 10 35 30
   :header-rows: 1
@@ -106,9 +109,11 @@ To create a new workflow job template:

For more information about **Job Tags** and **Skip Tags**, refer to `Tags <https://docs.ansible.com/ansible/latest/playbook_guide/playbooks_tags.html>`_ in the Ansible documentation.

.. |x-circle| image:: ../common/images/x-delete-button.png
   :alt: x delete button

.. |x| image:: ../common/images/x-button.png
   :alt: x button

3. **Options**: Specify options for launching this workflow job template, if necessary.
@@ -136,6 +141,7 @@ For more information about **Job Tags** and **Skip Tags**, refer to `Tags <https

Saving the template exits the workflow template page and the Workflow Visualizer opens to allow you to build a workflow. See the :ref:`ug_wf_editor` section for further instructions. Otherwise, you may close the Workflow Visualizer to return to the Details tab of the newly saved template in order to review, edit, add permissions, notifications, schedules, and surveys, or view completed jobs and build a workflow template at a later time. Alternatively, you can click **Launch** to launch the workflow, but you must first save the template prior to launching; otherwise, the **Launch** button remains grayed-out. Also, note that the **Notifications** tab is present only after the template has been saved.

.. image:: ../common/images/wf-templates-wf-template-saved.png
   :alt: Details tab of the newly created workflow template.
@@ -146,6 +152,8 @@ Work with Permissions

Clicking on **Access** allows you to review, grant, edit, and remove associated permissions for users as well as team members.

.. image:: ../common/images/wf-template-completed-permissions-view.png
   :alt: Access tab of the newly created workflow template showing two user roles and their permissions.

Click the **Add** button to create new permissions for this workflow template by following the prompts to assign them accordingly.
@@ -156,13 +164,15 @@ Work with Notifications

Clicking on **Notifications** allows you to review any notification integrations you have set up. The **Notifications** tab is present only after the template has been saved.

.. .. image:: ../common/images/wf-template-completed-notifications-view.png
.. image:: ../common/images/wf-template-completed-notifications-view.png
   :alt: Notifications tab of the newly created workflow template showing four notification configurations with one notification set for Approval.

Use the toggles to enable or disable the notifications to use with your particular template. For more detail, see :ref:`ug_notifications_on_off`.

If no notifications have been set up, see :ref:`ug_notifications_create` for detail.

.. image:: ../common/images/wf-template-no-notifications-blank.png
   :alt: Notifications tab of the newly created workflow template showing no notifications set up.

Refer to :ref:`ug_notifications_types` for additional details on configuring various notification types.
@@ -174,20 +184,14 @@ View Completed Jobs

The **Completed Jobs** tab provides the list of workflow templates that have run. Click **Expanded** to view the various details of each job.

.. .. image:: ../common/images/wf-template-completed-jobs-list.png
.. image:: ../common/images/wf-template-completed-jobs-list.png
   :alt: Jobs tab of the example workflow template showing completed jobs.

From this view, you can click the job ID or name of the workflow job and see its graphical representation. The example below shows the job details of a workflow job.

.. image:: ../common/images/wf-template-jobID-detail-example.png

.. If a workflow template is used in another workflow, the jobs details indicate a parent workflow.

.. .. image:: ../common/images/wf-template-job-detail-with-parent.png

.. In the above example, click the parent workflow template, **Overall**, to view its Job Details page and the graphical details of the nodes and statuses of each as they were launched.

.. .. image:: ../common/images/wf-template-jobs-detail-example.png
   :alt: Details of the job output for the selected workflow template by job ID

The nodes are marked with labels that help you identify them at a glance. See the legend_ in the :ref:`ug_wf_editor` section for more information.
@@ -201,6 +205,7 @@ Work with Schedules

Clicking on **Schedules** allows you to review any schedules set up for this template.

.. .. image:: ../common/images/templates-schedules-example-list.png
   :alt: workflow template - schedule list example
@@ -246,8 +251,8 @@ To create a survey:

1. Click the **Survey** tab to bring up the **Add Survey** window.

.. figure:: ../common/images/wf-template-create-survey.png
   :alt: Workflow Job Template - create survey
.. image:: ../common/images/wf-template-create-survey.png
   :alt: Workflow Job Template showing the Create survey form.

Use the **ON/OFF** toggle button at the top of the screen to quickly activate or deactivate this survey prompt.
@@ -283,6 +288,7 @@ A stylized version of the survey is presented in the Preview pane. For any quest

|Workflow-template-completed-survey|

.. |Workflow-template-completed-survey| image:: ../common/images/wf-template-completed-survey.png
   :alt: Workflow Job Template showing completed survey and arrows pointing to the re-ordering icons.

Optional Survey Questions
@@ -324,16 +330,20 @@ You can set up any combination of two or more of the following node types to bui

1. In the details/edit view of a workflow template, click the **Visualizer** tab or from the Templates list view, click the (|wf-viz-icon|) icon to launch the Workflow Visualizer.

.. image:: ../common/images/wf-editor-create-new.png
   :alt: Workflow Visualizer start page.

2. Click the |start| button to display a list of nodes to add to your workflow.

.. |start| image:: ../common/images/wf-start-button.png
   :alt: Workflow Visualizer Start button.

.. image:: ../common/images/wf-editor-create-new-add-template-list.png
   :alt: Workflow Visualizer wizard, step 1 specifying the node type.

3. On the right pane, select the type of node you want to add from the drop-down menu:

.. image:: ../common/images/wf-add-node-selections.png
   :alt: Node type showing the drop-down menu of node type options.

If selecting an **Approval** node, see :ref:`ug_wf_approval_nodes` for further detail.
@@ -360,9 +370,7 @@ For subsequent nodes, you can select one of the following scenarios (edge type)

- Choose **All** to ensure that *all* nodes complete as specified before converging and triggering the next node. The purpose of **ALL** nodes is to make sure that every parent met its expected outcome in order to run the child node; the workflow checks that every parent behaved as expected, and otherwise does not run the child node.

If selected, the graphical view will label the node as **ALL**.

.. image:: ../common/images/wf-editor-convergent-node-all.png
If selected, the graphical view will indicate the node type with a representative color. Refer to the legend (|compass|) to see the corresponding run scenario and their job types.

.. note::
@@ -372,45 +380,51 @@ For subsequent nodes, you can select one of the following scenarios (edge type)

7. If a job template used in the workflow has **Prompt on Launch** selected for any of its parameters, a **Prompt** button appears, allowing you to change those values at the node level. Use the wizard to change the value(s) in each of the tabs and click **Confirm** in the Preview tab.

.. image:: ../common/images/wf-editor-prompt-button-wizard.png
   :alt: Workflow Visualizer wizard with Prompt on Launch options.

Likewise, if a workflow template used in the workflow has **Prompt on Launch** selected for the inventory option, use the wizard to supply the inventory at the prompt. If the parent workflow has its own inventory, it will override any inventory that is supplied here.

.. image:: ../common/images/wf-editor-prompt-button-inventory-wizard.png
   :alt: Workflow Visualizer wizard with Prompt on Launch for Inventory.

.. note::

   For workflow job templates with promptable fields that are required, but do not have a default, you must provide those values when creating a node before the **Select** button becomes enabled. Two cases disable the **Select** button until a value is provided via the **Prompt** button: 1) when you select the **Prompt on Launch** checkbox in a workflow job template, but do not provide a default, or 2) when you create a survey question that is required but do not provide a default answer. However, this is **NOT** the case with credentials. Credentials that require a password on launch are **not permitted** when creating a workflow node, since everything needed to launch the node must be provided when the node is created. So, if a workflow job template prompts for credentials, AWX prevents you from being able to select a credential that requires a password.

   You must also click **Select** when the prompt wizard closes in order to apply the changes at that node. Otherwise, any changes you make will revert back to the values set in the actual job template.

   .. image:: ../common/images/wf-editor-wizard-buttons.png

Once the node is created, it is labeled with its job type. A template that is associated with each workflow node will run based on the selected run scenario as it proceeds. Click the compass (|compass|) icon to display the legend for each run scenario and their job types.

.. _legend:

.. |compass| image:: ../common/images/wf-editor-compass-button.png
   :alt: Workflow Visualizer legend button.

.. image:: ../common/images/wf-editor-key-dropdown-list.png
   :alt: Workflow Visualizer legend expanded.
8. Hovering over a node allows you to add |add node| another node, view info |info node| about the node, edit |edit| the node details, edit an existing link |edit link|, or delete |delete node| the selected node.

.. |add node| image:: ../common/images/wf-editor-add-button.png
   :alt: Add node icon.
.. |edit link| image:: ../common/images/wf-editor-edit-link.png
   :alt: Edit link icon.
.. |delete node| image:: ../common/images/wf-editor-delete-button.png
   :alt: Delete node icon.
.. |info node| image:: ../common/images/wf-editor-info-button.png
   :alt: View node details icon.
.. |edit| image:: ../common/images/edit-button.png
   :alt: Edit node details icon.

.. image:: ../common/images/wf-editor-create-new-add-template.png
   :alt: Building a new example workflow job template in the Workflow Visualizer
9. When done adding/editing a node, click **Select** to save any modifications and render it on the graphical view. For possible ways to build your workflow, see :ref:`ug_wf_building_scenarios`.
9. When done adding/editing a node, click **Save** to save any modifications and render it on the graphical view. For possible ways to build your workflow, see :ref:`ug_wf_building_scenarios`.

10. When done with building your workflow template, click **Save** to save your entire workflow template and return to the new workflow template details page.

.. important::

   Clicking **Close** on this pane will not save your work, but instead, closes the entire Workflow Visualizer and you will have to start over.
   Closing the wizard without saving closes the entire Workflow Visualizer without saving your work, and you will have to start again from where you last saved.

.. _ug_wf_approval_nodes:
@@ -421,22 +435,27 @@ Approval nodes

Choosing an **Approval** node requires user intervention in order to advance the workflow. This functions as a means to pause the workflow in between playbooks so that a user can give approval to continue on to the next playbook in the workflow, giving the user a specified amount of time to intervene, but also allows the user to continue as quickly as possible without having to wait on some other trigger.

.. image:: ../common/images/wf-node-approval-form.png
   :alt: Workflow Visualizer Approval node form.

The default for the timeout is none, but you can specify the length of time before the request expires and automatically gets denied. After selecting and supplying the information for the approval node, it displays on the graphical view with a pause (|pause|) icon next to it.

.. |pause| image:: ../common/images/wf-node-approval-icon.png
   :alt: Workflow node - approval icon.

.. image:: ../common/images/wf-node-approval-node.png
   :alt: Workflow Visualizer showing approval node with pause icon.

The approver is anyone who can execute the workflow job template containing the approval nodes, has org admin or above privileges (for the org associated with that workflow job template), or any user who has the *Approve* permission explicitly assigned to them within that specific workflow job template.

.. image:: ../common/images/wf-node-approval-notifications.png
   :alt: Workflows requesting approval from notifications

If pending approval nodes are not approved within the specified time limit (if an expiration was assigned) or they are denied, then they are marked as "timed out" or "failed", respectively, and move on to the next "on fail node" or "always node". If approved, the "on success" path is taken. If you try to POST in the API to a node that has already been approved, denied or timed out, an error message notifies you that this action is redundant, and no further steps will be taken.
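
For illustration, approving a pending node through the API could look like the following sketch, assuming a ``/api/v2/workflow_approvals/<id>/approve/`` endpoint with placeholder host, credentials, and id::

    import requests

    # Hypothetical AWX host, admin credentials, and workflow approval id.
    resp = requests.post(
        "https://awx.example.com/api/v2/workflow_approvals/42/approve/",
        auth=("admin", "password"),
    )
    # Repeating the POST after the node is approved, denied, or timed out
    # returns an error, since the action is redundant.
    print(resp.status_code)
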
Below shows the various levels of permissions allowed on approval workflows:

.. image:: ../common/images/wf-node-approval-rbac.png
   :alt: Workflow nodes approval RBAC table.

.. source file located on google spreadsheet "Workflow approvals chart"
@@ -448,18 +467,22 @@ Node building scenarios

You can add a sibling node by clicking the |add node| on the parent node:

.. image:: ../common/images/wf-editor-create-sibling-node.png
   :alt: Workflow Visualizer showing how to create a sibling node.

You can insert another node in between nodes by hovering over the line that connects the two until the |add node| appears. Clicking on the |add node| automatically inserts the node between the two nodes.

.. image:: ../common/images/wf-editor-insert-node-template.png
   :alt: Workflow Visualizer showing how to insert a node.

To add a root node to depict a split scenario, click the |start| button again:

.. image:: ../common/images/wf-editor-create-new-add-template-split.png
   :alt: Workflow Visualizer showing how to depict a split scenario.

At any node where you want to create a split scenario, hover over the node from which the split scenario begins and click the |add node|. This essentially adds multiple nodes from the same parent node, creating sibling nodes:

.. image:: ../common/images/wf-editor-create-siblings.png
   :alt: Workflow Visualizer showing how to create sibling nodes.

.. note::
@@ -471,8 +494,9 @@ If you want to undo the last inserted node, click on another node without making

Below is an example of a workflow that contains all three types of jobs. It is initiated by a job template that, if it fails to run, proceeds to the project sync job, and regardless of whether that fails or succeeds, proceeds to the inventory sync job.

.. image:: ../common/images/wf-editor-create-new-add-template-example.png
   :alt: Workflow Visualizer showing a workflow job that contains a job template, a project, and an inventory source.

Remember to refer to the Key at the top of the window to identify the meaning of the symbols and colors associated with the graphical depiction.
Remember to refer to the Legend at the top of the window to identify the meaning of the symbols and colors associated with the graphical depiction.

.. note::
@@ -481,6 +505,7 @@ Remember to refer to the Key at the top of the window to identify the meaning of

.. image:: ../common/images/wf-node-delete-scenario.png
   :alt: Workflow Visualizer showing a workflow job with a deleted node.

You can modify your nodes in the following ways:
@@ -490,14 +515,22 @@ You can modify your nodes in the following ways:

- To edit the edge type for an existing link (success/failure/always), click on the link. The right pane displays the current selection. Make your changes and click **Save** to apply them to the graphical view.

.. image:: ../common/images/wf-editor-wizard-edit-link.png
   :alt: Workflow Visualizer showing the wizard to edit the link.

- To add a new link from one node to another, click the link |edit link| icon that appears on each node. Doing this highlights the nodes that are possible to link to. These feasible options are indicated by the dotted lines. Invalid options are indicated by grayed out boxes (nodes) that would otherwise produce an invalid link. The example below shows the **Demo Project** as a possible option for the **e2e-ec20de52-project** to link to, as indicated by the arrows:

.. image:: ../common/images/wf-node-link-scenario.png
   :alt: Workflow showing linking scenario between two nodes.

- To remove a link, click the link and click the **Unlink** button.
When linked, specify the type of run scenario you would like the link to have in the Add Link prompt.

.. image:: ../common/images/wf-editor-wizard-add-link-prompt.png
   :alt: Workflow Visualizer prompt specifying the run type when adding a new link.

- To remove a link, click the link and click the **Unlink** (|delete node|) icon and click **Remove** at the prompt to confirm.

.. image:: ../common/images/wf-editor-wizard-unlink.png
   :alt: Workflow Visualizer showing the wizard to remove the link.

This button only appears in the right-hand panel if the target or child node has more than one parent. All nodes must be linked to at least one other node at all times, so you must create a new link before removing an old one.
@@ -505,6 +538,7 @@ This button only appears in the right hand panel if the target or child node has

Click the Tools icon (|tools|) to zoom, pan, or reposition the view. Alternatively, you can drag the workflow diagram to reposition it on the screen or use the scroll on your mouse to zoom.

.. |tools| image:: ../common/images/tools.png
   :alt: Workflow Visualizer tools icon.
@@ -519,19 +553,19 @@ Launch a workflow template by any of the following ways:

- Access the workflow templates list from the **Templates** menu on the left navigation bar or while in the workflow template Details view, scroll to the bottom to access the |launch| button from the list of templates.

.. image:: ../common/images/wf-templates-wf-template-launch.png
   :alt: Templates list view with arrow pointing to the launch button of the workflow job template.

- While in the Workflow Job Template Details view of the job you want to launch, click **Launch**.

.. |launch| image:: ../common/images/launch-button.png
   :alt: Workflow template launch button.

Along with any extra variables set in the workflow job template and survey, AWX automatically adds the same variables as those added for a workflow job template upon launch. Additionally, AWX automatically redirects the web browser to the Job Details page for this job, displaying the progress and the results.
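
A workflow can also be launched through the API. A minimal sketch, assuming a ``/api/v2/workflow_job_templates/<id>/launch/`` endpoint with placeholder host, credentials, template id, and variables::

    import requests

    # Hypothetical AWX host, admin credentials, and workflow job template id.
    resp = requests.post(
        "https://awx.example.com/api/v2/workflow_job_templates/7/launch/",
        auth=("admin", "password"),
        json={"extra_vars": {"example_var": "example_value"}},
    )
    print(resp.json()["id"])  # id of the resulting workflow job
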
Events related to approvals on workflows display in the Activity Stream (|activity-stream|) with detailed information about the approval requests, if any.
Events related to approvals on workflows display at the top in the Activity Stream (|activity-stream|) with detailed information about the approval requests, if any.

.. |activity-stream| image:: ../common/images/activitystream.png
   :alt: Activity Stream icon.

.. .. image:: ../common/images/wf-activity-stream-events.png
Copy a Workflow Template
-------------------------------

@@ -543,10 +577,12 @@ AWX allows you the ability to copy a workflow template. If you choose to copy a

2. Click the |copy| button.

.. |copy| image:: ../common/images/copy-button.png
   :alt: Copy button.

A new template opens with the name of the template from which you copied and a timestamp.

.. image:: ../common/images/wf-list-view-copy-example.png
   :alt: Templates list view with example copied workflow.

Select the copied template and replace the contents of the **Name** field with a new name, and provide or modify the entries in the other fields to complete this template.
@@ -567,6 +603,8 @@ Extra Variables

   pair: workflow templates; survey extra variables
   pair: surveys; extra variables

.. <- Comment to separate the index and the note to avoid rendering issues.

.. note::

   ``extra_vars`` passed to the job launch API are only honored if one of the following is true:

@@ -592,3 +630,4 @@ The following table notes the behavior (hierarchy) of variable precedence in AWX

**Variable Precedence Hierarchy (last listed wins)**

.. image:: ../common/images/Architecture-AWX_Variable_Precedence_Hierarchy-Workflows.png
   :alt: AWX Variable Precedence Hierarchy for Workflows
@@ -4,7 +4,7 @@ Stand-alone execution nodes can be added to run alongside the Kubernetes deploym

Hop nodes can be added to sit between the control plane of AWX and stand-alone execution nodes. These machines will not be a part of the AWX Kubernetes cluster. The machines will be registered in AWX as node type "hop", meaning they will only handle inbound/outbound traffic for otherwise unreachable nodes in a different or more strict network.

Below is an example of an AWX Task pod with two excution nodes. Traffic to execution node 2 flows through a hop node that is setup between it and the control plane.
Below is an example of an AWX Task pod with two execution nodes. Traffic to execution node 2 flows through a hop node that is set up between it and the control plane.

```
AWX TASK POD
@@ -33,7 +33,7 @@ Adding an execution instance involves a handful of steps:

### Start machine

Bring a machine online with a compatible Red Hat family OS (e.g. RHEL 8 and 9). This machines needs a static IP, or a resolvable DNS hostname that the AWX cluster can access. If the listerner_port is defined, the machine will also need an available open port to establish inbound TCP connections on (e.g. 27199).
Bring a machine online with a compatible Red Hat family OS (e.g. RHEL 8 and 9). This machine needs a static IP, or a resolvable DNS hostname that the AWX cluster can access. If the listener_port is defined, the machine will also need an available open port to establish inbound TCP connections on (e.g. 27199).

In general, the more CPU cores and memory the machine has, the more jobs can be scheduled to run on that machine at once. See https://docs.ansible.com/automation-controller/4.2.1/html/userguide/jobs.html#at-capacity-determination-and-job-impact for more information on capacity.
@@ -48,7 +48,7 @@ Use the Instance page or `api/v2/instances` endpoint to add a new instance.

- `peers` is a list of instance hostnames to connect outbound to.
- `peers_from_control_nodes` boolean, if True, control plane nodes will automatically peer to this instance.
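
For instance, registering the hop node from the diagram above via the `api/v2/instances` endpoint might look like the following sketch (the host, credentials, and any field names beyond those listed above are assumptions):

```python
import requests

# Hypothetical AWX host and admin credentials.
resp = requests.post(
    "https://awx.example.com/api/v2/instances/",
    auth=("admin", "password"),
    json={
        "hostname": "hop1.example.com",   # assumed field name
        "node_type": "hop",               # assumed field name
        "listener_port": 27199,
        "peers_from_control_nodes": True,
    },
)
resp.raise_for_status()
```
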
Below is a table of configuartions for the [diagram](#adding-execution-nodes-to-awx) above.
Below is a table of configurations for the [diagram](#adding-execution-nodes-to-awx) above.

| instance name | listener_port | peers_from_control_nodes | peers |
|------------------|---------------|-------------------------|--------------|
29
docs/rbac.md
@@ -95,6 +95,12 @@ The `singleton` class method is a helper method on the `Role` model that helps i

You may use the `user in some_role` syntax to check and see if the specified user is a member of the given role, **or** a member of any ancestor role.

#### `get_roles_on_resource(resource, accessor)`

This is a static method (not bound to a class) that will efficiently return the names of all roles that the `accessor` (a user or a team) has on a particular resource. The resource is a python object for something like an organization, credential, or job template. Return value is a list of strings like `["admin_role", "execute_role"]`.
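
A short sketch of how this might be used (the import path is an assumption based on where the RBAC helpers live):

```python
from awx.main.models.rbac import get_roles_on_resource  # assumed import path

# 'organization' is any RBAC-enabled resource; the accessor may be a User or a Team.
role_names = get_roles_on_resource(organization, user)
print(role_names)  # e.g. ["admin_role", "execute_role"]
```
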
### Fields

@@ -126,13 +132,17 @@ By mixing in the `ResourceMixin` to your model, you are turning your model in to

objects.filter(name__istartswith='december')
```

##### `get_permissions(self, user)`
##### `accessible_pk_qs(cls, user, role_field)`

`get_permissions` is an instance method that will give you the list of role names that the user has access to for a given object.
`accessible_pk_qs` returns a queryset of ids that match the same role filter as `accessible_objects`. A key difference is that this is more performant to use in subqueries when filtering related models.

Say that another model, `YourModel`, has a ForeignKey reference to `MyModel` via a field `my_model`, and you want to return all instances of `YourModel` that have a visible related `MyModel`. The best way to do this is:

```python
>>> instance.get_permissions(admin)
['admin_role', 'execute_role', 'read_role']
YourModel.objects.filter(my_model=MyModel.accessible_pk_qs(user, 'admin_role'))
```
## Usage

@@ -144,10 +154,7 @@ After exploring the _Overview_, the usage of the RBAC implementation in your cod

class Document(Model, ResourceMixin):
    ...
    # declare your new role
    readonly_role = ImplicitRoleField(
        role_name="readonly",
        permissions={'read':True},
    )
    readonly_role = ImplicitRoleField()
```
Now that your model is a resource and has a `Role` defined, you can begin to access the helper methods provided to you by the `ResourceMixin` for checking a user's access to your resource. Here is the output of a Python REPL session:

@@ -156,11 +163,11 @@ Now that your model is a resource and has a `Role` defined, you can begin to acc

# we've created some documents and a user
>>> document = Document.objects.get(pk=1)
>>> user = User.objects.first()
>>> user in document.read_role
>>> user in document.readonly_role
False # not accessible by default
>>> document.readonly_role.members.add(user)
>>> user in document.read_role
>>> user in document.readonly_role
True # now it is accessible
>>> user in document.admin_role
>>> user in document.readonly_role
False # my role does not have admin permission
```
@@ -181,7 +181,7 @@ Operator hub PRs are generated via an Ansible Playbook. See someone on the AWX t

## Revert a Release

Decide whether or not you can just fall-forward with a new AWX Release to fix a bad release. If you need to remove published artifacts from publically facing repositories, follow the steps below.
Decide whether or not you can just fall-forward with a new AWX Release to fix a bad release. If you need to remove published artifacts from publicly facing repositories, follow the steps below.

Here are the steps needed to revert an AWX and an AWX-Operator release. Depending on your use case, follow the steps for reverting just an AWX release, an Operator release or both.

@@ -195,7 +195,7 @@ Here are the steps needed to revert an AWX and an AWX-Operator release. Dependin

![Delete an AWX release](img/delete_release.png)

[comment]: <> (Need an image here for actually deleting an orphaned tag, place here during next release)

3. Navigate to the [AWX Operator Release Page]() and delete the AWX-Operator release that needss to tbe removed.
3. Navigate to the [AWX Operator Release Page]() and delete the AWX-Operator release that needs to be removed.

![Delete an AWX Operator release](img/delete_operator_release.png)
@@ -1,7 +1,5 @@

[pytest]
DJANGO_SETTINGS_MODULE = awx.main.tests.settings_for_test
python_paths = /var/lib/awx/venv/tower/lib/python3.8/site-packages
site_dirs = /var/lib/awx/venv/tower/lib/python3.8/site-packages
python_files = *.py
addopts = --reuse-db --nomigrations --tb=native
markers =