Compare commits


54 Commits

Author SHA1 Message Date
Ratan Gulati
40c2b700fe Fix: #14523 Add alt-text codeblock to Images for workflow_template.rst (#14604)
* add alt to images in workflow_templates.rst

Signed-off-by: Ratan Gulati <ratangulati.dev@gmail.com>

* add alt to images in workflow_templates.rst

Signed-off-by: Ratan Gulati <ratangulati.dev@gmail.com>

* Update workflow_templates.rst

* Revised proposed alt text for workflow_templates.rst

---------

Signed-off-by: Ratan Gulati <ratangulati.dev@gmail.com>
Co-authored-by: TVo <thavo@redhat.com>
2023-11-07 10:51:11 -07:00
Thanhnguyet Vo
71d548f9e5 Removed references to images that were deleted. 2023-11-07 08:55:27 -07:00
Thanhnguyet Vo
dd98963f86 Updated images - Workflow Templates chapter of Userguide. 2023-11-07 08:55:27 -07:00
TVo
4b467dfd8d Revised proposed alt text for insights.rst 2023-11-07 08:14:34 -07:00
BHANUTEJA
456b56778e Update insights.rst 2023-11-07 08:14:34 -07:00
BHANUTEJA
5b3cb20f92 Update insights.rst 2023-11-07 08:14:34 -07:00
TVo
d7086a3c88 Revised the proposed Alt text for main_menu.rst 2023-11-06 13:09:26 -07:00
Ratan Gulati
21e7ab078c Fix: #14511 Add alt-text codeblock to Images for Userguide: main_menu.rst
Signed-off-by: Ratan Gulati <ratangulati.dev@gmail.com>
2023-11-06 13:09:26 -07:00
Elijah DeLee
946ca0b3b8 fix wsrelay connection in ipv6 environments 2023-11-06 13:58:41 -05:00
TVo
b831dbd608 Removed mailing list from triage_replies.md 2023-11-03 14:30:30 -06:00
Thanhnguyet Vo
943e455f9d Re-do for PR #14595 to fix CI issues. 2023-11-03 08:35:22 -06:00
Seth Foster
53bc88abe2 Fix python_paths error in CI (#14622)
Remove outdated lines from pytest.ini

Was causing KeyError 'python_paths' in CI

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2023-11-03 09:36:21 -04:00
Rick Elrod
3b4d95633e [rsyslog] remove main_queue, add more action queue params (#14532)
* [rsyslog] remove main_queue, add more action queue params

Signed-off-by: Rick Elrod <rick@elrod.me>

* Remove now-unused LOG_AGGREGATOR_MAX_DISK_USAGE_GB, add LOG_AGGREGATOR_ACTION_QUEUE_SIZE

Signed-off-by: Rick Elrod <rick@elrod.me>

---------

Signed-off-by: Rick Elrod <rick@elrod.me>
2023-10-31 14:49:17 -04:00
Alan Rominger
93c329d9d5 Fix cancel bug - WorkflowManager cancel in transaction (#14608)
This fixes a bug where jobs within a workflow job were not canceled
  when the workflow job was canceled by the user

The fix is to submit the cancel request as part of the
  transaction that the WorkflowManager commits its work in.
  This requires sending the message without expecting a reply,
  so the control-with-reply cancel becomes a plain control function.
2023-10-30 15:30:18 -04:00
Hao Liu
f4c53aaf22 Update receptor-collection version to 2.0.2 (#14613) 2023-10-30 17:24:02 +00:00
Alan Rominger
333ef76cbd Send notifications for dependency failures (#14603)
* Send notifications for dependency failures

* Delete tests for deleted method

* Remove another test for removed method
2023-10-30 10:42:37 -04:00
Alan Rominger
fc0b58fd04 Fix bug that prevented dispatcher exit with downed DB (#14469)
* Separate handling of original SIGTERM and SIGINT
2023-10-26 14:34:25 -04:00
Andrii Zakurenyi
bef0a8b23a Fix DevOps Secrets Vault credential plugin to work with python-dsv-sdk>=1.0.4
Signed-off-by: Andrii Zakurenyi <andrii.zakurenyi@c.delinea.com>
2023-10-25 15:48:24 -04:00
lmo5
a5f33456b6 Fix missing service account secret in docker-compose-minikube role (#14596)
* Fix missing service account secret

Co-authored-by: Hao Liu <44379968+TheRealHaoLiu@users.noreply.github.com>
2023-10-25 19:27:21 +00:00
Surav Shrestha
21fb395912 fix typos in docs/development/minikube.md 2023-10-25 15:23:23 -04:00
jessicamack
44255f378d Fix extra_vars bug in ansible.controller.ad_hoc_command (#14585)
* convert to valid type for serializer

* check that extra_vars are in request

* remove doubled line

* add integration test for change

* move change to the ad_hoc_command module

Signed-off-by: jessicamack <jmack@redhat.com>

* fix imports

Signed-off-by: jessicamack <jmack@redhat.com>

---------

Signed-off-by: jessicamack <jmack@redhat.com>
2023-10-25 10:38:45 -04:00
Parikshit Adhikari
71a6d48612 Fix: typos inside /docs directory (#14594)
fix typos inside docs
2023-10-24 19:01:21 +00:00
nmiah1
b7e5f5d1e1 Typo in export.py example (#14598) 2023-10-24 18:33:38 +00:00
Alan Rominger
b6b167627c Fix Boolean values defaulting to False in collection (#14493)
* Fix Boolean values defaulting to False in collection

* Remove null values in other cases, fix null handling for WFJT nodes

* Only remove null values if it is a boolean field

* Reset changes to WFJT node field processing

* Use test content from sean-m-sullivan to fix lookups in assert
2023-10-24 14:29:16 -04:00
Hao Liu
20f5b255c9 Fix "upgrade in progress" status page not showing up while migration is in progress (#14579)
The web container does not need to wait for migrations.

If the database is running and responsive but migrations have not finished, the container will start serving and users will get the upgrading page.

Previously, wait-for-migration prevented nginx and uwsgi from starting up to serve the "upgrade in progress" status page.
2023-10-24 14:27:09 -04:00
Oleksii Baranov
3bcf46555d Fix swagger generation on rhel (#14317) (#14589) 2023-10-24 14:19:02 -04:00
Don Naro
94703ccf84 Pip compile docsite requirements (#14449)
Co-authored-by: Sviatoslav Sydorenko <578543+webknjaz@users.noreply.github.com>
Co-authored-by: Sviatoslav Sydorenko <wk.cvs.github@sydorenko.org.ua>
2023-10-24 12:53:41 -04:00
BHANUTEJA
6cdea1909d Alt text for Execution Env section of Userguide (#14576)
Co-authored-by: TVo <thavo@redhat.com>
2023-10-20 18:48:07 +00:00
Mike Mwanje
f133580172 Adds alt text to instance_groups.rst images (#14571)
Co-authored-by: TVo <thavo@redhat.com>
2023-10-20 16:11:17 +00:00
Kishan Mehta
4b90a7fcd1 Add alt text for image directives in credential_types.rst (#14551)
Co-authored-by: Kishan Mehta <kishan@scrapinghub.com>
Co-authored-by: TVo <thavo@redhat.com>
2023-10-20 09:36:05 -06:00
Marliana Lara
95bfedad5b Format constructed inventory hint example as valid YAML (#14568) 2023-10-20 10:24:47 -04:00
Kishan Mehta
1081f2d8e9 Add alt text for image directives in credentials.rst (#14550)
Co-authored-by: Kishan Mehta <kishan@scrapinghub.com>
Co-authored-by: TVo <thavo@redhat.com>
2023-10-20 14:13:49 +00:00
Kishan Mehta
c4ab54d7f3 Add alt text for image directives in job_capacity.rst & job_slices.rst (#14549)
Co-authored-by: Kishan Mehta <kishan@scrapinghub.com>
Co-authored-by: TVo <thavo@redhat.com>
2023-10-20 13:34:04 +00:00
Hao Liu
bcefcd8cf8 Remove specific version for receptorctl (#14593) 2023-10-19 22:49:42 -04:00
Kishan Mehta
0bd057529d Add alt text for image directives in job_templates.rst (#14548)
Co-authored-by: Kishan Mehta <kishan@scrapinghub.com>
2023-10-19 20:24:32 +00:00
Sayyed Faisal Ali
a82c03e2e2 added alt-text in projects.rst (#14544)
Signed-off-by: c0de-slayer <fsali315@gmail.com>
Co-authored-by: TVo <thavo@redhat.com>
2023-10-19 12:39:58 -06:00
TVo
447ac77535 Corrected missing text replacement directives (#14592) 2023-10-19 16:36:41 +00:00
Andrew Klychkov
72d0928f1b [DOCS] EE guide: fix a ref to Get started with EE (#14587) 2023-10-19 03:30:21 -04:00
Deepshri M
6d727d4bc4 Adding alt text for image (#14541)
Signed-off-by: Deepshri M <deepshrim613@gmail.com>
2023-10-17 14:53:18 -06:00
Rohit Raj
6040e44d9d docs: Update teams.rst (#14539)
Co-authored-by: TVo <thavo@redhat.com>
2023-10-17 20:16:09 +00:00
Rohit Raj
b99ce5cd62 docs: Update users.rst (#14538)
Co-authored-by: TVo <thavo@redhat.com>
2023-10-17 14:58:40 +00:00
Rohit Raj
ba8a90c55f docs: Update security.rst (#14540)
Co-authored-by: Don Naro <dnaro@redhat.com>
2023-10-16 17:56:46 -06:00
Sayyed Faisal Ali
7ee2172517 added alt-text in project-sign.rst (#14545)
Signed-off-by: c0de-slayer <fsali315@gmail.com>
Co-authored-by: TVo <thavo@redhat.com>
2023-10-16 09:25:34 -06:00
Alan Rominger
07f49f5925 AAP-16926 Delete unpartitioned tables in a separate transaction (#14572) 2023-10-13 15:50:51 -04:00
Hao Liu
376993077a Removing mailing list from get involved (#14580) 2023-10-13 17:49:34 +00:00
Hao Liu
48f586bac4 Make wait-for-migrations wait forever (#14566) 2023-10-13 13:48:12 +00:00
Surendran
16dab57c63 Added alt-text for images in notifications.rst (#14555)
Signed-off-by: Surendran Gokul <surendrangokul55@gmail.com>
Co-authored-by: Don Naro <dnaro@redhat.com>
2023-10-12 15:22:37 -06:00
Surendran
75a71492fd Added alt-text for images in organizations.rst (#14556)
Signed-off-by: Surendran Gokul <surendrangokul55@gmail.com>
Co-authored-by: Don Naro <dnaro@redhat.com>
2023-10-12 15:15:45 -06:00
Hao Liu
e9bd99c1ff Fix CVE-2023-43665 (#14561) 2023-10-12 14:00:32 -04:00
Daniel Gonçalves
56878b4910 Add customizable batch_size for cleanup_activitystream and cleanup_jobs (#14412)
Signed-off-by: Daniel Gonçalves <daniel.gonc@lves.fr>
2023-10-11 20:09:16 +00:00
Alan Rominger
19ca480078 Upgrade client library for dsv since tss already landed (#14362) 2023-10-11 16:01:22 -04:00
Steffen Scheib
64eb963025 Cleaning SOS report passwords (#14557) 2023-10-11 19:54:28 +00:00
Will Thames
dc34d0887a Execution environment image should not be required (#14488) 2023-10-11 15:39:51 -04:00
Andrew Klychkov
160634fb6f ee_reference.rst: refer to Builder's definition docs instead of duplicating its content (#14562) 2023-10-11 13:54:12 +01:00
109 changed files with 954 additions and 924 deletions

View File

@@ -7,8 +7,8 @@
## PRs/Issues
### Visit our mailing list
- Hello, this appears to be less of a bug report or feature request and more of a question. Could you please ask this on our mailing list? See https://github.com/ansible/awx/#get-involved for information for ways to connect with us.
### Visit the Forum or Matrix
- Hello, this appears to be less of a bug report or feature request and more of a question. Could you please ask this on either the [Ansible AWX channel on Matrix](https://matrix.to/#/#awx:ansible.com) or the [Ansible Community Forum](https://forum.ansible.com/tag/awx)?
### Denied Submission

.pip-tools.toml Normal file
View File

@@ -0,0 +1,5 @@
[tool.pip-tools]
resolver = "backtracking"
allow-unsafe = true
strip-extras = true
quiet = true

View File

@@ -30,7 +30,7 @@ If you're experiencing a problem that you feel is a bug in AWX or have ideas for
Code of Conduct
---------------
We ask all of our community members and contributors to adhere to the [Ansible code of conduct](http://docs.ansible.com/ansible/latest/community/code_of_conduct.html). If you have questions or need assistance, please reach out to our community team at [codeofconduct@ansible.com](mailto:codeofconduct@ansible.com)
Get Involved
------------
@@ -39,4 +39,3 @@ We welcome your feedback and ideas. Here's how to reach us with feedback and que
- Join the [Ansible AWX channel on Matrix](https://matrix.to/#/#awx:ansible.com)
- Join the [Ansible Community Forum](https://forum.ansible.com)
- Join the [mailing list](https://groups.google.com/forum/#!forum/awx-project)

View File

@@ -1,4 +1,4 @@
---
collections:
- name: ansible.receptor
version: 2.0.0
version: 2.0.2

View File

@@ -694,16 +694,18 @@ register(
category_slug='logging',
)
register(
'LOG_AGGREGATOR_MAX_DISK_USAGE_GB',
'LOG_AGGREGATOR_ACTION_QUEUE_SIZE',
field_class=fields.IntegerField,
default=1,
default=131072,
min_value=1,
label=_('Maximum disk persistence for external log aggregation (in GB)'),
label=_('Maximum number of messages that can be stored in the log action queue'),
help_text=_(
'Amount of data to store (in gigabytes) during an outage of '
'the external log aggregator (defaults to 1). '
'Equivalent to the rsyslogd queue.maxdiskspace setting for main_queue. '
'Notably, this is used for the rsyslogd main queue (for input messages).'
'Defines how large the rsyslog action queue can grow in number of messages '
'stored. This can have an impact on memory utilization. When the queue '
'reaches 75% of this number, the queue will start writing to disk '
'(queue.highWatermark in rsyslog). When it reaches 90%, NOTICE, INFO, and '
'DEBUG messages will start to be discarded (queue.discardMark with '
'queue.discardSeverity=5).'
),
category=_('Logging'),
category_slug='logging',
@@ -718,8 +720,7 @@ register(
'Amount of data to store (in gigabytes) if an rsyslog action takes time '
'to process an incoming message (defaults to 1). '
'Equivalent to the rsyslogd queue.maxdiskspace setting on the action (e.g. omhttp). '
'Like LOG_AGGREGATOR_MAX_DISK_USAGE_GB, it stores files in the directory specified '
'by LOG_AGGREGATOR_MAX_DISK_USAGE_PATH.'
'It stores files in the directory specified by LOG_AGGREGATOR_MAX_DISK_USAGE_PATH.'
),
category=_('Logging'),
category_slug='logging',
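
The 75% and 90% thresholds described in the help text map directly onto rsyslog's queue.highWatermark and queue.discardMark. A minimal sketch of that arithmetic, assuming the default queue size from this diff (the resulting values match the expected rsyslog test data further down):

```python
# Derived rsyslog action-queue thresholds, per the help text above.
action_queue_size = 131072  # LOG_AGGREGATOR_ACTION_QUEUE_SIZE default

high_watermark = int(action_queue_size * 0.75)  # start spilling queue to disk
discard_mark = int(action_queue_size * 0.90)    # start discarding NOTICE/INFO/DEBUG

assert high_watermark == 98304
assert discard_mark == 117964  # int() truncates 117964.8
```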

View File

@@ -2,25 +2,28 @@ from .plugin import CredentialPlugin
from django.conf import settings
from django.utils.translation import gettext_lazy as _
from thycotic.secrets.vault import SecretsVault
from delinea.secrets.vault import PasswordGrantAuthorizer, SecretsVault
dsv_inputs = {
'fields': [
{
'id': 'tenant',
'label': _('Tenant'),
'help_text': _('The tenant e.g. "ex" when the URL is https://ex.secretservercloud.com'),
'help_text': _('The tenant e.g. "ex" when the URL is https://ex.secretsvaultcloud.com'),
'type': 'string',
},
{
'id': 'tld',
'label': _('Top-level Domain (TLD)'),
'help_text': _('The TLD of the tenant e.g. "com" when the URL is https://ex.secretservercloud.com'),
'choices': ['ca', 'com', 'com.au', 'com.sg', 'eu'],
'help_text': _('The TLD of the tenant e.g. "com" when the URL is https://ex.secretsvaultcloud.com'),
'choices': ['ca', 'com', 'com.au', 'eu'],
'default': 'com',
},
{'id': 'client_id', 'label': _('Client ID'), 'type': 'string'},
{
'id': 'client_id',
'label': _('Client ID'),
'type': 'string',
},
{
'id': 'client_secret',
'label': _('Client Secret'),
@@ -51,12 +54,26 @@ if settings.DEBUG:
'id': 'url_template',
'label': _('URL template'),
'type': 'string',
'default': 'https://{}.secretsvaultcloud.{}/v1',
'default': 'https://{}.secretsvaultcloud.{}',
}
)
dsv_plugin = CredentialPlugin(
'Thycotic DevOps Secrets Vault',
dsv_inputs,
lambda **kwargs: SecretsVault(**{k: v for (k, v) in kwargs.items() if k in [field['id'] for field in dsv_inputs['fields']]}).get_secret(kwargs['path'])['data'][kwargs['secret_field']], # fmt: skip
)
def dsv_backend(**kwargs):
tenant_name = kwargs['tenant']
tenant_tld = kwargs.get('tld', 'com')
tenant_url_template = kwargs.get('url_template', 'https://{}.secretsvaultcloud.{}')
client_id = kwargs['client_id']
client_secret = kwargs['client_secret']
secret_path = kwargs['path']
secret_field = kwargs['secret_field']
tenant_url = tenant_url_template.format(tenant_name, tenant_tld.strip("."))
authorizer = PasswordGrantAuthorizer(tenant_url, client_id, client_secret)
dsv_secret = SecretsVault(tenant_url, authorizer).get_secret(secret_path)
return dsv_secret['data'][secret_field]
dsv_plugin = CredentialPlugin(name='Thycotic DevOps Secrets Vault', inputs=dsv_inputs, backend=dsv_backend)
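
A hypothetical call into the new dsv_backend shown above; every credential value here is a placeholder, and the call needs a reachable DevOps Secrets Vault tenant:

```python
from awx.main.credential_plugins.dsv import dsv_backend

secret_value = dsv_backend(
    tenant='ex',                       # placeholder: https://ex.secretsvaultcloud.com
    tld='com',
    client_id='my-client-id',          # placeholder
    client_secret='my-client-secret',  # placeholder
    path='/secrets/app/db',            # placeholder secret path
    secret_field='password',
)
```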

View File

@@ -37,8 +37,11 @@ class Control(object):
def running(self, *args, **kwargs):
return self.control_with_reply('running', *args, **kwargs)
def cancel(self, task_ids, *args, **kwargs):
return self.control_with_reply('cancel', *args, extra_data={'task_ids': task_ids}, **kwargs)
def cancel(self, task_ids, with_reply=True):
if with_reply:
return self.control_with_reply('cancel', extra_data={'task_ids': task_ids})
else:
self.control({'control': 'cancel', 'task_ids': task_ids, 'reply_to': None}, extra_data={'task_ids': task_ids})
def schedule(self, *args, **kwargs):
return self.control_with_reply('schedule', *args, **kwargs)
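
A sketch of the two call paths the new cancel() signature allows, assuming Control is constructed as in the models diff later in this compare; the hostname and task id are placeholders:

```python
from awx.main.dispatch.control import Control

controller_node = 'awx-task-1'  # placeholder hostname
celery_task_id = '0123-abcd'    # placeholder task uuid

ctl = Control('dispatcher', controller_node)

# Default path: control-with-reply blocks until the cancel is confirmed.
confirmed = ctl.cancel([celery_task_id])

# Inside an open transaction (the WorkflowManager case) a reply can never
# arrive before commit, so send fire-and-forget instead:
ctl.cancel([celery_task_id], with_reply=False)
```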

View File

@@ -89,8 +89,9 @@ class AWXConsumerBase(object):
if task_ids and not msg:
logger.info(f'Could not locate running tasks to cancel with ids={task_ids}')
with pg_bus_conn() as conn:
conn.notify(reply_queue, json.dumps(msg))
if reply_queue is not None:
with pg_bus_conn() as conn:
conn.notify(reply_queue, json.dumps(msg))
elif control == 'reload':
for worker in self.pool.workers:
worker.quit()

View File

@@ -24,6 +24,9 @@ class Command(BaseCommand):
def add_arguments(self, parser):
parser.add_argument('--days', dest='days', type=int, default=90, metavar='N', help='Remove activity stream events more than N days old')
parser.add_argument('--dry-run', dest='dry_run', action='store_true', default=False, help='Dry run mode (show items that would be removed)')
parser.add_argument(
'--batch-size', dest='batch_size', type=int, default=500, metavar='X', help='Remove activity stream events in batch of X events. Defaults to 500.'
)
def init_logging(self):
log_levels = dict(enumerate([logging.ERROR, logging.INFO, logging.DEBUG, 0]))
@@ -48,7 +51,7 @@ class Command(BaseCommand):
else:
pks_to_delete.add(asobj.pk)
# Cleanup objects in batches instead of deleting each one individually.
if len(pks_to_delete) >= 500:
if len(pks_to_delete) >= self.batch_size:
ActivityStream.objects.filter(pk__in=pks_to_delete).delete()
n_deleted_items += len(pks_to_delete)
pks_to_delete.clear()
@@ -63,4 +66,5 @@ class Command(BaseCommand):
self.days = int(options.get('days', 30))
self.cutoff = now() - datetime.timedelta(days=self.days)
self.dry_run = bool(options.get('dry_run', False))
self.batch_size = int(options.get('batch_size', 500))
self.cleanup_activitystream()
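
The --batch-size wiring above feeds the flush threshold in the loop; a generic sketch of the same batching pattern, assuming a Django queryset (all names are illustrative):

```python
def delete_in_batches(queryset, batch_size=500):
    """Accumulate primary keys and flush a bulk delete whenever the batch fills."""
    pks, deleted = set(), 0
    for obj in queryset.iterator():
        pks.add(obj.pk)
        if len(pks) >= batch_size:
            queryset.model.objects.filter(pk__in=pks).delete()
            deleted += len(pks)
            pks.clear()
    if pks:  # flush the final partial batch
        queryset.model.objects.filter(pk__in=pks).delete()
        deleted += len(pks)
    return deleted
```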

View File

@@ -9,6 +9,7 @@ import re
# Django
from django.apps import apps
from django.core.management.base import BaseCommand, CommandError
from django.db import transaction, connection
from django.db.models import Min, Max
@@ -150,6 +151,9 @@ class Command(BaseCommand):
def add_arguments(self, parser):
parser.add_argument('--days', dest='days', type=int, default=90, metavar='N', help='Remove jobs/updates executed more than N days ago. Defaults to 90.')
parser.add_argument('--dry-run', dest='dry_run', action='store_true', default=False, help='Dry run mode (show items that would be removed)')
parser.add_argument(
'--batch-size', dest='batch_size', type=int, default=100000, metavar='X', help='Remove jobs in batch of X jobs. Defaults to 100000.'
)
parser.add_argument('--jobs', dest='only_jobs', action='store_true', default=False, help='Remove jobs')
parser.add_argument('--ad-hoc-commands', dest='only_ad_hoc_commands', action='store_true', default=False, help='Remove ad hoc commands')
parser.add_argument('--project-updates', dest='only_project_updates', action='store_true', default=False, help='Remove project updates')
@@ -195,39 +199,58 @@ class Command(BaseCommand):
delete_meta.delete_jobs()
return (delete_meta.jobs_no_delete_count, delete_meta.jobs_to_delete_count)
def _handle_unpartitioned_events(self, model, pk_list):
"""
If unpartitioned job events remain, it will cascade those from jobs in pk_list
if the unpartitioned table is no longer necessary, it will drop the table
"""
def has_unpartitioned_table(self, model):
tblname = unified_job_class_to_event_table_name(model)
rel_name = model().event_parent_key
with connection.cursor() as cursor:
cursor.execute(f"SELECT 1 FROM pg_tables WHERE tablename = '_unpartitioned_{tblname}';")
row = cursor.fetchone()
if row is None:
self.logger.debug(f'Unpartitioned table for {rel_name} does not exist, you are fully migrated')
return
if pk_list:
with connection.cursor() as cursor:
pk_list_csv = ','.join(map(str, pk_list))
cursor.execute(f"DELETE FROM _unpartitioned_{tblname} WHERE {rel_name} IN ({pk_list_csv})")
return False
return True
def _delete_unpartitioned_table(self, model):
"If the unpartitioned table is no longer necessary, it will drop the table"
tblname = unified_job_class_to_event_table_name(model)
if not self.has_unpartitioned_table(model):
self.logger.debug(f'Table _unpartitioned_{tblname} does not exist, you are fully migrated.')
return
with connection.cursor() as cursor:
# same as UnpartitionedJobEvent.objects.aggregate(Max('created'))
cursor.execute(f'SELECT MAX("_unpartitioned_{tblname}"."created") FROM "_unpartitioned_{tblname}"')
cursor.execute(f'SELECT MAX("_unpartitioned_{tblname}"."created") FROM "_unpartitioned_{tblname}";')
row = cursor.fetchone()
last_created = row[0]
if last_created:
self.logger.info(f'Last event created in _unpartitioned_{tblname} was {last_created.isoformat()}')
else:
self.logger.info(f'Table _unpartitioned_{tblname} has no events in it')
if (last_created is None) or (last_created < self.cutoff):
self.logger.warning(f'Dropping table _unpartitioned_{tblname} since no records are newer than {self.cutoff}')
cursor.execute(f'DROP TABLE _unpartitioned_{tblname}')
if last_created:
self.logger.info(f'Last event created in _unpartitioned_{tblname} was {last_created.isoformat()}')
else:
self.logger.info(f'Table _unpartitioned_{tblname} has no events in it')
if (last_created is None) or (last_created < self.cutoff):
self.logger.warning(
f'Dropping table _unpartitioned_{tblname} since no records are newer than {self.cutoff}\n'
'WARNING - this will happen in a separate transaction so a failure will not roll back prior cleanup'
)
with connection.cursor() as cursor:
cursor.execute(f'DROP TABLE _unpartitioned_{tblname};')
def _delete_unpartitioned_events(self, model, pk_list):
"If unpartitioned job events remain, it will cascade those from jobs in pk_list"
tblname = unified_job_class_to_event_table_name(model)
rel_name = model().event_parent_key
# Bail if the unpartitioned table does not exist anymore
if not self.has_unpartitioned_table(model):
return
# Table still exists, delete individual unpartitioned events
if pk_list:
with connection.cursor() as cursor:
self.logger.debug(f'Deleting {len(pk_list)} events from _unpartitioned_{tblname}, use a longer cleanup window to delete the table.')
pk_list_csv = ','.join(map(str, pk_list))
cursor.execute(f"DELETE FROM _unpartitioned_{tblname} WHERE {rel_name} IN ({pk_list_csv});")
def cleanup_jobs(self):
batch_size = 100000
# Hack to avoid doing N+1 queries as each item in the Job query set does
# an individual query to get the underlying UnifiedJob.
Job.polymorphic_super_sub_accessors_replaced = True
@@ -242,13 +265,14 @@ class Command(BaseCommand):
deleted = 0
info = qs.aggregate(min=Min('id'), max=Max('id'))
if info['min'] is not None:
for start in range(info['min'], info['max'] + 1, batch_size):
qs_batch = qs.filter(id__gte=start, id__lte=start + batch_size)
for start in range(info['min'], info['max'] + 1, self.batch_size):
qs_batch = qs.filter(id__gte=start, id__lte=start + self.batch_size)
pk_list = qs_batch.values_list('id', flat=True)
_, results = qs_batch.delete()
deleted += results['main.Job']
self._handle_unpartitioned_events(Job, pk_list)
# Avoid dropping the job event table in case we have interacted with it already
self._delete_unpartitioned_events(Job, pk_list)
return skipped, deleted
@@ -271,7 +295,7 @@ class Command(BaseCommand):
deleted += 1
if not self.dry_run:
self._handle_unpartitioned_events(AdHocCommand, pk_list)
self._delete_unpartitioned_events(AdHocCommand, pk_list)
skipped += AdHocCommand.objects.filter(created__gte=self.cutoff).count()
return skipped, deleted
@@ -299,7 +323,7 @@ class Command(BaseCommand):
deleted += 1
if not self.dry_run:
self._handle_unpartitioned_events(ProjectUpdate, pk_list)
self._delete_unpartitioned_events(ProjectUpdate, pk_list)
skipped += ProjectUpdate.objects.filter(created__gte=self.cutoff).count()
return skipped, deleted
@@ -327,7 +351,7 @@ class Command(BaseCommand):
deleted += 1
if not self.dry_run:
self._handle_unpartitioned_events(InventoryUpdate, pk_list)
self._delete_unpartitioned_events(InventoryUpdate, pk_list)
skipped += InventoryUpdate.objects.filter(created__gte=self.cutoff).count()
return skipped, deleted
@@ -351,7 +375,7 @@ class Command(BaseCommand):
deleted += 1
if not self.dry_run:
self._handle_unpartitioned_events(SystemJob, pk_list)
self._delete_unpartitioned_events(SystemJob, pk_list)
skipped += SystemJob.objects.filter(created__gte=self.cutoff).count()
return skipped, deleted
@@ -396,12 +420,12 @@ class Command(BaseCommand):
skipped += Notification.objects.filter(created__gte=self.cutoff).count()
return skipped, deleted
@transaction.atomic
def handle(self, *args, **options):
self.verbosity = int(options.get('verbosity', 1))
self.init_logging()
self.days = int(options.get('days', 90))
self.dry_run = bool(options.get('dry_run', False))
self.batch_size = int(options.get('batch_size', 100000))
try:
self.cutoff = now() - datetime.timedelta(days=self.days)
except OverflowError:
@@ -423,19 +447,29 @@ class Command(BaseCommand):
del s.receivers[:]
s.sender_receivers_cache.clear()
for m in model_names:
if m not in models_to_cleanup:
continue
with transaction.atomic():
for m in models_to_cleanup:
skipped, deleted = getattr(self, 'cleanup_%s' % m)()
skipped, deleted = getattr(self, 'cleanup_%s' % m)()
func = getattr(self, 'cleanup_%s_partition' % m, None)
if func:
skipped_partition, deleted_partition = func()
skipped += skipped_partition
deleted += deleted_partition
func = getattr(self, 'cleanup_%s_partition' % m, None)
if func:
skipped_partition, deleted_partition = func()
skipped += skipped_partition
deleted += deleted_partition
if self.dry_run:
self.logger.log(99, '%s: %d would be deleted, %d would be skipped.', m.replace('_', ' '), deleted, skipped)
else:
self.logger.log(99, '%s: %d deleted, %d skipped.', m.replace('_', ' '), deleted, skipped)
if self.dry_run:
self.logger.log(99, '%s: %d would be deleted, %d would be skipped.', m.replace('_', ' '), deleted, skipped)
else:
self.logger.log(99, '%s: %d deleted, %d skipped.', m.replace('_', ' '), deleted, skipped)
# Deleting unpartitioned tables cannot be done in same transaction as updates to related tables
if not self.dry_run:
with transaction.atomic():
for m in models_to_cleanup:
unified_job_class_name = m[:-1].title().replace('Management', 'System').replace('_', '')
unified_job_class = apps.get_model('main', unified_job_class_name)
try:
unified_job_class().event_class
except (NotImplementedError, AttributeError):
continue # no need to run this for models without events
self._delete_unpartitioned_table(unified_job_class)
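
The comment in the hunk above notes that dropping the unpartitioned tables "will happen in a separate transaction so a failure will not roll back prior cleanup". A minimal sketch of that two-transaction shape, assuming Django's transaction API and stand-in cleanup functions:

```python
from django.db import transaction

def cleanup_rows():
    pass  # stand-in: batched deletes of old jobs and their events

def drop_stale_unpartitioned_tables():
    pass  # stand-in: DROP TABLE _unpartitioned_* once nothing is newer than the cutoff

with transaction.atomic():
    cleanup_rows()  # commits first

# DDL runs on its own, so a failed DROP cannot roll back the cleanup above
with transaction.atomic():
    drop_stale_unpartitioned_tables()
```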

View File

@@ -1439,6 +1439,11 @@ class UnifiedJob(
if not self.celery_task_id:
return
canceled = []
if not connection.get_autocommit():
# this condition is purpose-written for the task manager, when it cancels jobs in workflows
ControlDispatcher('dispatcher', self.controller_node).cancel([self.celery_task_id], with_reply=False)
return True # task manager itself needs to act under assumption that cancel was received
try:
# Use control and reply mechanism to cancel and obtain confirmation
timeout = 5

View File

@@ -270,6 +270,9 @@ class WorkflowManager(TaskBase):
job.status = 'failed'
job.save(update_fields=['status', 'job_explanation'])
job.websocket_emit_status('failed')
# NOTE: sending notification templates here is slightly worse for performance;
# this is not yet optimized in the same way as for the TaskManager
job.send_notification_templates('failed')
ScheduleWorkflowManager().schedule()
# TODO: should we emit a status on the socket here similar to tasks.py awx_periodic_scheduler() ?
@@ -430,6 +433,25 @@ class TaskManager(TaskBase):
self.tm_models = TaskManagerModels()
self.controlplane_ig = self.tm_models.instance_groups.controlplane_ig
def process_job_dep_failures(self, task):
"""If job depends on a job that has failed, mark as failed and handle misc stuff."""
for dep in task.dependent_jobs.all():
# if we detect a failed or error dependency, go ahead and fail this task.
if dep.status in ("error", "failed"):
task.status = 'failed'
logger.warning(f'Previous task failed task: {task.id} dep: {dep.id} task manager')
task.job_explanation = 'Previous Task Failed: {"job_type": "%s", "job_name": "%s", "job_id": "%s"}' % (
get_type_for_model(type(dep)),
dep.name,
dep.id,
)
task.save(update_fields=['status', 'job_explanation'])
task.websocket_emit_status('failed')
self.pre_start_failed.append(task.id)
return True
return False
def job_blocked_by(self, task):
# TODO: I'm not happy with this, I think blocking behavior should be decided outside of the dependency graph
# in the old task manager this was handled as a method on each task object outside of the graph and
@@ -441,20 +463,6 @@ class TaskManager(TaskBase):
for dep in task.dependent_jobs.all():
if dep.status in ACTIVE_STATES:
return dep
# if we detect a failed or error dependency, go ahead and fail this
# task. The errback on the dependency takes some time to trigger,
# and we don't want the task to enter running state if its
# dependency has failed or errored.
elif dep.status in ("error", "failed"):
task.status = 'failed'
task.job_explanation = 'Previous Task Failed: {"job_type": "%s", "job_name": "%s", "job_id": "%s"}' % (
get_type_for_model(type(dep)),
dep.name,
dep.id,
)
task.save(update_fields=['status', 'job_explanation'])
task.websocket_emit_status('failed')
return dep
return None
@@ -474,7 +482,6 @@ class TaskManager(TaskBase):
if self.start_task_limit == 0:
# schedule another run immediately after this task manager
ScheduleTaskManager().schedule()
from awx.main.tasks.system import handle_work_error, handle_work_success
task.status = 'waiting'
@@ -485,7 +492,7 @@ class TaskManager(TaskBase):
task.job_explanation += ' '
task.job_explanation += 'Task failed pre-start check.'
task.save()
# TODO: run error handler to fail sub-tasks and send notifications
self.pre_start_failed.append(task.id)
else:
if type(task) is WorkflowJob:
task.status = 'running'
@@ -507,19 +514,16 @@ class TaskManager(TaskBase):
# apply_async does a NOTIFY to the channel dispatcher is listening to
# postgres will treat this as part of the transaction, which is what we want
if task.status != 'failed' and type(task) is not WorkflowJob:
task_actual = {'type': get_type_for_model(type(task)), 'id': task.id}
task_cls = task._get_task_class()
task_cls.apply_async(
[task.pk],
opts,
queue=task.get_queue_name(),
uuid=task.celery_task_id,
callbacks=[{'task': handle_work_success.name, 'kwargs': {'task_actual': task_actual}}],
errbacks=[{'task': handle_work_error.name, 'kwargs': {'task_actual': task_actual}}],
)
# In exception cases, like a job failing pre-start checks, we send the websocket status message
# for jobs going into waiting, we omit this because of performance issues, as it should go to running quickly
# In exception cases, like a job failing pre-start checks, we send the websocket status message.
# For jobs going into waiting, we omit this because of performance issues, as it should go to running quickly
if task.status != 'waiting':
task.websocket_emit_status(task.status) # adds to on_commit
@@ -540,6 +544,11 @@ class TaskManager(TaskBase):
if self.timed_out():
logger.warning("Task manager has reached time out while processing pending jobs, exiting loop early")
break
has_failed = self.process_job_dep_failures(task)
if has_failed:
continue
blocked_by = self.job_blocked_by(task)
if blocked_by:
self.subsystem_metrics.inc(f"{self.prefix}_tasks_blocked", 1)
@@ -653,6 +662,11 @@ class TaskManager(TaskBase):
reap_job(j, 'failed')
def process_tasks(self):
# maintain a list of jobs that went to an early failure state,
# meaning the dispatcher never got these jobs,
# so we have to handle notifications for those
self.pre_start_failed = []
running_tasks = [t for t in self.all_tasks if t.status in ['waiting', 'running']]
self.process_running_tasks(running_tasks)
self.subsystem_metrics.inc(f"{self.prefix}_running_processed", len(running_tasks))
@@ -662,6 +676,11 @@ class TaskManager(TaskBase):
self.process_pending_tasks(pending_tasks)
self.subsystem_metrics.inc(f"{self.prefix}_pending_processed", len(pending_tasks))
if self.pre_start_failed:
from awx.main.tasks.system import handle_failure_notifications
handle_failure_notifications.delay(self.pre_start_failed)
def timeout_approval_node(self, task):
if self.timed_out():
logger.warning("Task manager has reached time out while processing approval nodes, exiting loop early")

View File

@@ -74,6 +74,8 @@ from awx.main.utils.common import (
extract_ansible_vars,
get_awx_version,
create_partition,
ScheduleWorkflowManager,
ScheduleTaskManager,
)
from awx.conf.license import get_license
from awx.main.utils.handlers import SpecialInventoryHandler
@@ -450,6 +452,12 @@ class BaseTask(object):
instance.ansible_version = ansible_version_info
instance.save(update_fields=['ansible_version'])
# Run task manager appropriately for speculative dependencies
if instance.unifiedjob_blocked_jobs.exists():
ScheduleTaskManager().schedule()
if instance.spawned_by_workflow:
ScheduleWorkflowManager().schedule()
def should_use_fact_cache(self):
return False
@@ -1873,6 +1881,8 @@ class RunSystemJob(BaseTask):
if system_job.job_type in ('cleanup_jobs', 'cleanup_activitystream'):
if 'days' in json_vars:
args.extend(['--days', str(json_vars.get('days', 60))])
if 'batch_size' in json_vars:
args.extend(['--batch-size', str(json_vars['batch_size'])])
if 'dry_run' in json_vars and json_vars['dry_run']:
args.extend(['--dry-run'])
if system_job.job_type == 'cleanup_jobs':

View File

@@ -16,7 +16,9 @@ class SignalExit(Exception):
class SignalState:
def reset(self):
self.sigterm_flag = False
self.is_active = False
self.sigint_flag = False
self.is_active = False # for nested context managers
self.original_sigterm = None
self.original_sigint = None
self.raise_exception = False
@@ -24,23 +26,36 @@ class SignalState:
def __init__(self):
self.reset()
def set_flag(self, *args):
"""Method to pass into the python signal.signal method to receive signals"""
self.sigterm_flag = True
def raise_if_needed(self):
if self.raise_exception:
self.raise_exception = False # so it is not raised a second time in error handling
raise SignalExit()
def set_sigterm_flag(self, *args):
self.sigterm_flag = True
self.raise_if_needed()
def set_sigint_flag(self, *args):
self.sigint_flag = True
self.raise_if_needed()
def connect_signals(self):
self.original_sigterm = signal.getsignal(signal.SIGTERM)
self.original_sigint = signal.getsignal(signal.SIGINT)
signal.signal(signal.SIGTERM, self.set_flag)
signal.signal(signal.SIGINT, self.set_flag)
signal.signal(signal.SIGTERM, self.set_sigterm_flag)
signal.signal(signal.SIGINT, self.set_sigint_flag)
self.is_active = True
def restore_signals(self):
signal.signal(signal.SIGTERM, self.original_sigterm)
signal.signal(signal.SIGINT, self.original_sigint)
# if we got a signal while context manager was active, call parent methods.
if self.sigterm_flag:
if callable(self.original_sigterm):
self.original_sigterm()
if self.sigint_flag:
if callable(self.original_sigint):
self.original_sigint()
self.reset()
@@ -48,7 +63,7 @@ signal_state = SignalState()
def signal_callback():
return signal_state.sigterm_flag
return bool(signal_state.sigterm_flag or signal_state.sigint_flag)
def with_signal_handling(f):
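
A hypothetical consumer of this module: a long-running loop that polls signal_callback() and exits cleanly on either SIGTERM or SIGINT, with the original handlers chained on restore as shown above. do_unit_of_work is a stand-in:

```python
from awx.main.tasks.signals import signal_callback, with_signal_handling

def do_unit_of_work():
    pass  # stand-in for one interruptible chunk of work

@with_signal_handling
def run_until_signaled():
    while not signal_callback():
        do_unit_of_work()
```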

View File

@@ -53,13 +53,7 @@ from awx.main.models import (
from awx.main.constants import ACTIVE_STATES
from awx.main.dispatch.publish import task
from awx.main.dispatch import get_task_queuename, reaper
from awx.main.utils.common import (
get_type_for_model,
ignore_inventory_computed_fields,
ignore_inventory_group_removal,
ScheduleWorkflowManager,
ScheduleTaskManager,
)
from awx.main.utils.common import ignore_inventory_computed_fields, ignore_inventory_group_removal
from awx.main.utils.reload import stop_local_services
from awx.main.utils.pglock import advisory_lock
@@ -765,63 +759,19 @@ def awx_periodic_scheduler():
emit_channel_notification('schedules-changed', dict(id=schedule.id, group_name="schedules"))
def schedule_manager_success_or_error(instance):
if instance.unifiedjob_blocked_jobs.exists():
ScheduleTaskManager().schedule()
if instance.spawned_by_workflow:
ScheduleWorkflowManager().schedule()
@task(queue=get_task_queuename)
def handle_work_success(task_actual):
try:
instance = UnifiedJob.get_instance_by_type(task_actual['type'], task_actual['id'])
except ObjectDoesNotExist:
logger.warning('Missing {} `{}` in success callback.'.format(task_actual['type'], task_actual['id']))
return
if not instance:
return
schedule_manager_success_or_error(instance)
@task(queue=get_task_queuename)
def handle_work_error(task_actual):
try:
instance = UnifiedJob.get_instance_by_type(task_actual['type'], task_actual['id'])
except ObjectDoesNotExist:
logger.warning('Missing {} `{}` in error callback.'.format(task_actual['type'], task_actual['id']))
return
if not instance:
return
subtasks = instance.get_jobs_fail_chain() # reverse of dependent_jobs mostly
logger.debug(f'Executing error task id {task_actual["id"]}, subtasks: {[subtask.id for subtask in subtasks]}')
deps_of_deps = {}
for subtask in subtasks:
if subtask.celery_task_id != instance.celery_task_id and not subtask.cancel_flag and not subtask.status in ('successful', 'failed'):
# If there are multiple in the dependency chain, A->B->C, and this was called for A, blame B for clarity
blame_job = deps_of_deps.get(subtask.id, instance)
subtask.status = 'failed'
subtask.failed = True
if not subtask.job_explanation:
subtask.job_explanation = 'Previous Task Failed: {"job_type": "%s", "job_name": "%s", "job_id": "%s"}' % (
get_type_for_model(type(blame_job)),
blame_job.name,
blame_job.id,
)
subtask.save()
subtask.websocket_emit_status("failed")
for sub_subtask in subtask.get_jobs_fail_chain():
deps_of_deps[sub_subtask.id] = subtask
# We only send 1 job complete message since all the job completion message
# handling does is trigger the scheduler. If we extend the functionality of
# what the job complete message handler does then we may want to send a
# completion event for each job here.
schedule_manager_success_or_error(instance)
def handle_failure_notifications(task_ids):
"""A task-ified version of the method that sends notifications."""
found_task_ids = set()
for instance in UnifiedJob.objects.filter(id__in=task_ids):
found_task_ids.add(instance.id)
try:
instance.send_notification_templates('failed')
except Exception:
logger.exception(f'Error preparing notifications for task {instance.id}')
deleted_tasks = set(task_ids) - found_task_ids
if deleted_tasks:
logger.warning(f'Could not send notifications for {deleted_tasks} because they were not found in the database')
@task(queue=get_task_queuename)

View File

@@ -76,3 +76,24 @@ def test_hashivault_handle_auth_kubernetes():
def test_hashivault_handle_auth_not_enough_args():
with pytest.raises(Exception):
hashivault.handle_auth()
class TestDelineaImports:
"""
These modules have a try-except for ImportError which will allow using the older library,
but we do not want the awx_devel image to have the older library,
so these tests are designed to fail if the imports wind up using the fallback
"""
def test_dsv_import(self):
from awx.main.credential_plugins.dsv import SecretsVault # noqa
# assert this module as opposed to older thycotic.secrets.vault
assert SecretsVault.__module__ == 'delinea.secrets.vault'
def test_tss_import(self):
from awx.main.credential_plugins.tss import DomainPasswordGrantAuthorizer, PasswordGrantAuthorizer, SecretServer, ServerSecret # noqa
for cls in (DomainPasswordGrantAuthorizer, PasswordGrantAuthorizer, SecretServer, ServerSecret):
# assert this module as opposed to older thycotic.secrets.server
assert cls.__module__ == 'delinea.secrets.server'

View File

@@ -5,8 +5,8 @@ import tempfile
import shutil
from awx.main.tasks.jobs import RunJob
from awx.main.tasks.system import execution_node_health_check, _cleanup_images_and_files, handle_work_error
from awx.main.models import Instance, Job, InventoryUpdate, ProjectUpdate
from awx.main.tasks.system import execution_node_health_check, _cleanup_images_and_files
from awx.main.models import Instance, Job
@pytest.fixture
@@ -73,17 +73,3 @@ def test_does_not_run_reaped_job(mocker, mock_me):
job.refresh_from_db()
assert job.status == 'failed'
mock_run.assert_not_called()
@pytest.mark.django_db
def test_handle_work_error_nested(project, inventory_source):
pu = ProjectUpdate.objects.create(status='failed', project=project, celery_task_id='1234')
iu = InventoryUpdate.objects.create(status='pending', inventory_source=inventory_source, source='scm')
job = Job.objects.create(status='pending')
iu.dependent_jobs.add(pu)
job.dependent_jobs.add(pu, iu)
handle_work_error({'type': 'project_update', 'id': pu.id})
iu.refresh_from_db()
job.refresh_from_db()
assert iu.job_explanation == f'Previous Task Failed: {{"job_type": "project_update", "job_name": "", "job_id": "{pu.id}"}}'
assert job.job_explanation == f'Previous Task Failed: {{"job_type": "inventory_update", "job_name": "", "job_id": "{iu.id}"}}'

View File

@@ -47,7 +47,7 @@ data_loggly = {
'\n'.join(
[
'template(name="awx" type="string" string="%rawmsg-after-pri%")\nmodule(load="omhttp")',
'action(type="omhttp" server="logs-01.loggly.com" serverport="80" usehttps="off" allowunsignedcerts="off" skipverifyhost="off" action.resumeRetryCount="-1" template="awx" action.resumeInterval="5" queue.spoolDirectory="/var/lib/awx" queue.filename="awx-external-logger-action-queue" queue.maxdiskspace="1g" queue.type="LinkedList" queue.saveOnShutdown="on" errorfile="/var/log/tower/rsyslog.err" restpath="inputs/1fd38090-2af1-4e1e-8d80-492899da0f71/tag/http/")', # noqa
'action(type="omhttp" server="logs-01.loggly.com" serverport="80" usehttps="off" allowunsignedcerts="off" skipverifyhost="off" action.resumeRetryCount="-1" template="awx" action.resumeInterval="5" queue.spoolDirectory="/var/lib/awx" queue.filename="awx-external-logger-action-queue" queue.maxDiskSpace="1g" queue.maxFileSize="100m" queue.type="LinkedList" queue.saveOnShutdown="on" queue.syncqueuefiles="on" queue.checkpointInterval="1000" queue.size="131072" queue.highwaterMark="98304" queue.discardMark="117964" queue.discardSeverity="5" errorfile="/var/log/tower/rsyslog.err" restpath="inputs/1fd38090-2af1-4e1e-8d80-492899da0f71/tag/http/")', # noqa
]
),
),
@@ -61,7 +61,7 @@ data_loggly = {
'\n'.join(
[
'template(name="awx" type="string" string="%rawmsg-after-pri%")',
'action(type="omfwd" target="localhost" port="9000" protocol="udp" action.resumeRetryCount="-1" action.resumeInterval="5" template="awx")', # noqa
'action(type="omfwd" target="localhost" port="9000" protocol="udp" action.resumeRetryCount="-1" action.resumeInterval="5" template="awx" queue.spoolDirectory="/var/lib/awx" queue.filename="awx-external-logger-action-queue" queue.maxDiskSpace="1g" queue.maxFileSize="100m" queue.type="LinkedList" queue.saveOnShutdown="on" queue.syncqueuefiles="on" queue.checkpointInterval="1000" queue.size="131072" queue.highwaterMark="98304" queue.discardMark="117964" queue.discardSeverity="5")', # noqa
]
),
),
@@ -75,7 +75,7 @@ data_loggly = {
'\n'.join(
[
'template(name="awx" type="string" string="%rawmsg-after-pri%")',
'action(type="omfwd" target="localhost" port="9000" protocol="tcp" action.resumeRetryCount="-1" action.resumeInterval="5" template="awx")', # noqa
'action(type="omfwd" target="localhost" port="9000" protocol="tcp" action.resumeRetryCount="-1" action.resumeInterval="5" template="awx" queue.spoolDirectory="/var/lib/awx" queue.filename="awx-external-logger-action-queue" queue.maxDiskSpace="1g" queue.maxFileSize="100m" queue.type="LinkedList" queue.saveOnShutdown="on" queue.syncqueuefiles="on" queue.checkpointInterval="1000" queue.size="131072" queue.highwaterMark="98304" queue.discardMark="117964" queue.discardSeverity="5")', # noqa
]
),
),
@@ -89,7 +89,7 @@ data_loggly = {
'\n'.join(
[
'template(name="awx" type="string" string="%rawmsg-after-pri%")\nmodule(load="omhttp")',
'action(type="omhttp" server="yoursplunk" serverport="443" usehttps="on" allowunsignedcerts="off" skipverifyhost="off" action.resumeRetryCount="-1" template="awx" action.resumeInterval="5" queue.spoolDirectory="/var/lib/awx" queue.filename="awx-external-logger-action-queue" queue.maxdiskspace="1g" queue.type="LinkedList" queue.saveOnShutdown="on" errorfile="/var/log/tower/rsyslog.err" restpath="services/collector/event")', # noqa
'action(type="omhttp" server="yoursplunk" serverport="443" usehttps="on" allowunsignedcerts="off" skipverifyhost="off" action.resumeRetryCount="-1" template="awx" action.resumeInterval="5" queue.spoolDirectory="/var/lib/awx" queue.filename="awx-external-logger-action-queue" queue.maxDiskSpace="1g" queue.maxFileSize="100m" queue.type="LinkedList" queue.saveOnShutdown="on" queue.syncqueuefiles="on" queue.checkpointInterval="1000" queue.size="131072" queue.highwaterMark="98304" queue.discardMark="117964" queue.discardSeverity="5" errorfile="/var/log/tower/rsyslog.err" restpath="services/collector/event")', # noqa
]
),
),
@@ -103,7 +103,7 @@ data_loggly = {
'\n'.join(
[
'template(name="awx" type="string" string="%rawmsg-after-pri%")\nmodule(load="omhttp")',
'action(type="omhttp" server="yoursplunk" serverport="80" usehttps="off" allowunsignedcerts="off" skipverifyhost="off" action.resumeRetryCount="-1" template="awx" action.resumeInterval="5" queue.spoolDirectory="/var/lib/awx" queue.filename="awx-external-logger-action-queue" queue.maxdiskspace="1g" queue.type="LinkedList" queue.saveOnShutdown="on" errorfile="/var/log/tower/rsyslog.err" restpath="services/collector/event")', # noqa
'action(type="omhttp" server="yoursplunk" serverport="80" usehttps="off" allowunsignedcerts="off" skipverifyhost="off" action.resumeRetryCount="-1" template="awx" action.resumeInterval="5" queue.spoolDirectory="/var/lib/awx" queue.filename="awx-external-logger-action-queue" queue.maxDiskSpace="1g" queue.maxFileSize="100m" queue.type="LinkedList" queue.saveOnShutdown="on" queue.syncqueuefiles="on" queue.checkpointInterval="1000" queue.size="131072" queue.highwaterMark="98304" queue.discardMark="117964" queue.discardSeverity="5" errorfile="/var/log/tower/rsyslog.err" restpath="services/collector/event")', # noqa
]
),
),
@@ -117,7 +117,7 @@ data_loggly = {
'\n'.join(
[
'template(name="awx" type="string" string="%rawmsg-after-pri%")\nmodule(load="omhttp")',
'action(type="omhttp" server="yoursplunk" serverport="8088" usehttps="on" allowunsignedcerts="off" skipverifyhost="off" action.resumeRetryCount="-1" template="awx" action.resumeInterval="5" queue.spoolDirectory="/var/lib/awx" queue.filename="awx-external-logger-action-queue" queue.maxdiskspace="1g" queue.type="LinkedList" queue.saveOnShutdown="on" errorfile="/var/log/tower/rsyslog.err" restpath="services/collector/event")', # noqa
'action(type="omhttp" server="yoursplunk" serverport="8088" usehttps="on" allowunsignedcerts="off" skipverifyhost="off" action.resumeRetryCount="-1" template="awx" action.resumeInterval="5" queue.spoolDirectory="/var/lib/awx" queue.filename="awx-external-logger-action-queue" queue.maxDiskSpace="1g" queue.maxFileSize="100m" queue.type="LinkedList" queue.saveOnShutdown="on" queue.syncqueuefiles="on" queue.checkpointInterval="1000" queue.size="131072" queue.highwaterMark="98304" queue.discardMark="117964" queue.discardSeverity="5" errorfile="/var/log/tower/rsyslog.err" restpath="services/collector/event")', # noqa
]
),
),
@@ -131,7 +131,7 @@ data_loggly = {
'\n'.join(
[
'template(name="awx" type="string" string="%rawmsg-after-pri%")\nmodule(load="omhttp")',
'action(type="omhttp" server="yoursplunk" serverport="8088" usehttps="on" allowunsignedcerts="off" skipverifyhost="off" action.resumeRetryCount="-1" template="awx" action.resumeInterval="5" queue.spoolDirectory="/var/lib/awx" queue.filename="awx-external-logger-action-queue" queue.maxdiskspace="1g" queue.type="LinkedList" queue.saveOnShutdown="on" errorfile="/var/log/tower/rsyslog.err" restpath="services/collector/event")', # noqa
'action(type="omhttp" server="yoursplunk" serverport="8088" usehttps="on" allowunsignedcerts="off" skipverifyhost="off" action.resumeRetryCount="-1" template="awx" action.resumeInterval="5" queue.spoolDirectory="/var/lib/awx" queue.filename="awx-external-logger-action-queue" queue.maxDiskSpace="1g" queue.maxFileSize="100m" queue.type="LinkedList" queue.saveOnShutdown="on" queue.syncqueuefiles="on" queue.checkpointInterval="1000" queue.size="131072" queue.highwaterMark="98304" queue.discardMark="117964" queue.discardSeverity="5" errorfile="/var/log/tower/rsyslog.err" restpath="services/collector/event")', # noqa
]
),
),
@@ -145,7 +145,7 @@ data_loggly = {
'\n'.join(
[
'template(name="awx" type="string" string="%rawmsg-after-pri%")\nmodule(load="omhttp")',
'action(type="omhttp" server="yoursplunk.org" serverport="8088" usehttps="on" allowunsignedcerts="off" skipverifyhost="off" action.resumeRetryCount="-1" template="awx" action.resumeInterval="5" queue.spoolDirectory="/var/lib/awx" queue.filename="awx-external-logger-action-queue" queue.maxdiskspace="1g" queue.type="LinkedList" queue.saveOnShutdown="on" errorfile="/var/log/tower/rsyslog.err" restpath="services/collector/event")', # noqa
'action(type="omhttp" server="yoursplunk.org" serverport="8088" usehttps="on" allowunsignedcerts="off" skipverifyhost="off" action.resumeRetryCount="-1" template="awx" action.resumeInterval="5" queue.spoolDirectory="/var/lib/awx" queue.filename="awx-external-logger-action-queue" queue.maxDiskSpace="1g" queue.maxFileSize="100m" queue.type="LinkedList" queue.saveOnShutdown="on" queue.syncqueuefiles="on" queue.checkpointInterval="1000" queue.size="131072" queue.highwaterMark="98304" queue.discardMark="117964" queue.discardSeverity="5" errorfile="/var/log/tower/rsyslog.err" restpath="services/collector/event")', # noqa
]
),
),
@@ -159,7 +159,7 @@ data_loggly = {
'\n'.join(
[
'template(name="awx" type="string" string="%rawmsg-after-pri%")\nmodule(load="omhttp")',
'action(type="omhttp" server="yoursplunk.org" serverport="8088" usehttps="off" allowunsignedcerts="off" skipverifyhost="off" action.resumeRetryCount="-1" template="awx" action.resumeInterval="5" queue.spoolDirectory="/var/lib/awx" queue.filename="awx-external-logger-action-queue" queue.maxdiskspace="1g" queue.type="LinkedList" queue.saveOnShutdown="on" errorfile="/var/log/tower/rsyslog.err" restpath="services/collector/event")', # noqa
'action(type="omhttp" server="yoursplunk.org" serverport="8088" usehttps="off" allowunsignedcerts="off" skipverifyhost="off" action.resumeRetryCount="-1" template="awx" action.resumeInterval="5" queue.spoolDirectory="/var/lib/awx" queue.filename="awx-external-logger-action-queue" queue.maxDiskSpace="1g" queue.maxFileSize="100m" queue.type="LinkedList" queue.saveOnShutdown="on" queue.syncqueuefiles="on" queue.checkpointInterval="1000" queue.size="131072" queue.highwaterMark="98304" queue.discardMark="117964" queue.discardSeverity="5" errorfile="/var/log/tower/rsyslog.err" restpath="services/collector/event")', # noqa
]
),
),
@@ -173,7 +173,7 @@ data_loggly = {
'\n'.join(
[
'template(name="awx" type="string" string="%rawmsg-after-pri%")\nmodule(load="omhttp")',
'action(type="omhttp" server="endpoint5.collection.us2.sumologic.com" serverport="443" usehttps="on" allowunsignedcerts="off" skipverifyhost="off" action.resumeRetryCount="-1" template="awx" action.resumeInterval="5" queue.spoolDirectory="/var/lib/awx" queue.filename="awx-external-logger-action-queue" queue.maxdiskspace="1g" queue.type="LinkedList" queue.saveOnShutdown="on" errorfile="/var/log/tower/rsyslog.err" restpath="receiver/v1/http/ZaVnC4dhaV0qoiETY0MrM3wwLoDgO1jFgjOxE6-39qokkj3LGtOroZ8wNaN2M6DtgYrJZsmSi4-36_Up5TbbN_8hosYonLKHSSOSKY845LuLZBCBwStrHQ==")', # noqa
'action(type="omhttp" server="endpoint5.collection.us2.sumologic.com" serverport="443" usehttps="on" allowunsignedcerts="off" skipverifyhost="off" action.resumeRetryCount="-1" template="awx" action.resumeInterval="5" queue.spoolDirectory="/var/lib/awx" queue.filename="awx-external-logger-action-queue" queue.maxDiskSpace="1g" queue.maxFileSize="100m" queue.type="LinkedList" queue.saveOnShutdown="on" queue.syncqueuefiles="on" queue.checkpointInterval="1000" queue.size="131072" queue.highwaterMark="98304" queue.discardMark="117964" queue.discardSeverity="5" errorfile="/var/log/tower/rsyslog.err" restpath="receiver/v1/http/ZaVnC4dhaV0qoiETY0MrM3wwLoDgO1jFgjOxE6-39qokkj3LGtOroZ8wNaN2M6DtgYrJZsmSi4-36_Up5TbbN_8hosYonLKHSSOSKY845LuLZBCBwStrHQ==")', # noqa
]
),
),

View File

@@ -1,8 +1,43 @@
import signal
import functools
from awx.main.tasks.signals import signal_state, signal_callback, with_signal_handling
def pytest_sigint():
pytest_sigint.called_count += 1
def pytest_sigterm():
pytest_sigterm.called_count += 1
def tmp_signals_for_test(func):
"""
When our internal signal handlers run, they call the original signal
handlers once their own work is finished.
Normally this would crash the test runners, because those handlers
shut down the process.
So this is a decorator that safely replaces the existing signal handlers
with no-op handlers so that tests do not crash.
"""
@functools.wraps(func)
def wrapper():
original_sigterm = signal.getsignal(signal.SIGTERM)
original_sigint = signal.getsignal(signal.SIGINT)
signal.signal(signal.SIGTERM, pytest_sigterm)
signal.signal(signal.SIGINT, pytest_sigint)
pytest_sigterm.called_count = 0
pytest_sigint.called_count = 0
func()
signal.signal(signal.SIGTERM, original_sigterm)
signal.signal(signal.SIGINT, original_sigint)
return wrapper
@tmp_signals_for_test
def test_outer_inner_signal_handling():
"""
Even if the flag is set in the outer context, its value should persist in the inner context
@@ -15,17 +50,22 @@ def test_outer_inner_signal_handling():
@with_signal_handling
def f1():
assert signal_callback() is False
signal_state.set_flag()
signal_state.set_sigterm_flag()
assert signal_callback()
f2()
original_sigterm = signal.getsignal(signal.SIGTERM)
assert signal_callback() is False
assert pytest_sigterm.called_count == 0
assert pytest_sigint.called_count == 0
f1()
assert signal_callback() is False
assert signal.getsignal(signal.SIGTERM) is original_sigterm
assert pytest_sigterm.called_count == 1
assert pytest_sigint.called_count == 0
@tmp_signals_for_test
def test_inner_outer_signal_handling():
"""
Even if the flag is set in the inner context, its value should persist in the outer context
@@ -34,7 +74,7 @@ def test_inner_outer_signal_handling():
@with_signal_handling
def f2():
assert signal_callback() is False
signal_state.set_flag()
signal_state.set_sigint_flag()
assert signal_callback()
@with_signal_handling
@@ -45,6 +85,10 @@ def test_inner_outer_signal_handling():
original_sigterm = signal.getsignal(signal.SIGTERM)
assert signal_callback() is False
assert pytest_sigterm.called_count == 0
assert pytest_sigint.called_count == 0
f1()
assert signal_callback() is False
assert signal.getsignal(signal.SIGTERM) is original_sigterm
assert pytest_sigterm.called_count == 0
assert pytest_sigint.called_count == 1

View File

@@ -143,13 +143,6 @@ def test_send_notifications_job_id(mocker):
assert UnifiedJob.objects.get.called_with(id=1)
def test_work_success_callback_missing_job():
task_data = {'type': 'project_update', 'id': 9999}
with mock.patch('django.db.models.query.QuerySet.get') as get_mock:
get_mock.side_effect = ProjectUpdate.DoesNotExist()
assert system.handle_work_success(task_data) is None
@mock.patch('awx.main.models.UnifiedJob.objects.get')
@mock.patch('awx.main.models.Notification.objects.filter')
def test_send_notifications_list(mock_notifications_filter, mock_job_get, mocker):

View File

@@ -17,11 +17,26 @@ def construct_rsyslog_conf_template(settings=settings):
port = getattr(settings, 'LOG_AGGREGATOR_PORT', '')
protocol = getattr(settings, 'LOG_AGGREGATOR_PROTOCOL', '')
timeout = getattr(settings, 'LOG_AGGREGATOR_TCP_TIMEOUT', 5)
max_disk_space_main_queue = getattr(settings, 'LOG_AGGREGATOR_MAX_DISK_USAGE_GB', 1)
action_queue_size = getattr(settings, 'LOG_AGGREGATOR_ACTION_QUEUE_SIZE', 131072)
max_disk_space_action_queue = getattr(settings, 'LOG_AGGREGATOR_ACTION_MAX_DISK_USAGE_GB', 1)
spool_directory = getattr(settings, 'LOG_AGGREGATOR_MAX_DISK_USAGE_PATH', '/var/lib/awx').rstrip('/')
error_log_file = getattr(settings, 'LOG_AGGREGATOR_RSYSLOGD_ERROR_LOG_FILE', '')
queue_options = [
f'queue.spoolDirectory="{spool_directory}"',
'queue.filename="awx-external-logger-action-queue"',
f'queue.maxDiskSpace="{max_disk_space_action_queue}g"', # overall disk space for all queue files
'queue.maxFileSize="100m"', # individual file size
'queue.type="LinkedList"',
'queue.saveOnShutdown="on"',
'queue.syncqueuefiles="on"', # (f)sync when checkpoint occurs
'queue.checkpointInterval="1000"', # Update disk queue every 1000 messages
f'queue.size="{action_queue_size}"', # max number of messages in queue
f'queue.highwaterMark="{int(action_queue_size * 0.75)}"', # 75% of queue.size
f'queue.discardMark="{int(action_queue_size * 0.9)}"', # 90% of queue.size
'queue.discardSeverity="5"', # Only discard notice, info, debug if we must discard anything
]
if not os.access(spool_directory, os.W_OK):
spool_directory = '/var/lib/awx'
@@ -33,7 +48,6 @@ def construct_rsyslog_conf_template(settings=settings):
'$WorkDirectory /var/lib/awx/rsyslog',
f'$MaxMessageSize {max_bytes}',
'$IncludeConfig /var/lib/awx/rsyslog/conf.d/*.conf',
f'main_queue(queue.spoolDirectory="{spool_directory}" queue.maxdiskspace="{max_disk_space_main_queue}g" queue.type="Disk" queue.filename="awx-external-logger-backlog")', # noqa
'module(load="imuxsock" SysSock.Use="off")',
'input(type="imuxsock" Socket="' + settings.LOGGING['handlers']['external_logger']['address'] + '" unlink="on" RateLimit.Burst="0")',
'template(name="awx" type="string" string="%rawmsg-after-pri%")',
@@ -79,12 +93,7 @@ def construct_rsyslog_conf_template(settings=settings):
'action.resumeRetryCount="-1"',
'template="awx"',
f'action.resumeInterval="{timeout}"',
f'queue.spoolDirectory="{spool_directory}"',
'queue.filename="awx-external-logger-action-queue"',
f'queue.maxdiskspace="{max_disk_space_action_queue}g"',
'queue.type="LinkedList"',
'queue.saveOnShutdown="on"',
]
] + queue_options
if error_log_file:
params.append(f'errorfile="{error_log_file}"')
if parsed.path:
@@ -112,9 +121,18 @@ def construct_rsyslog_conf_template(settings=settings):
params = ' '.join(params)
parts.extend(['module(load="omhttp")', f'action({params})'])
elif protocol and host and port:
parts.append(
f'action(type="omfwd" target="{host}" port="{port}" protocol="{protocol}" action.resumeRetryCount="-1" action.resumeInterval="{timeout}" template="awx")' # noqa
)
params = [
'type="omfwd"',
f'target="{host}"',
f'port="{port}"',
f'protocol="{protocol}"',
'action.resumeRetryCount="-1"',
f'action.resumeInterval="{timeout}"',
'template="awx"',
] + queue_options
params = ' '.join(params)
parts.append(f'action({params})')
else:
parts.append('action(type="omfile" file="/dev/null")') # rsyslog needs *at least* one valid action to start
tmpl = '\n'.join(parts)

View File

@@ -3,6 +3,8 @@ import logging
import asyncio
from typing import Dict
import ipaddress
import aiohttp
from aiohttp import client_exceptions
import aioredis
@@ -71,7 +73,16 @@ class WebsocketRelayConnection:
if not self.channel_layer:
self.channel_layer = get_channel_layer()
uri = f"{self.protocol}://{self.remote_host}:{self.remote_port}/websocket/relay/"
# Figure out whether remote_host is an IP address; IPv6 addresses must be bracketed in a URI
uri_hostname = self.remote_host
try:
# Throws ValueError if self.remote_host is a hostname like example.com, not an IPv4 or IPv6 ip address
if isinstance(ipaddress.ip_address(uri_hostname), ipaddress.IPv6Address):
uri_hostname = f"[{uri_hostname}]"
except ValueError:
pass
uri = f"{self.protocol}://{uri_hostname}:{self.remote_port}/websocket/relay/"
timeout = aiohttp.ClientTimeout(total=10)
secret_val = WebsocketSecretAuthHelper.construct_secret()
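The bracketing rule in isolation, as a small runnable sketch (`bracket_if_ipv6` is a hypothetical helper name, not part of the change):

```python
import ipaddress

def bracket_if_ipv6(host: str) -> str:
    """Wrap bare IPv6 literals in brackets so they are valid in a URI."""
    try:
        if isinstance(ipaddress.ip_address(host), ipaddress.IPv6Address):
            return f"[{host}]"
    except ValueError:
        pass  # a hostname such as example.com, not an IP literal
    return host

assert bracket_if_ipv6("::1") == "[::1]"
assert bracket_if_ipv6("10.0.0.1") == "10.0.0.1"
assert bracket_if_ipv6("example.com") == "example.com"
```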

View File

@@ -796,7 +796,7 @@ LOG_AGGREGATOR_ENABLED = False
LOG_AGGREGATOR_TCP_TIMEOUT = 5
LOG_AGGREGATOR_VERIFY_CERT = True
LOG_AGGREGATOR_LEVEL = 'INFO'
LOG_AGGREGATOR_MAX_DISK_USAGE_GB = 1 # Main queue
LOG_AGGREGATOR_ACTION_QUEUE_SIZE = 131072
LOG_AGGREGATOR_ACTION_MAX_DISK_USAGE_GB = 1 # Action queue
LOG_AGGREGATOR_MAX_DISK_USAGE_PATH = '/var/lib/awx'
LOG_AGGREGATOR_RSYSLOGD_DEBUG = False
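For the default queue size, the derived rsyslog thresholds work out as follows (a quick sketch mirroring the arithmetic in the queue_options code above):

```python
action_queue_size = 131072  # LOG_AGGREGATOR_ACTION_QUEUE_SIZE default
print(int(action_queue_size * 0.75))  # queue.highWatermark -> 98304
print(int(action_queue_size * 0.9))   # queue.discardMark   -> 117964
```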

View File

@@ -302,9 +302,9 @@ function HostsByProcessorTypeExample() {
const hostsByProcessorLimit = `intel_hosts`;
const hostsByProcessorSourceVars = `plugin: constructed
strict: true
groups:
intel_hosts: "GenuineIntel" in ansible_processor`;
strict: true
groups:
intel_hosts: "'GenuineIntel' in ansible_processor"`;
return (
<FormFieldGroupExpandable

View File

@@ -45,7 +45,7 @@ describe('<ConstructedInventoryHint />', () => {
);
expect(navigator.clipboard.writeText).toHaveBeenCalledWith(
expect.stringContaining(
'intel_hosts: "GenuineIntel" in ansible_processor'
`intel_hosts: \"'GenuineIntel' in ansible_processor\"`
)
);
});

View File

@@ -29,7 +29,7 @@ SettingsAPI.readCategory.mockResolvedValue({
LOG_AGGREGATOR_TCP_TIMEOUT: 5,
LOG_AGGREGATOR_VERIFY_CERT: true,
LOG_AGGREGATOR_LEVEL: 'INFO',
LOG_AGGREGATOR_MAX_DISK_USAGE_GB: 1,
LOG_AGGREGATOR_ACTION_QUEUE_SIZE: 131072,
LOG_AGGREGATOR_ACTION_MAX_DISK_USAGE_GB: 1,
LOG_AGGREGATOR_MAX_DISK_USAGE_PATH: '/var/lib/awx',
LOG_AGGREGATOR_RSYSLOGD_DEBUG: false,

View File

@@ -31,7 +31,7 @@ const mockSettings = {
LOG_AGGREGATOR_TCP_TIMEOUT: 123,
LOG_AGGREGATOR_VERIFY_CERT: true,
LOG_AGGREGATOR_LEVEL: 'ERROR',
LOG_AGGREGATOR_MAX_DISK_USAGE_GB: 1,
LOG_AGGREGATOR_ACTION_QUEUE_SIZE: 131072,
LOG_AGGREGATOR_ACTION_MAX_DISK_USAGE_GB: 1,
LOG_AGGREGATOR_MAX_DISK_USAGE_PATH: '/var/lib/awx',
LOG_AGGREGATOR_RSYSLOGD_DEBUG: false,

View File

@@ -659,21 +659,21 @@
]
]
},
"LOG_AGGREGATOR_MAX_DISK_USAGE_GB": {
"LOG_AGGREGATOR_ACTION_QUEUE_SIZE": {
"type": "integer",
"required": false,
"label": "Maximum disk persistence for external log aggregation (in GB)",
"help_text": "Amount of data to store (in gigabytes) during an outage of the external log aggregator (defaults to 1). Equivalent to the rsyslogd queue.maxdiskspace setting for main_queue. Notably, this is used for the rsyslogd main queue (for input messages).",
"label": "Maximum number of messages that can be stored in the log action queue",
"help_text": "Defines how large the rsyslog action queue can grow in number of messages stored. This can have an impact on memory utilization. When the queue reaches 75% of this number, the queue will start writing to disk (queue.highWatermark in rsyslog). When it reaches 90%, NOTICE, INFO, and DEBUG messages will start to be discarded (queue.discardMark with queue.discardSeverity=5).",
"min_value": 1,
"category": "Logging",
"category_slug": "logging",
"default": 1
"default": 131072
},
"LOG_AGGREGATOR_ACTION_MAX_DISK_USAGE_GB": {
"type": "integer",
"required": false,
"label": "Maximum disk persistence for rsyslogd action queuing (in GB)",
"help_text": "Amount of data to store (in gigabytes) if an rsyslog action takes time to process an incoming message (defaults to 1). Equivalent to the rsyslogd queue.maxdiskspace setting on the action (e.g. omhttp). Like LOG_AGGREGATOR_MAX_DISK_USAGE_GB, it stores files in the directory specified by LOG_AGGREGATOR_MAX_DISK_USAGE_PATH.",
"help_text": "Amount of data to store (in gigabytes) if an rsyslog action takes time to process an incoming message (defaults to 1). Equivalent to the rsyslogd queue.maxdiskspace setting on the action (e.g. omhttp). It stores files in the directory specified by LOG_AGGREGATOR_MAX_DISK_USAGE_PATH.",
"min_value": 1,
"category": "Logging",
"category_slug": "logging",
@@ -5016,10 +5016,10 @@
]
]
},
"LOG_AGGREGATOR_MAX_DISK_USAGE_GB": {
"LOG_AGGREGATOR_ACTION_QUEUE_SIZE": {
"type": "integer",
"label": "Maximum disk persistence for external log aggregation (in GB)",
"help_text": "Amount of data to store (in gigabytes) during an outage of the external log aggregator (defaults to 1). Equivalent to the rsyslogd queue.maxdiskspace setting for main_queue. Notably, this is used for the rsyslogd main queue (for input messages).",
"label": "Maximum number of messages that can be stored in the log action queue",
"help_text": "Defines how large the rsyslog action queue can grow in number of messages stored. This can have an impact on memory utilization. When the queue reaches 75% of this number, the queue will start writing to disk (queue.highWatermark in rsyslog). When it reaches 90%, NOTICE, INFO, and DEBUG messages will start to be discarded (queue.discardMark with queue.discardSeverity=5).",
"min_value": 1,
"category": "Logging",
"category_slug": "logging",
@@ -5028,7 +5028,7 @@
"LOG_AGGREGATOR_ACTION_MAX_DISK_USAGE_GB": {
"type": "integer",
"label": "Maximum disk persistence for rsyslogd action queuing (in GB)",
"help_text": "Amount of data to store (in gigabytes) if an rsyslog action takes time to process an incoming message (defaults to 1). Equivalent to the rsyslogd queue.maxdiskspace setting on the action (e.g. omhttp). Like LOG_AGGREGATOR_MAX_DISK_USAGE_GB, it stores files in the directory specified by LOG_AGGREGATOR_MAX_DISK_USAGE_PATH.",
"help_text": "Amount of data to store (in gigabytes) if an rsyslog action takes time to process an incoming message (defaults to 1). Equivalent to the rsyslogd queue.maxdiskspace setting on the action (e.g. omhttp). It stores files in the directory specified by LOG_AGGREGATOR_MAX_DISK_USAGE_PATH.",
"min_value": 1,
"category": "Logging",
"category_slug": "logging",

View File

@@ -70,7 +70,7 @@
"LOG_AGGREGATOR_TCP_TIMEOUT": 5,
"LOG_AGGREGATOR_VERIFY_CERT": true,
"LOG_AGGREGATOR_LEVEL": "INFO",
"LOG_AGGREGATOR_MAX_DISK_USAGE_GB": 1,
"LOG_AGGREGATOR_ACTION_QUEUE_SIZE": 131072,
"LOG_AGGREGATOR_ACTION_MAX_DISK_USAGE_GB": 1,
"LOG_AGGREGATOR_MAX_DISK_USAGE_PATH": "/var/lib/awx",
"LOG_AGGREGATOR_RSYSLOGD_DEBUG": false,
@@ -548,4 +548,4 @@
"adj_list": []
}
}
}
}

View File

@@ -15,7 +15,7 @@
"LOG_AGGREGATOR_TCP_TIMEOUT": 5,
"LOG_AGGREGATOR_VERIFY_CERT": true,
"LOG_AGGREGATOR_LEVEL": "INFO",
"LOG_AGGREGATOR_MAX_DISK_USAGE_GB": 1,
"LOG_AGGREGATOR_ACTION_QUEUE_SIZE": 131072,
"LOG_AGGREGATOR_ACTION_MAX_DISK_USAGE_GB": 1,
"LOG_AGGREGATOR_MAX_DISK_USAGE_PATH": "/var/lib/awx",
"LOG_AGGREGATOR_RSYSLOGD_DEBUG": false,

View File

@@ -980,6 +980,15 @@ class ControllerAPIModule(ControllerModule):
def create_or_update_if_needed(
self, existing_item, new_item, endpoint=None, item_type='unknown', on_create=None, on_update=None, auto_exit=True, associations=None
):
# Drop boolean parameters that were not provided (value is None)
# so that omitted boolean fields do not end up with a false value
for key in list(new_item.keys()):
if key in self.argument_spec:
param_spec = self.argument_spec[key]
if 'type' in param_spec and param_spec['type'] == 'bool':
if new_item[key] is None:
new_item.pop(key)
if existing_item:
return self.update_if_needed(existing_item, new_item, on_update=on_update, auto_exit=auto_exit, associations=associations)
else:
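In isolation, the stripping behaves like this (a sketch with an assumed argument_spec shape):

```python
argument_spec = {'diff_mode': {'type': 'bool'}, 'name': {'type': 'str'}}
new_item = {'name': 'demo', 'diff_mode': None}

for key in list(new_item.keys()):
    param_spec = argument_spec.get(key, {})
    if param_spec.get('type') == 'bool' and new_item[key] is None:
        # Drop it so the API does not coerce a missing bool to false.
        new_item.pop(key)

print(new_item)  # {'name': 'demo'}
```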

View File

@@ -118,6 +118,7 @@ status:
'''
from ..module_utils.controller_api import ControllerAPIModule
import json
def main():
@@ -161,7 +162,11 @@ def main():
}
for arg in ['job_type', 'limit', 'forks', 'verbosity', 'extra_vars', 'become_enabled', 'diff_mode']:
if module.params.get(arg):
post_data[arg] = module.params.get(arg)
# extra_vars can receive a dict or a string; if a dict, convert it to a JSON string
if arg == 'extra_vars' and type(module.params.get(arg)) is not str:
post_data[arg] = json.dumps(module.params.get(arg))
else:
post_data[arg] = module.params.get(arg)
# Attempt to look up the related items the user specified (these will fail the module if not found)
post_data['inventory'] = module.resolve_name_to_id('inventories', inventory)
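The conversion in isolation (a sketch; the params dict stands in for module.params):

```python
import json

params = {'limit': 'webservers', 'extra_vars': {'var1': 'test var'}}
post_data = {}
for arg in ('job_type', 'limit', 'extra_vars'):
    if params.get(arg):
        if arg == 'extra_vars' and not isinstance(params[arg], str):
            post_data[arg] = json.dumps(params[arg])  # dict -> JSON string
        else:
            post_data[arg] = params[arg]

print(post_data)  # {'limit': 'webservers', 'extra_vars': '{"var1": "test var"}'}
```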

View File

@@ -33,7 +33,6 @@ options:
image:
description:
- The fully qualified url of the container image.
required: True
type: str
description:
description:
@@ -79,7 +78,7 @@ def main():
argument_spec = dict(
name=dict(required=True),
new_name=dict(),
image=dict(required=True),
image=dict(),
description=dict(),
organization=dict(),
credential=dict(),

View File

@@ -115,7 +115,7 @@ EXAMPLES = '''
- name: Export a job template named "My Template" and all Credentials
export:
job_templates: "My Template"
credential: 'all'
credentials: 'all'
- name: Export a list of inventories
export:

View File

@@ -72,6 +72,21 @@
- "result is changed"
- "result.status == 'successful'"
- name: Launch an Ad Hoc Command with extra_vars
ad_hoc_command:
inventory: "Demo Inventory"
credential: "{{ ssh_cred_name }}"
module_name: "ping"
extra_vars:
var1: "test var"
wait: true
register: result
- assert:
that:
- "result is changed"
- "result.status == 'successful'"
- name: Launch an Ad Hoc Command with Execution Environment specified
ad_hoc_command:
inventory: "Demo Inventory"

View File

@@ -42,6 +42,16 @@
that:
- "result is not changed"
- name: Modify the host as a no-op
host:
name: "{{ host_name }}"
inventory: "{{ inv_name }}"
register: result
- assert:
that:
- "result is not changed"
- name: Delete a Host
host:
name: "{{ host_name }}"
@@ -68,6 +78,15 @@
that:
- "result is changed"
- name: Use lookup to check that host was enabled
ansible.builtin.set_fact:
host_enabled_test: "{{ lookup('awx.awx.controller_api', 'hosts/' + (result.id | string) + '/').enabled }}"
- name: Newly created host should have API default value for enabled
assert:
that:
- host_enabled_test
- name: Delete a Host
host:
name: "{{ result.id }}"

View File

@@ -76,6 +76,15 @@
that:
- result is changed
- name: Use lookup to check that schedules was enabled
ansible.builtin.set_fact:
schedules_enabled_test: "{{ lookup('awx.awx.controller_api', 'schedules/' + (result.id | string) + '/').enabled }}"
- name: Newly created schedules should have API default value for enabled
assert:
that:
- schedules_enabled_test
- name: Build a real schedule with exists
schedule:
name: "{{ sched1 }}"

View File

@@ -4,9 +4,9 @@ Bulk API endpoints allows to perform bulk operations in single web request. Ther
- /api/v2/bulk/job_launch
- /api/v2/bulk/host_create
Making individual API calls in rapid succession or at high concurrency can overwhelm AWX's ability to serve web requests. When the application's ability to serve is exausted, clients often receive 504 timeout errors.
Making individual API calls in rapid succession or at high concurrency can overwhelm AWX's ability to serve web requests. When the application's ability to serve is exhausted, clients often receive 504 timeout errors.
Allowing the client combine actions into fewer requests allows for launching more jobs or adding more hosts with fewer requests and less time without exauhsting Controller's ability to serve requests, making excessive and repetitive database queries, or using excessive database connections (each web request opens a seperate database connection).
Allowing the client to combine actions into fewer requests allows for launching more jobs or adding more hosts with fewer requests and less time, without exhausting Controller's ability to serve requests, making excessive and repetitive database queries, or using excessive database connections (each web request opens a separate database connection).
## Bulk Job Launch
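For example, a single request can launch several jobs at once. A minimal sketch using `requests` (the host, token, and template IDs are placeholders):

```python
import requests

resp = requests.post(
    "https://awx.example.com/api/v2/bulk/job_launch/",
    headers={"Authorization": "Bearer <oauth-token>"},
    json={
        "name": "Nightly batch",
        "jobs": [
            {"unified_job_template": 7},
            {"unified_job_template": 7, "limit": "webservers"},
        ],
    },
)
resp.raise_for_status()
print(resp.json())  # a single record tracking all jobs launched by this request
```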

View File

@@ -104,7 +104,7 @@ Given settings.AWX_CONTROL_NODE_TASK_IMPACT is 1:
This setting allows you to determine how much impact controlling jobs has. This
can be helpful if you notice symptoms of your control plane exceeding desired
CPU or memory usage, as it effectivly throttles how many jobs can be run
CPU or memory usage, as it effectively throttles how many jobs can be run
concurrently by your control plane. This is usually a concern with container
groups, which at this time effectively have infinite capacity, so it is easy to
end up with too many jobs running concurrently, overwhelming the control plane
@@ -130,10 +130,10 @@ be `18`:
By default, only Instances have capacity and we only track capacity consumed per instance. With the max_forks and max_concurrent_jobs fields now available on Instance Groups, we additionally can limit how many jobs or forks are allowed to be concurrently consumed across an entire Instance Group or Container Group.
This is especially useful for Container Groups where previously, there was no limit to how many jobs we would submit to a Container Group, which made it impossible to "overflow" job loads from one Container Group to another container group, which may be on a different Kubenetes cluster or namespace.
This is especially useful for Container Groups where previously, there was no limit to how many jobs we would submit to a Container Group, which made it impossible to "overflow" job loads from one Container Group to another container group, which may be on a different Kubernetes cluster or namespace.
One way to calculate a desirable max_concurrent_jobs value for a Container Group is to consider the pod_spec for that container group. In the pod_spec we indicate the resource requests and limits for the automation job pod. If your pod_spec indicates that a pod with 100MB of memory will be provisioned, and you know your Kubernetes cluster has 1 worker node with 8GB of RAM, you know that the maximum number of jobs that you would ideally start would be around 81 jobs, calculated by taking (8GB memory on node * 1024 MB) // 100 MB memory/job pod, which with floor division comes out to 81.
Alternatively, instead of considering the number of job pods and the resources requested, we can consider the memory consumption of the forks in the jobs. We normally consider that 100MB of memory will be used by each fork of ansible. Therefore we also know that our 8 GB worker node should only run 81 forks of ansible at a time -- which, depending on the forks and inventory settings of the job templates, could be consumed by anywhere from 1 job to 81 jobs. So we can also set max_forks = 81. This way, either 39 jobs with 1 fork can run (task impact is always forks + 1), or 2 jobs with forks set to 39 can run.
While this feature is most useful for Container Groups where there is no other way to limit job execution, this feature is avialable for use on any instance group. This can be useful if for other business reasons you want to set a InstanceGroup wide limit on concurrent jobs. For example, if you have a job template that you only want 10 copies of running at a time -- you could create a dedicated instance group for that job template and set max_concurrent_jobs to 10.
While this feature is most useful for Container Groups where there is no other way to limit job execution, this feature is available for use on any instance group. This can be useful if for other business reasons you want to set an InstanceGroup-wide limit on concurrent jobs. For example, if you have a job template that you only want 10 copies of running at a time -- you could create a dedicated instance group for that job template and set max_concurrent_jobs to 10.
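The arithmetic from the example above, written out (a sketch using the example's assumed numbers):

```python
node_memory_mb = 8 * 1024  # one 8 GB Kubernetes worker node
pod_memory_mb = 100        # memory request per automation job pod

print(node_memory_mb // pod_memory_mb)   # 81 -> a candidate max_concurrent_jobs

fork_memory_mb = 100       # rule of thumb: ~100 MB per ansible fork
print(node_memory_mb // fork_memory_mb)  # 81 -> a candidate max_forks
```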

View File

@@ -45,7 +45,7 @@ of the awx-operator repo. If not, continue to the next section.
```
# in awx-operator repo on the branch you want to use
$ export IMAGE_TAG_BASE=quay.io/<username>/awx-operator
$ export VERSION=<cusom-tag>
$ export VERSION=<custom-tag>
$ make docker-build
$ docker push ${IMAGE_TAG_BASE}:${VERSION}
```
@@ -118,7 +118,7 @@ To access via the web browser, run the following command:
$ minikube service awx-service --url
```
To retreive your admin password
To retrieve your admin password
```
$ kubectl get secrets awx-admin-password -o json | jq '.data.password' | xargs | base64 -d
```

View File

@@ -0,0 +1,7 @@
# This requirements file is used for AWX latest doc builds.
sphinx # Tooling to build HTML from RST source.
sphinx-ansible-theme # Ansible community theme for Sphinx doc builds.
docutils # Tooling for RST processing and the swagger extension.
Jinja2 # Requires investigation. Possibly inherited from previous repo with a custom theme.
PyYaml # Requires investigation. Possibly used as tooling for swagger API reference content.

View File

@@ -1,5 +1,74 @@
sphinx==5.1.1
sphinx-ansible-theme==0.9.1
#
# This file is autogenerated by pip-compile with Python 3.11
# by the following command:
#
# pip-compile --allow-unsafe --output-file=docs/docsite/requirements.txt --strip-extras docs/docsite/requirements.in
#
alabaster==0.7.13
# via sphinx
ansible-pygments==0.1.1
# via sphinx-ansible-theme
babel==2.12.1
# via sphinx
certifi==2023.7.22
# via requests
charset-normalizer==3.2.0
# via requests
docutils==0.16
Jinja2<3.1
PyYaml
# via
# -r docs/docsite/requirements.in
# sphinx
# sphinx-rtd-theme
idna==3.4
# via requests
imagesize==1.4.1
# via sphinx
jinja2==3.0.3
# via
# -r docs/docsite/requirements.in
# sphinx
markupsafe==2.1.3
# via jinja2
packaging==23.1
# via sphinx
pygments==2.16.1
# via
# ansible-pygments
# sphinx
pyyaml==6.0.1
# via -r docs/docsite/requirements.in
requests==2.31.0
# via sphinx
snowballstemmer==2.2.0
# via sphinx
sphinx==5.1.1
# via
# -r docs/docsite/requirements.in
# sphinx-ansible-theme
# sphinx-rtd-theme
# sphinxcontrib-applehelp
# sphinxcontrib-devhelp
# sphinxcontrib-htmlhelp
# sphinxcontrib-jquery
# sphinxcontrib-qthelp
# sphinxcontrib-serializinghtml
sphinx-ansible-theme==0.9.1
# via -r docs/docsite/requirements.in
sphinx-rtd-theme==1.3.0
# via sphinx-ansible-theme
sphinxcontrib-applehelp==1.0.7
# via sphinx
sphinxcontrib-devhelp==1.0.5
# via sphinx
sphinxcontrib-htmlhelp==2.0.4
# via sphinx
sphinxcontrib-jquery==4.1
# via sphinx-rtd-theme
sphinxcontrib-jsmath==1.0.1
# via sphinx
sphinxcontrib-qthelp==1.0.6
# via sphinx
sphinxcontrib-serializinghtml==1.1.9
# via sphinx
urllib3==2.0.4
# via requests

View File

@@ -23,6 +23,7 @@ The default length of time, in seconds, that your supplied token is valid can be
4. Enter the timeout period in seconds in the **Idle Time Force Log Out** text field.
.. image:: ../common/images/configure-awx-system-timeout.png
:alt: Miscellaneous Authentication settings showing the Idle Time Force Logout field where you can adjust the token validity period.
5. Click **Save** to apply your changes.

View File

@@ -1,4 +1,3 @@
.. _ag_clustering:
Clustering
@@ -11,7 +10,7 @@ Clustering
Clustering is sharing load between hosts. Each instance should be able to act as an entry point for UI and API access. This should enable AWX administrators to use load balancers in front of as many instances as they wish and maintain good data visibility.
.. note::
Load balancing is optional and is entirely possible to have ingress on one or all instances as needed. The ``CSRF_TRUSTED_ORIGIN`` setting may be required if you are using AWX behind a load balancer. See :ref:`ki_csrf_trusted_origin_setting` for more detail.
Load balancing is optional and is entirely possible to have ingress on one or all instances as needed. The ``CSRF_TRUSTED_ORIGIN`` setting may be required if you are using AWX behind a load balancer. See :ref:`ki_csrf_trusted_origin_setting` for more detail.
Each instance should be able to join AWX cluster and expand its ability to execute jobs. This is a simple system where jobs can and will run anywhere rather than be directed on where to run. Also, clustered instances can be grouped into different pools/queues, called :ref:`ag_instance_groups`.
@@ -107,61 +106,61 @@ Example of customization could be:
::
---
spec:
...
node_selector: |
disktype: ssd
kubernetes.io/arch: amd64
kubernetes.io/os: linux
topology_spread_constraints: |
- maxSkew: 100
topologyKey: "topology.kubernetes.io/zone"
whenUnsatisfiable: "ScheduleAnyway"
labelSelector:
matchLabels:
app.kubernetes.io/name: "<resourcename>"
tolerations: |
- key: "dedicated"
operator: "Equal"
value: "AWX"
effect: "NoSchedule"
task_tolerations: |
- key: "dedicated"
operator: "Equal"
value: "AWX_task"
effect: "NoSchedule"
postgres_selector: |
disktype: ssd
kubernetes.io/arch: amd64
kubernetes.io/os: linux
postgres_tolerations: |
- key: "dedicated"
operator: "Equal"
value: "AWX"
effect: "NoSchedule"
affinity:
nodeAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 1
preference:
matchExpressions:
- key: another-node-label-key
operator: In
values:
- another-node-label-value
- another-node-label-value
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 100
podAffinityTerm:
labelSelector:
matchExpressions:
- key: security
operator: In
values:
- S2
topologyKey: topology.kubernetes.io/zone
Status and Monitoring via Browser API
@@ -204,6 +203,7 @@ The way jobs are run and reported to a 'normal' user of AWX does not change. On
- When a job is submitted from the API interface it gets pushed into the dispatcher queue. Each AWX instance will connect to and receive jobs from that queue using a particular scheduling algorithm. Any instance in the cluster is just as likely to receive the work and execute the task. If an instance fails while executing jobs, then the work is marked as permanently failed.
.. image:: ../common/images/clustering-visual.png
:alt: An illustration depicting job distribution in an AWX cluster.
- Project updates run successfully on any instance that could potentially run a job. Projects will sync themselves to the correct version on the instance immediately prior to running the job. If the needed revision is already locally checked out and Galaxy or Collections updates are not needed, then a sync may not be performed.
@@ -218,5 +218,3 @@ Job Runs
By default, when a job is submitted to the AWX queue, it can be picked up by any of the workers. However, you can control where a particular job runs, such as restricting which instances a job runs on.
In order to support temporarily taking an instance offline, there is an ``enabled`` property defined on each instance. When this property is disabled, no jobs will be assigned to that instance. Existing jobs will finish, but no new work will be assigned.

View File

@@ -11,6 +11,7 @@ AWX Configuration
You can configure various AWX settings within the Settings screen in the following tabs:
.. image:: ../common/images/ug-settings-menu-screen.png
:alt: Screenshot of the AWX settings menu screen.
Each tab contains fields with a **Reset** button, allowing you to revert any value entered back to the default value. **Reset All** allows you to revert all the values to their factory default values.
@@ -47,6 +48,7 @@ The Jobs tab allows you to configure the types of modules that are allowed to be
The values for all the timeouts are in seconds.
.. image:: ../common/images/configure-awx-jobs.png
:alt: Screenshot of the AWX job configuration settings.
3. Click **Save** to apply the settings or **Cancel** to abandon the changes.
@@ -69,6 +71,7 @@ The System tab allows you to define the base URL for the AWX host, configure ale
- **Logging settings**: configure logging options based on the type you choose:
.. image:: ../common/images/configure-awx-system-logging-types.png
:alt: Logging settings shown with the list of options for Logging Aggregator Types.
For more information about each of the logging aggregation types, refer to the :ref:`ag_logging` section of the |ata|.
@@ -78,6 +81,7 @@ The System tab allows you to define the base URL for the AWX host, configure ale
.. |help| image:: ../common/images/tooltips-icon.png
.. image:: ../common/images/configure-awx-system.png
:alt: Miscellaneous System settings window showing all possible configurable options.
.. note::

View File

@@ -140,6 +140,7 @@ The Instance Groups list view from the |at| User Interface provides a summary of
|Instance Group policy example|
.. |Instance Group policy example| image:: ../common/images/instance-groups_list_view.png
:alt: Instance Group list view with example instance group and container groups.
See :ref:`ug_instance_groups_create` for further detail.
@@ -231,6 +232,7 @@ Likewise, an administrator could assign multiple groups to each organization as
|Instance Group example|
.. |Instance Group example| image:: ../common/images/instance-groups-scenarios.png
:alt: Illustration showing grouping scenarios.
Arranging resources in this way offers a lot of flexibility. Also, you can create instance groups with only one instance, thus allowing you to direct work towards a very specific Host in the AWX cluster.
@@ -327,6 +329,7 @@ To create a container group:
|IG - create new CG|
.. |IG - create new CG| image:: ../common/images/instance-group-create-new-cg.png
:alt: Create new container group form.
4. Enter a name for your new container group and select the credential previously created to associate it to the container group.
@@ -342,10 +345,12 @@ To customize the Pod spec, specify the namespace in the **Pod Spec Override** fi
|IG - CG customize pod|
.. |IG - CG customize pod| image:: ../common/images/instance-group-customize-cg-pod.png
:alt: Create new container group form with the option to custom the pod spec.
You may provide additional customizations, if needed. Click **Expand** to view the entire customization window.
.. image:: ../common/images/instance-group-customize-cg-pod-expanded.png
:alt: The expanded view for customizing the pod spec.
.. note::
@@ -354,10 +359,12 @@ You may provide additional customizations, if needed. Click **Expand** to view t
Once the container group is successfully created, the **Details** tab of the newly created container group remains, which allows you to review and edit your container group information. This is the same menu that is opened if the Edit (|edit-button|) button is clicked from the **Instance Group** link. You can also edit **Instances** and review **Jobs** associated with this instance group.
.. |edit-button| image:: ../common/images/edit-button.png
:alt: Edit button.
|IG - example CG successfully created|
.. |IG - example CG successfully created| image:: ../common/images/instance-group-example-cg-successfully-created.png
:alt: Example of the successfully created instance group as shown in the Jobs tab of the Instance groups window.
Container groups and instance groups are labeled accordingly.
@@ -375,6 +382,7 @@ To verify the deployment and termination of your container:
|Dummy inventory|
.. |Dummy inventory| image:: ../common/images/inventories-create-new-cg-test-inventory.png
:alt: Example of creating a new container group test inventory.
2. Create "localhost" host in inventory with variables:
@@ -385,6 +393,7 @@ To verify the deployment and termination of your container:
|Inventory with localhost|
.. |Inventory with localhost| image:: ../common/images/inventories-create-new-cg-test-localhost.png
:alt: The new container group test inventory showing the populated variables.
3. Launch an ad hoc job against the localhost using the *ping* or *setup* module. Even though the **Machine Credential** field is required, it does not matter which one is selected for this simple test.
@@ -393,13 +402,14 @@ To verify the deployment and termination of your container:
.. |Launch inventory with localhost| image:: ../common/images/inventories-launch-adhoc-cg-test-localhost.png
.. image:: ../common/images/inventories-launch-adhoc-cg-test-localhost2.png
:alt: Launching a Ping adhoc command on the newly created inventory with localhost.
You can see in the jobs detail view that the container was reached successfully using one of the ad hoc jobs.
|Inventory with localhost ping success|
.. |Inventory with localhost ping success| image:: ../common/images/inventories-launch-adhoc-cg-test-localhost-success.png
:alt: Jobs output view showing a successfully ran adhoc job.
If you have an OpenShift UI, you can see Pods appear and disappear as they deploy and terminate. Alternatively, you can use the CLI to perform a ``get pod`` operation on your namespace to watch these same events occurring in real-time.
@@ -412,6 +422,7 @@ When you run a job associated with a container group, you can see the details of
|IG - instances jobs|
.. |IG - instances jobs| image:: ../common/images/instance-group-job-details-with-cgs.png
:alt: Example Job details window showing the associated execution environment and container group.
Kubernetes API failure conditions

View File

@@ -55,6 +55,7 @@ To set up enterprise authentication for Microsoft Azure Active Directory (AD), y
8. To verify that the authentication was configured correctly, logout of AWX and the login screen will now display the Microsoft Azure logo to allow logging in with those credentials.
.. image:: ../common/images/configure-awx-auth-azure-logo.png
:alt: AWX login screen displaying the Microsoft Azure logo for authentication.
For application registering basics in Azure AD, refer to the `Azure AD Identity Platform (v2)`_ overview.
@@ -102,6 +103,7 @@ SAML settings
SAML allows the exchange of authentication and authorization data between an Identity Provider (IdP - a system of servers that provide the Single Sign On service) and a Service Provider (in this case, AWX). AWX can be configured to talk with SAML in order to authenticate (create/login/logout) AWX users. User Team and Organization membership can be embedded in the SAML response to AWX.
.. image:: ../common/images/configure-awx-auth-saml-topology.png
:alt: Diagram depicting SAML topology for AWX.
The following instructions describe AWX as the service provider.
@@ -122,6 +124,7 @@ To setup SAML authentication:
In this example, the Service Provider is the AWX cluster, and therefore, the ID is set to the AWX Cluster FQDN.
.. image:: ../common/images/configure-awx-auth-saml-spentityid.png
:alt: Configuring SAML Service Provider Entity ID in AWX.
5. Create a server certificate for the Ansible cluster. Typically when an Ansible cluster is configured, AWX nodes will be configured to handle HTTP traffic only and the load balancer will be an SSL Termination Point. In this case, an SSL certificate is required for the load balancer, and not for the individual AWX Cluster Nodes. SSL can either be enabled or disabled per individual AWX node, but should be disabled when using an SSL terminated load balancer. It is recommended to use a non-expiring self-signed certificate to avoid periodically updating certificates. This way, authentication will not fail in case someone forgets to update the certificate.
@@ -132,6 +135,7 @@ In this example, the Service Provider is the AWX cluster, and therefore, the ID
If you are using a CA bundle with your certificate, include the entire bundle in this field.
.. image:: ../common/images/configure-awx-auth-saml-cert.png
:alt: Configuring SAML Service Provider Public Certificate in AWX.
As an example for public certs:
@@ -167,6 +171,7 @@ As an example for private keys:
For example:
.. image:: ../common/images/configure-awx-auth-saml-org-info.png
:alt: Configuring SAML Organization information in AWX.
.. note::
These fields are required in order to properly configure SAML within AWX.
@@ -183,6 +188,7 @@ For example:
For example:
.. image:: ../common/images/configure-awx-auth-saml-techcontact-info.png
:alt: Configuring SAML Technical Contact information in AWX.
9. Provide the IdP with the support contact information in the **SAML Service Provider Support Contact** field. Do not remove the contents of this field.
@@ -196,6 +202,7 @@ For example:
For example:
.. image:: ../common/images/configure-awx-auth-saml-suppcontact-info.png
:alt: Configuring SAML Support Contact information in AWX.
10. In the **SAML Enabled Identity Providers** field, provide information on how to connect to each Identity Provider listed. AWX expects the following SAML attributes in the example below:
@@ -238,6 +245,7 @@ Configure the required keys for each IDp:
}
.. image:: ../common/images/configure-awx-auth-saml-idps.png
:alt: Configuring SAML Identity Providers (IdPs) in AWX.
.. warning::
@@ -249,6 +257,7 @@ Configure the required keys for each IDp:
The IdP provides the email, last name, and first name using the well-known SAML URN. The IdP uses a custom SAML attribute to identify a user, which is an attribute that AWX is unable to read. Instead, AWX can understand the unique identifier name, which is the URN. Use the URN listed in the SAML “Name” attribute for the user attributes as shown in the example below.
.. image:: ../common/images/configure-awx-auth-saml-idps-urn.png
:alt: Configuring SAML Identity Providers (IdPs) in AWX using URNs.
11. Optionally provide the **SAML Organization Map**. For further detail, see :ref:`ag_org_team_maps`.
@@ -479,6 +488,7 @@ Example::
Alternatively, logout of AWX and the login screen will now display the SAML logo to indicate it as an alternate method of logging into AWX.
.. image:: ../common/images/configure-awx-auth-saml-logo.png
:alt: AWX login screen displaying the SAML logo for authentication.
Transparent SAML Logins
@@ -495,6 +505,7 @@ For transparent logins to work, you must first get IdP-initiated logins to work.
2. Once this is working, specify the redirect URL for non-logged-in users to somewhere other than the default AWX login page by using the **Login redirect override URL** field in the Miscellaneous Authentication settings window of the **Settings** menu, accessible from the left navigation bar. This should be set to ``/sso/login/saml/?idp=<name-of-your-idp>`` for transparent SAML login, as shown in the example.
.. image:: ../common/images/configure-awx-system-login-redirect-url.png
:alt: Configuring the login redirect URL in AWX Miscellaneous Authentication Settings.
.. note::
@@ -537,6 +548,7 @@ Terminal Access Controller Access-Control System Plus (TACACS+) is a protocol th
- **TACACS+ Authentication Protocol**: The protocol used by TACACS+ client. Options are **ascii** or **pap**.
.. image:: ../common/images/configure-awx-auth-tacacs.png
:alt: TACACS+ configuration details in AWX settings.
4. Click **Save** when done.
@@ -563,6 +575,7 @@ To configure OIDC in AWX:
The example below shows specific values associated to GitHub as the generic IdP:
.. image:: ../common/images/configure-awx-auth-oidc.png
:alt: OpenID Connect (OIDC) configuration details in AWX settings.
4. Click **Save** when done.
@@ -574,4 +587,4 @@ The example below shows specific values associated to GitHub as the generic IdP:
5. To verify that the authentication was configured correctly, logout of AWX and the login screen will now display the OIDC logo to indicate it as an alternate method of logging into AWX.
.. image:: ../common/images/configure-awx-auth-oidc-logo.png
:alt: AWX login screen displaying the OpenID Connect (OIDC) logo for authentication.

View File

@@ -1,4 +1,3 @@
.. _ag_instances:
Managing Capacity With Instances
@@ -58,6 +57,7 @@ Manage instances
Click **Instances** from the left side navigation menu to access the Instances list.
.. image:: ../common/images/instances_list_view.png
:alt: List view of instances in AWX
The Instances list displays all the current nodes in your topology, along with relevant details:
@@ -83,6 +83,7 @@ The Instances list displays all the current nodes in your topology, along with r
From this page, you can add, remove or run health checks on your nodes. Use the check boxes next to an instance to select it for removal or to run a health check against it. When a button is grayed-out, you do not have permission for that particular action. Contact your Administrator to grant you the required level of access. If you are able to remove an instance, you will receive a prompt for confirmation, like the one below:
.. image:: ../common/images/instances_delete_prompt.png
:alt: Prompt for deleting instances in AWX.
.. note::
@@ -95,6 +96,7 @@ Click **Remove** to confirm.
If running a health check on an instance, at the top of the Details page, a message displays that the health check is in progress.
.. image:: ../common/images/instances_health_check.png
:alt: Health check for instances in AWX
Click **Reload** to refresh the instance status.
@@ -103,10 +105,12 @@ Click **Reload** to refresh the instance status.
Health checks run asynchronously, and it may take up to a minute for the instance status to update, even with a refresh. The status may or may not change after the health check. At the bottom of the Details page, a timer/clock icon displays next to the last known health check date and time stamp if the health check task is currently running.
.. image:: ../common/images/instances_health_check_pending.png
:alt: Health check for instance still in pending state.
The example health check shows the status updates with an error on node 'one':
.. image:: ../common/images/topology-viewer-instance-with-errors.png
:alt: Health check showing an error in one of the instances.
Add an instance
@@ -119,6 +123,7 @@ One of the ways to expand capacity is to create an instance, which serves as a n
2. In the Instances list view, click the **Add** button and the Create new Instance window opens.
.. image:: ../common/images/instances_create_new.png
:alt: Create a new instance form.
An instance has several attributes that may be configured:
@@ -134,6 +139,7 @@ An instance has several attributes that may be configured:
Upon successful creation, the Details of the created instance opens.
.. image:: ../common/images/instances_create_details.png
:alt: Details of the newly created instance.
.. note::
@@ -142,6 +148,7 @@ Upon successful creation, the Details of the created instance opens.
4. Click the download button next to the **Install Bundle** field to download the tarball that includes this new instance and the files needed to install the node into the mesh.
.. image:: ../common/images/instances_install_bundle.png
:alt: Instance details showing the Download button in the Install Bundle field of the Details tab.
5. Extract the downloaded ``tar.gz`` file from the location you downloaded it. The install bundle contains yaml files, certificates, and keys that will be used in the installation process.
@@ -179,6 +186,7 @@ The content of the ``inventory.yml`` file serves as a template and contains vari
.. image:: ../common/images/instances_peers_tab.png
:alt: "Peers" tab showing two peers.
You may run a health check by selecting the node and clicking the **Run health check** button from its Details page.

Binary image files updated; previews not shown.
View File

@@ -3,6 +3,7 @@ AWX supports the use of a custom logo and login message. You can add a custom lo
.. image:: ../common/images/configure-awx-ui.png
:alt: Edit User Interface Settings form.
For the custom logo to look its best, use a ``.png`` file with a transparent background. GIF, PNG, and JPEG formats are supported.
@@ -14,14 +15,17 @@ adding it to the **Custom Login Info** text field.
For example, if you uploaded a specific logo, and added the following text:
.. image:: ../common/images/configure-awx-ui-logo-filled.png
:alt: Edit User Interface Settings form populated with custom text and logo.
The AWX login dialog would look like this:
.. image:: ../common/images/configure-awx-ui-angry-spud-login.png
:alt: AWX login screen with custom text and logo.
Selecting ``Revert`` will result in the appearance of the standard |at| logo.
.. image:: ../common/images/login-form.png
:alt: AWX login screen with default AWX logo.

View File

@@ -37,6 +37,7 @@ Additionally, post-upgrade, these settings are not be visible (or editable) from
AWX should still continue to fetch roles directly from public Galaxy even if galaxy.ansible.com is not the first credential in the list for the Organization. The global "Galaxy" settings are no longer configured at the jobs level, but at the Organization level in the User Interface. The Organization's Add and Edit windows have an optional **Credential** lookup field for credentials of ``kind=galaxy``.
.. image:: ../common/images/organizations-galaxy-credentials.png
:alt: Create a new Organization with Galaxy Credentials
It is very important to specify the order of these credentials as order sets precedence for the sync and lookup of the content.
For more information, see :ref:`ug_organizations_create`.
@@ -99,6 +100,7 @@ Access the Credentials from clicking **Credential Types** from the left navigati
|Credential Types - home empty|
.. |Credential Types - home empty| image:: ../common/images/credential-types-home-empty.png
:alt: Credential Types view without any credential types populated
If credential types have been created, this page displays a list of all existing and available Credential Types.
@@ -106,10 +108,12 @@ If credential types have been created, this page displays a list of all existing
|Credential Types - home with example credential types|
.. |Credential Types - home with example credential types| image:: ../common/images/credential-types-home-with-example-types.png
:alt: Credential Types list view with example credential types
To view more information about a credential type, click on its name or the Edit (|edit|) button from the **Actions** column.
.. |edit| image:: ../common/images/edit-button.png
:alt: Edit button
Each credential type displays its own unique configurations in the **Input Configuration** field and the **Injector Configuration** field, if applicable. Both YAML and JSON formats are supported in the configuration fields.
@@ -127,6 +131,7 @@ To create a new credential type:
|Create new credential type|
.. |Create new credential type| image:: ../common/images/credential-types-create-new.png
:alt: Create new credential type form
2. Enter the appropriate details in the **Name** and **Description** field.
@@ -302,6 +307,7 @@ An example of referencing multiple files in a custom credential template is as f
|New credential type|
.. |New credential type| image:: ../common/images/credential-types-new-listed.png
:alt: Credential Types list view with newly created credential type shown
Click |edit| to modify the credential type options under the Actions column.
@@ -310,6 +316,7 @@ Click |edit| to modify the credential type options under the Actions column.
In the Edit screen, you can modify the details or delete the credential. If the **Delete** button is grayed out, it is an indication that the credential type is being used by a credential, and you must delete the credential type from all the credentials that use it before you can delete it. Below is an example of such a message:
.. image:: ../common/images/credential-types-delete-confirmation.png
:alt: Credential type delete confirmation
7. Verify that the newly created credential type can be selected from the **Credential Type** selection window when creating a new credential:
@@ -317,5 +324,6 @@ Click |edit| to modify the credential type options under the Actions column.
|Verify new credential type|
.. |Verify new credential type| image:: ../common/images/credential-types-new-listed-verify.png
:alt: Newly created credential type selected from the credentials drop-down menu
For details on how to create a new credential, see :ref:`ug_credentials`.

View File

@@ -41,6 +41,7 @@ Click **Credentials** from the left navigation bar to access the Credentials pag
|Credentials - home with example credentials|
.. |Credentials - home with example credentials| image:: ../common/images/credentials-demo-edit-details.png
:alt: Credentials - home with example credentials
Credentials added to a Team are made available to all members of the Team, whereas credentials added to a User are only available to that specific User by default.
@@ -51,6 +52,7 @@ Clicking on the link for the **Demo Credential** takes you to the **Details** vi
|Credentials - home with demo credential details|
.. |Credentials - home with demo credential details| image:: ../common/images/credentials-home-with-demo-credential-details.png
:alt: Credentials - Demo credential details
Clicking the **Access** tab shows you users and teams associated with this Credential and their granted roles (owner, admin, auditor, etc.)
@@ -59,6 +61,7 @@ Clicking the **Access** tab shows you users and teams associated with this Crede
|Credentials - home with permissions credential details|
.. |Credentials - home with permissions credential details| image:: ../common/images/credentials-home-with-permissions-detail.png
:alt: Credentials - Access tab for Demo credential containing two users with their roles
.. note::
@@ -71,6 +74,7 @@ Clicking the **Job Templates** tab shows you the job templates associated with t
.. image:: ../common/images/credentials-home-with-jt-detail.png
:alt: Credentials - Job Template tab for Demo credential with example job template
You can click the **Add** button to assign this **Demo Credential** to additional job templates. Refer to the :ref:`ug_JobTemplates` section for further detail on creating a new job template.
@@ -89,6 +93,7 @@ To create a new credential:
|Create credential|
.. |Create credential| image:: ../common/images/credentials-create-credential.png
:alt: Create credential form
2. Enter the name for your new credential in the **Name** field.
@@ -102,6 +107,7 @@ To create a new credential:
4. Enter or select the credential type you want to create.
.. image:: ../common/images/credential-types-drop-down-menu.png
:alt: Credential types drop down menu
5. Enter the appropriate details depending on the type of credential selected, as described in the next section, :ref:`ug_credentials_cred_types`.
@@ -146,6 +152,7 @@ AWX uses the following environment variables for AWS credentials and are fields
|Credentials - create AWS credential|
.. |Credentials - create AWS credential| image:: ../common/images/credentials-create-aws-credential.png
:alt: Credentials - create AWS credential form
Traditional Amazon Web Services credentials consist of the AWS **Access Key** and **Secret Key**.
@@ -171,10 +178,12 @@ Selecting this credential allows AWX to access Galaxy or use a collection publis
|Credentials - create galaxy credential|
.. |Credentials - create galaxy credential| image:: ../common/images/credentials-create-galaxy-credential.png
:alt: Credentials - create galaxy credential form
To populate the **Galaxy Server URL** and the **Auth Server URL** fields, look for the corresponding fields of the |ah| section of the `Red Hat Hybrid Cloud Console <https://console.redhat.com/ansible/automation-hub/token>`_ labeled **Server URL** and **SSO URL**, respectively.
.. image:: ../common/images/hub-console-tokens-page.png
:alt: Hub console tokens page
Centrify Vault Credential Provider Lookup
@@ -194,6 +203,7 @@ Aside from specifying a name, the **Authentication URL** is the only required fi
|Credentials - create container credential|
.. |Credentials - create container credential| image:: ../common/images/credentials-create-container-credential.png
:alt: Credentials - create container credential form
CyberArk Central Credential Provider Lookup
@@ -218,6 +228,7 @@ Selecting this credential allows you to access GitHub using a Personal Access To
|Credentials - create GitHub credential|
.. |Credentials - create GitHub credential| image:: ../common/images/credentials-create-webhook-github-credential.png
:alt: Credentials - create GitHub credential form
GitHub PAT credentials require a value in the **Token** field, which is provided in your GitHub profile settings.
@@ -236,6 +247,7 @@ Selecting this credential allows you to access GitLab using a Personal Access To
|Credentials - create GitLab credential|
.. |Credentials - create GitLab credential| image:: ../common/images/credentials-create-webhook-gitlab-credential.png
:alt: Credentials - create GitLab credential form
GitLab PAT credentials require a value in the **Token** field, which is provided in your GitLab profile settings.
@@ -261,6 +273,7 @@ AWX uses the following environment variables for GCE credentials and are fields
|Credentials - create GCE credential|
.. |Credentials - create GCE credential| image:: ../common/images/credentials-create-gce-credential.png
:alt: Credentials - create GCE credential form
GCE credentials have the following inputs that are required:
@@ -270,6 +283,7 @@ GCE credentials have the following inputs that are required:
- **RSA Private Key**: The PEM file associated with the service account email.
.. |file-browser| image:: ../common/images/file-browser-button.png
:alt: File browser button
GPG Public Key
@@ -283,6 +297,7 @@ Selecting this credential type allows you to create a credential that gives AWX
|Credentials - create GPG credential|
.. |Credentials - create GPG credential| image:: ../common/images/credentials-create-gpg-credential.png
:alt: Credentials - create GPG credential form
See :ref:`ug_content_signing` for detailed information on how to generate a valid keypair, use the CLI tool to sign content, and how to add the public key to AWX.
@@ -308,6 +323,7 @@ Selecting this credential type enables synchronization of cloud inventory with R
|Credentials - create Insights credential|
.. |Credentials - create Insights credential| image:: ../common/images/credentials-create-insights-credential.png
:alt: Credentials - create Insights credential form
Insights credentials consist of the Insights **Username** and **Password**, which is the user's Red Hat Customer Portal Account username and password.
@@ -326,6 +342,7 @@ Machine/SSH credentials do not use environment variables. Instead, they pass the
|Credentials - create machine credential|
.. |Credentials - create machine credential| image:: ../common/images/credentials-create-machine-credential.png
:alt: Credentials - create machine credential form
Machine credentials have several attributes that may be configured:
@@ -336,6 +353,7 @@ Machine credentials have several attributes that may be configured:
- **Privilege Escalation Method**: Specifies the type of escalation privilege to assign to specific users. This is equivalent to specifying the ``--become-method=BECOME_METHOD`` parameter, where ``BECOME_METHOD`` could be any of the typical methods described below, or a custom method you've written. Begin entering the name of the method, and the appropriate name auto-populates.
.. image:: ../common/images/credentials-create-machine-credential-priv-escalation.png
:alt: Credentials - create machine credential privilege escalation drop-down menu
- empty selection: If a task/play has ``become`` set to ``yes`` and is used with an empty selection, then it will default to ``sudo``
@@ -381,6 +399,7 @@ Selecting this credential type enables synchronization of cloud inventory with M
|Credentials - create Azure credential|
.. |Credentials - create Azure credential| image:: ../common/images/credentials-create-azure-credential.png
:alt: Credentials - create Azure credential form
Microsoft Azure Resource Manager credentials have several attributes that may be configured:
@@ -449,6 +468,7 @@ AWX uses the following environment variables for Network credentials and are fie
|Credentials - create network credential|
.. |Credentials - create network credential| image:: ../common/images/credentials-create-network-credential.png
:alt: Credentials - create network credential form
Network credentials have several attributes that may be configured:
@@ -480,6 +500,7 @@ Selecting this credential type allows you to create instance groups that point t
|Credentials - create Containers credential|
.. |Credentials - create Containers credential| image:: ../common/images/credentials-create-containers-credential.png
:alt: Credentials - create Containers credential form
Container credentials have the following inputs:
@@ -503,6 +524,7 @@ Selecting this credential type enables synchronization of cloud inventory with O
|Credentials - create OpenStack credential|
.. |Credentials - create OpenStack credential| image:: ../common/images/credentials-create-openstack-credential.png
:alt: Credentials - create OpenStack credential form
OpenStack credentials have the following inputs that are required:
@@ -525,6 +547,7 @@ Red Hat Ansible Automation Platform
Selecting this credential allows you to access a Red Hat Ansible Automation Platform instance.
.. image:: ../common/images/credentials-create-at-credential.png
:alt: Credentials - create Red Hat Ansible Automation Platform credential form
The Red Hat Ansible Automation Platform credentials have the following inputs that are required:
@@ -551,6 +574,7 @@ AWX writes a Satellite configuration file based on fields prompted in the user i
|Credentials - create Red Hat Satellite 6 credential|
.. |Credentials - create Red Hat Satellite 6 credential| image:: ../common/images/credentials-create-rh-sat-credential.png
:alt: Credentials - create Red Hat Satellite 6 credential form
Satellite credentials have the following inputs that are required:
@@ -581,6 +605,7 @@ AWX uses the following environment variables for Red Hat Virtualization credenti
|Credentials - create rhv credential|
.. |Credentials - create rhv credential| image:: ../common/images/credentials-create-rhv-credential.png
:alt: Credentials - create Red Hat Virtualization credential form
RHV credentials have the following inputs that are required:
@@ -601,6 +626,7 @@ SCM (source control) credentials are used with Projects to clone and update loca
|Credentials - create SCM credential|
.. |Credentials - create SCM credential| image:: ../common/images/credentials-create-scm-credential.png
:alt: Credentials - create SCM credential form
Source Control credentials have several attributes that may be configured:
@@ -637,6 +663,7 @@ Selecting this credential type enables synchronization of inventory with Ansible
|Credentials - create Vault credential|
.. |Credentials - create Vault credential| image:: ../common/images/credentials-create-vault-credential.png
:alt: Credentials - create Vault credential form
Vault credentials require the **Vault Password** and an optional **Vault Identifier** if applying multi-Vault credentialing. For more information on AWX Multi-Vault support, refer to the :ref:`ag_multi_vault` section of the |ata|.
@@ -671,6 +698,7 @@ AWX uses the following environment variables for VMware vCenter credentials and
|Credentials - create VMware credential|
.. |Credentials - create VMware credential| image:: ../common/images/credentials-create-vmware-credential.png
:alt: Credentials - create VMware credential form
VMware credentials have the following inputs that are required:

View File

@@ -2,304 +2,8 @@
Execution Environment Setup Reference
=======================================
This section contains reference information associated with the definition of an |ee|.
You define the content of your execution environment in a YAML file. By default, this file is called ``execution-environment.yml``. This file tells Ansible Builder how to create the build instruction file (Containerfile for Podman, Dockerfile for Docker) and build context for your container image.
.. note::
This page documents the definition schema for Ansible Builder 3.x. If you are running an older version of Ansible Builder, you need an older schema version. Please consult older versions of the docs for more information. We recommend using version 3, which offers substantially more configurable options and functionality than previous versions.
.. _ref_ee_definition:
Execution environment definition
---------------------------------
A definition file is a ``.yml`` file that is required to build an image for an |ee|. Below is a sample version 3 |ee| definition schema file. To use Ansible Builder 3.x, you must specify the schema version. If your |ee| file does not specify ``version: 3``, Ansible Builder will assume you want version 1.
::
    ---
    version: 3
    build_arg_defaults:
      ANSIBLE_GALAXY_CLI_COLLECTION_OPTS: '--pre'
    dependencies:
      galaxy: requirements.yml
      python:
        - six
        - psutil
      system: bindep.txt
    images:
      base_image:
        name: registry.redhat.io/ansible-automation-platform-24/ee-minimal-rhel8:latest
    additional_build_files:
      - src: files/ansible.cfg
        dest: configs
    additional_build_steps:
      prepend_galaxy:
        - ADD _build/configs/ansible.cfg ~/.ansible.cfg
      prepend_final: |
        RUN whoami
        RUN cat /etc/os-release
      append_final:
        - RUN echo This is a post-install command!
        - RUN ls -la /etc
Configuration options
----------------------
You may use the configuration YAML keys listed here in your v3 |ee| definition file. The Ansible Builder 3.x execution environment definition file accepts seven top-level sections:
.. contents::
:local:
additional_build_files
~~~~~~~~~~~~~~~~~~~~~~~
Specifies files to be added to the build context directory. These can then be referenced or copied by ``additional_build_steps`` during any build stage. The format is a list of dictionary values, each with a ``src`` and ``dest`` key and value.
Each list item must be a dictionary containing the following (non-optional) keys:
**src**
Specifies the source file(s) to copy into the build context directory. This may either be an absolute path (e.g., ``/home/user/.ansible.cfg``), or a path that is relative to the |ee| file. Relative paths may be a glob expression matching one or more files (e.g., ``files/*.cfg``). Note that an absolute path may **not** include a glob expression. If ``src`` is a directory, the entire contents of that directory are copied to ``dest``.
**dest**
Specifies a subdirectory path underneath the ``_build`` subdirectory of the build context directory that should contain the source file(s) (e.g., ``files/configs``). This may not be an absolute path or contain ``..`` within the path. This directory will be created for you if it does not exist.
.. note::
When using an ``ansible.cfg`` file to pass a token and other settings for a private account to an |ah| server, listing the config file path here (as a string) will enable it to be included as a build argument in the initial phase of the build.
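For example, the following sketch (the source path and destination directory are illustrative) stages a local ``ansible.cfg`` so a later build step can copy it into the image:

::

    additional_build_files:
      # Copy files/ansible.cfg (relative to the definition file) into
      # the _build/configs/ subdirectory of the build context
      - src: files/ansible.cfg
        dest: configs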
additional_build_steps
~~~~~~~~~~~~~~~~~~~~~~~
Specifies custom build commands for any build phase. These commands will be inserted directly into the build instruction file for the container runtime (e.g., Containerfile or Dockerfile). The commands must conform to any rules required by the containerization tool.
You can add build steps before or after any stage of the image creation process. For example, if you need ``git`` to be installed before you install your dependencies, you can add a build step at the end of the base build stage.
Below are the valid keys for this section. Each supports either a multi-line string, or a list of strings.
**prepend_base**
Commands to insert before building of the base image.
**append_base**
Commands to insert after building of the base image.
**prepend_galaxy**
Commands to insert before building of the galaxy image.
**append_galaxy**
Commands to insert after building of the galaxy image.
**prepend_builder**
Commands to insert before building of the builder image.
**append_builder**
Commands to insert after building of the builder image.
**prepend_final**
Commands to insert before building of the final image.
**append_final**
Commands to insert after building of the final image.
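For example, the following sketch (the commands themselves are illustrative) installs ``git`` at the end of the base stage and runs sanity checks before the final stage; note both the list form and the multi-line string form:

::

    additional_build_steps:
      # List form: one instruction per list item
      append_base:
        - RUN dnf install -y git
      # Multi-line string form: inserted verbatim before the final stage
      prepend_final: |
        RUN whoami
        RUN cat /etc/os-release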
build_arg_defaults
~~~~~~~~~~~~~~~~~~~
Specifies default values for build args as a dictionary. This is an alternative to using the ``--build-arg`` CLI flag.
Build arguments used by ``ansible-builder`` are the following:
**ANSIBLE_GALAXY_CLI_COLLECTION_OPTS**
Allows the user to pass the ``--pre`` flag (or others) to enable the installation of pre-release collections.
**ANSIBLE_GALAXY_CLI_ROLE_OPTS**
This allows the user to pass any flags, such as ``--no-deps``, to the role installation.
**PKGMGR_PRESERVE_CACHE**
This controls how often the package manager cache is cleared during the image build process. If this value is not set, which is the default, the cache is cleared frequently. If it is set to the string ``always``, the cache is never cleared. Any other value forces the cache to be cleared only after the system dependencies are installed in the final build stage.
Ansible Builder hard-codes values given inside of ``build_arg_defaults`` into the build instruction file, so they will persist if you run your container build manually.
If you specify the same variable in the |ee| definition and at the command line with the CLI ``build-arg`` flag, the CLI value will take higher precedence (the CLI value will override the value in the |ee| definition).
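For example, given the ``build_arg_defaults`` shown in the sample definition above, a sketch of overriding the collection options for a single build from the CLI (the image tag is a placeholder):

::

    ansible-builder build --tag my-ee:latest \
      --build-arg 'ANSIBLE_GALAXY_CLI_COLLECTION_OPTS=--ignore-certs'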
.. _ref_collections_metadata:
dependencies
~~~~~~~~~~~~~
Specifies dependencies to install into the final image, including ``ansible-core``, ``ansible-runner``, Python packages, system packages, and Ansible Collections. Ansible Builder automatically installs dependencies for any Ansible Collections you install.
In general, you can use standard syntax to constrain package versions. Use the same syntax you would pass to ``dnf``, ``pip``, ``ansible-galaxy``, or any other package management utility. You can also define your packages or collections in separate files and reference those files in the ``dependencies`` section of your |ee| definition file.
The following keys are valid for this section:
**ansible_core**
The version of the ``ansible-core`` Python package to be installed. This value is a dictionary with a single key, ``package_pip``. The ``package_pip`` value is passed directly to pip for installation and can be in any format that pip supports. Below are some example values:
::
    ansible_core:
      package_pip: ansible-core

    ansible_core:
      package_pip: ansible-core==2.14.3

    ansible_core:
      package_pip: https://github.com/example_user/ansible/archive/refs/heads/ansible.tar.gz
**ansible_runner**
The version of the Ansible Runner Python package to be installed. This value is a dictionary with a single key, ``package_pip``. The ``package_pip`` value is passed directly to pip for installation and can be in any format that pip supports. Below are some example values:
::
    ansible_runner:
      package_pip: ansible-runner

    ansible_runner:
      package_pip: ansible-runner==2.3.2

    ansible_runner:
      package_pip: https://github.com/example_user/ansible-runner/archive/refs/heads/ansible-runner.tar.gz
**galaxy**
Ansible Collections to be installed from Galaxy. This may be a filename, a dictionary, or a multi-line string representation of an Ansible Galaxy ``requirements.yml`` file (see below for examples). Read more about the requirements file format in the `Galaxy user guide <https://docs.ansible.com/ansible/latest/galaxy/user_guide.html#install-multiple-collections-with-a-requirements-file>`_.
**python**
The Python installation requirements. This may either be a filename, or a list of requirements (see below for an example). Ansible Builder combines all the Python requirements files from all collections into a single file using the ``requirements-parser`` library. This library supports complex syntax, including references to other files. If multiple collections require the same *package name*, Ansible Builder combines them into a single entry and combines the constraints. Certain package names are specifically *ignored* by ``ansible-builder``, meaning that Ansible Builder does not include them in the combined file of Python dependencies, even if a collection lists them as dependencies. These include test packages and packages that provide Ansible itself. The full list can be found in ``EXCLUDE_REQUIREMENTS`` in ``src/ansible_builder/_target_scripts/introspect.py``. If you need to include one of these ignored package names, use the ``--user-pip`` option of the ``introspect`` command to list it in the user requirements file. Packages supplied this way are not processed against the list of excluded Python packages.
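For instance, a sketch of supplying such a user requirements file during introspection (the filename is illustrative):

::

    ansible-builder introspect --user-pip=user-requirements.txt ~/.ansible/collections/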
**python_interpreter**
A dictionary that defines the Python system package name to be installed by dnf (``package_system``) and/or a path to the Python interpreter to be used (``python_path``).
**system**
The system packages to be installed, in bindep format. This may either be a filename, or a list of requirements (see below for an example). For more information about bindep, refer to the `OpenDev documentation <https://docs.opendev.org/opendev/bindep/latest/readme.html>`_.
For system packages, use the ``bindep`` format to specify cross-platform requirements, so they can be installed by whichever package management system the execution environment uses. Collections should specify necessary requirements for ``[platform:rpm]``. Ansible Builder combines system package entries from multiple collections into a single file. Only requirements with *no* profiles (runtime requirements) are installed to the image. Entries from multiple collections which are outright duplicates of each other may be consolidated in the combined file.
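A short sketch of the bindep format (the package names are illustrative):

::

    # Installed only on RPM-based platforms
    libxml2-devel [platform:rpm]
    # Installed only on Debian-based platforms
    libxml2-dev [platform:dpkg]
    # Carries a profile, so it is not installed into the image at build time
    gcc [test]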
The following example uses filenames that contain the various dependencies:
::
    dependencies:
      python: requirements.txt
      system: bindep.txt
      galaxy: requirements.yml
      ansible_core:
        package_pip: ansible-core==2.14.2
      ansible_runner:
        package_pip: ansible-runner==2.3.1
      python_interpreter:
        package_system: "python310"
        python_path: "/usr/bin/python3.10"
And this example uses inline values:
::
    dependencies:
      python:
        - pywinrm
      system:
        - iputils [platform:rpm]
      galaxy:
        collections:
          - name: community.windows
          - name: ansible.utils
            version: 2.10.1
      ansible_core:
        package_pip: ansible-core==2.14.2
      ansible_runner:
        package_pip: ansible-runner==2.3.1
      python_interpreter:
        package_system: "python310"
        python_path: "/usr/bin/python3.10"
.. note::
If any of these dependency files (``requirements.txt``, ``bindep.txt``, and ``requirements.yml``) are in the ``build_ignore`` of the collection, the build will not work correctly.
Collection maintainers can verify that ansible-builder recognizes the requirements they expect by using the ``introspect`` command, for example:
::
    ansible-builder introspect --sanitize ~/.ansible/collections/
The ``--sanitize`` option reviews all of the collection requirements and removes duplicates. It also removes any Python requirements that should normally be excluded. Use the ``-v3`` option to ``introspect`` to see logging messages about requirements that are being excluded.
images
~~~~~~~
Specifies the base image to be used. At a minimum you **MUST** specify a source, image, and tag for the base image. The base image provides the operating system and may also provide some packages. We recommend using the standard ``host/namespace/container:tag`` syntax to specify images. You may use Podman or Docker shortcut syntax instead, but the full definition is more reliable and portable.
Valid keys for this section are:
**base_image**
A dictionary defining the parent image for the execution environment. A ``name`` key must be supplied with the container image to use. Use the ``signature_original_name`` key if the image is mirrored within your repository, but signed with the original image's signature key.
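A sketch of this section, assuming a mirrored base image (both registry names are placeholders):

::

    images:
      base_image:
        # The mirrored image that is actually pulled during the build
        name: registry.example.com/mirror/ee-minimal-rhel8:latest
        # The name the image was originally signed under
        signature_original_name: registry.redhat.io/ansible-automation-platform-24/ee-minimal-rhel8:latest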
Image verification
^^^^^^^^^^^^^^^^^^^
You can verify signed container images if you are using the ``podman`` container runtime. Set the ``--container-policy`` CLI option to control how this data is used in relation to a Podman ``policy.json`` file for container image signature validation.
- ``ignore_all`` policy: Generate a ``policy.json`` file in the build context directory where no signature validation is performed.
- ``system`` policy: Signature validation is performed using pre-existing ``policy.json`` files in standard system locations. ``ansible-builder`` assumes no responsibility for the content within these files, and the user has complete control over the content.
- ``signature_required`` policy: ``ansible-builder`` will use the container image definitions here to generate a ``policy.json`` file in the build context directory that will be used during the build to validate the images.
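For example, a sketch of requiring signature validation at build time (the keyring path is a placeholder):

::

    ansible-builder build --container-runtime podman \
      --container-policy signature_required \
      --container-keyring ~/.gnupg/pubring.kbx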
options
~~~~~~~~
A dictionary of keywords/options that can affect builder runtime functionality. Valid keys for this section are:
**container_init**
A dictionary with keys that allow for customization of the container ``ENTRYPOINT`` and ``CMD`` directives (and related behaviors). Customizing these behaviors is an advanced task, and may result in subtle, difficult-to-debug failures. As the provided defaults for this section control a number of intertwined behaviors, overriding any value will skip all remaining defaults in this dictionary. Valid keys are:
**cmd**
Literal value for the ``CMD`` Containerfile directive. The default value is ``["bash"]``.
**entrypoint**
Literal value for the ``ENTRYPOINT`` Containerfile directive. The default entrypoint behavior handles signal propagation to subprocesses, as well as attempting to ensure at runtime that the container user has a proper environment with a valid writeable home directory, represented in ``/etc/passwd``, with the ``HOME`` environment variable set to match. The default entrypoint script may emit warnings to ``stderr`` in cases where it is unable to suitably adjust the user runtime environment. This behavior can be ignored or elevated to a fatal error; consult the source for the ``entrypoint`` target script for more details. The default value is ``["/opt/builder/bin/entrypoint", "dumb-init"]``.
**package_pip**
Package to install via pip for entrypoint support. This package will be installed in the final build image. The default value is ``dumb-init==1.2.5``.
**package_manager_path**
A string with the path to the package manager (dnf or microdnf) to use. The default is ``/usr/bin/dnf``. This value will be used to install a Python interpreter, if specified in ``dependencies``, and during the build phase by the ``assemble`` script.
**skip_ansible_check**
This boolean value controls whether or not the check for an installation of Ansible and Ansible Runner is performed on the final image. Set this value to ``True`` to not perform this check. The default is ``False``.
**relax_passwd_permissions**
This boolean value controls whether the ``root`` group (GID 0) is explicitly granted write permission to ``/etc/passwd`` in the final container image. The default entrypoint script may attempt to update ``/etc/passwd`` under some container runtimes with dynamically created users to ensure a fully-functional POSIX user environment and home directory. Disabling this capability can cause failures of software features that require users to be listed in ``/etc/passwd`` with a valid and writeable home directory (e.g., ``async`` in ansible-core, and the ``~username`` shell expansion). The default is ``True``.
**workdir**
Default current working directory for new processes started under the final container image. Some container runtimes also use this value as ``HOME`` for dynamically-created users in the ``root`` (GID 0) group. When this value is specified, the directory will be created (if it doesn't already exist), set to ``root`` group ownership, and ``rwx`` group permissions recursively applied to it. The default value is ``/runner``.
**user**
This sets the username or UID to use as the default user for the final container image. The default value is ``1000``.
Example options section:
::
    options:
      container_init:
        package_pip: dumb-init>=1.2.5
        entrypoint: '["dumb-init"]'
        cmd: '["csh"]'
      package_manager_path: /usr/bin/microdnf
      relax_passwd_permissions: false
      skip_ansible_check: true
      workdir: /myworkdir
      user: bob
version
~~~~~~~~
An integer value that sets the schema version of the execution environment definition file. Defaults to ``1``. Must be ``3`` if you are using Ansible Builder 3.x.
For detailed information about the |ee| definition,
refer to the `Ansible Builder documentation <https://ansible.readthedocs.io/projects/builder/en/latest/definition/#execution-environment-definition>`_.
Default execution environment for AWX
--------------------------------------

View File

@@ -16,7 +16,7 @@ Execution Environments
Building an Execution Environment
---------------------------------
The `Getting started with Execution Environments guide <https://ansible.readthedocs.io/en/latest/getting_started_ee/index.html>`_ will give you a brief technology overview and show you how to build and test your first |ee| in a few easy steps.
Use an execution environment in jobs
------------------------------------
@@ -48,16 +48,19 @@ In order to use an |ee| in a job, a few components are required:
- **Registry credential**: If the image has a protected container registry, provide the credential to access it.
.. image:: ../common/images/ee-new-ee-form-filled.png
:alt: Create new Execution Environment form
4. Click **Save**.
Now your newly added |ee| is ready to be used in a job template. To add an |ee| to a job template, specify it in the **Execution Environment** field of the job template, as shown in the example below. For more information on setting up a job template, see :ref:`ug_JobTemplates` in the |atu|.
.. image:: ../common/images/job-template-with-example-ee-selected.png
:alt: Job template using newly created Execution Environment
Once you have added an |ee| to a job template, you can see those templates listed in the **Templates** tab of the |ee|:
.. image:: ../common/images/ee-details-templates-list.png
:alt: Templates tab of the Execution Environment showing one job associated with it
Execution environment mount options
@@ -82,7 +85,8 @@ If you encounter this error, or have upgraded from an older version of AWX, perf
2. In the **Paths to expose to isolated jobs** field of the Job Settings page, using the current example, expose the path as follows:
.. image:: ../common/images/settings-paths2expose-iso-jobs.png
:alt: Jobs Settings page showing Paths to expose to isolated jobs field with defaults
.. note::
The ``:O`` option is only supported for directories. It is highly recommended that you be as specific as possible, especially when specifying system paths. Mounting ``/etc`` or ``/usr`` directly has an impact that makes it difficult to troubleshoot.
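This UI field corresponds to the ``AWX_ISOLATION_SHOW_PATHS`` setting. A sketch of an entry that mounts a specific directory with the ``:O`` option (the path is illustrative):

::

    AWX_ISOLATION_SHOW_PATHS = [
        '/etc/pki/ca-trust:/etc/pki/ca-trust:O',
    ]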
@@ -99,10 +103,11 @@ This informs podman to run a command similar to the example below, where the con
To expose isolated paths in OpenShift or Kubernetes containers as HostPath, assume the following configuration:
.. image:: ../common/images/settings-paths2expose-iso-jobs-mount-containers.png
:alt: Jobs Settings page showing Paths to expose to isolated jobs field with assumed configuration and Expose host paths for Container Group toggle enabled
Use the **Expose host paths for Container Groups** toggle to enable it.
Once the playbook runs, the resulting Pod spec will display similar to the example below. Note the details of the ``volumeMounts`` and ``volumes`` sections.
.. image:: ../common/images/mount-containers-playbook-run-podspec.png
:alt: Pod spec for the playbook run showing volumeMounts and volumes details

View File

@@ -29,16 +29,19 @@ To create a new credential for use with Insights:
5. In the **Organization** field, optionally enter the name of the organization with which the credential is associated, or click the |search| button and select it from the pop-up window.
.. |search| image:: ../common/images/search-button.png
:alt: Search button to select the organization from a pop-up window
6. In the **Credential Type** field, enter **Insights** or select it from the drop-down list.
.. image:: ../common/images/credential-types-popup-window-insights.png
:alt: Dropdown menu for selecting the "Insights" Credential Type
7. Enter a valid Insights credential in the **Username** and **Password** fields.
|Credentials - create with demo insights credentials|
.. |Credentials - create with demo insights credentials| image:: ../common/images/insights-create-with-demo-credentials.png
:alt: Create new credential form showing example Insights credential
8. Click **Save** when done.
@@ -71,19 +74,21 @@ To create a new Insights project:
|Insights - create demo insights project form|
.. |Insights - create demo insights project form| image:: ../common/images/insights-create-project-insights-form.png
:alt: Form for creating a new Insights project with specific Insights-related details
6. Click **Save** when done.
All SCM/Project syncs occur automatically the first time you save a new project. However, if you want them to be updated to what is current in Insights, manually update the SCM-based project by clicking the |update| button under the project's available Actions.
.. |update| image:: ../common/images/update-button.png
:alt: Update button to manually refresh the SCM-based project
This process syncs your Insights project with your Insights account solution. Notice that the status dot beside the name of the project updates once the sync has run.
|Insights - demo insights project success|
.. |Insights - demo insights project success| image:: ../common/images/insights-create-project-insights-succeed.png
:alt: Projects list showing successfully created sample Insights project
Create Insights Inventory
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
@@ -118,11 +123,13 @@ Remediation of an Insights inventory allows AWX to run Insights playbooks with a
|Insights - maintenance plan template filled|
.. |Insights - maintenance plan template filled| image:: ../common/images/insights-create-new-job-template-maintenance-plan-filled.png
:alt: Form for creating a maintenance plan template for Insights remediation.
3. Click **Save** when done.
4. Click the |launch| icon to launch the job template.
.. |launch| image:: ../common/images/launch-button.png
:alt: Launch icon.
Once complete, the job results display in the Job Details page.

View File

@@ -11,6 +11,7 @@ An :term:`Instance Group` provides the ability to group instances in a clustered
|Instance Group policy example|
.. |Instance Group policy example| image:: ../common/images/instance-groups_list_view.png
:alt: Instance groups list view showing example instance groups and one with capacity levels
For more information about the policy or rules associated with instance groups, see the :ref:`ag_instance_groups` section of the |ata|.
@@ -34,6 +35,7 @@ To create a new instance group:
|IG - create new IG|
.. |IG - create new IG| image:: ../common/images/instance-group-create-new-ig.png
:alt: Create instance group form
3. Enter the appropriate details into the following fields:
@@ -57,11 +59,12 @@ To create a new instance group:
Once the instance group is successfully created, the **Details** tab of the newly created instance group remains, allowing you to review and edit your instance group information. This is the same screen that opens when the **Edit** (|edit-button|) button is clicked from the **Instance Groups** list view. You can also edit **Instances** and review **Jobs** associated with this instance group.
.. |edit-button| image:: ../common/images/edit-button.png
:alt: Edit button
|IG - example IG successfully created|
.. |IG - example IG successfully created| image:: ../common/images/instance-group-example-ig-successfully-created.png
:alt: Instance group details showing how to view instances and jobs associated with an instance group
Associate instances to an instance group
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -75,6 +78,7 @@ To associate instances to an instance group:
|IG - select instances|
.. |IG - select instances| image:: ../common/images/instance-group-assoc-instances.png
:alt: Associating an instance with an instance group
3. In the following example, the instances added to the instance group display along with information about their capacity.
@@ -83,6 +87,7 @@ This view also allows you to edit some key attributes associated with the instan
|IG - instances in IG callouts|
.. |IG - instances in IG callouts| image:: ../common/images/instance-group-instances-example-callouts.png
:alt: Edit attributes associated with instances in an instance group
View jobs associated with an instance group
@@ -93,6 +98,7 @@ To view the jobs associated with the instance group, click the **Jobs** tab of t
|IG - instances jobs|
.. |IG - instances jobs| image:: ../common/images/instance-group-jobs-list.png
:alt: Viewing jobs associated with an instance group
Each job displays the job status, ID, and name; type of job, time started and completed, who started the job; and applicable resources associated with it, such as template, inventory, project, |ee|, etc.

View File

@@ -3,10 +3,12 @@
Projects specify the branch, tag, or reference to use from source control in the ``scm_branch`` field. These are represented by the values specified in the Project Details fields as shown.
.. image:: ../common/images/projects-create-scm-project-branching-emphasized.png
:alt: Create New Project page with SCM branching options emphasized
Projects have the option to "Allow Branch Override". When checked, project admins can delegate branch selection to the job templates that use that project (requiring only project ``use_role``).
.. image:: ../common/images/projects-create-scm-project-branch-override-checked.png
:alt: Allow Branch Override checkbox option in Project selected
@@ -22,6 +24,7 @@ If **Clean** is checked, AWX discards modified files in its local copy of the re
.. _`Subversion`: https://docs.ansible.com/ansible/latest/collections/ansible/builtin/subversion_module.html#parameters
.. image:: ../common/images/projects-create-scm-project-clean-checked.png
:alt: Clean checkbox option in Project selected
Project revision behavior
@@ -32,6 +35,7 @@ is stored when updated, and jobs using that project will employ this revision. P
This revision is shown in the **Source Control Revision** field of the job and its respective project update.
.. image:: ../common/images/jobs-output-branch-override-example.png
:alt: Project's Source Control Revision value
Consequently, offline job runs are impossible for non-default branches. To be sure that a job is running a static version from source control, use tags or commit hashes. Project updates do not save the revision of all branches, only the project default branch.

View File

@@ -106,4 +106,5 @@ The instance field ``capacity_adjustment`` allows you to select how much of one
To view or edit the capacity in the user interface, select the **Instances** tab of the Instance Group.
.. image:: ../common/images/instance-group-instances-capacity-callouts.png
:alt: Instances tab of Instance Group showing sliders for capacity adjustment.
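The same value can also be set through the API by patching the instance's ``capacity_adjustment`` field (a decimal between 0 and 1); a sketch using ``curl``, with placeholder host, credentials, and instance ID:

::

    curl -u admin:password -X PATCH \
      -H 'Content-Type: application/json' \
      -d '{"capacity_adjustment": "0.5"}' \
      https://awx.example.com/api/v2/instances/1/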

View File

@@ -28,6 +28,7 @@ Consider the following when setting up job slices:
- When executed, a sliced job splits each inventory into a number of "slice size" chunks. It then queues jobs of ansible-playbook runs on each chunk of the appropriate inventory. The inventory fed into ansible-playbook is a pared-down version of the original inventory that only contains the hosts in that particular slice. Completed sliced jobs that display on the Jobs list are labeled accordingly, with the number of sliced jobs that have run:
.. image:: ../common/images/sliced-job-shown-jobs-list-view.png
:alt: Sliced job shown in Jobs list view
- These sliced jobs follow normal scheduling behavior (number of forks, queuing due to capacity, assignation to instance groups based on inventory mapping).
@@ -54,6 +55,7 @@ Job slice execution behavior
When jobs are sliced, they can run on any node and some may not run at the same time (insufficient capacity in the system, for example). When slice jobs are running, job details display the workflow and job slice(s) currently running, as well as a link to view their details individually.
.. image:: ../common/images/sliced-job-shown-jobs-output-view.png
:alt: Sliced job shown in Jobs output view
By default, job templates are not configured to execute simultaneously (``allow_simultaneous`` must be checked in the API or **Enable Concurrent Jobs** in the UI). Slicing overrides this behavior and implies ``allow_simultaneous`` even if that setting is unchecked. See :ref:`ug_JobTemplates` for information on how to specify this, as well as the number of job slices on your job template configuration.

View File

@@ -17,13 +17,16 @@ The **Templates** menu opens a list of the job templates that are currently avai
|Job templates - home with example job template|
.. |Job templates - home with example job template| image:: ../common/images/job-templates-home-with-example-job-template.png
:alt: Job templates - home with example job template
From this screen, you can launch (|launch|), edit (|edit|), and copy (|copy|) a job template. To delete a job template, you must select one or more templates and click the **Delete** button. Before deleting a job template, be sure it is not used in a workflow job template.
.. |edit| image:: ../common/images/edit-button.png
:alt: Edit button
.. |delete| image:: ../common/images/delete-button.png
:alt: Delete button
.. include:: ../common/work_items_deletion_warning.rst
@@ -33,6 +36,7 @@ From this screen, you can launch (|launch|), edit (|edit|), and copy (|copy|) a
Job templates can be used to build a workflow template. Templates that show the Workflow Visualizer (|wf-viz-icon|) icon next to them are workflow templates. Clicking the icon allows you to graphically build a workflow. Many parameters in a job template allow you to enable **Prompt on Launch**; these can be modified at the workflow level, and do not affect the values assigned at the job template level. For instructions, see the :ref:`ug_wf_editor` section.
.. |wf-viz-icon| image:: ../common/images/wf-viz-icon.png
:alt: Workflow Visualizer icon
Create a Job Template
-----------------------
@@ -153,10 +157,13 @@ To create a new job template:
- Yes
.. |search| image:: ../common/images/search-button.png
:alt: Search button
.. |x-circle| image:: ../common/images/x-delete-button.png
:alt: Delete button
.. |x| image:: ../common/images/x-button.png
:alt: X button
3. **Options**: Specify options for launching this template, if necessary.
@@ -170,6 +177,7 @@ To create a new job template:
If you enable webhooks, other fields display, prompting for additional information:
.. image:: ../common/images/job-templates-options-webhooks.png
:alt: Job templates - options - webhooks
- **Webhook Service**: Select which service to listen for webhooks from
- **Webhook URL**: Automatically populated with the URL for the webhook service to POST requests to.
@@ -184,16 +192,19 @@ To create a new job template:
- **Prevent Instance Group Fallback**: Check this option to allow only the instance groups listed in the **Instance Groups** field above to execute the job. If unchecked, all available instances in the execution pool will be used based on the hierarchy described in :ref:`ag_instance_groups_control_where_job_runs`. Click the |help| icon for more information.
.. |help| image:: ../common/images/tooltips-icon.png
:alt: Tooltip
|Job templates - create new job template|
.. |Job templates - create new job template| image:: ../common/images/job-templates-create-new-job-template.png
:alt: Job templates - create new job template
4. When you have completed configuring the details of the job template, click **Save**.
Saving the template does not exit the job template page but advances to the Job Template Details tab for viewing. After saving the template, you can click **Launch** to launch the job, or click **Edit** to add or change the attributes of the template, such as permissions, notifications, view completed jobs, and add a survey (if the job type is not a scan). You must first save the template prior to launching; otherwise, the **Launch** button remains grayed out.
.. image:: ../common/images/job-templates-job-template-details.png
:alt: Job templates - job template details
You can verify the template is saved when the newly created template appears on the Templates list view.
@@ -211,6 +222,7 @@ Work with Notifications
Clicking the **Notifications** tab allows you to review any notification integrations you have set up and their statuses, if they have run.
.. image:: ../common/images/job-template-completed-notifications-view.png
:alt: Job templates - completed notifications view
Use the toggles to enable or disable the notifications to use with your particular template. For more detail, see :ref:`ug_notifications_on_off`.
@@ -223,11 +235,13 @@ View Completed Jobs
The **Completed Jobs** tab provides the list of job templates that have run. Click **Expanded** to view details of each job, including its status, ID, and name; type of job, time started and completed, who started the job; and which template, inventory, project, and credential were used. You can filter the list of completed jobs using any of these criteria.
.. image:: ../common/images/job-template-completed-jobs-view.png
:alt: Job templates - completed jobs view
Sliced jobs that display on this list are labeled accordingly, with the number of sliced jobs that have run:
.. image:: ../common/images/sliced-job-shown-jobs-list-view.png
:alt: Sliced job shown in jobs list view
Scheduling
@@ -242,6 +256,7 @@ Access the schedules for a particular job template from the **Schedules** tab.
|Job Templates - schedule launch|
.. |Job Templates - schedule launch| image:: ../common/images/job-templates-schedules.png
:alt: Job Templates - schedule launch
Schedule a Job Template
@@ -316,6 +331,7 @@ To create a survey:
- **Default answer**: The default answer to the question. This value is pre-filled in the interface and is used if the answer is not provided by the user.
.. image:: ../common/images/job-template-create-survey.png
:alt: Job templates - create survey
3. Once you have entered the question information, click **Save** to add the question.
@@ -325,11 +341,13 @@ The survey question displays in the Survey list. For any question, you can click
|job-template-completed-survey|
.. |job-template-completed-survey| image:: ../common/images/job-template-completed-survey.png
:alt: Job templates - completed survey
If you have more than one survey question, use the **Edit Order** button to rearrange the order of the questions by clicking and dragging on the grid icon.
.. image:: ../common/images/job-template-rearrange-survey.png
:alt: Job templates - rearrange survey
4. To add more questions, click the **Add** button.
@@ -369,10 +387,12 @@ Launch a job template by any of the following ways:
- Access the job template list from the **Templates** menu on the left navigation bar or while in the Job Template Details view, scroll to the bottom to access the |launch| button from the list of templates.
.. image:: ../common/images/job-templates-home-with-example-job-template-launch.png
:alt: Job templates - home with example job template - launch
- While in the Job Template Details view of the job template you want to launch, click **Launch**.
.. |launch| image:: ../common/images/launch-button.png
:alt: Launch button
A job may require additional information to run. The following data may be requested at launch:
@@ -392,10 +412,12 @@ Below is an example job launch that prompts for Job Tags, and runs the example s
|job-launch-with-prompt-job-tags|
.. |job-launch-with-prompt-job-tags| image:: ../common/images/job-launch-with-prompt-at-launch-jobtags.png
:alt: Job launch with prompt job tags
|job-launch-with-prompt-survey|
.. |job-launch-with-prompt-survey| image:: ../common/images/job-launch-with-prompt-at-launch-survey.png
:alt: Job launch with prompt survey
.. note::
@@ -445,6 +467,7 @@ Upon launch, AWX automatically redirects the web browser to the Job Status page
When slice jobs are running, job lists display the workflow and job slices, as well as a link to view their details individually.
.. image:: ../common/images/sliced-job-shown-jobs-list-view.png
:alt: Sliced job shown in jobs list view
.. _ug_JobTemplates_bulk_api:
@@ -465,6 +488,7 @@ If you choose to copy Job Template, it **does not** copy any associated schedule
2. Click the |copy| button associated with the template you want to copy.
.. |copy| image:: ../common/images/copy-button.png
:alt: Copy button
The new template displays in the list of templates, with the name of the template from which you copied it and a timestamp appended.
@@ -517,6 +541,7 @@ The ``scan_files`` fact module is the only module that accepts parameters, passe
Scan job templates should enable ``become`` and use credentials for which ``become`` is a possibility. You can enable ``become`` by checking **Enable Privilege Escalation** in the Options field:
.. image:: ../common/images/job-templates-create-new-job-template-become.png
:alt: Job template with Privilege Escalation checked from the Options field.
Supported OSes for ``scan_facts.yml``
@@ -631,6 +656,7 @@ Fact Caching
AWX can store and retrieve facts on a per-host basis through an Ansible Fact Cache plugin. This behavior is configurable on a per-job-template basis. Fact caching is turned off by default but can be enabled to serve fact requests for all hosts in an inventory related to the job running. This allows you to use job templates with ``--limit`` while still having access to the entire inventory of host facts. A global timeout setting that the plugin enforces per host can be specified (in seconds) through the Jobs settings menu:
.. image:: ../common/images/configure-awx-jobs-fact-cache-timeout.png
:alt: Jobs Settings window showing the location of the Per-Host Ansible Fact Cache Timeout parameter from the Edit Details screen.
Upon launching a job that uses fact cache (``use_fact_cache=True``), AWX will store all ``ansible_facts`` associated with each host in the inventory associated with the job. The Ansible Fact Cache plugin that ships with AWX will only be enabled on jobs with fact cache enabled (``use_fact_cache=True``).
@@ -659,7 +685,8 @@ Fact caching saves a significant amount of time over running fact gathering. If
You can choose to use cached facts in your job by enabling it in the **Options** field of the Job Templates window.
.. image:: ../common/images/job-templates-options-use-factcache.png
:alt: Job templates - options - use factcache
To clear facts, you need to run the Ansible ``clear_facts`` `meta task`_. Below is an example playbook that uses the Ansible ``clear_facts`` meta task.
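A minimal sketch of such a playbook (the hosts pattern is illustrative):

::

    - hosts: all
      gather_facts: false
      tasks:
        # Remove cached facts for all currently targeted hosts
        - name: Clear gathered facts
          meta: clear_facts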
@@ -836,6 +863,7 @@ To enable callbacks, check the *Provisioning Callbacks* checkbox in the Job Temp
If you intend to use AWX's provisioning callback feature with a dynamic inventory, Update on Launch should be set for the inventory group used in the Job Template.
.. image:: ../common/images/provisioning-callbacks-config.png
:alt: Provisioning callbacks config
Callbacks also require a Host Config Key, to ensure that foreign hosts with the URL cannot request configuration. Please provide a custom value for Host Config Key. The host key may be reused across multiple hosts to apply this job template against multiple hosts. Should you wish to control what hosts are able to request configuration, the key may be changed at any time.
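A host can then request its configuration with a simple POST to the job template's callback endpoint; a sketch (the server URL, template ID, and key are placeholders):

::

    curl -s --data "host_config_key=6a8ec9..." \
      https://<awx-server>/api/v2/job_templates/7/callback/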
@@ -978,6 +1006,7 @@ The following table notes the behavior (hierarchy) of variable precedence in AWX
**AWX Variable Precedence Hierarchy (last listed wins)**
.. image:: ../common/images/Architecture-AWX_Variable_Precedence_Hierarchy.png
:alt: AWX Variable Precedence Hierarchy
Relaunching Job Templates

View File

@@ -14,17 +14,20 @@ The User Interface offers a friendly graphical framework for your IT orchestrati
The new AWX User Interface is available for tech preview and is subject to change in a future release. To preview the new UI, click the **Enable Preview of New User Interface** toggle to **On** from the Miscellaneous System option of the Settings menu.
.. image:: ../common/images/configure-awx-system-misc-preview-newui.png
:alt: Enabling preview of new user interface in the Miscellaneous System option of the Settings menu.
After saving, log out and log back in to access the new UI from the preview banner. To return to the current UI, click the link on the top banner where indicated.
.. image:: ../common/images/ug-dashboard-preview-banner.png
:alt: Tech preview banner to view the new user interface.
Across the top-right side of the interface, you can access your user profile, the About page, view related documentation, and log out. Right below these options, you can view the activity stream for that user by clicking on the Activity Stream |activitystream| button.
.. |activitystream| image:: ../common/images/activitystream.png
:alt: Activity stream icon.
.. image:: ../common/images/ug-dashboard-top-nav.png
:alt: Main screen with arrow showing where the activity stream icon resides on the Dashboard.
@@ -57,7 +60,7 @@ Dashboard view
The **Dashboard** view begins with a summary of your hosts, inventories, and projects. Each of these is linked to the corresponding objects for easy access.
.. image:: ../common/images/ug-dashboard-topsummary.png
:alt: Dashboard showing a summary of your hosts, inventories, and projects; and job run statuses.
On the main Dashboard screen, a summary appears listing your current **Job Status**. The **Job Status** graph displays the number of successful and failed jobs over a specified time period. You can choose to limit the job types that are viewed, and to change the time horizon of the graph.
@@ -66,11 +69,12 @@ Also available for view are summaries of **Recent Jobs** and **Recent Templates*
The **Recent Jobs** section displays which jobs were most recently run, their status, and the time at which they were run.
.. image:: ../common/images/ug-dashboard-recent-jobs.png
:alt: Summary of the most recently used jobs
The **Recent Templates** section of this display shows a summary of the most recently used templates. You can also access this summary by clicking **Templates** from the left navigation bar.
.. image:: ../common/images/ug-dashboard-recent-templates.png
:alt: Summary of the most recently used templates
.. note::
@@ -83,6 +87,7 @@ Jobs view
Access the **Jobs** view by clicking **Jobs** from the left navigation bar. This view shows all the jobs that have run, including projects, templates, management jobs, SCM updates, playbook runs, etc.
.. image:: ../common/images/ug-dashboard-jobs-view.png
:alt: Jobs view showing jobs that have run, including projects, templates, management jobs, SCM updates, playbook runs, etc.
Schedules view
@@ -92,6 +97,7 @@ Access the Schedules view by clicking **Schedules** from the left navigation bar
.. image:: ../common/images/ug-dashboard-schedule-view.png
:alt: Schedules view showing all the scheduled jobs that are configured.
.. _ug_activitystreams:
@@ -110,15 +116,18 @@ Most screens have an Activity Stream (|activitystream|) button. Clicking this br
|Users - Activity Stream|
.. |Users - Activity Stream| image:: ../common/images/users-activity-stream.png
:alt: Summary of the recent activity on Activity Stream dashboard.
An Activity Stream shows all changes for a particular object. For each change, the Activity Stream shows the time of the event, the user that
initiated the event, and the action. The information displayed varies depending on the type of event. Clicking on the Examine (|examine|) button shows the event log for the change.
.. |examine| image:: ../common/images/examine-button.png
:alt: Examine button
|event log|
.. |event log| image:: ../common/images/activity-stream-event-log.png
:alt: Example of an event log of an Activity Stream instance.
The Activity Stream can be filtered by the initiating user (or the
system, if it was system initiated), and by any related object,

View File

@@ -68,6 +68,7 @@ To create a Notification Template:
2. Click the **Add** button.
.. image:: ../common/images/notifications-template-add-new.png
:alt: Create new notification template
3. Enter the name of the notification and a description in their respective fields, and specify the organization (required) it belongs to.
@@ -115,6 +116,7 @@ You must provide the following details to setup an email notification:
- Timeout (in seconds): allows you to specify the amount of time, up to 120 seconds, that AWX may attempt connecting to the email server before giving up.
.. image:: ../common/images/notification-template-email.png
:alt: Email notification template
Grafana
------------
@@ -136,7 +138,7 @@ The other options of note are:
- Disable SSL Verification: SSL verification is on by default, but you can choose to turn off verification of the authenticity of the target's certificate. Environments that use internal or private CAs should select this option to disable verification.
.. image:: ../common/images/notification-template-grafana.png
:alt: Grafana notification template
IRC
-----
@@ -154,6 +156,7 @@ Connectivity information is straightforward:
.. image:: ../common/images/notification-template-irc.png
:alt: IRC notification template
Mattermost
------------
@@ -167,6 +170,7 @@ The Mattermost notification type provides a simple interface to Mattermost's mes
- Disable SSL Verification: Turns off verification of the authenticity of the target's certificate. Environments that use internal or private CAs should select this option to disable verification.
.. image:: ../common/images/notification-template-mattermost.png
:alt: Mattermost notification template
PagerDuty
@@ -182,6 +186,8 @@ PagerDuty is a fairly straightforward integration. First, create an API Key in t
- Client Identifier: This will be sent along with the alert content to the PagerDuty service to help identify the service that is using the API key/service. This is helpful if multiple integrations are using the same API key and service.
.. image:: ../common/images/notification-template-pagerduty.png
:alt: PagerDuty notification template
Rocket.Chat
-------------
@@ -194,6 +200,7 @@ The Rocket.Chat notification type provides an interface to Rocket.Chat's collabo
- Disable SSL Verification: Turns off verification of the authenticity of the target's certificate. Environments that use internal or private CAs should select this option to disable verification.
.. image:: ../common/images/notification-template-rocketchat.png
:alt: Rocket.Chat notification template
Slack
@@ -212,6 +219,7 @@ Once you have a bot/app set up, you must navigate to "Your Apps", click on the n
You must also invite the notification bot to join the channel(s) in question in Slack. Note that private messages are not supported.
.. image:: ../common/images/notification-template-slack.png
:alt: Slack notification template
Twilio
@@ -231,6 +239,8 @@ To setup Twilio, provide the following details:
- Account SID
.. image:: ../common/images/notification-template-twilio.png
:alt: Twilio notification template
Webhook
@@ -257,6 +267,8 @@ The parameters for configuring webhooks are:
.. image:: ../common/images/notification-template-webhook.png
:alt: Webhook notification template
Webhook payloads
@@ -333,6 +345,8 @@ Create custom notifications
You can :ref:`customize the text content <ir_notifications_reference>` of each of the :ref:`ug_notifications_types` by enabling the **Customize Messages** portion at the bottom of the notifications form using the toggle button.
.. image:: ../common/images/notification-template-customize.png
:alt: Custom notification template
You can provide a custom message for various job events:
@@ -347,10 +361,12 @@ You can provide a custom message for various job events:
The message forms vary depending on the type of notification you are configuring. For example, messages for email and PagerDuty notifications have the appearance of a typical email form with a subject and body, in which case, AWX displays the fields as **Message** and **Message Body**. Other notification types only expect a **Message** for each type of event:
.. image:: ../common/images/notification-template-customize-simple.png
:alt: Custom notification template example
The **Message** fields are pre-populated with a template containing a top-level variable, ``job``, coupled with an attribute, such as ``id`` or ``name``. Templates are enclosed in curly braces and may draw from a fixed set of fields provided by AWX, as shown in the pre-populated **Messages** fields.
.. image:: ../common/images/notification-template-customize-simple-syntax.png
:alt: Custom notification template example syntax
This pre-populated field suggests commonly displayed messages to a recipient who is notified of an event. You can, however, customize these messages with different criteria by adding your own attribute(s) for the job as needed. Custom notification messages are rendered using Jinja - the same templating engine used by Ansible playbooks.
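For example, a sketch of a customized message that draws on those fields (the attribute names follow the pre-populated defaults):

::

    {{ job_friendly_name }} #{{ job.id }} '{{ job.name }}' finished with status {{ job.status }}: {{ url }}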
@@ -474,8 +490,8 @@ If you create a notification template that uses invalid syntax or references unu
If you save the notifications template without editing the custom message (or edit and revert back to the default values), the **Details** screen assumes the defaults and will not display the custom message tables. If you edit and save any of the values, the entire table displays in the **Details** screen.
.. image:: ../common/images/notifications-with-without-messages.png
:alt: Notification template with and without a custom message
.. _ug_notifications_on_off:
@@ -498,11 +514,12 @@ You can enable notifications on job start, job success, and job failure, or any
- Organizations
.. image:: ../common/images/projects-notifications-example-list.png
:alt: List of project notifications
For workflow templates that have approval nodes, in addition to *Start*, *Success*, and *Failure*, you can enable or disable certain approval-related events:
.. image:: ../common/images/wf-template-completed-notifications-view.png
:alt: List of project notifications with approval nodes option
Refer to :ref:`ug_wf_approval_nodes` for additional detail on working with these types of nodes.
@@ -516,6 +533,7 @@ Configure the ``host`` hostname for notifications
In the :ref:`System Settings <configure_awx_system>`, you can replace the default value in the **Base URL of the service** field with your preferred hostname to change the notification hostname.
.. image:: ../common/images/configure-awx-system-misc-baseurl.png
:alt: Configuring base URL with preferred hostname
Refreshing your license also changes the notification hostname. New installations of AWX should not have to set the hostname for notifications.

View File

@@ -11,6 +11,7 @@ An :term:`Organization` is a logical collection of **Users**, **Teams**, **Proje
|awx hierarchy|
.. |awx hierarchy| image:: ../common/images/AWXHierarchy.png
:alt: AWX Hierarchy
Access the Organizations page by clicking **Organizations** from the left navigation bar. The Organizations page displays all of the existing organizations for your installation. Organizations can be searched by **Name** or **Description**. Modify and remove organizations using the **Edit** and **Delete** buttons.
@@ -20,10 +21,12 @@ Access the Organizations page by clicking **Organizations** from the left naviga
|Organizations - home showing example organization|
.. |Organizations - home showing example organization| image:: ../common/images/organizations-home-showing-example-organization.png
:alt: Example of organizations home page
From this list view, you can edit the details of an organization (|edit button|) from the **Actions** menu.
.. |edit button| image:: ../common/images/edit-button.png
:alt: Edit button
.. _ug_organizations_create:
@@ -35,6 +38,7 @@ Creating a New Organization
|Organizations - new organization form|
.. |Organizations - new organization form| image:: ../common/images/organizations-new-organization-form.png
:alt: Create new organization form
2. An organization has several attributes that may be configured:
@@ -51,7 +55,7 @@ Once created, AWX displays the Organization details, and allows for the managing
|Organizations - show record for example organization|
.. |Organizations - show record for example organization| image:: ../common/images/organizations-show-record-for-example-organization.png
:alt: Organization details tab with edit, delete options
From the **Details** tab, you can edit or delete the organization.
@@ -73,6 +77,7 @@ Clicking on **Access** (beside **Details** when viewing your organization), disp
|Organizations - show users for example organization|
.. |Organizations - show users for example organization| image:: ../common/images/organizations-show-users-permissions-organization.png
:alt: Organization Access tab with user permissions
You can manage user membership for this Organization here, or on a per-user basis from the Users page by clicking **Users** from the left navigation bar. Organizations have a unique set of roles not described here. You can assign specific users certain levels of permissions within your organization, or allow them to act as an admin for a particular resource. Refer to :ref:`rbac-ug` for more information.
@@ -102,12 +107,14 @@ Work with Notifications
Clicking the **Notifications** tab allows you to review any notification integrations you have set up.
.. image:: ../common/images/organizations-notifications-samples-list.png
:alt: List of sample organization notifications
Use the toggles to enable or disable the notifications to use with your particular organization. For more detail, see :ref:`ug_notifications_on_off`.
If no notifications have been set up, you must create them from the **Notifications** option on the left navigation bar.
.. image:: ../common/images/organization-notifications-empty.png
:alt: Empty organization notifications list
Refer to :ref:`ug_notifications_types` for additional details on configuring various notification types.

View File

@@ -24,6 +24,7 @@ Assuming that the repository has already been configured for signing and verific
4. When the user syncs the project, AWX (already configured, in this scenario) pulls in the new changes, checks that the public key associated with the project in AWX matches the private key that the checksum manifest was signed with (this prevents tampering with the checksum manifest itself), then re-calculates checksums of each file in the manifest to ensure that the checksum matches (and thus that no file has changed). It also checks that all files are accounted for: they must have been either included in, or excluded from, the ``MANIFEST.in`` file discussed below; if files have been added or removed unexpectedly, verification will fail.
.. image:: ../common/images/content-sign-diagram.png
:alt: Content signing process diagram
Prerequisites
@@ -68,16 +69,19 @@ In order to use the GPG key for content signing and validation in AWX, you must
5. Click **Save** when done.
.. image:: ../common/images/credentials-gpg-details.png
:alt: Example GPG credential details
This credential can now be selected in :ref:`projects <ug_projects_add>`, and content verification will automatically take place on future project syncs.
.. image:: ../common/images/project-create-with-gpg-creds.png
:alt: Create project with example GPG credentials
.. note::
Use the project cache SCM timeout to control how often you want AWX to re-validate the signed content. When a project is configured to update on launch (of any job template configured to use that project), you can enable the cache timeout setting, which tells it to update after N seconds have passed since the last update. If validation is running too frequently, you can slow down how often project updates occur by specifying the time in the **Cache Timeout** field of the Option Details pane of the project.
.. image:: ../common/images/project-update-launch-cache-timeout.png
:alt: Checked Update Revision on Launch option with Cache Timeout value specified from the Create new project page

View File

@@ -19,18 +19,24 @@ The Projects page displays the list of the projects that are currently available
|Projects - home with example project|
.. |Projects - home with example project| image:: ../common/images/projects-list-all.png
:alt: Compact Projects list view with two projects shown.
.. image:: ../common/images/projects-list-all-expanded.png
:alt: Projects list view showing arrows used to expand and collapse projects in the view.
For each project listed, you can get the latest SCM revision (|refresh|), edit the project (|edit|), or copy the project attributes (|copy|), using the respective icons next to each project. Projects are allowed to be updated while a related job is running. In cases where you have a big project (around 10 GB), disk space on ``/tmp`` may be an issue.
.. |edit-icon| image:: ../common/images/edit-button.png
:alt: edit button
.. |copy| image:: ../common/images/copy-button.png
:alt: copy button
.. |refresh| image:: ../common/images/refresh-gray.png
:alt: Refresh button
.. |edit| image:: ../common/images/edit-button.png
:alt: edit button
**Status** indicates the state of the project and may be one of the following (note that you can also filter your view by specific status types):
@@ -72,6 +78,7 @@ To create a new project:
|Projects - create new project|
.. |Projects - create new project| image:: ../common/images/projects-create-new-project.png
:alt: Create New Project form
2. Enter the appropriate details into the following required fields:
@@ -119,6 +126,7 @@ If you have trouble adding a project path, check the permissions and SELinux con
Correct this issue by creating the appropriate playbook directories and checking out playbooks from your SCM or otherwise copying playbooks into the appropriate playbook directories.
.. |Projects - create new warning| image:: ../common/images/projects-create-manual-warning.png
:alt: Create New Project form showing warning associated with selecting Source Control Credential Type of Manual
.. _ug_projects_scm_types:
@@ -147,12 +155,14 @@ To configure playbooks to use source control, in the Project **Details** tab:
|Projects - create SCM project|
.. |Projects - create SCM project| image:: ../common/images/projects-create-scm-project.png
:alt: Create New Project form for Git Source Control Credential Type.
2. Enter the appropriate details into the following fields:
- **SCM URL** - See an example in the tooltip |tooltip|.
.. |tooltip| image:: ../common/images/tooltips-icon.png
:alt: tooltips icon
- **SCM Branch/Tag/Commit** - Optionally enter the SCM branch, tags, commit hashes, arbitrary refs, or revision number (if applicable) from the source control (Git or Subversion) to checkout. Some commit hashes and refs may not be available unless you also provide a custom refspec in the next field. If left blank, the default is HEAD which is the last checked out Branch/Tag/Commit for this project.
- **SCM Refspec** - This field is specific to git source control; only advanced users who are familiar and comfortable with git should use it to specify which references to download from the remote repository. For more detail, see :ref:`job branch overriding <ug_job_branching>`.
@@ -168,6 +178,7 @@ To configure playbooks to use source control, in the Project **Details** tab:
- **Allow Branch Override** - Allows a job template or an inventory source that uses this project to launch with a specified SCM branch or revision other than the project's. For more detail, see :ref:`job branch overriding <ug_job_branching>`.
.. image:: ../common/images/projects-create-scm-project-branch-override-checked.png
:alt: create scm project branch override checked
4. Click **Save** to save your project.
@@ -198,6 +209,7 @@ To configure playbooks to use Red Hat Insights, in the Project **Details** tab:
- **Update Revision on Launch** - Updates the revision of the project to the current revision in the remote source control, as well as caching the roles directory from :ref:`Galaxy <ug_galaxy>` or :ref:`Collections <ug_collections>`. AWX ensures that the local revision matches and that the roles and collections are up-to-date with the last update. Also, to avoid job overflows if jobs are spawned faster than the project can sync, selecting this allows you to configure a Cache Timeout to cache prior project syncs for a certain number of seconds.
.. image:: ../common/images/projects-create-scm-insights.png
:alt: Create New Project form for Red Hat Insights Source Control Credential Type.
3. Click **Save** to save your project.
@@ -230,6 +242,7 @@ To configure playbooks to use a remote archive, in the Project **Details** tab:
- **Allow Branch Override** - Not recommended, as this option allows a job template that uses this project to launch with a specified SCM branch or revision other than the project's.
.. image:: ../common/images/projects-create-scm-rm-archive.png
:alt: Create New Project form for Remote Archive Source Control Credential Type.
.. note::
Since this SCM type is intended to support the concept of unchanging artifacts, it is advisable to disable Galaxy integration (for roles, at minimum).
@@ -253,15 +266,18 @@ Updating projects from source control
|projects - list all|
.. |projects - list all| image:: ../common/images/projects-list-all.png
:alt: Projects list view with the latest revision information of the projects that synced.
2. Click on project's status under the **Status** column to get further details about the update process.
.. image:: ../common/images/projects-list-status-more.png
:alt: Projects list view with example project with a successful status.
|Project - update status|
.. |Project - update status| image:: ../common/images/projects-update-status.png
:alt: Example project with real-time standard output details.
Work with Permissions
@@ -276,6 +292,7 @@ You can access the project permissions via the **Access** tab next to the **Deta
|Projects - permissions list for example project|
.. |Projects - permissions list for example project| image:: ../common/images/projects-permissions-example.png
:alt: Access tab of a sample project that shows list of users who have permissions to this project.
Add Permissions
@@ -290,12 +307,14 @@ Work with Notifications
Clicking the **Notifications** tab allows you to review any notification integrations you have set up.
.. image:: ../common/images/projects-notifications-example-list.png
:alt: List of notifications configured for this project.
Use the toggles to enable or disable the notifications to use with your particular project. For more detail, see :ref:`ug_notifications_on_off`.
If no notifications have been set up, click **Notifications** from the left navigation bar to create a new notification.
.. image:: ../common/images/project-notifications-empty.png
:alt: Notifications Templates page with no notification templates found.
Refer to :ref:`ug_notifications_types` for additional details on configuring various notification types.
@@ -306,14 +325,17 @@ Work with Job Templates
Clicking on **Job Templates** allows you to add and review any job templates or workflow templates associated with this project.
.. image:: ../common/images/projects-templates-example-list.png
:alt: List of job templates associated with this project.
Click on the recent jobs that ran using that template to see their details and other useful information. You can sort this list by various criteria, and perform a search to filter the templates of interest.
.. image:: ../common/images/projects-templates-search-dropdown.png
:alt: Job Templates tab of the project showing an example drop-down menu that can be used to filter your search.
From this view, you can also launch (|launch|), edit (|edit|), or copy (|copy|) the template configuration.
.. |launch| image:: ../common/images/launch-button.png
:alt: launch button
Work with Schedules
@@ -326,6 +348,7 @@ Work with Schedules
Clicking on **Schedules** allows you to review any schedules set up for this project.
.. image:: ../common/images/generic-schedules-list-configured.png
:alt: List of configured schedules that may be used with this project.
Schedule a Project
@@ -358,12 +381,14 @@ At the end of a Project update, AWX searches for a file called ``requirements.ym
This file allows you to reference Galaxy roles or roles within other repositories which can be checked out in conjunction with your own project. The addition of this Ansible Galaxy support eliminates the need to create git submodules for achieving this result. Given that SCM projects (along with roles/collections) are pulled into and executed from a private job environment, a <private job directory> specific to the project within ``/tmp`` is created by default. However, you can specify another **Job Execution Path** based on your environment in the Jobs Settings tab of the Settings window:
.. image:: ../common/images/configure-awx-jobs-execution-path.png
:alt: Job Settings page showing where to configure the Job execution path.
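Returning to ``requirements.yml`` itself: for reference, a minimal roles file might look like the following sketch (role names and URLs here are illustrative, not taken from this guide):

```yaml
# roles/requirements.yml - pull one role from Ansible Galaxy and one from a
# separate git repository.
- src: geerlingguy.nginx
  version: "3.1.0"
- src: https://github.com/example/some-role.git
  scm: git
  version: main
  name: some_role
```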
The cache directory is a subdirectory inside the global projects folder. The content may be copied from the cache location to the ``<job private directory>/requirements_roles`` location.
By default, AWX has a system-wide setting that allows roles to be dynamically downloaded from the ``roles/requirements.yml`` file for SCM projects. You may turn off this setting in the **Jobs settings** screen of the Settings menu by switching the **Enable Role Download** toggle button to **OFF**.
.. image:: ../common/images/configure-awx-jobs-download-roles.png
:alt: Job Settings page showing the option to Enable Role Download.
Whenever a project sync runs, AWX determines if the project source and any roles from Galaxy and/or Collections are out of date with the project. Project updates will download the roles as part of the update.
@@ -377,6 +402,7 @@ In short, jobs would download the most recent roles before every job run. Roles
|update-on-launch|
.. |update-on-launch| image:: ../common/images/projects-scm-update-options-update-on-launch-checked.png
:alt: SCM update options Update Revision on Launch checked.
.. end reused section
@@ -405,6 +431,7 @@ In the User Interface, you can configure these settings in the Jobs settings win
.. image:: ../common/images/configure-awx-jobs-path-to-expose.png
:alt: Job Settings page showing example paths to expose to isolated jobs.
.. _ug_collections:
@@ -421,6 +448,7 @@ AWX supports project-specific `Ansible collections <https://docs.ansible.com/ans
By default, AWX has a system-wide setting that allows collections to be dynamically downloaded from the ``collections/requirements.yml`` file for SCM projects. You may turn off this setting in the **Jobs settings** tab of the Settings menu by switching the **Enable Collections Download** toggle button to **OFF**.
.. image:: ../common/images/configure-awx-jobs-download-collections.png
:alt: Job Settings page showing where to enable collection(s) download.
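For reference, a minimal ``collections/requirements.yml`` might look like the following sketch (collection names illustrative):

```yaml
# collections/requirements.yml - collections AWX will install for jobs that
# use this project.
collections:
  - name: community.general
    version: ">=6.0.0"
  - name: ansible.posix
```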
Roles and collections are locally cached for performance reasons, and you will need to select **Update Revision on Launch** in the project SCM Update Options to ensure this:
@@ -439,6 +467,7 @@ Before AWX can use |ah| as the default source for collections content, you need
2. Click the copy icon to copy the API token to the clipboard.
.. image:: ../common/images/projects-ah-loaded-token-shown.png
:alt: Connect to Hub page showing where to copy the offline token.
3. To use the public |ah|, create an |ah| credential using the copied token and pointing to the URLs shown in the **Server URL** and **SSO URL** fields of the token page:
@@ -449,15 +478,19 @@ Before AWX can use |ah| as the default source for collections content, you need
4. To use a private |ah|, create an |ah| credential using a token retrieved from the Repo Management dashboard of your local |ah| and pointing to the published repo URL as shown:
.. image:: ../common/images/projects-ah-repo-mgmt-get-token.png
:alt: The Repo Management dashboard of your local Automation Hub.
.. image:: ../common/images/projects-ah-repo-mgmt-repos-published.png
:alt: The Get token button next to the published repo URL in the Repo Management dashboard of your local Automation Hub.
You can create different repos with different namespaces/collections in them, but for each repo in |ah| you need to create a different |ah| credential. Copy the **Ansible CLI URL** from the |ah| UI in the format of ``https://$<hub_url>/api/galaxy/content/<repo you want to pull from>`` into the **Galaxy Server URL** field of the *Create Credential* form:
.. image:: ../common/images/projects-create-ah-credential.png
:alt: Create New Credential form for Ansible Galaxy/Automation Hub API Token Credential Type.
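As a rough sketch, the values such a credential carries might look like the following; the hostnames and repo name are illustrative, and the field keys (``url``, ``auth_url``, ``token``) are an assumption mirroring the form labels:

```yaml
# Hypothetical Automation Hub credential values (not taken from this guide).
credential_type: Ansible Galaxy/Automation Hub API Token
inputs:
  url: https://hub.example.com/api/galaxy/content/published/        # Galaxy Server URL
  auth_url: https://sso.example.com/protocol/openid-connect/token   # SSO URL (public hub only)
  token: "<API token copied from the hub>"
```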
5. Navigate to the organization for which you want to be able to sync content from |ah| and add the new |ah| credential to the organization. This step allows you to associate each organization with the |ah| credential (i.e. repo) that you want to be able to use content from.
.. image:: ../common/images/projects-organizations-add-ah-credential.png
:alt: Edit example default organizations form with Ansible Galaxy and Automation Hub credentials.
.. note::
@@ -472,14 +505,17 @@ You can create different repos with different namespaces/collections in them. Bu
6. If the |ah| has self-signed certificates, click the toggle to enable the setting **Ignore Ansible Galaxy SSL Certificate Verification**. For **public Automation Hub**, which uses a signed certificate, click the toggle to disable it instead. Note this is a global setting:
.. image:: ../common/images/settings-jobs-ignore-galaxy-certs.png
:alt: Job Settings page showing where to enable the option to ignore Ansible Galaxy SSL Certificate Verification.
7. Create a project, where the source repository specifies the necessary collections in a requirements file located in the ``collections/requirements.yml`` file. Refer to the syntax described in the corresponding `Ansible documentation <https://docs.ansible.com/ansible/latest/user_guide/collections_using.html#install-multiple-collections-with-a-requirements-file>`_.
.. image:: ../common/images/projects-add-ah-source-repo.png
:alt: The URL for the Source Control URL in the Type Details section of the Create New Project form.
8. In the Projects list view, click |update| to run an update against this project. AWX fetches the Galaxy collections from the ``collections/requirements.yml`` file and reports it as changed, and the collections will then be installed for any job template using this project.
.. |update| image:: ../common/images/refresh-gray.png
:alt: Refresh button.
.. note::

View File

@@ -1,3 +1,4 @@
.. _ug_security:
Security
@@ -197,7 +198,7 @@ The following table lists the RBAC system roles and a brief description of the h
+-----------------------------------------------------------------------+------------------------------------------------------------------------------------------+
| Admin Role - Organizations, Teams, Inventory, Projects, Job Templates | Manages all aspects of a defined Organization, Team, Inventory, Project, or Job Template |
+-----------------------------------------------------------------------+------------------------------------------------------------------------------------------+
| Auditor Role - All | Views all aspects of a defined Organization, Project, Inventory, or Job Template |
| Auditor Role - All | Views all aspects of a defined Organization, Team, Inventory, Project, or Job Template |
+-----------------------------------------------------------------------+------------------------------------------------------------------------------------------+
| Execute Role - Job Templates | Runs assigned Job Template |
+-----------------------------------------------------------------------+------------------------------------------------------------------------------------------+

View File

@@ -14,10 +14,12 @@ Access the Teams page by clicking **Teams** from the left navigation bar. The te
.. image:: ../common/images/organizations-teams-list.png
:alt: Teams page containing a list of teams and the organizations they belong to.
Clicking the Edit (|edit-button|) button next to the list of **Teams** allows you to edit details about the team. You can also review **Users** and **Permissions** associated with this Team.
.. |edit-button| image:: ../common/images/edit-button.png
:alt: Edit Button
.. _ug_team_create:
@@ -33,6 +35,7 @@ To create a new Team:
|Teams - create new team|
.. |Teams - create new team| image:: ../common/images/teams-create-new-team.png
:alt: Create New Team Form
2. Enter the appropriate details into the following fields:
@@ -47,6 +50,7 @@ Once the Team is successfully created, AWX opens the **Details** dialog, which a
|Teams - example team successfully created|
.. |Teams - example team successfully created| image:: ../common/images/teams-example-team-successfully-created.png
:alt: Example Team Successfully Created
Team Access
@@ -60,6 +64,7 @@ This tab displays the list of Users that are members of this Team. This list may
|Teams - users list|
.. |Teams - users list| image:: ../common/images/teams-users-list.png
:alt: Teams list showing the Access tab displaying a list of users and their roles.
.. _ug_teams_permissions:
@@ -78,10 +83,12 @@ In order to add a user to a team, the user must already be created. Refer to :re
To remove roles for a particular user, click the disassociate (x) button next to its resource.
.. image:: ../common/images/permissions-disassociate.png
:alt: Access tab with list of users and an arrow pointing to the disassociate button next to a user's role.
This launches a confirmation dialog, asking you to confirm the disassociation.
.. image:: ../common/images/permissions-disassociate-confirm.png
:alt: Disassociation Confirmation
Team Roles
@@ -97,6 +104,7 @@ Selecting the **Roles** view displays a list of the permissions that are current
|Teams - permissions list|
.. |Teams - permissions list| image:: ../common/images/teams-permissions-sample-roles.png
:alt: Permissions list with resource names, type and their associated roles.
Permissions are the set of privileges assigned to Teams that provide the ability to read, modify, and administer projects, inventories, and other AWX elements. By default, the Team is given the "read" permission (also called a role).
@@ -111,33 +119,28 @@ To add permissions to a Team:
1. Click the **Add** button, which opens the Add Permissions Wizard.
.. image:: ../common/images/teams-users-add-permissions-form.png
:alt: Add Permissions Form
:alt: Add Teams Permissions Wizard step 1, choose the resource type.
2. Click to select the object for which the team will have access and click **Next**.
3. Click to select the resource to assign team roles and click **Next**.
.. image:: ../common/images/teams-permissions-templates-select.png
:alt: Add Teams Permissions Wizard step 2, choose the resources from the list, Demo Job template selected.
4. Click the checkbox beside the role to assign that role to your chosen type of resource. Different resources have different options available.
.. image:: ../common/images/teams-permissions-template-roles.png
:alt: Add Teams Permissions Wizard step 3, choose the roles to apply to the previously selected resource.
5. Click **Save** when done, and the Add Permissions Wizard closes to display the updated profile for the user with the roles assigned for each selected resource.
5. Click **Save** when done, and the Add Permissions Wizard closes to display the updated profile for the team with the roles assigned for each selected resource.
.. image:: ../common/images/teams-permissions-sample-roles.png
To remove Permissions for a particular resource, click the disassociate (x) button next to its resource. This launches a confirmation dialog, asking you to confirm the disassociation.
:alt: Updated profile for each team's resources and their roles.
To remove Permissions for a particular resource, click the disassociate (x) button next to its resource. This launches a confirmation dialog, asking you to confirm the disassociation.
.. note::
You can also add teams and individual or multiple users, and assign them permissions at the object level (projects, inventories, job templates, and workflow templates). This feature reduces the time for an organization to onboard many users at one time.

View File

@@ -1,11 +1,10 @@
.. _ug_users:
.. _ug_users:
Users
-----
.. index::
single: users
A :term:`User` is someone who has access to AWX with associated permissions and credentials. Access the Users page by clicking **Users** from the left navigation bar. The User list may be sorted and searched by **Username**, **First Name**, or **Last Name**; click the column headers to toggle your sorting preference.
@@ -14,7 +13,6 @@ A :term:`User` is someone who has access to AWX with associated permissions and
You can easily view permissions and user type information by looking beside their user name in the User overview screen.
.. _ug_users_create:
Create a User
@@ -50,6 +48,7 @@ Three types of Users can be assigned:
Once the user is successfully created, the **User** dialog opens for that newly created User.
.. |edit-button| image:: ../common/images/edit-button.png
:alt: Edit button
.. image:: ../common/images/users-edit-user-form.png
:alt: Edit User Form
@@ -63,10 +62,12 @@ The same window opens whether you click on the user's name, or the Edit (|edit-b
If the user is not a newly-created user, the user's details screen displays the last login activity of that user.
.. image:: ../common/images/users-last-login-info.png
:alt: User details with last login information
When you log in as yourself, and view the details of your own user profile, you can manage tokens from your user profile. See :ref:`ug_users_tokens` for more detail.
.. image:: ../common/images/user-with-token-button.png
:alt: User details with Tokens tab highlighted
.. _ug_users_delete:
@@ -80,10 +81,10 @@ Before you can delete a user, you must have user permissions. When you delete a
2. Select the check box(es) for the user(s) that you want to remove and click **Delete**.
.. image:: ../common/images/users-home-users-checked-delete.png
:alt: Users list view with two users checked
3. Click **Delete** in the confirmation warning message to permanently delete the user.
Users - Organizations
~~~~~~~~~~~~~~~~~~~~~
@@ -96,6 +97,7 @@ Organization membership cannot be modified from this display panel.
|Users - Organizations list for example user|
.. |Users - Organizations list for example user| image:: ../common/images/users-organizations-list-for-example-user.png
:alt: Users - Organizations list for example user
Users - Teams
~~~~~~~~~~~~~
@@ -110,7 +112,7 @@ Until a Team has been created and the user has been assigned to that team, the a
|Users - teams list for example user|
.. |Users - teams list for example user| image:: ../common/images/users-teams-list-for-example-user.png
:alt: Users - teams list for example user - empty
.. _ug_users_roles:
@@ -121,7 +123,6 @@ Users - Roles
pair: users; permissions
pair: users; roles
Roles are the set of permissions assigned to this user (role-based access controls) that provide the ability to read, modify, and administer projects, inventories, job templates, and other AWX elements.
.. note::
@@ -133,6 +134,7 @@ This screen displays a list of the roles that are currently assigned to the sele
|Users - permissions list for example user|
.. |Users - permissions list for example user| image:: ../common/images/users-permissions-list-for-example-user.png
:alt: Users - permissions list for example user
.. _ug_users_permissions:
@@ -144,31 +146,31 @@ To add permissions to a particular user:
1. Click the **Add** button, which opens the Add Permissions Wizard.
.. image:: ../common/images/users-add-permissions-form.png
:alt: Add Permissions Form
:alt: Add User Permissions Form, first step, Add resource type
2. Click to select the object for which the user will have access and click **Next**.
3. Click to select the resource to assign user roles and click **Next**.
.. image:: ../common/images/users-permissions-IG-select.png
:alt: Add User Permissions Form, second step, Select items from list - instance group checked
4. Click the checkbox beside the role to assign that role to your chosen type of resource. Different resources have different options available.
.. image:: ../common/images/users-permissions-IG-roles.png
:alt: Add User Permissions Form, final step, Select roles to apply - "Use" role checked
5. Click **Save** when done, and the Add Permissions Wizard closes to display the updated profile for the user with the roles assigned for each selected resource.
.. image:: ../common/images/users-permissions-sample-roles.png
:alt: Users - Permissions Sample Roles
To remove Permissions for a particular resource, click the disassociate (x) button next to its resource. This launches a confirmation dialog, asking you to confirm the disassociation.
.. note::
You can also add teams and individual or multiple users, and assign them permissions at the object level (templates, credentials, inventories, projects, organizations, or instance groups). This feature reduces the time for an organization to onboard many users at one time.
.. _ug_users_tokens:
Users - Tokens
@@ -179,4 +181,3 @@ The **Tokens** tab will only be present for your user (yourself). Before you add
1. If not already selected, click on your user from the Users list view to configure your OAuth 2 tokens.
.. include:: ../common/add-token.rst

View File

@@ -15,18 +15,16 @@ A :term:`workflow job template` links together a sequence of disparate resources
The **Templates** menu opens a list of the workflow and job templates that are currently available. The default view is collapsed (Compact), showing the template name, template type, and the statuses of the jobs that ran using that template, but you can click **Expanded** to view more information. This list is sorted alphabetically by name, but you can sort by other criteria, or search by various fields and attributes of a template. From this screen, you can launch (|launch|), edit (|edit|), and copy (|copy|) a workflow job template.
.. |delete| image:: ../common/images/delete-button.png
Only workflow templates have the Workflow Visualizer icon (|wf-viz-icon|) as a shortcut for accessing the workflow editor.
.. |wf-viz-icon| image:: ../common/images/wf-viz-icon.png
:alt: Workflow visualizer icon
|Wf templates - home with example wf template|
.. |Wf templates - home with example wf template| image:: ../common/images/wf-templates-home-with-example-wf-template.png
:alt: Job templates list view with example of workflow template and arrow pointing to the Workflow visualizer icon.
.. note::
@@ -41,10 +39,12 @@ To create a new workflow job template:
1. Click the |add options template| button then select **Workflow Template** from the menu list.
.. |add options template| image:: ../common/images/add-options-template.png
:alt: Create new template Add drop-down options.
|Wf templates - create new wf template|
.. |Wf templates - create new wf template| image:: ../common/images/wf-templates-create-new-wf-template.png
:alt: Create new workflow template form.
2. Enter the appropriate details into the following fields:
@@ -53,6 +53,9 @@ To create a new workflow job template:
If a field has the **Prompt on launch** checkbox selected, then launching the workflow template, or using it within another workflow template, will prompt for the value of that field upon launch. Most prompted values will override any values set in the workflow job template; exceptions are noted below.
.. |delete| image:: ../common/images/delete-button.png
:alt: Delete button.
.. list-table::
:widths: 10 35 30
:header-rows: 1
@@ -106,9 +109,11 @@ To create a new workflow job template:
For more information about **Job Tags** and **Skip Tags**, refer to `Tags <https://docs.ansible.com/ansible/latest/playbook_guide/playbooks_tags.html>`_ in the Ansible documentation.
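As a brief illustration of how these map to playbook content, tags named in **Job Tags**/**Skip Tags** match the ``tags`` set on plays or tasks. A minimal sketch (task names and modules here are hypothetical):

```yaml
# Launching with Job Tags "configure" runs only the second task;
# Skip Tags "install" would skip the first.
- hosts: all
  tasks:
    - name: Install the package
      ansible.builtin.package:
        name: nginx
        state: present
      tags: [install]

    - name: Deploy the configuration
      ansible.builtin.template:
        src: nginx.conf.j2
        dest: /etc/nginx/nginx.conf
      tags: [configure]
```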
.. |x-circle| image:: ../common/images/x-delete-button.png
:alt: x delete button
.. |x| image:: ../common/images/x-button.png
:alt: x button
3. **Options**: Specify options for launching this workflow job template, if necessary.
@@ -136,6 +141,7 @@ For more information about **Job Tags** and **Skip Tags**, refer to `Tags <https
Saving the template exits the workflow template page and opens the Workflow Visualizer so you can build a workflow. See the :ref:`ug_wf_editor` section for further instructions. Otherwise, you may close the Workflow Visualizer to return to the Details tab of the newly saved template in order to review, edit, add permissions, notifications, schedules, and surveys, or view completed jobs and build a workflow template at a later time. Alternatively, you can click **Launch** to launch the workflow, but you must save the template first; otherwise, the **Launch** button remains grayed out. Also, note the **Notifications** tab is present only after the template has been saved.
.. image:: ../common/images/wf-templates-wf-template-saved.png
:alt: Details tab of the newly created workflow template.
@@ -146,6 +152,8 @@ Work with Permissions
Clicking on **Access** allows you to review, grant, edit, and remove associated permissions for users as well as team members.
.. image:: ../common/images/wf-template-completed-permissions-view.png
:alt: Access tab of the newly created workflow template showing two user roles and their permissions.
Click the **Add** button to create new permissions for this workflow template by following the prompts to assign them accordingly.
@@ -156,13 +164,15 @@ Work with Notifications
Clicking on **Notifications** allows you to review any notification integrations you have set up. The **Notifications** tab is present only after the template has been saved.
.. .. image:: ../common/images/wf-template-completed-notifications-view.png
.. image:: ../common/images/wf-template-completed-notifications-view.png
:alt: Notifications tab of the newly created workflow template showing four notification configurations with one notification set for Approval.
Use the toggles to enable or disable the notifications to use with your particular template. For more detail, see :ref:`ug_notifications_on_off`.
If no notifications have been set up, see :ref:`ug_notifications_create` for detail.
.. image:: ../common/images/wf-template-no-notifications-blank.png
:alt: Notifications tab of the newly created workflow template showing no notifications set up.
Refer to :ref:`ug_notifications_types` for additional details on configuring various notification types.
@@ -174,20 +184,14 @@ View Completed Jobs
The **Completed Jobs** tab provides the list of workflow templates that have run. Click **Expanded** to view the various details of each job.
.. .. image:: ../common/images/wf-template-completed-jobs-list.png
.. image:: ../common/images/wf-template-completed-jobs-list.png
:alt: Jobs tab of the example workflow template showing completed jobs.
From this view, you can click the job ID and name of the workflow job to see its graphical representation. The example below shows the job details of a workflow job.
.. image:: ../common/images/wf-template-jobID-detail-example.png
.. If a workflow template is used in another workflow, the jobs details indicate a parent workflow.
.. .. image:: ../common/images/wf-template-job-detail-with-parent.png
.. In the above example, click the parent workflow template, **Overall**, to view its Job Details page and the graphical details of the nodes and statuses of each as they were launched.
.. .. image:: ../common/images/wf-template-jobs-detail-example.png
:alt: Details of the job output for the selected workflow template by job ID
The nodes are marked with labels that help you identify them at a glance. See the legend_ in the :ref:`ug_wf_editor` section for more information.
@@ -201,6 +205,7 @@ Work with Schedules
Clicking on **Schedules** allows you to review any schedules set up for this template.
.. .. image:: ../common/images/templates-schedules-example-list.png
:alt: workflow template - schedule list example
@@ -246,8 +251,8 @@ To create a survey:
1. Click the **Survey** tab to bring up the **Add Survey** window.
.. figure:: ../common/images/wf-template-create-survey.png
:alt: Workflow Job Template - create survey
.. image:: ../common/images/wf-template-create-survey.png
:alt: Workflow Job Template showing the Create survey form.
Use the **ON/OFF** toggle button at the top of the screen to quickly activate or deactivate this survey prompt.
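Behind the scenes, a survey is stored as a survey spec. A minimal sketch in YAML form, with field names following the AWX API's survey spec format (an assumption to verify against your version):

```yaml
# Hypothetical survey spec with a single required multiple-choice question.
name: Deployment survey
description: Prompts shown when the workflow template is launched
spec:
  - question_name: Which environment?
    variable: target_env
    type: multiplechoice
    choices: "staging\nproduction"   # newline-delimited choices
    required: true
    default: staging
```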
@@ -283,6 +288,7 @@ A stylized version of the survey is presented in the Preview pane. For any quest
|Workflow-template-completed-survey|
.. |Workflow-template-completed-survey| image:: ../common/images/wf-template-completed-survey.png
:alt: Workflow Job Template showing completed survey and arrows pointing to the re-ordering icons.
Optional Survey Questions
@@ -324,16 +330,20 @@ You can set up any combination of two or more of the following node types to bui
1. In the details/edit view of a workflow template, click the **Visualizer** tab or from the Templates list view, click the (|wf-viz-icon|) icon to launch the Workflow Visualizer.
.. image:: ../common/images/wf-editor-create-new.png
:alt: Workflow Visualizer start page.
2. Click the |start| button to display a list of nodes to add to your workflow.
.. |start| image:: ../common/images/wf-start-button.png
:alt: Workflow Visualizer Start button.
.. image:: ../common/images/wf-editor-create-new-add-template-list.png
:alt: Workflow Visualizer wizard, step 1 specifying the node type.
3. On the right pane, select the type of node you want to add from the drop-down menu:
.. image:: ../common/images/wf-add-node-selections.png
:alt: Node type showing the drop-down menu of node type options.
If selecting an **Approval** node, see :ref:`ug_wf_approval_nodes` for further detail.
@@ -360,9 +370,7 @@ For subsequent nodes, you can select one of the following scenarios (edge type)
- Choose **All** to ensure that *all* nodes complete as specified before converging and triggering the next node. The purpose of ALL nodes is to make sure that every parent met its expected outcome in order to run the child node; otherwise, the workflow will not run the child node.
If selected, the graphical view will label the node as **ALL**.
.. image:: ../common/images/wf-editor-convergent-node-all.png
If selected, the graphical view will indicate the node type with a representative color. Refer to the legend (|compass|) to see the corresponding run scenarios and their job types.
.. note::
@@ -372,45 +380,51 @@ For subsequent nodes, you can select one of the following scenarios (edge type)
7. If a job template used in the workflow has **Prompt on Launch** selected for any of its parameters, a **Prompt** button appears, allowing you to change those values at the node level. Use the wizard to change the value(s) in each of the tabs and click **Confirm** in the Preview tab.
.. image:: ../common/images/wf-editor-prompt-button-wizard.png
:alt: Workflow Visualizer wizard with Prompt on Launch options.
Likewise, if a workflow template used in the workflow has **Prompt on Launch** selected for the inventory option, use the wizard to supply the inventory at the prompt. If the parent workflow has its own inventory, it will override any inventory that is supplied here.
.. image:: ../common/images/wf-editor-prompt-button-inventory-wizard.png
:alt: Workflow Visualizer wizard with Prompt on Launch for Inventory.
.. note::
For workflow job templates with promptable fields that are required, but do not have a default, you must provide those values when creating a node before the **Select** button becomes enabled. The two cases that disable the **Select** button until a value is provided via the **Prompt** button are: 1) when you select the **Prompt on Launch** checkbox in a workflow job template, but do not provide a default, or 2) when you create a survey question that is required but do not provide a default answer. However, this is **NOT** the case with credentials. Credentials that require a password on launch are **not permitted** when creating a workflow node, since everything needed to launch the node must be provided when the node is created. So, if a workflow job template prompts for credentials, AWX prevents you from selecting a credential that requires a password.
You must also click **Select** when the prompt wizard closes in order to apply the changes at that node. Otherwise, any changes you make will revert to the values set in the actual job template.
.. image:: ../common/images/wf-editor-wizard-buttons.png
Once the node is created, it is labeled with its job type. The template associated with each workflow node will run based on the selected run scenario as the workflow proceeds. Click the compass (|compass|) icon to display the legend of run scenarios and their job types.
.. _legend:
.. |compass| image:: ../common/images/wf-editor-compass-button.png
:alt: Workflow Visualizer legend button.
.. image:: ../common/images/wf-editor-key-dropdown-list.png
:alt: Workflow Visualizer legend expanded.
8. Hovering over a node allows you to add |add node| another node, view info |info node| about the node, edit |edit| the node details, edit an existing link |edit link|, or delete |delete node| the selected node.
.. |add node| image:: ../common/images/wf-editor-add-button.png
:alt: Add node icon.
.. |edit link| image:: ../common/images/wf-editor-edit-link.png
:alt: Edit link icon.
.. |delete node| image:: ../common/images/wf-editor-delete-button.png
:alt: Delete node icon.
.. |info node| image:: ../common/images/wf-editor-info-button.png
:alt: View node details icon.
.. |edit| image:: ../common/images/edit-button.png
:alt: Edit node details icon.
.. image:: ../common/images/wf-editor-create-new-add-template.png
:alt: Building a new example workflow job template in the Workflow Visualizer
9. When done adding/editing a node, click **Select** to save any modifications and render it on the graphical view. For possible ways to build your workflow, see :ref:`ug_wf_building_scenarios`.
9. When done adding/editing a node, click **Save** to save any modifications and render it on the graphical view. For possible ways to build your workflow, see :ref:`ug_wf_building_scenarios`.
10. When done with building your workflow template, click **Save** to save your entire workflow template and return to the new workflow template details page.
.. important::
Clicking **Close** on this pane will not save your work, but instead, closes the entire Workflow Visualizer and you will have to start over.
Closing the wizard without saving will not save your work, but instead, closes the entire Workflow Visualizer and you will have to start where you last saved.
.. _ug_wf_approval_nodes:
@@ -421,22 +435,27 @@ Approval nodes
Choosing an **Approval** node requires user intervention in order to advance the workflow. This functions as a means to pause the workflow in between playbooks so that a user can approve continuing on to the next playbook in the workflow. It gives the user a specified amount of time to intervene, while also allowing work to continue as quickly as possible without having to wait on some other trigger.
.. image:: ../common/images/wf-node-approval-form.png
:alt: Workflow Visualizer Approval node form.
The default for the timeout is none, but you can specify the length of time before the request expires and automatically gets denied. After selecting and supplying the information for the approval node, it displays on the graphical view with a pause (|pause|) icon next to it.
.. |pause| image:: ../common/images/wf-node-approval-icon.png
:alt: Workflow node - approval icon.
.. image:: ../common/images/wf-node-approval-node.png
:alt: Workflow Visualizer showing approval node with pause icon.
The approver is anyone who can execute the workflow job template containing the approval nodes, anyone with org admin or above privileges (for the org associated with that workflow job template), or any user who has the *Approve* permission explicitly assigned to them within that specific workflow job template.
.. image:: ../common/images/wf-node-approval-notifications.png
:alt: Workflows requesting approval from notifications
If pending approval nodes are not approved within the specified time limit (if an expiration was assigned) or they are denied, then they are marked as "timed out" or "failed", respectively, and move on to the next "on fail node" or "always node". If approved, the "on success" path is taken. If you try to POST in the API to a node that has already been approved, denied or timed out, an error message notifies you that this action is redundant, and no further steps will be taken.
The table below shows the various levels of permissions allowed on approval workflows:
.. image:: ../common/images/wf-node-approval-rbac.png
:alt: Workflow nodes approval RBAC table.
.. source file located on google spreadsheet "Workflow approvals chart"
@@ -448,18 +467,22 @@ Node building scenarios
You can add a sibling node by clicking the |add node| on the parent node:
.. image:: ../common/images/wf-editor-create-sibling-node.png
:alt: Workflow Visualizer showing how to create a sibling node.
You can insert another node in between nodes by hovering over the line that connects the two until the |add node| appears. Clicking on the |add node| automatically inserts the node between the two nodes.
.. image:: ../common/images/wf-editor-insert-node-template.png
:alt: Workflow Visualizer showing how to insert a node.
To add a root node to depict a split scenario, click the |start| button again:
.. image:: ../common/images/wf-editor-create-new-add-template-split.png
:alt: Workflow Visualizer showing how to depict a split scenario.
At any node where you want to create a split scenario, hover over the node from which the split scenario begins and click the |add node|. This essentially adds multiple nodes from the same parent node, creating sibling nodes:
.. image:: ../common/images/wf-editor-create-siblings.png
:alt: Workflow Visualizer showing how to create sibling nodes.
.. note::
@@ -471,8 +494,9 @@ If you want to undo the last inserted node, click on another node without making
Below is an example of a workflow that contains all three types of jobs. It is initiated by a job template that, if it fails to run, proceeds to the project sync job; regardless of whether that sync fails or succeeds, the workflow then proceeds to the inventory sync job.
.. image:: ../common/images/wf-editor-create-new-add-template-example.png
:alt: Workflow Visualizer showing a workflow job that contains a job template, a project, and an inventory source.
Remember to refer to the Key at the top of the window to identify the meaning of the symbols and colors associated with the graphical depiction.
Remember to refer to the Legend at the top of the window to identify the meaning of the symbols and colors associated with the graphical depiction.
.. note::
@@ -481,6 +505,7 @@ Remember to refer to the Key at the top of the window to identify the meaning of
.. image:: ../common/images/wf-node-delete-scenario.png
:alt: Workflow Visualizer showing a workflow job with a deleted node.
You can modify your nodes in the following ways:
@@ -490,14 +515,22 @@ The following ways you can modify your nodes:
- To edit the edge type for an existing link (success/failure/always), click on the link. The right pane displays the current selection. Make your changes and click **Save** to apply them to the graphical view.
.. image:: ../common/images/wf-editor-wizard-edit-link.png
:alt: Workflow Visualizer showing the wizard to edit the link.
- To add a new link from one node to another, click the link |edit link| icon that appears on each node. Doing this highlights the nodes that are possible to link to. These feasible options are indicated by the dotted lines. Invalid options are indicated by grayed out boxes (nodes) that would otherwise produce an invalid link. The example below shows the **Demo Project** as a possible option for the **e2e-ec20de52-project** to link to, as indicated by the arrows:
.. image:: ../common/images/wf-node-link-scenario.png
:alt: Workflow showing linking scenario between two nodes.
- To remove a link, click the link and click the **Unlink** button.
When linked, specify the type of run scenario you would like the link to have in the Add Link prompt.
.. image:: ../common/images/wf-editor-wizard-add-link-prompt.png
:alt: Workflow Visualizer prompt specifying the run type when adding a new link.
- To remove a link, click the link and click the **Unlink** (|delete node|) icon and click **Remove** at the prompt to confirm.
.. image:: ../common/images/wf-editor-wizard-unlink.png
:alt: Workflow Visualizer showing the wizard to remove the link.
This button only appears in the right-hand panel if the target or child node has more than one parent. All nodes must be linked to at least one other node at all times, so you must create a new link before removing an old one.
@@ -505,6 +538,7 @@ This button only appears in the right hand panel if the target or child node has
Click the Tools icon (|tools|) to zoom, pan, or reposition the view. Alternatively, you can drag the workflow diagram to reposition it on the screen or use the mouse scroll wheel to zoom.
.. |tools| image:: ../common/images/tools.png
:alt: Workflow Visualizer tools icon.
@@ -519,19 +553,19 @@ Launch a workflow template by any of the following ways:
- Access the workflow templates list from the **Templates** menu on the left navigation bar, or while in the workflow template Details view, scroll to the bottom to access the |launch| button from the list of templates.
.. image:: ../common/images/wf-templates-wf-template-launch.png
:alt: Templates list view with arrow pointing to the launch button of the workflow job template.
- While in the Workflow Job Template Details view of the job you want to launch, click **Launch**.
.. |launch| image:: ../common/images/launch-button.png
:alt: Workflow template launch button.
Along with any extra variables set in the workflow job template and survey, AWX automatically adds the same variables as those added for a workflow job template upon launch. Additionally, AWX automatically redirects the web browser to the Jobs Details page for this job, displaying the progress and the results.
Events related to approvals on workflows display in the Activity Stream (|activity-stream|) with detailed information about the approval requests, if any.
Events related to approvals on workflows display at the top in the Activity Stream (|activity-stream|) with detailed information about the approval requests, if any.
.. |activity-stream| image:: ../common/images/activitystream.png
.. .. image:: ../common/images/wf-activity-stream-events.png
:alt: Activity Stream icon.
Copy a Workflow Template
-------------------------------
@@ -543,10 +577,12 @@ AWX allows you to copy a workflow template. If you choose to copy a
2. Click the |copy| button.
.. |copy| image:: ../common/images/copy-button.png
:alt: Copy button.
A new template opens with the name of the template from which you copied and a timestamp.
.. image:: ../common/images/wf-list-view-copy-example.png
:alt: Templates list view with example copied workflow.
Select the copied template and replace the contents of the **Name** field with a new name, and provide or modify the entries in the other fields to complete this template.
@@ -592,3 +628,4 @@ The following table notes the behavior (hierarchy) of variable precedence in AWX
**Variable Precedence Hierarchy (last listed wins)**
.. image:: ../common/images/Architecture-AWX_Variable_Precedence_Hierarchy-Workflows.png
:alt: AWX Variable Precedence Hierarchy for Workflows

View File

@@ -4,7 +4,7 @@ Stand-alone execution nodes can be added to run alongside the Kubernetes deploym
Hop nodes can be added to sit between the control plane of AWX and stand-alone execution nodes. These machines will not be a part of the AWX Kubernetes cluster. The machines will be registered in AWX as node type "hop", meaning they will only handle inbound/outbound traffic for otherwise unreachable nodes in a different or stricter network.
Below is an example of an AWX Task pod with two excution nodes. Traffic to execution node 2 flows through a hop node that is setup between it and the control plane.
Below is an example of an AWX Task pod with two execution nodes. Traffic to execution node 2 flows through a hop node that is set up between it and the control plane.
```
AWX TASK POD
@@ -33,7 +33,7 @@ Adding an execution instance involves a handful of steps:
### Start machine
Bring a machine online with a compatible Red Hat family OS (e.g. RHEL 8 and 9). This machines needs a static IP, or a resolvable DNS hostname that the AWX cluster can access. If the listerner_port is defined, the machine will also need an available open port to establish inbound TCP connections on (e.g. 27199).
Bring a machine online with a compatible Red Hat family OS (e.g. RHEL 8 and 9). This machine needs a static IP, or a resolvable DNS hostname that the AWX cluster can access. If the listener_port is defined, the machine will also need an available open port to establish inbound TCP connections on (e.g. 27199).
In general, the more CPU cores and memory the machine has, the more jobs can be scheduled to run on that machine at once. See https://docs.ansible.com/automation-controller/4.2.1/html/userguide/jobs.html#at-capacity-determination-and-job-impact for more information on capacity.
@@ -48,7 +48,7 @@ Use the Instance page or `api/v2/instances` endpoint to add a new instance.
- `peers` is a list of instance hostnames to connect outbound to.
- `peers_from_control_nodes` boolean, if True, control plane nodes will automatically peer to this instance.
Below is a table of configuartions for the [diagram](#adding-execution-nodes-to-awx) above.
Below is a table of configurations for the [diagram](#adding-execution-nodes-to-awx) above.
| instance name | listener_port | peers_from_control_nodes | peers |
|------------------|---------------|-------------------------|--------------|
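As a hedged illustration only (the values below are assumptions, not the table's actual rows), the topology above might be expressed as instance definitions like these, using the fields listed earlier:

```yaml
# Hypothetical instance definitions matching the hop-node diagram above;
# hostnames are illustrative.
- hostname: hop-node.example.com
  node_type: hop
  listener_port: 27199            # accepts inbound TCP from the control plane
  peers_from_control_nodes: true
- hostname: execution-node-2.example.com
  node_type: execution
  listener_port: null             # outbound-only; dials out to its peers
  peers_from_control_nodes: false
  peers:
    - hop-node.example.com
```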

View File

@@ -181,7 +181,7 @@ Operator hub PRs are generated via an Ansible Playbook. See someone on the AWX t
## Revert a Release
Decide whether or not you can just fall-forward with a new AWX Release to fix a bad release. If you need to remove published artifacts from publically facing repositories, follow the steps below.
Decide whether or not you can just fall-forward with a new AWX Release to fix a bad release. If you need to remove published artifacts from publicly facing repositories, follow the steps below.
Here are the steps needed to revert an AWX and an AWX-Operator release. Depending on your use case, follow the steps for reverting just an AWX release, an Operator release, or both.
@@ -195,7 +195,7 @@ Here are the steps needed to revert an AWX and an AWX-Operator release. Dependin
![Tag-Revert-1-Image](img/tag-revert-1.png)
[comment]: <> (Need an image here for actually deleting an orphaned tag, place here during next release)
3. Navigate to the [AWX Operator Release Page]() and delete the AWX-Operator release that needss to tbe removed.
3. Navigate to the [AWX Operator Release Page]() and delete the AWX-Operator release that needs to be removed.
![Revert-2-Image](img/revert-2.png)

View File

@@ -1,201 +0,0 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

View File

@@ -1,7 +1,5 @@
[pytest]
DJANGO_SETTINGS_MODULE = awx.main.tests.settings_for_test
-python_paths = /var/lib/awx/venv/tower/lib/python3.8/site-packages
-site_dirs = /var/lib/awx/venv/tower/lib/python3.8/site-packages
python_files = *.py
addopts = --reuse-db --nomigrations --tb=native
markers =

View File

@@ -49,19 +49,6 @@ Make sure to delete the old tarball if it is an upgrade.
Anything pinned in `*.in` files involves additional manual work in
order to upgrade. Some information related to that work is outlined here.
### Django
For any upgrade of Django, it must be confirmed that
we don't regress on FIPS support before merging.
See internal integration test knowledge base article `how_to_test_FIPS`
for instructions.
In a FIPS environment, `hashlib.md5()` raises a `ValueError`,
but the constructor supports the `usedforsecurity` keyword on RHEL and CentOS systems.
This used to be a problem with `names_digest` function in Django, but
was fixed upstream in Django 4.1.
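For illustration, on Python 3.9+ (where `hashlib` gained the `usedforsecurity` keyword) the non-security use looks like:

```python
import hashlib

# On a FIPS-enabled system this raises ValueError:
#   hashlib.md5(b"data")
# Declaring the digest as not used for security purposes allows it (Python 3.9+):
digest = hashlib.md5(b"data", usedforsecurity=False).hexdigest()
print(digest)
```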
### django-split-settings
When we attempted to upgrade past 1.0.0, the build process in GitHub failed on the docker build step with the following error:

Some files were not shown because too many files have changed in this diff.