Compare commits


41 Commits

Author SHA1 Message Date
Alan Rominger
9745058546 Only block commits if black fails for certain paths (#14531) 2023-10-10 10:12:57 -04:00
Aviral Katiyar
c97a48b165 Fix: #14510 Add alt-text codeblock to Images for Userguide: jobs.rst (#14530)
Signed-off-by: maskboyAvi <aviralofficial1729@gmail.com>
Co-authored-by: Don Naro <dnaro@redhat.com>
2023-10-09 16:40:56 -06:00
Rohit Raj
259bca0113 docs: Update workflows.rst (#14537) 2023-10-06 15:30:47 -06:00
Aviral Katiyar
92c2b4e983 Fix: #14500 Added alt text to images for Userguide: credential_plugins.rst (#14527)
Signed-off-by: maskboyAvi <aviralofficial1729@gmail.com>
Co-authored-by: Don Naro <dnaro@redhat.com>
2023-10-06 14:53:23 -06:00
Seth Foster
127a0cff23 Set ip_address to empty string
ip_address cannot be null, so set to
empty instead of None

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2023-10-05 22:53:16 -04:00
Aviral Katiyar
a0ef25006a Fix: #14499 Added alt text to images for Userguide: applications_auth.rst (#14526)
Signed-off-by: maskboyAvi <aviralofficial1729@gmail.com>
Co-authored-by: Don Naro <dnaro@redhat.com>
2023-10-05 14:22:10 -06:00
Chris Meyers
50c98a52f7 Update setting_up.rst (#14542) 2023-10-05 15:06:40 -04:00
Michelle McCausland
4008d72af6 issue-14522: Add alt-text codeblock to Images for Userguide: webhooks.rst (#14529)
Signed-off-by: Michelle McCausland <mmccausl@redhat.com>
2023-10-05 17:40:07 +01:00
Alan Rominger
e72e9f94b9 Fix collection test flake due to successful canceled command (#14519) 2023-10-04 09:09:29 -04:00
Sasa Jovicic
9d60b0b9c6 Fix #12815 Direct links to AWX do not reroute the user after authentication (#14399)
Signed-off-by: Sasa993 <jovicic.sasa@hotmail.com>
Co-authored-by: Sasa Jovicic <sjovicic@anexia-it.com>
2023-10-03 16:55:22 -04:00
Aviral Katiyar
05b58c4df6 Fix : #14490 Fixed the required spelling errors (#14507)
Signed-off-by: maskboyAvi <aviralofficial1729@gmail.com>
2023-10-03 14:15:13 -06:00
TVo
b1b960fd17 Updated Forum terminology and removed mailing list (#14491) 2023-10-03 19:24:19 +01:00
Jakub Laskowski
3c8f71e559 Fixed wrong arguments order in DomainPasswordGrantAuthorizer (#14441)
Signed-off-by: Jakub Laskowski <jakub.laskowski9@gmail.com>
Co-authored-by: Seth Foster <fosterseth@users.noreply.github.com>
2023-10-03 11:54:57 -04:00
Alan Rominger
f5922f76fa DROP unnecessary unpartioned event tables (#14055) 2023-10-03 11:49:23 -04:00
kurokobo
05582702c6 fix: make type conversions work correctly (related #14487) (#14489)
Signed-off-by: kurokobo <2920259+kurokobo@users.noreply.github.com>
Co-authored-by: Alan Rominger <arominge@redhat.com>
2023-09-30 04:02:10 +00:00
Alan Rominger
1d340c5b4e Add a section for postgres max_connections value (#14482) 2023-09-28 10:28:52 -04:00
TVo
15925f1416 Simplified release notes for AWX (#14485) 2023-09-27 14:50:57 -06:00
Salma Kochay
6e06a20cca add subscription usage page 2023-09-27 10:57:04 -04:00
Hao Liu
bb3acbb8ad Debug log for scheduler commit duration (#14035)
Co-authored-by: Alan Rominger <arominge@redhat.com>
2023-09-27 09:46:55 -04:00
Hao Liu
a88e47930c Update django version to address CVE-2023-41164 (#14460) 2023-09-27 09:36:02 -04:00
Hao Liu
a0d4515ba4 Explicitly set collection version during promotion (#14484) 2023-09-26 14:19:22 -04:00
Alan Rominger
770cc10a78 Get rid of names_digest hack no longer needed (#14459) 2023-09-26 12:09:30 -04:00
Alan Rominger
159dd62d84 Add null value handling in create_partition (#14480) 2023-09-25 18:28:44 -04:00
TVo
640e5db9c6 Removed references of IRC and fixed formatting in "Work Items" section. (#14478)
Co-authored-by: Don Naro <dnaro@redhat.com>
2023-09-25 11:24:39 -06:00
Alan Rominger
9ed527eb26 Consolidate image and server setup in several checks (#14477) 2023-09-25 09:02:20 -04:00
Alan Rominger
29ad6e1eaa Fix bug, None was used instead of empty for DB outage (#14463) 2023-09-21 14:30:25 -04:00
Alan Rominger
3e607f8964 AAP-15927 Use ATTACH PARTITION to avoid exclusive table lock for events (#14433) 2023-09-21 14:27:04 -04:00
TVo
c9d1a4d063 Added release notes for version 23.1.0 (#14471) 2023-09-21 11:02:38 -06:00
Hao Liu
a290b082db Use ldap container hostname for LDAP config (#14473) 2023-09-21 11:31:51 -04:00
Hao Liu
6d3c22e801 Update how to get involved with matrix and forum (#14472) 2023-09-20 18:33:04 +00:00
Michael Abashian
1f91773a3c Simplify docs string base generation 2023-09-20 13:16:54 -04:00
Hao Liu
7b846e1e49 Add makefile target to load dev image into Kind (#13775)
Signed-off-by: Rick Elrod <rick@elrod.me>
Co-authored-by: Rick Elrod <rick@elrod.me>
2023-09-19 13:34:10 -04:00
Don Naro
f7a2de8a07 Contributor guide and adjusted titles (#14447)
Co-authored-by: Thanhnguyet Vo <tvo@ansible.com>
2023-09-18 10:40:47 -06:00
Andrew Klychkov
194c214f03 userguide/execution_environments.rst: replace building paragraphs with ref to Get started EE guide (#14429) 2023-09-15 10:20:46 -04:00
Christian Adams
77e30dd4b2 Add link to script for publishing operator on OperatorHub (#14442) 2023-09-15 09:32:19 -04:00
jessicamack
9d7421b9bc Update README (#14452)
Signed-off-by: jessicamack <jmack@redhat.com>
2023-09-14 20:20:06 +00:00
Alan Rominger
3b8e662916 Remove conditional paths due to conflict with required checks (#14450) 2023-09-14 16:19:42 -04:00
Alan Rominger
aa3228eec9 Fix continue-on-error GH actions bug, always run archive step instead 2023-09-14 19:45:07 +00:00
Alan Rominger
7b0598c7d8 Continue workflow steps to save logs from failed tests (#14448) 2023-09-14 18:23:22 +00:00
Ivan Aragonés Muniesa
49832d6379 don't pass the 'organization' or other fields to the search of the instance group or execution environments (#14223) 2023-09-14 09:31:05 -04:00
Alan Rominger
8feeb5f1fa Allow saving github creds in user folder (#14435) 2023-09-12 15:47:12 -04:00
67 changed files with 1441 additions and 388 deletions

View File

@@ -0,0 +1,28 @@
name: Setup images for AWX
description: Builds new awx_devel image
inputs:
github-token:
description: GitHub Token for registry access
required: true
runs:
using: composite
steps:
- name: Get python version from Makefile
shell: bash
run: echo py_version=`make PYTHON_VERSION` >> $GITHUB_ENV
- name: Log in to registry
shell: bash
run: |
echo "${{ inputs.github-token }}" | docker login ghcr.io -u ${{ github.actor }} --password-stdin
- name: Pre-pull latest devel image to warm cache
shell: bash
run: docker pull ghcr.io/${{ github.repository_owner }}/awx_devel:${{ github.base_ref }}
- name: Build image for current source checkout
shell: bash
run: |
DEV_DOCKER_TAG_BASE=ghcr.io/${{ github.repository_owner }} \
COMPOSE_TAG=${{ github.base_ref }} \
make docker-compose-build

View File

@@ -1,12 +1,8 @@
# This currently *always* uses the "warm build cache" image
# We should do something to allow forcing a rebuild, probably by looking for
# some string in the commit message or something.
name: Run AWX (devel environment)
name: Run AWX docker-compose
description: Runs AWX with `make docker-compose`
inputs:
github-token:
description: GitHub Token for registry access
description: GitHub Token to pass to awx_devel_image
required: true
build-ui:
description: Should the UI be built?
@@ -23,9 +19,10 @@ outputs:
runs:
using: composite
steps:
- name: Get python version from Makefile
shell: bash
run: echo py_version=`make PYTHON_VERSION` >> $GITHUB_ENV
- name: Build awx_devel image for running checks
uses: ./.github/actions/awx_devel_image
with:
github-token: ${{ inputs.github-token }}
- name: Upgrade ansible-core
shell: bash
@@ -35,19 +32,6 @@ runs:
shell: bash
run: sudo apt-get install -y gettext
- name: Log in to registry
shell: bash
run: |
echo "${{ inputs.github-token }}" | docker login ghcr.io -u ${{ github.actor }} --password-stdin
- name: Pre-pull latest available devel image and build HEAD on top of it
shell: bash
run: |
docker pull ghcr.io/${{ github.repository_owner }}/awx_devel:${{ github.base_ref }}
DEV_DOCKER_TAG_BASE=ghcr.io/${{ github.repository_owner }} \
COMPOSE_TAG=${{ github.base_ref }} \
make docker-compose-build
- name: Start AWX
shell: bash
run: |

View File

@@ -7,9 +7,6 @@ env:
COMPOSE_TAG: ${{ github.base_ref || 'devel' }}
on:
pull_request:
paths-ignore:
- 'docs/**'
- '.github/workflows/docs.yml'
jobs:
common-tests:
name: ${{ matrix.tests.name }}
@@ -40,16 +37,27 @@ jobs:
steps:
- uses: actions/checkout@v3
- name: Build awx_devel image for running checks
uses: ./.github/actions/awx_devel_image
with:
github-token: ${{ secrets.GITHUB_TOKEN }}
- name: Run check ${{ matrix.tests.name }}
run: AWX_DOCKER_CMD='${{ matrix.tests.command }}' make github_ci_runner
run: AWX_DOCKER_CMD='${{ matrix.tests.command }}' make docker-runner
dev-env:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- uses: ./.github/actions/run_awx_devel
id: awx
with:
build-ui: false
github-token: ${{ secrets.GITHUB_TOKEN }}
- name: Run smoke test
run: make github_ci_setup && ansible-playbook tools/docker-compose/ansible/smoke-test.yml -v
run: ansible-playbook tools/docker-compose/ansible/smoke-test.yml -v
awx-operator:
runs-on: ubuntu-latest
@@ -159,11 +167,13 @@ jobs:
# Upload coverage report as artifact
- uses: actions/upload-artifact@v3
if: always()
with:
name: coverage-${{ matrix.target-regex.name }}
path: ~/.ansible/collections/ansible_collections/awx/awx/tests/output/coverage/
- uses: ./.github/actions/upload_awx_devel_logs
if: always()
with:
log-filename: collection-integration-${{ matrix.target-regex.name }}.log

View File

@@ -2,9 +2,6 @@
name: Docsite CI
on:
pull_request:
paths:
- 'docs/**'
- '.github/workflows/docs.yml'
jobs:
docsite-build:
name: docsite test build

View File

@@ -26,7 +26,6 @@ jobs:
with:
build-ui: true
github-token: ${{ secrets.GITHUB_TOKEN }}
log-filename: e2e-${{ matrix.job }}.log
- name: Pull awx_cypress_base image
run: |
@@ -71,5 +70,6 @@ jobs:
awx-pf-tests run --project .
- uses: ./.github/actions/upload_awx_devel_logs
if: always()
with:
log-filename: e2e-${{ matrix.job }}.log

View File

@@ -40,8 +40,12 @@ jobs:
if: ${{ github.repository_owner != 'ansible' }}
- name: Build collection and publish to galaxy
env:
COLLECTION_NAMESPACE: ${{ env.collection_namespace }}
COLLECTION_VERSION: ${{ github.event.release.tag_name }}
COLLECTION_TEMPLATE_VERSION: true
run: |
COLLECTION_TEMPLATE_VERSION=true COLLECTION_NAMESPACE=${{ env.collection_namespace }} make build_collection
make build_collection
if [ "$(curl --head -sw '%{http_code}' https://galaxy.ansible.com/download/${{ env.collection_namespace }}-awx-${{ github.event.release.tag_name }}.tar.gz | tail -1)" == "302" ] ; then \
echo "Galaxy release already done"; \
else \

View File

@@ -6,6 +6,7 @@ DOCKER_COMPOSE ?= docker-compose
OFFICIAL ?= no
NODE ?= node
NPM_BIN ?= npm
KIND_BIN ?= $(shell which kind)
CHROMIUM_BIN=/tmp/chrome-linux/chrome
GIT_BRANCH ?= $(shell git rev-parse --abbrev-ref HEAD)
MANAGEMENT_COMMAND ?= awx-manage
@@ -78,7 +79,7 @@ I18N_FLAG_FILE = .i18n_built
sdist \
ui-release ui-devel \
VERSION PYTHON_VERSION docker-compose-sources \
.git/hooks/pre-commit github_ci_setup github_ci_runner
.git/hooks/pre-commit
clean-tmp:
rm -rf tmp/
@@ -323,21 +324,10 @@ test:
cd awxkit && $(VENV_BASE)/awx/bin/tox -re py3
awx-manage check_migrations --dry-run --check -n 'missing_migration_file'
## Login to Github container image registry, pull image, then build image.
github_ci_setup:
# GITHUB_ACTOR is automatic github actions env var
# CI_GITHUB_TOKEN is defined in .github files
echo $(CI_GITHUB_TOKEN) | docker login ghcr.io -u $(GITHUB_ACTOR) --password-stdin
docker pull $(DEVEL_IMAGE_NAME) || : # Pre-pull image to warm build cache
$(MAKE) docker-compose-build
## Runs AWX_DOCKER_CMD inside a new docker container.
docker-runner:
docker run -u $(shell id -u) --rm -v $(shell pwd):/awx_devel/:Z --workdir=/awx_devel $(DEVEL_IMAGE_NAME) $(AWX_DOCKER_CMD)
## Builds image and runs AWX_DOCKER_CMD in it, mainly for .github checks.
github_ci_runner: github_ci_setup docker-runner
test_collection:
rm -f $(shell ls -d $(VENV_BASE)/awx/lib/python* | head -n 1)/no-global-site-packages.txt
if [ "$(VENV_BASE)" ]; then \
@@ -664,6 +654,9 @@ awx-kube-dev-build: Dockerfile.kube-dev
-t $(DEV_DOCKER_TAG_BASE)/awx_kube_devel:$(COMPOSE_TAG) .
kind-dev-load: awx-kube-dev-build
$(KIND_BIN) load docker-image $(DEV_DOCKER_TAG_BASE)/awx_kube_devel:$(COMPOSE_TAG)
# Translation TASKS
# --------------------------------------

View File

@@ -1,5 +1,5 @@
[![CI](https://github.com/ansible/awx/actions/workflows/ci.yml/badge.svg?branch=devel)](https://github.com/ansible/awx/actions/workflows/ci.yml) [![Code of Conduct](https://img.shields.io/badge/code%20of%20conduct-Ansible-yellow.svg)](https://docs.ansible.com/ansible/latest/community/code_of_conduct.html) [![Apache v2 License](https://img.shields.io/badge/license-Apache%202.0-brightgreen.svg)](https://github.com/ansible/awx/blob/devel/LICENSE.md) [![AWX Mailing List](https://img.shields.io/badge/mailing%20list-AWX-orange.svg)](https://groups.google.com/g/awx-project)
[![IRC Chat - #ansible-awx](https://img.shields.io/badge/IRC-%23ansible--awx-blueviolet.svg)](https://libera.chat)
[![Ansible Matrix](https://img.shields.io/badge/matrix-Ansible%20Community-blueviolet.svg?logo=matrix)](https://chat.ansible.im/#/welcome) [![Ansible Discourse](https://img.shields.io/badge/discourse-Ansible%20Community-yellowgreen.svg?logo=discourse)](https://forum.ansible.com)
<img src="https://raw.githubusercontent.com/ansible/awx-logos/master/awx/ui/client/assets/logo-login.svg?sanitize=true" width=200 alt="AWX" />
@@ -37,5 +37,6 @@ Get Involved
We welcome your feedback and ideas. Here's how to reach us with feedback and questions:
- Join the `#ansible-awx` channel on irc.libera.chat
- Join the [mailing list](https://groups.google.com/forum/#!forum/awx-project)
- Join the [Ansible AWX channel on Matrix](https://matrix.to/#/#awx:ansible.com)
- Join the [Ansible Community Forum](https://forum.ansible.com)
- Join the [mailing list](https://groups.google.com/forum/#!forum/awx-project)

View File

@@ -52,39 +52,14 @@ try:
except ImportError: # pragma: no cover
MODE = 'production'
import hashlib
try:
import django # noqa: F401
HAS_DJANGO = True
except ImportError:
HAS_DJANGO = False
pass
else:
from django.db.backends.base import schema
from django.db.models import indexes
from django.db.backends.utils import names_digest
from django.db import connection
if HAS_DJANGO is True:
# See upgrade blocker note in requirements/README.md
try:
names_digest('foo', 'bar', 'baz', length=8)
except ValueError:
def names_digest(*args, length):
"""
Generate a 32-bit digest of a set of arguments that can be used to shorten
identifying names. Support for use in FIPS environments.
"""
h = hashlib.md5(usedforsecurity=False)
for arg in args:
h.update(arg.encode())
return h.hexdigest()[:length]
schema.names_digest = names_digest
indexes.names_digest = names_digest
def find_commands(management_dir):
# Modified version of function from django/core/management/__init__.py.

View File

@@ -418,6 +418,10 @@ class SettingsWrapper(UserSettingsHolder):
"""Get value while accepting the in-memory cache if key is available"""
with _ctit_db_wrapper(trans_safe=True):
return self._get_local(name)
# If the last line did not return, that means we hit a database error
# in that case, we should not have a local cache value
# thus, return empty as a signal to use the default
return empty
def __getattr__(self, name):
value = empty

View File

@@ -13,6 +13,7 @@ from unittest import mock
from django.conf import LazySettings
from django.core.cache.backends.locmem import LocMemCache
from django.core.exceptions import ImproperlyConfigured
from django.db.utils import Error as DBError
from django.utils.translation import gettext_lazy as _
import pytest
@@ -331,3 +332,18 @@ def test_in_memory_cache_works(settings):
with mock.patch.object(settings, '_get_local') as mock_get:
assert settings.AWX_VAR == 'DEFAULT'
mock_get.assert_not_called()
@pytest.mark.defined_in_file(AWX_VAR=[])
def test_getattr_with_database_error(settings):
"""
If a setting is defined via the registry and has a null-ish default which is not None
then referencing that setting during a database outage should give that default
this is regression testing for a bug where it would return None
"""
settings.registry.register('AWX_VAR', field_class=fields.StringListField, default=[], category=_('System'), category_slug='system')
settings._awx_conf_memoizedcache.clear()
with mock.patch('django.db.backends.base.base.BaseDatabaseWrapper.ensure_connection') as mock_ensure:
mock_ensure.side_effect = DBError('for test')
assert settings.AWX_VAR == []

View File

@@ -54,7 +54,9 @@ tss_inputs = {
def tss_backend(**kwargs):
if kwargs.get("domain"):
authorizer = DomainPasswordGrantAuthorizer(kwargs['server_url'], kwargs['username'], kwargs['password'], kwargs['domain'])
authorizer = DomainPasswordGrantAuthorizer(
base_url=kwargs['server_url'], username=kwargs['username'], domain=kwargs['domain'], password=kwargs['password']
)
else:
authorizer = PasswordGrantAuthorizer(kwargs['server_url'], kwargs['username'], kwargs['password'])
secret_server = SecretServer(kwargs['server_url'], authorizer)

View File

@@ -195,14 +195,35 @@ class Command(BaseCommand):
delete_meta.delete_jobs()
return (delete_meta.jobs_no_delete_count, delete_meta.jobs_to_delete_count)
def _cascade_delete_job_events(self, model, pk_list):
def _handle_unpartitioned_events(self, model, pk_list):
"""
If unpartitioned job events remain, it will cascade those from jobs in pk_list
if the unpartitioned table is no longer necessary, it will drop the table
"""
tblname = unified_job_class_to_event_table_name(model)
rel_name = model().event_parent_key
with connection.cursor() as cursor:
cursor.execute(f"SELECT 1 FROM pg_tables WHERE tablename = '_unpartitioned_{tblname}';")
row = cursor.fetchone()
if row is None:
self.logger.debug(f'Unpartitioned table for {rel_name} does not exist, you are fully migrated')
return
if pk_list:
with connection.cursor() as cursor:
tblname = unified_job_class_to_event_table_name(model)
pk_list_csv = ','.join(map(str, pk_list))
rel_name = model().event_parent_key
cursor.execute(f"DELETE FROM _unpartitioned_{tblname} WHERE {rel_name} IN ({pk_list_csv})")
with connection.cursor() as cursor:
# same as UnpartitionedJobEvent.objects.aggregate(Max('created'))
cursor.execute(f'SELECT MAX("_unpartitioned_{tblname}"."created") FROM "_unpartitioned_{tblname}"')
row = cursor.fetchone()
last_created = row[0]
if last_created:
self.logger.info(f'Last event created in _unpartitioned_{tblname} was {last_created.isoformat()}')
else:
self.logger.info(f'Table _unpartitioned_{tblname} has no events in it')
if (last_created is None) or (last_created < self.cutoff):
self.logger.warning(f'Dropping table _unpartitioned_{tblname} since no records are newer than {self.cutoff}')
cursor.execute(f'DROP TABLE _unpartitioned_{tblname}')
def cleanup_jobs(self):
batch_size = 100000
@@ -227,7 +248,7 @@ class Command(BaseCommand):
_, results = qs_batch.delete()
deleted += results['main.Job']
self._cascade_delete_job_events(Job, pk_list)
self._handle_unpartitioned_events(Job, pk_list)
return skipped, deleted
@@ -250,7 +271,7 @@ class Command(BaseCommand):
deleted += 1
if not self.dry_run:
self._cascade_delete_job_events(AdHocCommand, pk_list)
self._handle_unpartitioned_events(AdHocCommand, pk_list)
skipped += AdHocCommand.objects.filter(created__gte=self.cutoff).count()
return skipped, deleted
@@ -278,7 +299,7 @@ class Command(BaseCommand):
deleted += 1
if not self.dry_run:
self._cascade_delete_job_events(ProjectUpdate, pk_list)
self._handle_unpartitioned_events(ProjectUpdate, pk_list)
skipped += ProjectUpdate.objects.filter(created__gte=self.cutoff).count()
return skipped, deleted
@@ -306,7 +327,7 @@ class Command(BaseCommand):
deleted += 1
if not self.dry_run:
self._cascade_delete_job_events(InventoryUpdate, pk_list)
self._handle_unpartitioned_events(InventoryUpdate, pk_list)
skipped += InventoryUpdate.objects.filter(created__gte=self.cutoff).count()
return skipped, deleted
@@ -330,7 +351,7 @@ class Command(BaseCommand):
deleted += 1
if not self.dry_run:
self._cascade_delete_job_events(SystemJob, pk_list)
self._handle_unpartitioned_events(SystemJob, pk_list)
skipped += SystemJob.objects.filter(created__gte=self.cutoff).count()
return skipped, deleted

View File

@@ -125,14 +125,15 @@ class InstanceManager(models.Manager):
with advisory_lock('instance_registration_%s' % hostname):
if settings.AWX_AUTO_DEPROVISION_INSTANCES:
# detect any instances with the same IP address.
# if one exists, set it to None
inst_conflicting_ip = self.filter(ip_address=ip_address).exclude(hostname=hostname)
if inst_conflicting_ip.exists():
for other_inst in inst_conflicting_ip:
other_hostname = other_inst.hostname
other_inst.ip_address = None
other_inst.save(update_fields=['ip_address'])
logger.warning("IP address {0} conflict detected, ip address unset for host {1}.".format(ip_address, other_hostname))
# if one exists, set it to ""
if ip_address:
inst_conflicting_ip = self.filter(ip_address=ip_address).exclude(hostname=hostname)
if inst_conflicting_ip.exists():
for other_inst in inst_conflicting_ip:
other_hostname = other_inst.hostname
other_inst.ip_address = ""
other_inst.save(update_fields=['ip_address'])
logger.warning("IP address {0} conflict detected, ip address unset for host {1}.".format(ip_address, other_hostname))
# Return existing instance that matches hostname or UUID (default to UUID)
if node_uuid is not None and node_uuid != UUID_DEFAULT and self.filter(uuid=node_uuid).exists():

View File

@@ -124,6 +124,13 @@ class TaskBase:
self.record_aggregate_metrics()
sys.exit(1)
def get_local_metrics(self):
data = {}
for k, metric in self.subsystem_metrics.METRICS.items():
if k.startswith(self.prefix) and metric.metric_has_changed:
data[k[len(self.prefix) + 1 :]] = metric.current_value
return data
def schedule(self):
# Always be able to restore the original signal handler if we finish
original_sigusr1 = signal.getsignal(signal.SIGUSR1)
@@ -146,10 +153,14 @@ class TaskBase:
signal.signal(signal.SIGUSR1, original_sigusr1)
commit_start = time.time()
logger.debug(f"Commiting {self.prefix} Scheduler changes")
if self.prefix == "task_manager":
self.subsystem_metrics.set(f"{self.prefix}_commit_seconds", time.time() - commit_start)
local_metrics = self.get_local_metrics()
self.record_aggregate_metrics()
logger.debug(f"Finishing {self.prefix} Scheduler")
logger.debug(f"Finished {self.prefix} Scheduler, timing data:\n{local_metrics}")
class WorkflowManager(TaskBase):

View File

@@ -23,7 +23,7 @@ from django.core.exceptions import ObjectDoesNotExist, FieldDoesNotExist
from django.utils.dateparse import parse_datetime
from django.utils.translation import gettext_lazy as _
from django.utils.functional import cached_property
from django.db import connection, transaction, ProgrammingError
from django.db import connection, transaction, ProgrammingError, IntegrityError
from django.db.models.fields.related import ForeignObjectRel, ManyToManyField
from django.db.models.fields.related_descriptors import ForwardManyToOneDescriptor, ManyToManyDescriptor
from django.db.models.query import QuerySet
@@ -1164,13 +1164,24 @@ def create_partition(tblname, start=None):
try:
with transaction.atomic():
with connection.cursor() as cursor:
cursor.execute(f"SELECT EXISTS (SELECT FROM information_schema.tables WHERE table_name = '{tblname}_{partition_label}');")
row = cursor.fetchone()
if row is not None:
for val in row: # should only have 1
if val is True:
logger.debug(f'Event partition table {tblname}_{partition_label} already exists')
return
cursor.execute(
f'CREATE TABLE IF NOT EXISTS {tblname}_{partition_label} '
f'PARTITION OF {tblname} '
f'FOR VALUES FROM (\'{start_timestamp}\') to (\'{end_timestamp}\');'
f'CREATE TABLE {tblname}_{partition_label} (LIKE {tblname} INCLUDING DEFAULTS INCLUDING CONSTRAINTS); '
f'ALTER TABLE {tblname} ATTACH PARTITION {tblname}_{partition_label} '
f'FOR VALUES FROM (\'{start_timestamp}\') TO (\'{end_timestamp}\');'
)
except ProgrammingError as e:
logger.debug(f'Caught known error due to existing partition: {e}')
except (ProgrammingError, IntegrityError) as e:
if 'already exists' in str(e):
logger.info(f'Caught known error due to partition creation race: {e}')
else:
raise
def cleanup_new_process(func):

View File

@@ -33,6 +33,7 @@ import Roles from './models/Roles';
import Root from './models/Root';
import Schedules from './models/Schedules';
import Settings from './models/Settings';
import SubscriptionUsage from './models/SubscriptionUsage';
import SystemJobs from './models/SystemJobs';
import SystemJobTemplates from './models/SystemJobTemplates';
import Teams from './models/Teams';
@@ -82,6 +83,7 @@ const RolesAPI = new Roles();
const RootAPI = new Root();
const SchedulesAPI = new Schedules();
const SettingsAPI = new Settings();
const SubscriptionUsageAPI = new SubscriptionUsage();
const SystemJobsAPI = new SystemJobs();
const SystemJobTemplatesAPI = new SystemJobTemplates();
const TeamsAPI = new Teams();
@@ -132,6 +134,7 @@ export {
RootAPI,
SchedulesAPI,
SettingsAPI,
SubscriptionUsageAPI,
SystemJobsAPI,
SystemJobTemplatesAPI,
TeamsAPI,

View File

@@ -0,0 +1,16 @@
import Base from '../Base';
class SubscriptionUsage extends Base {
constructor(http) {
super(http);
this.baseUrl = 'api/v2/host_metric_summary_monthly/';
}
readSubscriptionUsageChart(dateRange) {
return this.http.get(
`${this.baseUrl}?date__gte=${dateRange}&order_by=date&page_size=100`
);
}
}
export default SubscriptionUsage;

View File

@@ -75,6 +75,7 @@ function SessionProvider({ children }) {
const [sessionCountdown, setSessionCountdown] = useState(0);
const [authRedirectTo, setAuthRedirectTo] = useState('/');
const [isUserBeingLoggedOut, setIsUserBeingLoggedOut] = useState(false);
const [isRedirectLinkReceived, setIsRedirectLinkReceived] = useState(false);
const {
request: fetchLoginRedirectOverride,
@@ -99,6 +100,7 @@ function SessionProvider({ children }) {
const logout = useCallback(async () => {
setIsUserBeingLoggedOut(true);
setIsRedirectLinkReceived(false);
if (!isSessionExpired.current) {
setAuthRedirectTo('/logout');
window.localStorage.setItem(SESSION_USER_ID, null);
@@ -112,6 +114,18 @@ function SessionProvider({ children }) {
return <Redirect to="/login" />;
}, [setSessionTimeout, setSessionCountdown]);
useEffect(() => {
const unlisten = history.listen((location, action) => {
if (action === 'POP') {
setIsRedirectLinkReceived(true);
}
});
return () => {
unlisten(); // ensure that the listener is removed when the component unmounts
};
}, [history]);
useEffect(() => {
if (!isAuthenticated(document.cookie)) {
return () => {};
@@ -176,6 +190,8 @@ function SessionProvider({ children }) {
logout,
sessionCountdown,
setAuthRedirectTo,
isRedirectLinkReceived,
setIsRedirectLinkReceived,
}),
[
authRedirectTo,
@@ -186,6 +202,8 @@ function SessionProvider({ children }) {
logout,
sessionCountdown,
setAuthRedirectTo,
isRedirectLinkReceived,
setIsRedirectLinkReceived,
]
);

View File

@@ -17,6 +17,7 @@ import Organizations from 'screens/Organization';
import Projects from 'screens/Project';
import Schedules from 'screens/Schedule';
import Settings from 'screens/Setting';
import SubscriptionUsage from 'screens/SubscriptionUsage/SubscriptionUsage';
import Teams from 'screens/Team';
import Templates from 'screens/Template';
import TopologyView from 'screens/TopologyView';
@@ -61,6 +62,11 @@ function getRouteConfig(userProfile = {}) {
path: '/host_metrics',
screen: HostMetrics,
},
{
title: <Trans>Subscription Usage</Trans>,
path: '/subscription_usage',
screen: SubscriptionUsage,
},
],
},
{
@@ -189,6 +195,7 @@ function getRouteConfig(userProfile = {}) {
'unique_managed_hosts'
) {
deleteRoute('host_metrics');
deleteRoute('subscription_usage');
}
if (userProfile?.isSuperUser || userProfile?.isSystemAuditor)
return routeConfig;
@@ -197,6 +204,7 @@ function getRouteConfig(userProfile = {}) {
deleteRoute('management_jobs');
deleteRoute('topology_view');
deleteRoute('instances');
deleteRoute('subscription_usage');
if (userProfile?.isOrgAdmin) return routeConfig;
if (!userProfile?.isNotificationAdmin) deleteRoute('notification_templates');

View File

@@ -31,6 +31,7 @@ describe('getRouteConfig', () => {
'/activity_stream',
'/workflow_approvals',
'/host_metrics',
'/subscription_usage',
'/templates',
'/credentials',
'/projects',
@@ -61,6 +62,7 @@ describe('getRouteConfig', () => {
'/activity_stream',
'/workflow_approvals',
'/host_metrics',
'/subscription_usage',
'/templates',
'/credentials',
'/projects',

View File

@@ -45,7 +45,8 @@ const Login = styled(PFLogin)`
function AWXLogin({ alt, isAuthenticated }) {
const [userId, setUserId] = useState(null);
const { authRedirectTo, isSessionExpired } = useSession();
const { authRedirectTo, isSessionExpired, isRedirectLinkReceived } =
useSession();
const isNewUser = useRef(true);
const hasVerifiedUser = useRef(false);
@@ -179,7 +180,8 @@ function AWXLogin({ alt, isAuthenticated }) {
return <LoadingSpinner />;
}
if (userId && hasVerifiedUser.current) {
const redirect = isNewUser.current ? '/home' : authRedirectTo;
const redirect =
isNewUser.current && !isRedirectLinkReceived ? '/home' : authRedirectTo;
return <Redirect to={redirect} />;
}

View File

@@ -0,0 +1,319 @@
import React, { useEffect, useCallback } from 'react';
import { string, number, shape, arrayOf } from 'prop-types';
import * as d3 from 'd3';
import { t } from '@lingui/macro';
import { PageContextConsumer } from '@patternfly/react-core';
import UsageChartTooltip from './UsageChartTooltip';
function UsageChart({ id, data, height, pageContext }) {
const { isNavOpen } = pageContext;
// Methods
const draw = useCallback(() => {
const margin = { top: 15, right: 25, bottom: 105, left: 70 };
const getWidth = () => {
let width;
// This is in an a try/catch due to an error from jest.
// Even though the d3.select returns a valid selector with
// style function, it says it is null in the test
try {
width =
parseInt(d3.select(`#${id}`).style('width'), 10) -
margin.left -
margin.right || 700;
} catch (error) {
width = 700;
}
return width;
};
// Clear our chart container element first
d3.selectAll(`#${id} > *`).remove();
const width = getWidth();
function transition(path) {
path.transition().duration(1000).attrTween('stroke-dasharray', tweenDash);
}
function tweenDash(...params) {
const l = params[2][params[1]].getTotalLength();
const i = d3.interpolateString(`0,${l}`, `${l},${l}`);
return (val) => i(val);
}
const x = d3.scaleTime().rangeRound([0, width]);
const y = d3.scaleLinear().range([height, 0]);
// [consumed, capacity]
const colors = d3.scaleOrdinal(['#06C', '#C9190B']);
const svg = d3
.select(`#${id}`)
.append('svg')
.attr('width', width + margin.left + margin.right)
.attr('height', height + margin.top + margin.bottom)
.attr('z', 100)
.append('g')
.attr('id', 'chart-container')
.attr('transform', `translate(${margin.left}, ${margin.top})`);
// Tooltip
const tooltip = new UsageChartTooltip({
svg: `#${id}`,
colors,
label: t`Hosts`,
});
const parseTime = d3.timeParse('%Y-%m-%d');
const formattedData = data?.reduce(
(formatted, { date, license_consumed, license_capacity }) => {
const MONTH = parseTime(date);
const CONSUMED = +license_consumed;
const CAPACITY = +license_capacity;
return formatted.concat({ MONTH, CONSUMED, CAPACITY });
},
[]
);
// Scale the range of the data
const largestY = formattedData?.reduce((a_max, b) => {
const b_max = Math.max(b.CONSUMED > b.CAPACITY ? b.CONSUMED : b.CAPACITY);
return a_max > b_max ? a_max : b_max;
}, 0);
x.domain(d3.extent(formattedData, (d) => d.MONTH));
y.domain([
0,
largestY > 4 ? largestY + Math.max(largestY / 10, 1) : 5,
]).nice();
const capacityLine = d3
.line()
.curve(d3.curveMonotoneX)
.x((d) => x(d.MONTH))
.y((d) => y(d.CAPACITY));
const consumedLine = d3
.line()
.curve(d3.curveMonotoneX)
.x((d) => x(d.MONTH))
.y((d) => y(d.CONSUMED));
// Add the Y Axis
svg
.append('g')
.attr('class', 'y-axis')
.call(
d3
.axisLeft(y)
.ticks(
largestY > 3
? Math.min(largestY + Math.max(largestY / 10, 1), 10)
: 5
)
.tickSize(-width)
.tickFormat(d3.format('d'))
)
.selectAll('line')
.attr('stroke', '#d7d7d7');
svg.selectAll('.y-axis .tick text').attr('x', -5).attr('font-size', '14');
// text label for the y axis
svg
.append('text')
.attr('transform', 'rotate(-90)')
.attr('y', 0 - margin.left)
.attr('x', 0 - height / 2)
.attr('dy', '1em')
.style('text-anchor', 'middle')
.text(t`Unique Hosts`);
// Add the X Axis
let ticks;
const maxTicks = Math.round(
formattedData.length / (formattedData.length / 2)
);
ticks = formattedData.map((d) => d.MONTH);
if (formattedData.length === 13) {
ticks = formattedData
.map((d, i) => (i % maxTicks === 0 ? d.MONTH : undefined))
.filter((item) => item);
}
svg.select('.domain').attr('stroke', '#d7d7d7');
svg
.append('g')
.attr('class', 'x-axis')
.attr('transform', `translate(0, ${height})`)
.call(
d3
.axisBottom(x)
.tickValues(ticks)
.tickSize(-height)
.tickFormat(d3.timeFormat('%m/%y'))
)
.selectAll('line')
.attr('stroke', '#d7d7d7');
svg
.selectAll('.x-axis .tick text')
.attr('x', -25)
.attr('font-size', '14')
.attr('transform', 'rotate(-65)');
// text label for the x axis
svg
.append('text')
.attr(
'transform',
`translate(${width / 2} , ${height + margin.top + 50})`
)
.style('text-anchor', 'middle')
.text(t`Month`);
const vertical = svg
.append('path')
.attr('class', 'mouse-line')
.style('stroke', 'black')
.style('stroke-width', '3px')
.style('stroke-dasharray', '3, 3')
.style('opacity', '0');
const handleMouseOver = (event, d) => {
tooltip.handleMouseOver(event, d);
// show vertical line
vertical.transition().style('opacity', '1');
};
const handleMouseMove = function mouseMove(event) {
const [pointerX] = d3.pointer(event);
vertical.attr('d', () => `M${pointerX},${height} ${pointerX},${0}`);
};
const handleMouseOut = () => {
// hide tooltip
tooltip.handleMouseOut();
// hide vertical line
vertical.transition().style('opacity', 0);
};
const dateFormat = d3.timeFormat('%m/%y');
// Add the consumed line path
svg
.append('path')
.data([formattedData])
.attr('class', 'line')
.style('fill', 'none')
.style('stroke', () => colors(1))
.attr('stroke-width', 2)
.attr('d', consumedLine)
.call(transition);
// create our consumed line circles
svg
.selectAll('dot')
.data(formattedData)
.enter()
.append('circle')
.attr('r', 3)
.style('stroke', () => colors(1))
.style('fill', () => colors(1))
.attr('cx', (d) => x(d.MONTH))
.attr('cy', (d) => y(d.CONSUMED))
.attr('id', (d) => `consumed-dot-${dateFormat(d.MONTH)}`)
.on('mouseover', (event, d) => handleMouseOver(event, d))
.on('mousemove', handleMouseMove)
.on('mouseout', handleMouseOut);
// Add the capacity line path
svg
.append('path')
.data([formattedData])
.attr('class', 'line')
.style('fill', 'none')
.style('stroke', () => colors(0))
.attr('stroke-width', 2)
.attr('d', capacityLine)
.call(transition);
// create our capacity line circles
svg
.selectAll('dot')
.data(formattedData)
.enter()
.append('circle')
.attr('r', 3)
.style('stroke', () => colors(0))
.style('fill', () => colors(0))
.attr('cx', (d) => x(d.MONTH))
.attr('cy', (d) => y(d.CAPACITY))
.attr('id', (d) => `capacity-dot-${dateFormat(d.MONTH)}`)
.on('mouseover', handleMouseOver)
.on('mousemove', handleMouseMove)
.on('mouseout', handleMouseOut);
// Create legend
const legend_keys = [t`Subscriptions consumed`, t`Subscription capacity`];
let totalWidth = width / 2 - 175;
const lineLegend = svg
.selectAll('.lineLegend')
.data(legend_keys)
.enter()
.append('g')
.attr('class', 'lineLegend')
.each(function formatLegend() {
const current = d3.select(this);
current.attr('transform', `translate(${totalWidth}, ${height + 90})`);
totalWidth += 200;
});
lineLegend
.append('text')
.text((d) => d)
.attr('font-size', '14')
.attr('transform', 'translate(15,9)'); // align texts with boxes
lineLegend
.append('rect')
.attr('fill', (d) => colors(d))
.attr('width', 10)
.attr('height', 10);
}, [data, height, id]);
useEffect(() => {
draw();
}, [draw, isNavOpen]);
useEffect(() => {
function handleResize() {
draw();
}
window.addEventListener('resize', handleResize);
handleResize();
return () => window.removeEventListener('resize', handleResize);
}, [draw]);
return <div id={id} />;
}
UsageChart.propTypes = {
id: string.isRequired,
data: arrayOf(shape({})).isRequired,
height: number.isRequired,
};
const withPageContext = (Component) =>
function contextComponent(props) {
return (
<PageContextConsumer>
{(pageContext) => <Component {...props} pageContext={pageContext} />}
</PageContextConsumer>
);
};
export default withPageContext(UsageChart);

View File

@@ -0,0 +1,177 @@
import * as d3 from 'd3';
import { t } from '@lingui/macro';
class UsageChartTooltip {
constructor(opts) {
this.label = opts.label;
this.svg = opts.svg;
this.colors = opts.colors;
this.draw();
}
draw() {
this.toolTipBase = d3.select(`${this.svg} > svg`).append('g');
this.toolTipBase.attr('id', 'chart-tooltip');
this.toolTipBase.attr('overflow', 'visible');
this.toolTipBase.style('opacity', 0);
this.toolTipBase.style('pointer-events', 'none');
this.toolTipBase.attr('transform', 'translate(100, 100)');
this.boxWidth = 200;
this.textWidthThreshold = 20;
this.toolTipPoint = this.toolTipBase
.append('rect')
.attr('transform', 'translate(10, -10) rotate(45)')
.attr('x', 0)
.attr('y', 0)
.attr('height', 20)
.attr('width', 20)
.attr('fill', '#393f44');
this.boundingBox = this.toolTipBase
.append('rect')
.attr('x', 10)
.attr('y', -41)
.attr('rx', 2)
.attr('height', 82)
.attr('width', this.boxWidth)
.attr('fill', '#393f44');
this.circleBlue = this.toolTipBase
.append('circle')
.attr('cx', 26)
.attr('cy', 0)
.attr('r', 7)
.attr('stroke', 'white')
.attr('fill', this.colors(1));
this.circleRed = this.toolTipBase
.append('circle')
.attr('cx', 26)
.attr('cy', 26)
.attr('r', 7)
.attr('stroke', 'white')
.attr('fill', this.colors(0));
this.consumedText = this.toolTipBase
.append('text')
.attr('x', 43)
.attr('y', 4)
.attr('font-size', 12)
.attr('fill', 'white')
.text(t`Subscriptions consumed`);
this.capacityText = this.toolTipBase
.append('text')
.attr('x', 43)
.attr('y', 28)
.attr('font-size', 12)
.attr('fill', 'white')
.text(t`Subscription capacity`);
this.icon = this.toolTipBase
.append('text')
.attr('fill', 'white')
.attr('stroke', 'white')
.attr('x', 24)
.attr('y', 30)
.attr('font-size', 12);
this.consumed = this.toolTipBase
.append('text')
.attr('fill', 'white')
.attr('font-size', 12)
.attr('x', 122)
.attr('y', 4)
.attr('id', 'consumed-count')
.text('0');
this.capacity = this.toolTipBase
.append('text')
.attr('fill', 'white')
.attr('font-size', 12)
.attr('x', 122)
.attr('y', 28)
.attr('id', 'capacity-count')
.text('0');
this.date = this.toolTipBase
.append('text')
.attr('fill', 'white')
.attr('stroke', 'white')
.attr('x', 20)
.attr('y', -21)
.attr('font-size', 12);
}
handleMouseOver = (event, data) => {
let consumed = 0;
let capacity = 0;
const [x, y] = d3.pointer(event);
const tooltipPointerX = x + 75;
const formatTooltipDate = d3.timeFormat('%m/%y');
if (!event) {
return;
}
const toolTipWidth = this.toolTipBase.node().getBoundingClientRect().width;
const chartWidth = d3
.select(`${this.svg}> svg`)
.node()
.getBoundingClientRect().width;
const overflow = 100 - (toolTipWidth / chartWidth) * 100;
const flipped = overflow < (tooltipPointerX / chartWidth) * 100;
if (data) {
consumed = data.CONSUMED || 0;
capacity = data.CAPACITY || 0;
this.date.text(formatTooltipDate(data.MONTH || null));
}
this.capacity.text(`${capacity}`);
this.consumed.text(`${consumed}`);
this.consumedTextWidth = this.consumed.node().getComputedTextLength();
this.capacityTextWidth = this.capacity.node().getComputedTextLength();
const maxTextPerc = (this.jobsWidth / this.boxWidth) * 100;
const threshold = 40;
const overage = maxTextPerc / threshold;
let adjustedWidth;
if (maxTextPerc > threshold) {
adjustedWidth = this.boxWidth * overage;
} else {
adjustedWidth = this.boxWidth;
}
this.boundingBox.attr('width', adjustedWidth);
this.toolTipBase.attr('transform', `translate(${tooltipPointerX}, ${y})`);
if (flipped) {
this.toolTipPoint.attr('transform', 'translate(-20, -10) rotate(45)');
this.boundingBox.attr('x', -adjustedWidth - 20);
this.circleBlue.attr('cx', -adjustedWidth);
this.circleRed.attr('cx', -adjustedWidth);
this.icon.attr('x', -adjustedWidth - 2);
this.consumedText.attr('x', -adjustedWidth + 17);
this.capacityText.attr('x', -adjustedWidth + 17);
this.consumed.attr('x', -this.consumedTextWidth - 20 - 12);
this.capacity.attr('x', -this.capacityTextWidth - 20 - 12);
this.date.attr('x', -adjustedWidth - 5);
} else {
this.toolTipPoint.attr('transform', 'translate(10, -10) rotate(45)');
this.boundingBox.attr('x', 10);
this.circleBlue.attr('cx', 26);
this.circleRed.attr('cx', 26);
this.icon.attr('x', 24);
this.consumedText.attr('x', 43);
this.capacityText.attr('x', 43);
this.consumed.attr('x', adjustedWidth - this.consumedTextWidth);
this.capacity.attr('x', adjustedWidth - this.capacityTextWidth);
this.date.attr('x', 20);
}
this.toolTipBase.style('opacity', 1);
this.toolTipBase.interrupt();
};
handleMouseOut = () => {
this.toolTipBase
.transition()
.delay(15)
.style('opacity', 0)
.style('pointer-events', 'none');
};
}
export default UsageChartTooltip;

View File

@@ -0,0 +1,53 @@
import React from 'react';
import styled from 'styled-components';
import { t, Trans } from '@lingui/macro';
import { Banner, Card, PageSection } from '@patternfly/react-core';
import { InfoCircleIcon } from '@patternfly/react-icons';
import { useConfig } from 'contexts/Config';
import useBrandName from 'hooks/useBrandName';
import ScreenHeader from 'components/ScreenHeader';
import SubscriptionUsageChart from './SubscriptionUsageChart';
const MainPageSection = styled(PageSection)`
padding-top: 24px;
padding-bottom: 0;
& .spacer {
margin-bottom: var(--pf-global--spacer--lg);
}
`;
function SubscriptionUsage() {
const config = useConfig();
const brandName = useBrandName();
return (
<>
{config?.ui_next && (
<Banner variant="info">
<Trans>
<p>
<InfoCircleIcon /> A tech preview of the new {brandName} user
interface can be found <a href="/ui_next/dashboard">here</a>.
</p>
</Trans>
</Banner>
)}
<ScreenHeader
streamType="all"
breadcrumbConfig={{ '/subscription_usage': t`Subscription Usage` }}
/>
<MainPageSection>
<div className="spacer">
<Card id="dashboard-main-container">
<SubscriptionUsageChart />
</Card>
</div>
</MainPageSection>
</>
);
}
export default SubscriptionUsage;

View File

@@ -0,0 +1,167 @@
import React, { useCallback, useEffect, useState } from 'react';
import styled from 'styled-components';
import { t } from '@lingui/macro';
import {
Card,
CardHeader,
CardActions,
CardBody,
CardTitle,
Flex,
FlexItem,
PageSection,
Select,
SelectVariant,
SelectOption,
Text,
} from '@patternfly/react-core';
import useRequest from 'hooks/useRequest';
import { SubscriptionUsageAPI } from 'api';
import { useUserProfile } from 'contexts/Config';
import ContentLoading from 'components/ContentLoading';
import UsageChart from './ChartComponents/UsageChart';
const GraphCardHeader = styled(CardHeader)`
margin-bottom: var(--pf-global--spacer--lg);
`;
const ChartCardTitle = styled(CardTitle)`
padding-right: 24px;
font-size: 20px;
font-weight: var(--pf-c-title--m-xl--FontWeight);
`;
const CardText = styled(Text)`
padding-right: 24px;
`;
const GraphCardActions = styled(CardActions)`
margin-left: initial;
padding-left: 0;
`;
function SubscriptionUsageChart() {
const [isPeriodDropdownOpen, setIsPeriodDropdownOpen] = useState(false);
const [periodSelection, setPeriodSelection] = useState('year');
const userProfile = useUserProfile();
const calculateDateRange = () => {
const today = new Date();
let date = '';
switch (periodSelection) {
case 'year':
date =
today.getMonth() < 10
? `${today.getFullYear() - 1}-0${today.getMonth() + 1}-01`
: `${today.getFullYear() - 1}-${today.getMonth() + 1}-01`;
break;
case 'two_years':
date =
today.getMonth() < 10
? `${today.getFullYear() - 2}-0${today.getMonth() + 1}-01`
: `${today.getFullYear() - 2}-${today.getMonth() + 1}-01`;
break;
case 'three_years':
date =
today.getMonth() < 10
? `${today.getFullYear() - 3}-0${today.getMonth() + 1}-01`
: `${today.getFullYear() - 3}-${today.getMonth() + 1}-01`;
break;
default:
date =
today.getMonth() < 10
? `${today.getFullYear() - 1}-0${today.getMonth() + 1}-01`
: `${today.getFullYear() - 1}-${today.getMonth() + 1}-01`;
break;
}
return date;
};
const {
isLoading,
result: subscriptionUsageChartData,
request: fetchSubscriptionUsageChart,
} = useRequest(
useCallback(async () => {
const data = await SubscriptionUsageAPI.readSubscriptionUsageChart(
calculateDateRange()
);
return data.data.results;
}, [periodSelection]),
[]
);
useEffect(() => {
fetchSubscriptionUsageChart();
}, [fetchSubscriptionUsageChart, periodSelection]);
if (isLoading) {
return (
<PageSection>
<Card>
<ContentLoading />
</Card>
</PageSection>
);
}
return (
<Card>
<Flex style={{ justifyContent: 'space-between' }}>
<FlexItem>
<ChartCardTitle>{t`Subscription Compliance`}</ChartCardTitle>
</FlexItem>
<FlexItem>
<CardText component="small">
{t`Last recalculation date:`}{' '}
{userProfile.systemConfig.HOST_METRIC_SUMMARY_TASK_LAST_TS.slice(
0,
10
)}
</CardText>
</FlexItem>
</Flex>
<GraphCardHeader>
<GraphCardActions>
<Select
variant={SelectVariant.single}
placeholderText={t`Select period`}
aria-label={t`Select period`}
typeAheadAriaLabel={t`Select period`}
className="periodSelect"
onToggle={setIsPeriodDropdownOpen}
onSelect={(event, selection) => {
setIsPeriodDropdownOpen(false);
setPeriodSelection(selection);
}}
selections={periodSelection}
isOpen={isPeriodDropdownOpen}
noResultsFoundText={t`No results found`}
ouiaId="subscription-usage-period-select"
>
<SelectOption key="year" value="year">
{t`Past year`}
</SelectOption>
<SelectOption key="two_years" value="two_years">
{t`Past two years`}
</SelectOption>
<SelectOption key="three_years" value="three_years">
{t`Past three years`}
</SelectOption>
</Select>
</GraphCardActions>
</GraphCardHeader>
<CardBody>
<UsageChart
period={periodSelection}
height={600}
id="d3-usage-line-chart-root"
data={subscriptionUsageChartData}
/>
</CardBody>
</Card>
);
}
export default SubscriptionUsageChart;

View File

@@ -2,16 +2,9 @@ export default function getDocsBaseUrl(config) {
let version = 'latest';
const licenseType = config?.license_info?.license_type;
if (licenseType && licenseType !== 'open') {
if (config?.version) {
if (parseFloat(config?.version.split('-')[0]) >= 4.3) {
version = parseFloat(config?.version.split('-')[0]);
} else {
version = config?.version.split('-')[0];
}
}
} else {
version = 'latest';
if (licenseType && licenseType !== 'open' && config?.version) {
version = parseFloat(config?.version.split('-')[0]).toFixed(1);
}
return `https://docs.ansible.com/automation-controller/${version}`;
}

View File

@@ -6,7 +6,7 @@ describe('getDocsBaseUrl', () => {
license_info: {
license_type: 'open',
},
version: '18.0.0',
version: '18.4.4',
});
expect(result).toEqual(
@@ -19,11 +19,11 @@ describe('getDocsBaseUrl', () => {
license_info: {
license_type: 'enterprise',
},
version: '4.0.0',
version: '18.4.4',
});
expect(result).toEqual(
'https://docs.ansible.com/automation-controller/4.0.0'
'https://docs.ansible.com/automation-controller/18.4'
);
});
@@ -32,17 +32,17 @@ describe('getDocsBaseUrl', () => {
license_info: {
license_type: 'enterprise',
},
version: '4.0.0-beta',
version: '7.0.0-beta',
});
expect(result).toEqual(
'https://docs.ansible.com/automation-controller/4.0.0'
'https://docs.ansible.com/automation-controller/7.0'
);
});
it('should return latest version if license info missing', () => {
const result = getDocsBaseUrl({
version: '18.0.0',
version: '18.4.4',
});
expect(result).toEqual(

View File

@@ -273,6 +273,26 @@ def main():
# If the state was absent we can let the module delete it if needed, the module will handle exiting from this
module.delete_if_needed(existing_item)
# We need to clear out the name from the search fields so we can use name_or_id in the following searches
if 'name' in search_fields:
del search_fields['name']
# Create the data that gets sent for create and update
new_fields = {}
if execution_environment is not None:
if execution_environment == '':
new_fields['execution_environment'] = ''
else:
ee = module.get_one('execution_environments', name_or_id=execution_environment, **{'data': search_fields})
if ee is None:
ee2 = module.get_one('execution_environments', name_or_id=execution_environment)
if ee2 is None or ee2['organization'] is not None:
module.fail_json(msg='could not find execution_environment entry with name {0}'.format(execution_environment))
else:
new_fields['execution_environment'] = ee2['id']
else:
new_fields['execution_environment'] = ee['id']
association_fields = {}
if credentials is not None:
@@ -280,9 +300,9 @@ def main():
for item in credentials:
association_fields['credentials'].append(module.resolve_name_to_id('credentials', item))
# We need to clear out the name from the search fields so we can use name_or_id in the following searches
if 'name' in search_fields:
del search_fields['name']
# We need to clear out the organization from the search fields the searches for labels and instance_groups doesnt support it and won't be needed anymore
if 'organization' in search_fields:
del search_fields['organization']
if labels is not None:
association_fields['labels'] = []
@@ -302,8 +322,6 @@ def main():
else:
association_fields['instance_groups'].append(instance_group_id['id'])
# Create the data that gets sent for create and update
new_fields = {}
if rrule is not None:
new_fields['rrule'] = rrule
new_fields['name'] = new_name if new_name else (module.get_item_name(existing_item) if existing_item else name)
@@ -338,16 +356,6 @@ def main():
if timeout is not None:
new_fields['timeout'] = timeout
if execution_environment is not None:
if execution_environment == '':
new_fields['execution_environment'] = ''
else:
ee = module.get_one('execution_environments', name_or_id=execution_environment, **{'data': search_fields})
if ee is None:
module.fail_json(msg='could not find execution_environment entry with name {0}'.format(execution_environment))
else:
new_fields['execution_environment'] = ee['id']
# If the state was present and we can let the module build or update the existing item, this will return on its own
module.create_or_update_if_needed(
existing_item,

View File

@@ -89,7 +89,7 @@ def coerce_type(module, value):
if not HAS_YAML:
module.fail_json(msg="yaml is not installed, try 'pip install pyyaml'")
return yaml.safe_load(value)
elif value.lower in ('true', 'false', 't', 'f'):
elif value.lower() in ('true', 'false', 't', 'f'):
return {'t': True, 'f': False}[value[0].lower()]
try:
return int(value)
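
The one-character fix above matters because value.lower without parentheses is a bound method object, not a string, so the membership test could never be true. A minimal illustration of the difference (a hypothetical snippet, not part of the module code):

    value = "True"
    # Bound method compared against strings: never matches, so the branch was dead code.
    print(value.lower in ('true', 'false', 't', 'f'))    # False
    # Calling the method compares the lower-cased string, which is what the fix does.
    print(value.lower() in ('true', 'false', 't', 'f'))  # True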

View File

@@ -108,8 +108,9 @@
- assert:
that:
- wait_results is successful
- 'wait_results.status == "successful"'
- 'wait_results.status in ["successful", "canceled"]'
fail_msg: "Ad hoc command stdout: {{ lookup('awx.awx.controller_api', 'ad_hoc_commands/' + command.id | string + '/stdout/?format=json') }}"
success_msg: "Ad hoc command finished with status {{ wait_results.status }}"
- name: Delete the Credential
credential:

View File

@@ -225,6 +225,7 @@
schedule:
name: "{{ sched2 }}"
state: present
organization: Default
unified_job_template: "{{ jt1 }}"
rrule: "DTSTART:20191219T130551Z RRULE:FREQ=WEEKLY;INTERVAL=1;COUNT=1"
description: "This hopefully will work"

View File

@@ -1,4 +1,42 @@
---
- name: Initialize starting project vvv setting to false
awx.awx.settings:
name: "PROJECT_UPDATE_VVV"
value: false
- name: Change project vvv setting to true
awx.awx.settings:
name: "PROJECT_UPDATE_VVV"
value: true
register: result
- name: Changing setting to true should have changed the value
assert:
that:
- "result is changed"
- name: Change project vvv setting to true
awx.awx.settings:
name: "PROJECT_UPDATE_VVV"
value: true
register: result
- name: Changing setting to true again should not change the value
assert:
that:
- "result is not changed"
- name: Change project vvv setting back to false
awx.awx.settings:
name: "PROJECT_UPDATE_VVV"
value: false
register: result
- name: Changing setting back to false should have changed the value
assert:
that:
- "result is changed"
- name: Set the value of AWX_ISOLATION_SHOW_PATHS to a baseline
settings:
name: AWX_ISOLATION_SHOW_PATHS

View File

@@ -1,10 +1,22 @@
.. _ag_start:
==================
AWX Administration
==================
=============================
Administering AWX Deployments
=============================
Learn how to administer AWX deployments through custom scripts, management jobs, and DevOps workflows.
This guide assumes at least basic understanding of the systems that you manage and maintain with AWX.
This guide applies to the latest version of AWX only.
The content in this guide is updated frequently and might contain functionality that is not available in previous versions.
Likewise content in this guide can be removed or replaced if it applies to functionality that is no longer available in the latest version.
**Join us online**
We talk about AWX documentation on Matrix at `#docs:ansible.im <https://matrix.to/#/#docs:ansible.im>`_ and on libera IRC at ``#ansible-docs`` if you ever want to join us and chat about the docs!
You can also find lots of AWX discussion and get answers to questions at `forum.ansible.com <https://forum.ansible.com/>`_.
AWX Administration
.. toctree::
:maxdepth: 2

View File

@@ -26,7 +26,7 @@ Vertical scaling improvements
.. index::
pair: improvements; scaling
Control nodes are responsible for processing the output of jobs and writing them to the database. The process that does this is called the callback receiver. The callback receiver has a configurable number of workers, controlled by the setting ``JOB_EVENT_WORKERS``. In the past, the default for this setting was always 4, regardless of the CPU or memory capacity of the node. Now, in traditional virtual machines, the ``JOB_EVENT_WORKERS`` will be set to the same as the number of CPU if that is greater than 4. This means administrators that provision larger control nodes will see greater ability for those nodes to keep up with the job output created by jobs without having to manually adjust ``JOB_EVENT_WORKERS``.
Control nodes are responsible for processing the output of jobs and writing them to the database. The process that does this is called the callback receiver. The callback receiver has a configurable number of workers, controlled by the setting ``JOB_EVENT_WORKERS``. In the past, the default for this setting was always 4, regardless of the CPU or memory capacity of the node. Now, in traditional virtual machines, the ``JOB_EVENT_WORKERS`` will be set to the same as the number of CPU if that is greater than 4. This means administrators that provision larger control nodes will see greater ability for those nodes to keep up with the job output created by jobs without having to manually adjust ``JOB_EVENT_WORKERS``.
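
For reference, the sizing rule described in the paragraph above amounts to taking the larger of the node's CPU count and the old default of 4. A minimal sketch, assuming a hypothetical helper rather than the actual AWX settings code:

    import os

    def default_job_event_workers(cpu_count: int = os.cpu_count() or 1) -> int:
        # One callback-receiver worker per CPU, but never fewer than the previous default of 4.
        return max(4, cpu_count)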
Job scheduling improvements
@@ -34,9 +34,9 @@ Job scheduling improvements
.. index::
pair: improvements; scheduling
When jobs are created either via a schedule, a workflow, the UI or the API, they are first created in Pending state. To determine when and where to run this job, a background task called the Task Manager collects all pending and running jobs and determines where capacity is available to run the job. In previous versions of AWX, scheduling slowed as the number of pending and running jobs increased, and the Task Manager was vulnerable to timing out without having made any progress. The scenario exhibits symptoms of having thousands of pending jobs, available capacity, but no jobs starting.
When jobs are created either via a schedule, a workflow, the UI or the API, they are first created in Pending state. To determine when and where to run this job, a background task called the Task Manager collects all pending and running jobs and determines where capacity is available to run the job. In previous versions of AWX, scheduling slowed as the number of pending and running jobs increased, and the Task Manager was vulnerable to timing out without having made any progress. The scenario exhibits symptoms of having thousands of pending jobs, available capacity, but no jobs starting.
Optimizations in the job scheduler have made scheduling faster, as well as safeguards to better ensure the scheduler commits its progress even if it is nearing time out. Additionally, work that previously occurred in the Task Manager that blocked its progress has been decoupled into separate, non-blocking work units executed by the Dispatcher.
Optimizations in the job scheduler have made scheduling faster, as well as safeguards to better ensure the scheduler commits its progress even if it is nearing time out. Additionally, work that previously occurred in the Task Manager that blocked its progress has been decoupled into separate, non-blocking work units executed by the Dispatcher.
Database resource usage improvements
@@ -47,7 +47,7 @@ Database resource usage improvements
The use of database connections by running jobs has dramatically decreased, which removes a previous limit on concurrently running jobs and reduces pressure on PostgreSQL memory consumption.
Each job in AWX has a worker process, called the dispatch worker, on the control node that started the job. The dispatch worker submits the work to the execution node via the Receptor, consumes the output of the job, and puts it in the Redis queue for the callback receiver to serialize and write to the database as job events.
The dispatch worker is also responsible for noticing if the job has been canceled by the user in order to then cancel the receptor work unit. In the past, the worker maintained multiple open database connections per job. This caused two main problems:
@@ -98,7 +98,7 @@ Capacity Planning
Example capacity planning exercise
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. index::
pair: exercise; capacity planning
Determining the number and size of instances to support the desired workload must take into account the following:
@@ -183,13 +183,13 @@ Control nodes
^^^^^^^^^^^^^^
Vertically scaling a control node increases the number of jobs it can perform control tasks for, which requires both more CPU and memory. In general, scaling CPU alongside memory in the same proportion is recommended (e.g. 1 CPU: 4GB RAM). Even in the case where memory consumption is observed to be high, increasing the CPU of an instance can often relieve pressure, as most memory consumption of control nodes is usually from unprocessed events.
As mentioned in the :ref:`ag_performance_improvements` section, increasing the number of CPUs can also increase the job event processing rate of a control node. At this time, vertically scaling a control node does not increase the number of workers that handle web requests, so horizontal scaling is more effective if the goal is to increase API availability.
Execution Nodes
^^^^^^^^^^^^^^^^
Vertically scaling an execution node will provide more forks for job execution. As mentioned in the example, a host with 16 GB of memory will, by default, be assigned the capacity to run 137 “forks”, which at the default setting of 5 forks/job allows it to run around 22 jobs concurrently. In general, scaling CPU alongside memory in the same proportion is recommended. Like control and hybrid nodes, there is a “capacity adjustment” on each execution instance that can be used to align actual utilization with AWX's estimation of capacity consumption. By default, all nodes are set to the top of the capacity range that AWX estimates the node to have. If actual monitoring data reveals the node to be over-utilized, decreasing the capacity adjustment can help bring this in line with actual usage.
Vertically scaling execution nodes does exactly what the user expects: it increases the number of concurrent jobs an instance can run. One downside is that concurrently running jobs on the same execution node, while isolated from each other in the sense that they cannot access each other's data, can still affect each other's performance if a particular job is very resource-consumptive and overwhelms the node to the extent that it degrades performance of the entire node. Horizontally scaling the execution plane (e.g., deploying more execution nodes) provides some additional isolation of workloads, and allows administrators to assign different instances to different instance groups, which can then be assigned to Organizations, Inventories, or Job Templates. This enables, for example, an instance group that can only be used for running jobs against a “production” Inventory, so that development jobs do not consume capacity and cause higher-priority jobs to queue waiting for capacity.
Hop Nodes
@@ -198,7 +198,7 @@ Hop nodes have very low memory and CPU utilization and there is no significant m
Hybrid nodes
^^^^^^^^^^^^^
Hybrid nodes perform both execution and control tasks, so vertically scaling these nodes both increases the number of jobs they can run, and now in 4.3.0, how many events they can process.
Capacity planning for Operator based Deployments
@@ -240,23 +240,23 @@ The following are configurable settings in the database that may help improve pe
- ``work_mem`` (integer)
- ``maintenance_work_mem`` (integer)
All of these parameters reside in the ``postgresql.conf`` file (inside the ``$PGDATA`` directory), which manages the configuration of the database server.
The **shared_buffers** parameter determines how much memory is dedicated to the server for caching data. Set in ``postgresql.conf``, the default value for this parameter is::
#shared_buffers = 128MB
The value should be set at 15%-25% of the machine's total RAM. For example, if your machine's RAM size is 32 GB, then the recommended value for ``shared_buffers`` is 8 GB. Note that the database server needs to be restarted after this change.
The **work_mem** parameter specifies the amount of memory to be used by internal sort operations and hash tables before writing to temporary disk files. Sort operations are used for ``ORDER BY``, ``DISTINCT``, and merge join operations. Hash tables are used in hash joins and hash-based aggregation. Set in ``postgresql.conf``, the default value for this parameter is::
#work_mem = 4MB
Setting the correct value of the ``work_mem`` parameter can result in less disk swapping, and therefore far quicker queries.
We can use the formula below to calculate the optimal ``work_mem`` value for the database server::
Total RAM * 0.25 / max_connections
The ``max_connections`` parameter is one of the GUC parameters that specifies the maximum number of concurrent connections to the database server. Note that setting a large ``work_mem`` can cause issues, such as the PostgreSQL server going out of memory (OOM), if there are too many open connections to the database.
@@ -264,10 +264,40 @@ The **maintenance_work_mem** parameter basically provides the maximum amount of
#maintenance_work_mem = 64MB
It is recommended to set this value higher than ``work_mem``; this can improve performance for vacuuming. In general, it should be calculated as::
Total RAM * 0.05
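Putting these guidelines together, a minimal sketch of ``postgresql.conf`` values for a dedicated database host with 32 GB of RAM and roughly 500 allowed connections might look like this; the numbers are illustrative only and should be tuned for your environment::

    # postgresql.conf (illustrative values for a 32 GB host)
    shared_buffers = 8GB             # roughly 25% of total RAM
    work_mem = 16MB                  # Total RAM * 0.25 / max_connections (~500 connections assumed)
    maintenance_work_mem = 1600MB    # roughly Total RAM * 0.05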
Max Connections
~~~~~~~~~~~~~~~~~~~~~
For a realistic method of determining a value of ``max_connections``, a ballpark formula for AWX is outlined here.
Database connections will scale with the number of control and hybrid nodes.
Per-node connection needs are listed here.
* Callback Receiver workers: 4 connections per node or the number of CPUs per node, whichever is larger
* Dispatcher Workers: instance (forks) capacity plus 7
* uWSGI workers: 16 connections per node
* Listeners and auxiliary services: 4 connections per node
* Reserve for installer and other actions: 5 connections in total
Each of these points represents the maximum expected connection use in high-load circumstances.
To apply this, consider a cluster with 3 hybrid nodes, each with 8 CPUs and 16 GB of RAM.
The capacity formula determines a capacity of 132 forks per node based on the node's memory. The total connection count for the cluster is then::

    (3 nodes) x (
        (8 CPUs / node) x (1 connection / CPU) +
        (132 forks / node) x (1 connection / fork) + (7 connections / node) +
        (16 connections / node) +
        (4 connections / node)
    ) + (5 connections)

Adding up all the components comes out to 506 connections for this example cluster.
Practically, this means that ``max_connections`` should be set to something higher than this.
Additional connections should be added to account for other platform components.
This calculation is most sensitive to the number of forks per node. Database connections are briefly opened at the start and end of jobs. Environments where bursts of many jobs start at once are most likely to reach the theoretical maximum number of open database connections.
The maximum number of jobs that can start concurrently can be adjusted by modifying the effective capacity of the instances. This can be done with the ``SYSTEM_TASK_ABS_MEM`` setting, the capacity adjustment on instances, or with instance groups' max jobs or max forks.
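For the example cluster above, a sketch of applying a value with headroom from ``psql`` follows; the exact number is illustrative::

    -- requires a PostgreSQL restart to take effect
    ALTER SYSTEM SET max_connections = 1024;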
AWX Settings
~~~~~~~~~~~~~~~~~~~~~
@@ -332,7 +362,7 @@ Task Manager (Job Scheduling) Settings
pair: settings; job scheduling
The task manager is a periodic task that collects tasks that need to be scheduled and determines what instances have capacity and are eligible for running them. Its job is to find and assign the control and execution instances, update the jobs status to waiting, and send the message to the control node via ``pg_notify`` for the dispatcher to pick up the task and start running it.
As mentioned in the :ref:`ag_performance_improvements` section, a number of optimizations and refactors of this process were implemented in version 4.3. One such refactor was to fix a defect that when the task manager did reach its timeout, it was terminated in such a way that it did not make any progress. Multiple changes were implemented to fix this, so that as the task manager approaches its timeout, it makes an effort to exit and commit any progress made on that run. These issues generally arise when there are thousands of pending jobs, so may not be applicable to your use case.
The first “short-circuit” available to limit how much work the task manager attempts to do in one run is ``START_TASK_LIMIT``. The default is 100 jobs, which is a safe default. If there are remaining jobs to schedule, a new run of the task manager will be scheduled to run immediately after the current run. Users who are willing to risk potentially longer individual runs of the task manager in order to start more jobs per run may consider increasing ``START_TASK_LIMIT``. One Prometheus metric available in ``/api/v2/metrics``, ``task_manager__schedule_seconds``, observes how long individual runs of the task manager take.
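As a sketch, you can watch that metric directly from the metrics endpoint; the host and credentials below are placeholders, and the authentication method should match your deployment::

    curl -sk -u <user>:<password> https://<awx-host>/api/v2/metrics/ | grep task_manager__schedule_seconds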

View File

@@ -0,0 +1,22 @@
.. _contributor_guide:
=======================
AWX Contributor's Guide
=======================
Want to get involved with the AWX community?
Great!
There are so many ways you can contribute to AWX.
**Join us online**
You can chat with us and ask questions on Matrix at `#awx:ansible.com <https://matrix.to/#/#awx:ansible.com>`_ or visit the `Ansible Community Forum <https://forum.ansible.com/c/project/7/>`_ to find contributor resources.
.. toctree::
:maxdepth: 2
:numbered:
intro
setting_up
work_items
report_issues

View File

@@ -0,0 +1,9 @@
Introduction
=============
Hi there! We're excited to have you as a contributor.
Have questions about this document or anything not covered here? Come chat with us and ask questions on Matrix at `#awx:ansible.com <https://matrix.to/#/#awx:ansible.com>`_.
Also visit the `Ansible Community Forum <https://forum.ansible.com/c/project/7/>`_ to find contributor resources where you can also submit your questions or concerns.

View File

@@ -0,0 +1,22 @@
.. _docs_report_issues:
Reporting Issues
================
To report issues you find in the AWX documentation, use the GitHub `issue tracker <https://github.com/ansible/awx/issues>`_ for filing bugs. In order to save time, and help us respond to issues quickly, make sure to fill out as much of the issue template
as possible. Version information, and an accurate reproducing scenario are critical to helping us identify the problem.
Be sure to attach the ``component:docs`` label to your issue. These labels are determined by the template data. Please use the template and fill it out as accurately as possible.
Please don't use the issue tracker as a way to ask how to do something. Instead, discuss it on the `Ansible Community Forum <https://forum.ansible.com/c/project/7/>`_, or chat with us and ask questions on Matrix at `#awx:ansible.com <https://matrix.to/#/#awx:ansible.com>`_.
Before opening a new issue, please use the issue search feature to see if what you're experiencing has already been reported. If you have any extra detail to provide, please comment. Otherwise, rather than posting a "me too" comment, please consider giving it a `"thumbs up" <https://github.com/blog/2119-add-reactions-to-pull-requests-issues-and-comment>`_ to give us an indication of the severity of the problem.
See `How issues are resolved <https://github.com/ansible/awx/blob/devel/ISSUES.md#how-issues-are-resolved>`_ for more information about the triaging and resolution process.
Getting help
-------------
If you require additional assistance, join the discussions on the `Ansible Community Forum <https://forum.ansible.com/c/project/7/>`_. Specify with tags ``#documentation`` and ``#awx`` to narrow down the area(s) of interest. For more information on tags, see `Navigating the Ansible forum — Tags, Categories, and Concepts <https://forum.ansible.com/t/navigating-the-ansible-forum-tags-categories-and-concepts/39>`_. You may also reach out to us and ask questions on Matrix at `#awx:ansible.com <https://matrix.to/#/#awx:ansible.com>`_.

View File

@@ -0,0 +1,76 @@
Setting up your development environment
========================================
The AWX docs are developed using the Python toolchain. The content itself is authored in reStructuredText (RST).
Prerequisites
---------------
.. contents::
:local:
Fork and clone the AWX repo
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
If you have not done so already, you'll need to fork the AWX repo on GitHub. For more on how to do this, see `Fork a Repo <https://help.github.com/articles/fork-a-repo/>`_.
Install python and setuptools
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Install the setuptools package on Linux using pip:
1. If not already installed, `download the latest version of Python3 <https://www.geeksforgeeks.org/how-to-download-and-install-python-latest-version-on-linux/>`_ on your machine.
2. Check if pip3 and python3 are correctly installed in your system using the following command:
::
python3 --version
pip3 --version
3. Upgrade pip3 to the latest version to prevent installation issues:
::
pip3 install --upgrade pip
4. Install Setuptools:
::
pip3 install setuptools
5. Verify whether the Setuptools has been properly installed:
::
python3 -c 'import setuptools'
If no errors are returned, then the package was installed properly.
6. Install the tox package so you can build the docs locally:
::
pip3 install tox
Run local build of the docs
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
To build the docs on your local machine, use the tox utility. In your forked branch of your AWX repo, run:
::
tox -e docs
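When the build finishes, you can preview the generated HTML locally; the output directory below is an assumption about the typical layout and may differ in your checkout::

    python3 -m http.server --directory docs/docsite/build/html 8000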
Access the AWX user interface
------------------------------
To access an instance of the AWX interface, refer to `Build and run the development environment <https://github.com/ansible/awx/blob/devel/CONTRIBUTING.md#setting-up-your-development-environment>`_ for details. Once you have your environment set up, you can access the AWX UI by logging in at `https://localhost:8043 <https://localhost:8043>`_, and access the API directly at `https://localhost:8043/api/ <https://localhost:8043/api/>`_.

View File

@@ -0,0 +1,45 @@
What should I work on?
=======================
Good first issue
-----------------
We have a `"good first issue" label <https://github.com/ansible/awx/issues?q=is%3Aopen+label%3A%22good+first+issue%22+label%3Acomponent%3Adocs>`_ that we put on some doc issues that might be a good starting point for new contributors. You can find these issues with the following filter:
::
is:open label:"good first issue" label:component:docs
Fixing and updating the documentation are always appreciated, so reviewing the backlog of issues is always a good place to start.
Things to know prior to submitting revisions
----------------------------------------------
- All doc revisions or additions are done through pull requests against the ``devel`` branch.
- You must use ``git commit --signoff`` for any commit to be merged, and agree that usage of ``--signoff`` constitutes agreement with the terms of `DCO 1.1 <https://github.com/ansible/awx/blob/devel/DCO_1_1.md>`_.
- Take care to make sure no merge commits are in the submission, and use ``git rebase`` vs ``git merge`` for this reason.
- If collaborating with someone else on the same branch, consider using ``--force-with-lease`` instead of ``--force``. This will prevent you from accidentally overwriting commits pushed by someone else. For more information, see the `git push docs <https://git-scm.com/docs/git-push#git-push---force-with-leaseltrefnamegt>`_ and the example workflow after this list.
- If submitting a large doc change, it's a good idea to join the `Ansible Community Forum <https://forum.ansible.com/c/project/7/>`_, and talk about what you would like to do or add first. Use the ``#documentation`` and ``#awx`` tags to help notify relevant people of the topic. This not only helps everyone know what's going on, it also helps save time and effort, if the community decides some changes are needed. For more information on tags, see `Navigating the Ansible forum — Tags, Categories, and Concepts <https://forum.ansible.com/t/navigating-the-ansible-forum-tags-categories-and-concepts/39>`_.
- We ask all of our community members and contributors to adhere to the `Ansible code of conduct <http://docs.ansible.com/ansible/latest/community/code_of_conduct.html>`_. If you have questions, or need assistance, please reach out to our community team at `codeofconduct@ansible.com <mailto:codeofconduct@ansible.com>`_.
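The following is a minimal sketch of the sign-off, rebase, and ``--force-with-lease`` workflow mentioned above; the branch, remote, and commit message names are illustrative::

    # create a topic branch from devel, commit with sign-off, and keep history linear
    git checkout -b docs-fix-typo devel
    git commit --signoff -m "Fix typo in jobs.rst"
    git fetch upstream
    git rebase upstream/devel
    git push --force-with-lease origin docs-fix-typo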
.. Note::
- Issue assignment will only be done for maintainers of the project. If you decide to work on an issue, please feel free to add a comment in the issue to let others know that you are working on it; but know that we will accept the first pull request from whoever is able to fix an issue. Once your PR is accepted we can add you as an assignee to an issue upon request.
- If you work in a part of the docs that is going through active development, your changes may be rejected, or you may be asked to `rebase`. A good idea before starting work is to have a discussion with us and ask questions on Matrix at `#awx:ansible.com <https://matrix.to/#/#awx:ansible.com>`_ or discuss your ideas on the `Ansible Community Forum <https://forum.ansible.com/c/project/7/>`_.
- If you find an issue with the functions of the UI or API, please see the `Reporting Issues <https://github.com/ansible/awx/blob/devel/CONTRIBUTING.md#reporting-issues>`_ section to open an issue.
- If you find an issue with the docs themselves, refer to :ref:`docs_report_issues`.
Translations
-------------
At this time we do not accept PRs for adding additional language translations, because we have an automated process for generating our translations. Translations require constant care as new strings are added and changed in the code base, so the .po files are overwritten during every translation release cycle. We also can't support many translations on AWX, as it's an open source project and each language adds time and cost to maintain. If you would like to see AWX translated into a new language, please create an issue and ask others you know to upvote it. Our translation team will review the needs of the community and see what they can do about supporting additional languages.
If you find an issue with an existing translation, please see the `Reporting Issues <https://github.com/ansible/awx/blob/devel/CONTRIBUTING.md#reporting-issues>`_ section to open an issue and our translation team will work with you on a resolution.

View File

@@ -5,36 +5,37 @@ Ansible AWX helps teams manage complex multi-tier deployments by adding control,
.. toctree::
:maxdepth: 2
:caption: AWX Quickstart
:caption: Get started
quickstart/index
.. toctree::
:maxdepth: 2
:caption: User Guide
:caption: Community
contributor/index
.. toctree::
:maxdepth: 2
:caption: Users
userguide/index
.. toctree::
:maxdepth: 2
:caption: AWX Administration
administration/index
.. toctree::
:maxdepth: 2
:caption: AWX REST API
:caption: Developers
rest_api/index
.. toctree::
:maxdepth: 2
:caption: Upgrades and Migrations
:caption: Administrators
administration/index
upgrade_migration/index
.. toctree::
:maxdepth: 2
:caption: Release Notes
:caption: Release notes
release_notes/index

View File

@@ -4,7 +4,17 @@
AWX Quickstart
==============
AWX Quickstart
Complete the basic steps for using AWX and running your first playbook.
This guide applies to the latest version of AWX only.
The content in this guide is updated frequently and might contain functionality that is not available in previous versions.
Likewise content in this guide can be removed or replaced if it applies to functionality that is no longer available in the latest version.
**Join us online**
We talk about AWX documentation on Matrix at `#docs:ansible.im <https://matrix.to/#/#docs:ansible.im>`_ and on libera IRC at ``#ansible-docs`` if you ever want to join us and chat about the docs!
You can also find lots of AWX discussion and get answers to questions at `forum.ansible.com <https://forum.ansible.com/>`_.
.. toctree::
:maxdepth: 2

View File

@@ -1,10 +1,20 @@
.. _releasenotes_start:
=================
AWX Release Notes
=================
=============
Release Notes
=============
AWX Release Notes
AWX release notes, known issues, and related reference materials.
This guide applies to the latest version of AWX only.
The content in this guide is updated frequently and might contain functionality that is not available in previous versions.
Likewise content in this guide can be removed or replaced if it applies to functionality that is no longer available in the latest version.
**Join us online**
We talk about AWX documentation on Matrix at `#docs:ansible.im <https://matrix.to/#/#docs:ansible.im>`_ if you ever want to join us and chat about the docs!
You can also find lots of AWX discussion and get answers to questions on the `Ansible Community Forum <https://forum.ansible.com/c/project/7/>`_.
.. toctree::
:maxdepth: 2

View File

@@ -5,32 +5,16 @@ Release Notes
**************
.. index::
pair: release notes; v23.00
pair: release notes; v23.0.0
pair: release notes; v23.1.0
pair: release notes; v23.2.0
For versions older than 23.0.0, refer to `AWX Release Notes <https://github.com/ansible/awx/releases>`_.
.. Removed relnotes_current from common/.
23.0.0
-------
- See `What's Changed for 23.2.0 <https://github.com/ansible/awx/releases/tag/23.2.0>`_.
- Added hop nodes support for k8s (@fosterseth #13904)
- Reverted "Improve performance for AWX CLI export (#13182)" (@jbradberry #14342)
- Corrected spelling on database downtime and tolerance variable (@tuxpreacher #14347)
- Fixed schedule rruleset (@KaraokeKev #13611)
- Updates ``python-tss-sdk`` dependency (@delinea-sagar #14207)
- Fixed UI_NEXT build process broken by ansible/ansible-ui#766 (@TheRealHaoLiu #14349)
- Fixed task and web docs (@abwalczyk #14350)
- Fixed UI_NEXT build step file path issue (@TheRealHaoLiu #14357)
- Added required epoch time field for Splunk HEC event receiver (@digitalbadger-uk #14246)
- Fixed edit constructed inventory hanging loading state (@marshmalien #14343)
- Added location for locales in nginx config (@mabashian #14368)
- Updated cryptography for CVE-2023-38325 (@relrod #14358)
- Applied ``AWX_TASK_ENV`` when performing credential plugin lookups (@AlanCoding #14271)
- Enforced mutually exclusive options in credential module of the collection (@djdanielsson #14363)
- Added an example to clarify that the ``awx.subscriptions`` module should be used prior to ``awx.license`` (@phess #14351)
- Fixed default Redis URL to pass check in redis-py>4.4 (@ChandlerSwift #14344)
- Fixed typo in the description of ``scm_update_on_launch`` (@bxbrenden #14382)
- Fixed CVE-2023-40267 (@TheRealHaoLiu #14388)
- Updated PR body checks (@AlanCoding #14389)
- See `What's Changed for 23.1.0 <https://github.com/ansible/awx/releases/tag/23.1.0>`_.
- See `What's Changed for 23.0.0 <https://github.com/ansible/awx/releases/tag/23.0.0>`_.

View File

@@ -1,10 +1,20 @@
.. _api_start:
============
AWX REST API
============
=================
AWX API Reference
=================
AWX REST API
Developer reference for the AWX API.
This guide applies to the latest version of AWX only.
The content in this guide is updated frequently and might contain functionality that is not available in previous versions.
Likewise content in this guide can be removed or replaced if it applies to functionality that is no longer available in the latest version.
**Join us online**
We talk about AWX documentation on Matrix at `#docs:ansible.im <https://matrix.to/#/#docs:ansible.im>`_ and on libera IRC at ``#ansible-docs`` if you ever want to join us and chat about the docs!
You can also find lots of AWX discussion and get answers to questions at `forum.ansible.com <https://forum.ansible.com/>`_.
.. toctree::
:maxdepth: 2

View File

@@ -1,10 +1,20 @@
.. _upgrade_migration_start:
=======================================
Upgrading and Migrating AWX Deployments
=======================================
=======================
Upgrades and Migrations
=======================
Upgrading and Migrating AWX Deployments
Review important information before upgrading or migrating AWX deployments.
This guide applies to the latest version of AWX only.
The content in this guide is updated frequently and might contain functionality that is not available in previous versions.
Likewise content in this guide can be removed or replaced if it applies to functionality that is no longer available in the latest version.
**Join us online**
We talk about AWX documentation on Matrix at `#docs:ansible.im <https://matrix.to/#/#docs:ansible.im>`_ and on libera IRC at ``#ansible-docs`` if you ever want to join us and chat about the docs!
You can also find lots of AWX discussion and get answers to questions at `forum.ansible.com <https://forum.ansible.com/>`_.
.. toctree::
:maxdepth: 2

View File

@@ -31,11 +31,12 @@ Access the Applications page by clicking **Applications** from the left navigati
|Applications - home with example apps|
.. |Applications - home with example apps| image:: ../common/images/apps-list-view-examples.png
:alt: Applications list view
If no other applications exist, only a gray box with a message to add applications displays.
.. image:: ../common/images/apps-list-view-empty.png
:alt: No applications found in the list view
.. _ug_applications_auth_create:
@@ -59,6 +60,7 @@ The New Application window opens.
|Create application|
.. |Create application| image:: ../common/images/apps-create-new.png
:alt: Create new application dialog
3. Enter the following details in **Create New Application** window:
@@ -72,7 +74,7 @@ The New Application window opens.
4. When done, click **Save** or **Cancel** to abandon your changes. Upon saving, the client ID displays in a pop-up window.
.. image:: ../common/images/apps-client-id-popup.png
:alt: Client ID popup
Applications - Tokens
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -86,6 +88,7 @@ Selecting the **Tokens** view displays a list of the users that have tokens to a
|Applications - tokens list|
.. |Applications - tokens list| image:: ../common/images/apps-tokens-list-view-examples.png
:alt: Application tokens list view
Tokens can only access resources that its associated user can access, and can be limited further by specifying the scope of the token.
@@ -108,3 +111,4 @@ Tokens are added through the Users screen and can be associated with an applicat
To verify the application in the example above now shows the user with the appropriate token, go to the **Tokens** tab of the Applications window:
.. image:: ../common/images/apps-tokens-list-view-example2.png
:alt: Verifying a specific user application token

View File

@@ -45,17 +45,20 @@ Use the AWX User Interface to configure and use each of the supported 3-party se
3. For any of the fields below the **Type Details** area that you want to link to the external credential, click the |key| button of the input field. You are prompted to set the input source to use to retrieve your secret information.
.. |key| image:: ../common/images/key-mgmt-button.png
:alt: Icon for managing external credentials
.. image:: ../common/images/credentials-link-credential-prompt.png
:alt: Credential section of the external secret management system dialog
4. Select the credential you want to link to, and click **Next**. This takes you to the **Metadata** tab of the input source. This example shows the Metadata prompt for HashiVault Secret Lookup. Metadata is specific to the input source you select. See the :ref:`ug_metadata_creds_inputs` table for details.
.. image:: ../common/images/credentials-link-metadata-prompt.png
:alt: Metadata section of the external secret management system dialog
5. Click **Test** to verify connection to the secret management system. If the lookup is unsuccessful, an error message like this one displays:
.. image:: ../common/images/credentials-link-metadata-test-error.png
:alt: Example exception dialog for credentials lookup
6. When done, click **OK**. This closes the prompt window and returns you to the Details screen of your target credential. **Repeat these steps**, starting with :ref:`step 3 above <ag_credential_plugins_link_step>` to complete the remaining input fields for the target credential. By linking the information in this manner, AWX retrieves sensitive information, such as username, password, keys, certificates, and tokens from the 3rd-party management systems and populates that data into the remaining fields of the target credential form.
7. If necessary, supply any information manually for those fields that do not use linking as a way of retrieving sensitive information. Refer to the appropriate :ref:`ug_credentials_cred_types` for more detail about each of the fields.
@@ -200,7 +203,7 @@ You need the Centrify Vault web service running to store secrets in order for th
Below shows an example of a configured CyberArk AIM credential.
.. image:: ../common/images/credentials-create-centrify-vault-credential.png
:alt: Example new centrify vault credential lookup dialog
.. _ug_credentials_cyberarkccp:
@@ -222,7 +225,7 @@ You need the CyberArk Central Credential Provider web service running to store s
Below shows an example of a configured CyberArk CCP credential.
.. image:: ../common/images/credentials-create-cyberark-ccp-credential.png
:alt: Example new CyberArk vault credential lookup dialog
.. _ug_credentials_cyberarkconjur:
@@ -245,7 +248,7 @@ When **CyberArk Conjur Secrets Manager Lookup** is selected for **Credential Typ
Below shows an example of a configured CyberArk Conjur credential.
.. image:: ../common/images/credentials-create-cyberark-conjur-credential.png
:alt: Example new CyberArk Conjur Secret lookup dialog
.. _ug_credentials_hashivault:
@@ -268,7 +271,7 @@ When **HashiCorp Vault Secret Lookup** is selected for **Credential Type**, prov
For more detail about Approle and its fields, refer to the `Vault documentation for Approle Auth Method <https://www.vaultproject.io/docs/auth/approle>`_. Below shows an example of a configured HashiCorp Vault Secret Lookup credential.
.. image:: ../common/images/credentials-create-hashicorp-kv-credential.png
:alt: Example new HashiCorp Vault Secret lookup dialog
.. _ug_credentials_hashivaultssh:
@@ -292,7 +295,7 @@ For more detail about Approle and its fields, refer to the `Vault documentation
Below shows an example of a configured HashiCorp SSH Secrets Engine credential.
.. image:: ../common/images/credentials-create-hashicorp-ssh-credential.png
:alt: Example new HashiCorp Vault Signed SSH credential lookup dialog
.. _ug_credentials_azurekeyvault:
@@ -314,7 +317,7 @@ When **Microsoft Azure Key Vault** is selected for **Credential Type**, provide
Below shows an example of a configured Microsoft Azure KMS credential.
.. image:: ../common/images/credentials-create-azure-kms-credential.png
:alt: Example new Microsoft Azure Key Vault credential lookup dialog
.. _ug_credentials_thycoticvault:
@@ -334,6 +337,7 @@ When **Thycotic DevOps Secrets Vault** is selected for **Credential Type**, prov
Below shows an example of a configured Thycotic DevOps Secrets Vault credential.
.. image:: ../common/images/credentials-create-thycotic-devops-credential.png
:alt: Example new Thycotic DevOps Secrets Vault credential lookup dialog
@@ -354,5 +358,6 @@ When **Thycotic Secrets Server** is selected for **Credential Type**, provide th
Below shows an example of a configured Thycotic Secret Server credential.
.. image:: ../common/images/credentials-create-thycotic-server-credential.png
:alt: Example new Thycotic Secret Server credential lookup dialog

View File

@@ -613,7 +613,7 @@ Source Control credentials have several attributes that may be configured:
.. note::
Source Control credentials cannot be configured as "**Prompt on launch**".
If you are using a GitHub account for a Source Control credential and you have 2FA (Two Factor Authenication) enabled on your account, you will need to use your Personal Access Token in the password field rather than your account password.
If you are using a GitHub account for a Source Control credential and you have 2FA (Two Factor Authentication) enabled on your account, you will need to use your Personal Access Token in the password field rather than your account password.
Thycotic DevOps Secrets Vault

View File

@@ -16,88 +16,14 @@ Execution Environments
Building an Execution Environment
---------------------------------
.. index::
single: execution environment
pair: build; execution environment
Using Ansible content that depends on non-default dependencies (custom virtual environments) can be tricky. Packages must be installed on each node, play nicely with other software installed on the host system, and be kept in sync. Previously, jobs ran inside of a virtual environment at ``/var/lib/awx/venv/ansible`` by default, which was pre-loaded with dependencies for ansible-runner and certain types of Ansible content used by the Ansible control machine.
To help simplify this process, container images can be built that serve as Ansible `control nodes <https://docs.ansible.com/ansible/latest/network/getting_started/basic_concepts.html#control-node>`_. These container images are referred to as automation |ees|, which you can create with ansible-builder and then ansible-runner can make use of those images.
Install ansible-builder
~~~~~~~~~~~~~~~~~~~~~~~~
In order to build images, an installation of either Podman or Docker is required, along with the ansible-builder Python package. The ``--container-runtime`` option needs to correspond to the Podman/Docker executable you intend to use.
Refer to the latest `Quickstart for Ansible Builder <https://ansible.readthedocs.io/projects/builder/en/latest/#quickstart-for-ansible-builder>`_ for detail.
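As a minimal sketch, assuming a pip-based install, you can install and verify ansible-builder like this::

    $ pip3 install ansible-builder
    $ ansible-builder --version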
.. _build_ee:
Build an execution environment
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Ansible-builder is used to create an |ee|.
An |ee| is expected to contain:
- Ansible
- Ansible Runner
- Ansible Collections
- Python and/or system dependencies of:
- modules/plugins in collections
- content in ansible-base
- custom user needs
Building a new |ee| involves a definition (a ``.yml`` file) that specifies which content you would like to include in your |ee|, such as collections, Python requirements, and system-level packages. The content from the output generated from migrating to |ees| has some of the required data that can be piped to a file or pasted into this definition file.
Run the builder
~~~~~~~~~~~~~~~~
Once you have created a definition, use this procedure to build your |ee|.
The ``ansible-builder build`` command takes an |ee| definition as an input. It outputs the build context necessary for building an |ee| image, and proceeds with building that image. The image can be re-built with the build context elsewhere, and produces the same result. By default, it looks for a file named ``execution-environment.yml`` in the current directory.
For illustration purposes, the following example ``execution-environment.yml`` file is used as a starting point:
::
---
version: 3
dependencies:
galaxy: requirements.yml
The content of ``requirements.yml``:
::
---
collections:
- name: awx.awx
To build an |ee| using the files above, run:
::
$ ansible-builder build
...
STEP 7: COMMIT my-awx-ee
--> 09c930f5f6a
09c930f5f6ac329b7ddb321b144a029dbbfcc83bdfc77103968b7f6cdfc7bea2
Complete! The build context can be found at: context
In addition to producing a ready-to-use container image, the build context is preserved, which can be rebuilt at a different time and/or location with the tooling of your choice, such as ``docker build`` or ``podman build``.
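For example, here is a sketch of rebuilding the image from the preserved build context with Podman; the tag is illustrative, and the name of the container file inside the context directory may vary by ansible-builder version::

    $ podman build -f context/Containerfile -t my-awx-ee:latest context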
For additional information about the ``ansible-builder build`` command, refer to Ansible's `CLI Usage <https://ansible.readthedocs.io/projects/builder/en/latest/usage/#cli-usage>`_ documentation.
The `Getting started with Execution Environments guide` will give you a brief technology overview and show you how to build and test your first |ee| in a few easy steps.
Use an execution environment in jobs
------------------------------------
In order to use an |ee| in a job, a few components are required:
- An |ee| must have been created using |ab|. See :ref:`build_ee` for detail. Once an |ee| is created, you can use it to run jobs. Use the AWX user interface to specify the |ee| to use in your job templates.
- Use the AWX user interface to specify the |ee| you :ref:`build<ug_build_ees>` to use in your job templates.
- Depending on whether an |ee| is made available for global use or tied to an organization, you must have the appropriate level of administrator privileges in order to use an |ee| in a job. |Ees| tied to an organization require Organization administrators to be able to run jobs with those |ees|.

View File

@@ -37,7 +37,7 @@ Glossary
Facts are simply things that are discovered about remote nodes. While they can be used in playbooks and templates just like variables, facts are things that are inferred, rather than set. Facts are automatically discovered when running plays by executing the internal setup module on the remote nodes. You never have to call the setup module explicitly, it just runs, but it can be disabled to save time if it is not needed. For the convenience of users who are switching from other configuration management systems, the fact module also pulls in facts from the ohai and facter tools if they are installed, which are fact libraries from Chef and Puppet, respectively.
Forks
Ansible and AWX talk to remote nodes in parallel and the level of parallelism can be set serveral ways--during the creation or editing of a Job Template, by passing ``--forks``, or by editing the default in a configuration file. The default is a very conservative 5 forks, though if you have a lot of RAM, you can easily set this to a value like 50 for increased parallelism.
Ansible and AWX talk to remote nodes in parallel and the level of parallelism can be set several ways--during the creation or editing of a Job Template, by passing ``--forks``, or by editing the default in a configuration file. The default is a very conservative 5 forks, though if you have a lot of RAM, you can easily set this to a value like 50 for increased parallelism.
Group
A set of hosts in Ansible that can be addressed as a set, of which many may exist within a single Inventory.

View File

@@ -1,10 +1,21 @@
.. _ug_start:
==========
User Guide
==========
===================
Automating with AWX
===================
User Guide
Learn how to use AWX functionality to scale and manage your automation.
This guide assumes moderate familiarity with Ansible, including concepts such as **Playbooks**, **Variables**, and **Tags**.
This guide applies to the latest version of AWX only.
The content in this guide is updated frequently and might contain functionality that is not available in previous versions.
Likewise content in this guide can be removed or replaced if it applies to functionality that is no longer available in the latest version.
**Join us online**
We talk about AWX documentation on Matrix at `#docs:ansible.im <https://matrix.to/#/#docs:ansible.im>`_ and on libera IRC at ``#ansible-docs`` if you ever want to join us and chat about the docs!
You can also find lots of AWX discussion and get answers to questions at `forum.ansible.com <https://forum.ansible.com/>`_.
.. toctree::
:maxdepth: 2

View File

@@ -896,7 +896,7 @@ Amazon Web Services EC2
3. You can optionally specify the verbosity, host filter, enabled variable/value, and update options as described in the main procedure for :ref:`adding a source <ug_add_inv_common_fields>`.
4. Use the **Source Variables** field to override variables used by the ``aws_ec2`` inventory plugin. Enter variables using either JSON or YAML syntax. Use the radio button to toggle between the two. For a detailed description of these variables, view the `aws_ec2 inventory plugin documenation <https://cloud.redhat.com/ansible/automation-hub/repo/published/amazon/aws/content/inventory/aws_ec2>`__.
4. Use the **Source Variables** field to override variables used by the ``aws_ec2`` inventory plugin. Enter variables using either JSON or YAML syntax. Use the radio button to toggle between the two. For a detailed description of these variables, view the `aws_ec2 inventory plugin documentation <https://cloud.redhat.com/ansible/automation-hub/repo/published/amazon/aws/content/inventory/aws_ec2>`__.
|Inventories - create source - AWS EC2 example|
@@ -924,7 +924,7 @@ Google Compute Engine
3. You can optionally specify the verbosity, host filter, enabled variable/value, and update options as described in the main procedure for :ref:`adding a source <ug_add_inv_common_fields>`.
4. Use the **Source Variables** field to override variables used by the ``gcp_compute`` inventory plugin. Enter variables using either JSON or YAML syntax. Use the radio button to toggle between the two. For a detailed description of these variables, view the `gcp_compute inventory plugin documenation <https://cloud.redhat.com/ansible/automation-hub/repo/published/google/cloud/content/inventory/gcp_compute>`__.
4. Use the **Source Variables** field to override variables used by the ``gcp_compute`` inventory plugin. Enter variables using either JSON or YAML syntax. Use the radio button to toggle between the two. For a detailed description of these variables, view the `gcp_compute inventory plugin documentation <https://cloud.redhat.com/ansible/automation-hub/repo/published/google/cloud/content/inventory/gcp_compute>`__.
.. _ug_source_azure:

View File

@@ -14,10 +14,11 @@ The default view is collapsed (**Compact**) with the job name, status, job type,
|Jobs - home with example job|
.. |Jobs - home with example job| image:: ../common/images/jobs-home-with-example-job.png
:alt: Jobs List with Example Jobs
.. image:: ../common/images/jobs-list-all-expanded.png
:alt: Expanded Jobs List
Actions you can take from this screen include viewing the details and standard output of a particular job, relaunching (|launch|) jobs, or removing selected jobs. The relaunch operation only applies to relaunches of playbook runs and does not apply to project/inventory updates, system jobs, workflow jobs, etc.
.. _ug_job_results:
@@ -29,12 +30,13 @@ When a job relaunches, you are directed the Jobs Output screen as the job runs.
.. image:: ../common/images/job-details-view-filters.png
:alt: Filter options in the Jobs Output window
- The **Stdout** option is the default display that shows the job processes and output
- The **Event** option allows you to filter by the event(s) of interest, such as errors, host failures, host retries, items skipped, etc. You can include as many events in the filter as necessary.
.. image:: ../common/images/job-details-view-filters-examples.png
:alt: Selected filter examples from the Jobs Output window
- The **Advanced** option is a refined search that allows you a combination of including or excluding criteria, searching by key, or by lookup type. For details about using Search, refer to the :ref:`ug_search` chapter.
@@ -55,14 +57,19 @@ When an inventory sync is executed, the full results automatically display in th
The icons at the top right corner of the Output tab allow you to relaunch (|launch|), download (|download|) the job output, or delete (|delete|) the job.
.. |launch| image:: ../common/images/launch-button.png
:alt: Launch Action Button
.. |delete| image:: ../common/images/delete-button.png
:alt: Delete Action Button
.. |cancel| image:: ../common/images/job-cancel-button.png
:alt: Cancel Action Button
.. |download| image:: ../common/images/download.png
:alt: Download Action Button
|job details example of inventory sync|
.. |job details example of inventory sync| image:: ../common/images/jobs-show-job-results-for-inv-sync.png
:alt: Example output for a successful Inventory Sync job
.. note:: An inventory update can be performed while a related job is running. In cases where you have a big project (around 10 GB), disk space on ``/tmp`` may be an issue.
@@ -71,11 +78,12 @@ The icons at the top right corner of the Output tab allow you to relaunch (|laun
Inventory sync details
~~~~~~~~~~~~~~~~~~~~~~~
Access the **Details** tab to provide details about the job execution.
.. image:: ../common/images/jobs-show-job-details-for-inv-sync.png
:alt: Example details for an Inventory Sync job
Notable details of the job executed are:
- **Status**: Can be any of the following:
@@ -109,15 +117,17 @@ SCM Inventory Jobs
When an inventory sourced from an SCM is executed, the full results automatically display in the Output tab. This shows the same information you would see if you ran it through the Ansible command line, and can be useful for debugging. The icons at the top right corner of the Output tab allow you to relaunch (|launch|), download (|download|) the job output, or delete (|delete|) the job.
.. image:: ../common/images/jobs-show-job-results-for-scm-job.png
:alt: Example output for a successful SCM job
SCM inventory details
~~~~~~~~~~~~~~~~~~~~~~
Access the **Details** tab to provide details about the job execution and its associated project.
.. image:: ../common/images/jobs-show-job-details-for-scm-job.png
:alt: Example details for an SCM job
Notable details of the job executed are:
- **Status**: Can be any of the following:
@@ -157,6 +167,7 @@ Playbook Run Jobs
When a playbook is executed, the full results automatically display in the Output tab. This shows the same information you would see if you ran it through the Ansible command line, and can be useful for debugging.
.. image:: ../common/images/jobs-show-job-results-for-example-job.png
:alt: Example output for a successful playbook run
The events summary captures a tally of events that were run as part of this playbook:
@@ -169,7 +180,7 @@ The events summary captures a tally of events that were run as part of this play
- the amount of time it took to complete the playbook run in the **Elapsed** field
.. image:: ../common/images/jobs-events-summary.png
:alt: Example summary details for a playbook
The icons next to the events summary allow you to relaunch (|launch|), download (|download|) the job output, or delete (|delete|) the job.
@@ -178,7 +189,7 @@ The host status bar runs across the top of the Output view. Hover over a section
|Job - All Host Events|
.. |Job - All Host Events| image:: ../common/images/job-all-host-events.png
:alt: Show All Host Events
The output for a Playbook job is also accessible after launching a job from the **Jobs** tab of its Job Templates page.
@@ -200,24 +211,27 @@ Use Search to look up specific events, hostnames, and their statuses. To filter
These statuses also display at bottom of each Stdout pane, in a group of "stats" called the Host Summary fields.
.. image:: ../common/images/job-std-out-host-summary-rescued-ignored.png
:alt: Example summary details in standard output
The example below shows a search with only unreachable hosts.
.. image:: ../common/images/job-std-out-filter-failed.png
:alt: Example of errored jobs filtered by unreachable hosts
For more details about using the Search, refer to the :ref:`ug_search` chapter.
The standard output view displays all the events that occur on a particular job. By default, all rows are expanded so that all the details are displayed. Use the collapse-all button (|collapse-all|) to switch to a view that only contains the headers for plays and tasks. Click the (|expand-all|) button to view all lines of the standard output.
.. |collapse-all| image:: ../common/images/job-details-view-std-out-collapse-all-icon.png
:alt: Collapse All Icon
.. |expand-all| image:: ../common/images/job-details-view-std-out-expand-all-icon.png
:alt: Expand All Icon
Alternatively, you can display all the details of a specific play or task by clicking on the arrow icons next to them. Click an arrow from sideways to downward to expand the lines associated with that play or task. Click the arrow back to the sideways position to collapse and hide the lines.
.. image:: ../common/images/job-details-view-std-out-expand-collapse-icons.png
:alt: Expand and Collapse Icons
Things to note when viewing details in the expand/collapse mode:
@@ -250,7 +264,7 @@ The **Host Details** dialog shows information about the host affected by the sel
- if applicable, the Ansible **Module** for the task, and any *arguments* for that module
.. image:: ../common/images/job-details-host-hostevent.png
:alt: Host Events Details
To view the results in JSON format, click on the **JSON** tab. To view the output of the task, click the **Standard Out**. To view errors from the output, click **Standard Error**.
@@ -262,7 +276,7 @@ Playbook run details
Access the **Details** tab to provide details about the job execution.
.. image:: ../common/images/jobs-show-job-details-for-example-job.png
:alt: Example Job details for a playbook run
Notable details of the job executed are:

View File

@@ -529,7 +529,7 @@ Reset the ``AWX_URL_BASE``
The primary way that AWX determines how the base URL (``AWX_URL_BASE``) is defined is by looking at an incoming request and setting the server address based on that incoming request.
AWX takes settings values from the database first. If no settings values are found, it falls back to using the values from the settings files. If a user posts a license by navigating to the AWX host's IP adddress, the posted license is written to the settings entry in the database.
AWX takes settings values from the database first. If no settings values are found, it falls back to using the values from the settings files. If a user posts a license by navigating to the AWX host's IP address, the posted license is written to the settings entry in the database.
To change the ``AWX_URL_BASE`` if the wrong address has been picked up, navigate to **Miscellaneous System settings** from the Settings menu using the DNS entry you wish to appear in notifications, and re-add your license.

View File

@@ -33,7 +33,7 @@ Prerequisites
.. _`How to create GPG keypairs`: https://www.redhat.com/sysadmin/creating-gpg-keypairs
Vist the `GnuPG documentation <https://www.gnupg.org/documentation/index.html>`_ for more information regarding GPG keys.
Visit the `GnuPG documentation <https://www.gnupg.org/documentation/index.html>`_ for more information regarding GPG keys.
You can verify that you have a valid GPG keypair and in your default GnuPG keyring, with the following command:

View File

@@ -39,6 +39,7 @@ AWX has the ability to run jobs based on a triggered webhook event coming in. Jo
g. In the Scope fields, the automation webhook only needs repo scope access, with the exception of invites. For information about other scopes, click the link right above the table to access the docs.
.. image:: ../common/images/webhooks-create-webhook-github-scope.png
:alt: Link to more information on scopes
h. Click the **Generate Token** button.
@@ -50,26 +51,31 @@ AWX has the ability to run jobs based on a triggered webhook event coming in. Jo
b. Make note of the name of this credential, as it will be used in the job template that posts back to GitHub.
.. image:: ../common/images/webhooks-create-credential-github-PAT-token.png
:alt: Enter your generated PAT into the Token field
c. Go to the job template with which you want to enable webhooks, and select the webhook service and credential you created in the previous step.
.. image:: ../common/images/webhooks-job-template-gh-webhook-credential.png
:alt: Select the webhook service and credential you created
|
d. Click **Save**. Now your job template is set up to be able to post back to GitHub. An example of one may look like this:
.. image:: ../common/images/webhooks-awx-to-github-status.png
:alt: An example GitHub status that shows all checks have passed
.. _ug_webhooks_setup_github:
3. Go to a specific GitHub repo you want to configure webhooks and click **Settings**.
.. image:: ../common/images/webhooks-github-repo-settings.png
:alt: Settings link in your GitHub repo
4. Under Options, click **Webhooks**.
.. image:: ../common/images/webhooks-github-repo-settings-options.png
:alt: Webhooks link under Options
5. On the Webhooks page, click **Add webhook**.
@@ -80,22 +86,24 @@ AWX has the ability to run jobs based on a triggered webhook event coming in. Jo
c. Copy the contents of the **Webhook Key** from the job template above and paste it in the **Secret** field.
d. Leave **Enable SSL verification** selected.
.. image:: ../common/images/webhooks-github-repo-add-webhook.png
|
.. image:: ../common/images/webhooks-github-repo-add-webhook.png
:alt: Add Webhook page
e. Next, you must select the types of events you want to trigger a webhook. Any such event will trigger the Job or Workflow. In order to have job status (pending, error, success) sent back to GitHub, you must select **Pull requests** in the individual events section.
.. image:: ../common/images/webhooks-github-repo-choose-events.png
:alt: List of trigger events for the webhook
f. Leave **Active** checked and click **Add Webhook**.
.. image:: ../common/images/webhooks-github-repo-add-webhook-actve.png
:alt: Active option and Add Webhook button
7. After your webhook is configured, it displays in the list of webhooks active for your repo, along with the ability to edit or delete it. Click on a webhook, and it brings you to the Manage webhook screen. Scroll to the very bottom of the screen to view all the delivery attempts made to your webhook and whether they succeeded or failed.
.. image:: ../common/images/webhooks-github-repo-webhooks-deliveries.png
:alt: An example listing of recent deliveries
For more information, refer to the `GitHub Webhooks developer documentation <https://developer.github.com/webhooks/>`_.
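The **Secret** field configured above is the standard GitHub webhook secret: GitHub uses it to sign every delivery with an ``X-Hub-Signature-256`` header, which the receiving side verifies. A minimal sketch of that verification (generic GitHub webhook behavior, not AWX-specific code; all names are placeholders):

```python
import hashlib
import hmac


def signature_is_valid(secret: str, payload_body: bytes, signature_header: str) -> bool:
    """Check a GitHub X-Hub-Signature-256 header against the shared webhook secret."""
    expected = "sha256=" + hmac.new(secret.encode(), payload_body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature_header)


# Example call (values hypothetical):
# signature_is_valid(webhook_key, request_body, request_headers["X-Hub-Signature-256"])
```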
@@ -113,12 +121,14 @@ AWX has the ability to run jobs based on a triggered webhook event coming in. Jo
b. On the sidebar, under User Settings, click **Access Tokens**.
.. image:: ../common/images/webhooks-create-webhook-gitlab-settings.png
:alt: Access Tokens link under User Settings
c. In the **Name** field, enter a brief description about what this PAT will be used for.
d. Skip the **Expires at** field unless you want to set an expiration date for your webhook.
e. In the Scopes fields, select the ones applicable to your integration. For AWX, API is the only selection necessary.
.. image:: ../common/images/webhooks-create-webhook-gitlab-scope.png
:alt: Personal Access Token page
f. Click the **Create personal access token** button.
@@ -130,16 +140,19 @@ AWX has the ability to run jobs based on a triggered webhook event coming in. Jo
b. Make note of the name of this credential, as it will be used in the job template that posts back to GitHub.
.. image:: ../common/images/webhooks-create-credential-gitlab-PAT-token.png
:alt: Create New Credential page
c. Go to the job template with which you want to enable webhooks, and select the webhook service and credential you created in the previous step.
.. image:: ../common/images/webhooks-job-template-gl-webhook-credential.png
:alt: Select the webhook credential you created
|
d. Click **Save**. Now your job template is set up to be able to post back to GitLab. An example of one may look like this:
.. image:: ../common/images/webhooks-awx-to-gitlab-status.png
:alt: An example GitLab status message
.. _ug_webhooks_setup_gitlab:
@@ -147,6 +160,7 @@ AWX has the ability to run jobs based on a triggered webhook event coming in. Jo
3. Go to a specific GitLab repo you want to configure webhooks and click **Settings > Integrations**.
.. image:: ../common/images/webhooks-gitlab-repo-settings.png
:alt: Integrations link under Settings
4. To complete the Integrations page, you need to :ref:`enable webhooks in a job template <ug_jt_enable_webhooks>` (or in a :ref:`workflow job template <ug_wfjt_enable_webhooks>`), which will provide you with the following information:
@@ -157,6 +171,7 @@ AWX has the ability to run jobs based on a triggered webhook event coming in. Jo
e. Click **Add webhook**.
.. image:: ../common/images/webhooks-gitlab-repo-add-webhook.png
:alt: Integrations page
5. After your webhook is configured, it displays in the list of Project Webhooks for your repo, along with the ability to test events, edit or delete the webhook. Testing a webhook event displays the results at the top of the page whether it succeeded or failed.
@@ -170,5 +185,7 @@ Payload output
The entire payload is exposed as an extra variable. To view the payload information, go to the Jobs Detail view of the job template that ran with the webhook enabled. In the **Extra Variables** field of the Details pane, view the payload output from the ``awx_webhook_payload`` variable, as shown in the example below.
.. image:: ../common/images/webhooks-jobs-extra-vars-payload.png
:alt: Details page with payload output
.. image:: ../common/images/webhooks-jobs-extra-vars-payload-expanded.png
:alt: Variables field expanded view
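If you prefer the API over the UI, the same payload can be read from the job detail endpoint. A rough sketch, assuming a reachable AWX host and a valid OAuth2 token (URL, token, and job ID below are placeholders, and ``extra_vars`` is typically returned as a JSON-encoded string):

```python
import json

import requests

AWX_URL = "https://awx.example.com"   # placeholder
TOKEN = "abc123"                      # placeholder OAuth2 token
JOB_ID = 42                           # placeholder job id

resp = requests.get(
    f"{AWX_URL}/api/v2/jobs/{JOB_ID}/",
    headers={"Authorization": f"Bearer {TOKEN}"},
)
resp.raise_for_status()

# extra_vars usually arrives as a JSON-encoded string on the job detail
extra_vars = json.loads(resp.json().get("extra_vars", "{}"))
print(extra_vars.get("awx_webhook_payload"))
```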

View File

@@ -1,25 +1,22 @@
.. _ug_workflows:
Workflows
============
.. index::
single: workflows
Workflows allow you to configure a sequence of disparate job templates (or workflow templates) that may or may not share inventory, playbooks, or permissions. However, workflows have admin and execute permissions, similar to job templates. A workflow accomplishes the task of tracking the full set of jobs that were part of the release process as a single unit.
Job or workflow templates are linked together using a graph-like structure made up of nodes. These nodes can be jobs, project syncs, or inventory syncs. A template can be part of different workflows or used multiple times in the same workflow. A copy of the graph structure is saved to a workflow job when you launch the workflow.
The example below shows a workflow that contains all three, as well as a workflow job template:
.. image:: ../common/images/wf-node-all-scenarios-wf-in-wf.png
:alt: Workflow Node All Scenarios
As the workflow runs, jobs are spawned from the node's linked template. Nodes linking to a job template which has prompt-driven fields (``job_type``, ``job_tags``, ``skip_tags``, ``limit``) can contain those fields, and will not be prompted on launch. Job templates with promptable credential and/or inventory, WITHOUT defaults, will not be available for inclusion in a workflow.
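To make the node-and-edge idea concrete, here is a small, purely illustrative sketch of how success/failure/always links decide which child nodes run (node names are made up; this is not AWX's scheduler code):

```python
# Purely illustrative: each node lists children keyed by the edge type that
# triggers them. "always" children run regardless of the parent's outcome.
workflow = {
    "sync_project": {"success": ["run_job"], "failure": ["notify"], "always": []},
    "run_job":      {"success": [], "failure": [], "always": ["cleanup"]},
    "notify":       {"success": [], "failure": [], "always": []},
    "cleanup":      {"success": [], "failure": [], "always": []},
}


def children_to_run(node: str, parent_succeeded: bool) -> list:
    edges = workflow[node]
    outcome = "success" if parent_succeeded else "failure"
    return edges[outcome] + edges["always"]


print(children_to_run("sync_project", parent_succeeded=True))   # ['run_job']
print(children_to_run("sync_project", parent_succeeded=False))  # ['notify']
```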
Workflow scenarios and considerations
----------------------------------------
@@ -28,34 +25,38 @@ Consider the following scenarios for building workflows:
- A root node is set to ALWAYS by default and is not editable.
.. image:: ../common/images/wf-root-node-always.png
:alt: Root Node Always
- A node can have multiple parents, and children may be linked to any of the states of success, failure, or always. If always, then the state is neither success nor failure. States apply at the node level, not at the workflow job template level. A workflow job will be marked as successful unless it is canceled or encounters an error.
.. image:: ../common/images/wf-sibling-nodes-all-edge-types.png
:alt: Sibling Nodes All Edge Types
- If you remove a job or workflow template within the workflow, the nodes previously connected to the deleted node are automatically connected upstream and retain their edge type, as in the example below:
.. image:: ../common/images/wf-node-delete-scenario.png
:alt: Node Delete Scenario
- You could have a convergent workflow, where multiple jobs converge into one. In this scenario, any of the jobs or all of them must complete before the next one runs, as shown in the example below:
.. image:: ../common/images/wf-node-convergence.png
:alt: Node Convergence
In the example provided, AWX runs the first two job templates in parallel. When they both finish and succeed as specified, the third downstream node (the :ref:`convergence node <convergence_node>`) will trigger.
- Prompts for inventory and surveys will apply to workflow nodes in workflow job templates.
- If you launch from the API, running a ``get`` command displays a list of warnings and highlights missing components. The basic workflow for a workflow job template is illustrated below.
.. image:: ../common/images/workflow-diagram.png
:alt: Workflow Diagram
- It is possible to launch several workflows simultaneously, and set a schedule for when to launch them. You can set notifications on workflows, such as when a job completes, similar to that of job templates.
.. note::
.. include:: ../common/job-slicing-rule.rst
- You can build a recursive workflow, but if AWX detects an error, it will stop when the nested workflow attempts to run.
- Artifacts gathered in jobs in the sub-workflow will be passed to downstream nodes.
@@ -70,7 +71,6 @@ In the example provided, AWX runs the first two job templates in parallel. When
- In a workflow convergence scenario, ``set_stats`` data will be merged in an undefined way, so it is recommended that you set unique keys.
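A quick illustration of why unique keys matter when ``set_stats`` artifacts converge (plain Python dict behavior used as an analogy; the real merge order is undefined, as noted above):

```python
# If two converging branches artifact the same key, one value silently wins;
# with unique keys, both survive. (Analogy only -- the real merge order is undefined.)
branch_a = {"results_url": "https://example.com/a"}
branch_b = {"results_url": "https://example.com/b"}
print({**branch_a, **branch_b})  # {'results_url': 'https://example.com/b'} -- branch_a's value was lost

branch_a = {"branch_a_results_url": "https://example.com/a"}
branch_b = {"branch_b_results_url": "https://example.com/b"}
print({**branch_a, **branch_b})  # both values preserved
```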
Extra Variables
----------------
@@ -83,6 +83,7 @@ Also similar to job templates, workflows use surveys to specify variables to be
Workflows utilize the same behavior (hierarchy) of variable precedence as Job Templates with the exception of three additional variables. Refer to the Variable Precedence Hierarchy in the :ref:`ug_jobtemplates_extravars` section of the Job Templates chapter of this guide. The three additional variables include:
.. image:: ../common/images/Architecture-AWX_Variable_Precedence_Hierarchy-Workflows.png
:alt: Variable Precedence Hierarchy
Workflows included in a workflow will follow the same variable precedence - they will only inherit variables if they are specifically prompted for, or defined as part of a survey.
@@ -108,7 +109,6 @@ If you use the ``set_stats`` module in your playbook, you can produce results th
data:
integration_results_url: "{{ (result.stdout|from_json).link }}"
- **use_set_stats.yml**: second playbook in the workflow
::
@@ -121,47 +121,44 @@ If you use the ``set_stats`` module in your playbook, you can produce results th
url: "{{ integration_results_url }}"
return_content: true
register: results
- name: "Output test results"
debug:
msg: "{{ results.content }}"
The ``set_stats`` module processes this workflow as follows:
1. The contents of an integration results file (example: integration_results.txt below) are first uploaded to the web.
::
the tests are passing!
2. Through the **invoke_set_stats** playbook, ``set_stats`` is then invoked to artifact the URL of the uploaded integration_results.txt into the Ansible variable "integration_results_url".
3. The second playbook in the workflow consumes the Ansible extra variable "integration_results_url". It calls out to the web using the ``uri`` module to get the contents of the file uploaded by the previous job template's job. Then, it simply prints out the contents of the retrieved file.
.. note::
For artifacts to work, keep the default setting, ``per_host = False`` in the ``set_stats`` module.
Workflow States
----------------
The workflow job can have the following states (no Failed state):
- Waiting
- Running
- Success (finished)
- Cancel
- Error
- Failed
In the workflow scheme, canceling a job cancels the branch, while canceling the workflow job cancels the entire workflow.
Role-Based Access Controls
-----------------------------
@@ -174,6 +171,4 @@ Other tasks such as the ability to make a duplicate copy and re-launch a workflo
.. ^^
For more information on performing the tasks described in this section, refer to the :ref:`Administration Guide <ag_start>`.

View File

@@ -15,7 +15,7 @@ There are two methods you can use to get the next release version. The manual wa
Log into your GitHub account; under your user icon, go to Settings => Developer Settings => Personal Access Tokens => Tokens (classic).
Select Generate new token => Generate new token (classic).
Fill in the note, select no scopes, and select "Generate token".
Copy the token and create a file in your awx repo called `.github_creds`. Enter the token in this file.
Copy the token and create a file at `~/.github_creds` or in your awx repo as `.github_creds`. Enter the token in this file.
Run `./tools/scripts/get_next_release.py`
This will use your token to query for the PRs in the release, scan their bodies to select X/Y/Z, suggest new versions, and print out notification text.
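For a sense of what "select X/Y/Z" means, here is a hedged sketch of the kind of bump the script suggests, using the ``semantic_version`` library it already imports (the bump-selection logic below is simplified and is not the script's actual implementation):

```python
import semantic_version


def suggest_next(current: str, bump: str) -> str:
    """Return the next version for an X (major), Y (minor), or Z (patch) bump."""
    version = semantic_version.Version(current)
    if bump == "X":
        return str(version.next_major())
    if bump == "Y":
        return str(version.next_minor())
    return str(version.next_patch())


print(suggest_next("23.3.1", "Y"))  # 23.4.0
```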
@@ -149,7 +149,7 @@ Send notifications to the following groups:
* AWX Mailing List
* #social:ansible.com IRC (@newsbot for inclusion in bullhorn)
* #awx:ansible.com (no @newsbot in this room)
* #ansible-controller slack channel
These messages are templated out for you in the output of `get_next_release.yml`.
@@ -169,7 +169,7 @@ Operator hub PRs are generated via an Ansible Playbook. See someone on the AWX t
* [kustomize](https://kustomize.io/)
* [opm](https://docs.openshift.com/container-platform/4.9/cli_reference/opm/cli-opm-install.html)
3. Download the script from https://gist.github.com/rooftopcellist/0e232f26666dee45be1d8a69270d63c2 into your awx-operator repo as release_operator_hub.sh
3. Download the script from https://github.com/ansible/awx-operator/blob/devel/hack/publish-to-operator-hub.sh into your awx-operator repo as release_operator_hub.sh
4. Make sure you are logged into quay.io with `docker login quay.io`

View File

@@ -1,6 +1,6 @@
#!/usr/bin/env bash
if [ -z $AWX_IGNORE_BLACK ] ; then
python_files_changed=$(git diff --cached --name-only --diff-filter=AM | grep -E '\.py$')
python_files_changed=$(git diff --cached --name-only --diff-filter=AM awx/ awxkit/ tools/ | grep -E '\.py$')
if [ "x$python_files_changed" != "x" ] ; then
black --check $python_files_changed || \
if [ $? != 0 ] ; then

View File

@@ -59,13 +59,8 @@ for instructions.
If operating in a FIPS environment, `hashlib.md5()` will raise a `ValueError`,
but will support the `usedforsecurity` keyword on RHEL and Centos systems.
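A brief illustration of that keyword (available on Python 3.9+ and on RHEL/CentOS-patched interpreters; it simply tells a FIPS-enabled build that the digest is not used for security):

```python
import hashlib

# On a FIPS-enabled interpreter, plain hashlib.md5(data) raises ValueError.
# Passing usedforsecurity=False (Python 3.9+, plus RHEL/CentOS backports)
# marks the digest as non-cryptographic so it is allowed.
digest = hashlib.md5(b"index name material", usedforsecurity=False).hexdigest()
print(digest)
```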
Keep an eye on https://code.djangoproject.com/ticket/28401
The override of `names_digest` could easily be broken in a future version.
Check that the import remains the same in the desired version.
https://github.com/django/django/blob/af5ec222ccd24e81f9fec6c34836a4e503e7ccf7/django/db/backends/base/schema.py#L7
This used to be a problem with `names_digest` function in Django, but
was fixed upstream in Django 4.1.
### django-split-settings
@@ -172,4 +167,3 @@ available on PyPi with source distribution.
Version 4.8 makes us a little bit nervous with changes to `searchwindowsize` https://github.com/pexpect/pexpect/pull/579/files
Pin to `pexpect==4.7.x` until we have more time to move to `4.8` and test.

View File

@@ -12,7 +12,7 @@ cryptography>=41.0.2 # CVE-2023-38325
Cython<3 # Since the bump to PyYAML 5.4.1 this is now a mandatory dep
daphne
distro
django==4.2.3 # see UPGRADE BLOCKERs CVEs were identified in 4.2, pinning to .3
django==4.2.5 # see UPGRADE BLOCKERs, CVE-2023-41164
django-auth-ldap
django-cors-headers
django-crum

View File

@@ -103,7 +103,7 @@ deprecated==1.2.13
# via jwcrypto
distro==1.8.0
# via -r /awx_devel/requirements/requirements.in
django==4.2.3
django==4.2.5
# via
# -r /awx_devel/requirements/requirements.in
# channels

View File

@@ -442,13 +442,11 @@ Now we are ready to configure and plumb OpenLDAP with AWX. To do this we have pr
Note: The default configuration will use the non-TLS connection. If you want to use the TLS configuration, you will need to work through TLS negotiation issues because the LDAP server is using a self-signed certificate.
Before we can run the playbook we need to understand that LDAP will be communicated to from within the AWX container. Because of this, we have to tell AWX how to route traffic to the LDAP container through the `LDAP Server URI` settings. The playbook requires a variable called container_reference to be set. The container_reference variable needs to be how your AWX container will be able to talk to the LDAP container. See the SAML section for some examples for how to select a `container_reference`.
Once you have your container reference you can run the playbook like:
You can run the playbook like:
```bash
export CONTROLLER_USERNAME=<your username>
export CONTROLLER_PASSWORD=<your password>
ansible-playbook tools/docker-compose/ansible/plumb_ldap.yml -e container_reference=<your container_reference here>
ansible-playbook tools/docker-compose/ansible/plumb_ldap.yml
```
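If you need to confirm that the AWX container can actually reach the LDAP container at the address baked into the settings (``ldap:1389`` in the updated configuration), a quick standard-library check such as the following can be run from inside the container (hostname and port follow the docker-compose values shown in this change):

```python
import socket

# Quick reachability check for the LDAP container from inside the AWX container.
try:
    with socket.create_connection(("ldap", 1389), timeout=5):
        print("LDAP container is reachable")
except OSError as exc:
    print(f"Could not reach the LDAP container: {exc}")
```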

View File

@@ -11,23 +11,6 @@
- name: Test that the development environment is able to launch a job
hosts: localhost
tasks:
- name: Boot the development environment
command: |
make docker-compose
environment:
COMPOSE_UP_OPTS: -d
args:
chdir: "{{ playbook_dir }}/../../../"
# Takes a while for migrations to finish
- name: Wait for the dev environment to be ready
uri:
url: "http://localhost:8013/api/v2/ping/"
register: _result
until: _result.status == 200
retries: 120
delay: 5
- name: Reset admin password
shell: |
docker exec -i tools_awx_1 bash <<EOSH

View File

@@ -1,5 +1,5 @@
{
"AUTH_LDAP_1_SERVER_URI": "ldap://{{ container_reference }}:389",
"AUTH_LDAP_1_SERVER_URI": "ldap://ldap:1389",
"AUTH_LDAP_1_BIND_DN": "cn=admin,dc=example,dc=org",
"AUTH_LDAP_1_BIND_PASSWORD": "admin",
"AUTH_LDAP_1_START_TLS": false,

View File

@@ -3,17 +3,15 @@
missing_modules = []
try:
import requests
except:
except ImportError:
missing_modules.append('requests')
import json
import os
import re
import sys
import time
try:
import semantic_version
except:
except ImportError:
missing_modules.append('semantic_version')
if len(missing_modules) > 0:
@@ -55,7 +53,7 @@ def getNextReleases():
try:
if a_pr['html_url'] in pr_votes:
continue
except:
except KeyError:
print("Unable to check on PR")
print(json.dumps(a_pr, indent=4))
sys.exit(255)
@@ -133,14 +131,17 @@ def getNextReleases():
# Load the users session information
#
session = requests.Session()
try:
print("Loading credentials")
with open(".github_creds", "r") as f:
password = f.read().strip()
session.headers.update({'Authorization': 'bearer {}'.format(password), 'Accept': 'application/vnd.github.v3+json'})
except Exception:
print("Failed to load credentials from ./.github_creds")
sys.exit(255)
print("Loading credentials")
CREDS_LOCATIONS = ('.github_creds', '~/.github_creds')
for creds_loc in CREDS_LOCATIONS:
if os.path.exists(os.path.expanduser(creds_loc)):
with open(os.path.expanduser(creds_loc), "r") as f:
password = f.read().strip()
session.headers.update({'Authorization': 'bearer {}'.format(password), 'Accept': 'application/vnd.github.v3+json'})
break
else:
raise Exception(f'Could not locate github token in locations {CREDS_LOCATIONS}')
versions = {
'current': {},