Compare commits


23 Commits

Author SHA1 Message Date
Keith Grant
2529fdcfd7 Persist schedule prompt on launch fields when editing (#14736)
* persist schedule prompt on launch fields when editing

* Merge job template default credentials with schedule overrides in schedule prompt

* rename vars for clarity

* handle undefined defaultCredentials

---------

Co-authored-by: Michael Abashian <mabashia@redhat.com>
2023-12-21 14:46:49 -05:00
loh
19dff9c2d1 Fix twilio_backend.py to send SMS to multiple destinations. (#14656)
AWX only sends Twilio notifications to one destination with the current version of the code; this is a bug. Fixed it so SMS is sent to all destinations.
2023-12-20 15:31:47 -05:00
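The gist of the fix, as a minimal sketch (illustrative only; the real change is in the twilio_backend.py diff further down, and `connection`/`messages` mirror the names used there):

    import logging

    logger = logging.getLogger(__name__)

    def send_all(connection, messages, fail_silently=False):
        # One Twilio API call per destination instead of passing the whole
        # recipient list to a single messages.create() call.
        sent_messages = 0
        failed = False
        for m in messages:
            for dest in m.to:
                try:
                    connection.messages.create(to=dest, from_=m.from_email, body=m.subject)
                    sent_messages += 1
                except Exception as e:
                    logger.error("Exception sending message: %s", e)
                    failed = True
        if failed and not fail_silently:
            raise RuntimeError('at least one Twilio message failed to send')
        return sent_messages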
Chris Meyers
2a6cf032f8 refactor awxkit import code
* Move awxkit import code into a pytest fixture to better control when
  the import happens
* Ensure /awx_devel/awxkit is added to sys path before awxkit import
  runs
2023-12-18 12:00:50 -05:00
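A minimal sketch of that fixture pattern, with names of our choosing (the /awx_devel/awxkit path comes from the commit message):

    import sys

    import pytest

    @pytest.fixture(scope='session')
    def awxkit_module():
        # Ensure the in-tree awxkit wins over any installed copy *before*
        # the import actually runs; the fixture controls when that happens.
        if '/awx_devel/awxkit' not in sys.path:
            sys.path.insert(0, '/awx_devel/awxkit')
        import awxkit  # deferred until a test requests the fixture

        return awxkit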
Chris Meyers
6119b33a50 add awx collection export tests
* Basic export tests
* Added test that highlights a problem with running Schedule exports as
  non-root user. We rely on the POST key in the OPTIONS response to
  determine the fields to export for a resource. The POST key is not
  present if a user does NOT have create privileges.
* Fixed up forwarding all headers from the API server back to the test
  code. This was causing a problem in awxkit code that checks for
  allowed HTTP Verbs in the headers.
2023-12-18 12:00:50 -05:00
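The OPTIONS-dependent export logic described above can be sketched like this (illustrative; the actions/POST layout is standard DRF metadata, everything else is a placeholder):

    import requests

    def exportable_fields(session: requests.Session, resource_url: str) -> list:
        # Export relies on the POST key of the OPTIONS response to learn the
        # fields of a resource; the key is absent for users without create
        # privileges, which is exactly the Schedule-as-non-root problem above.
        meta = session.options(resource_url).json()
        return sorted(meta.get('actions', {}).get('POST', {}))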
John Westcott IV
aacf9653c5 Use filtering/sorting from django-ansible-base (#14726)
* Move filtering to DAB

* add comment to trigger building a new image

Signed-off-by: jessicamack <jmack@redhat.com>

* remove unneeded comment

Signed-off-by: jessicamack <jmack@redhat.com>

* remove unused imports

Signed-off-by: jessicamack <jmack@redhat.com>

* change mock import

Signed-off-by: jessicamack <jmack@redhat.com>

---------

Signed-off-by: jessicamack <jmack@redhat.com>
Co-authored-by: jessicamack <jmack@redhat.com>
2023-12-18 10:05:02 -05:00
Alan Rominger
325f5250db Narrow the actor types accepted for RBAC evaluations (#14709)
* Narrow the scope of RBAC evaluations

* Update tests for RBAC method changes

* Simplify queryset for credentials in org

* Fix call pattern to pass in team role obj
2023-12-14 21:30:47 -05:00
Alan Rominger
b14518c1e5 Simplify RBAC get_roles_on_resource method (#14710)
* Simplify RBAC get_roles_on_resource method

* Fix bug

* Fix query type bug
2023-12-14 10:42:26 -05:00
Hao Liu
6440e3cb55 Send SIGKILL to rsyslog if hard cancellation is needed 2023-12-14 10:41:48 -05:00
Hao Liu
b5f6aac3aa Correct misuse of stdxxx_event_enabled
Not every log message needs to be emitted as an event!
2023-12-14 10:41:48 -05:00
Hao Liu
6e5e1c8fff Recover rsyslog from 4xx error
Due to https://github.com/ansible/awx/issues/7560

'omhttp' module for rsyslog will completely stop forwarding messages to the external log aggregator after receiving a 4xx error from it

This PR is a "workaround" for this problem: restarting rsyslogd after detecting that rsyslog received a 4xx error
2023-12-14 10:41:48 -05:00
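A hypothetical sketch of such a recovery loop (the shipped script is tools/scripts/rsyslog-4xx-recovery, per the MANIFEST.in change below; the error-log path, message format, and restart command here are assumptions for illustration):

    import re
    import subprocess
    import time

    FOURXX = re.compile(r'\b4\d\d\b')  # assumed shape of the omhttp error entries

    def recover_on_4xx(error_log='/var/lib/awx/rsyslog/omhttp.err', interval=30):
        while True:
            try:
                with open(error_log) as f:
                    if any(FOURXX.search(line) for line in f):
                        # omhttp stays wedged after a 4xx; a restart clears it
                        subprocess.run(['supervisorctl', 'restart', 'awx-rsyslogd'], check=False)
            except FileNotFoundError:
                pass  # rsyslog has not logged any errors yet
            time.sleep(interval)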
Hao Liu
bf42c63c12 Remove superwatcher from docker-compose dev (#14708)
When making changes to the application you can sometimes accidentally cause a FATAL state that crashes the dev container, which wipes out any ephemeral changes you have made and is ANNOYING!
2023-12-13 14:26:53 -05:00
Avi Layani
df24cb692b Adding hosts bulk deletion feature (#14462)
* Adding hosts bulk deletion feature

Signed-off-by: Avi Layani <alayani@redhat.com>

* fix the type of the argument

Signed-off-by: Avi Layani <alayani@redhat.com>

* fixing activity_entry tracking

Signed-off-by: Avi Layani <alayani@redhat.com>

* Revert "fixing activity_entry tracking"

This reverts commit c8eab52c2ccc5abe215d56d1704ba1157e5fbbd0.
Since bulk_delete is not related to a single inventory, only to hosts, which
can come from different inventories.

* get only needed vars to reduce memory consumption

Signed-off-by: Avi Layani <alayani@redhat.com>

* filtering the data to reduce memory increases the number of queries

Signed-off-by: Avi Layani <alayani@redhat.com>

* update the activity stream for inventories

Signed-off-by: Avi Layani <alayani@redhat.com>

* fix the changes dict initialization

Signed-off-by: Avi Layani <alayani@redhat.com>

---------

Signed-off-by: Avi Layani <alayani@redhat.com>
2023-12-13 10:28:31 -06:00
jessicamack
0d825a744b Update setuptools-scm (#14716)
* properly format requirement

* upgrade setuptools_scm

* Revert "properly format requirement"

This reverts commit 4c8792950f.

* test ansible-runner package upgrade

* Revert "test ansible-runner package upgrade"

This reverts commit ba4b74f2bb.
2023-12-11 17:11:04 -05:00
Marliana Lara
5e48bf091b Fix undefined error in settings/logging/edit form (#14715)
Fix undefined error in logging settings edit form
2023-12-11 10:58:30 -05:00
Alan Rominger
1294cec92c Fix updater bug due to missing newline at EOF (#14713) 2023-12-08 16:51:17 +00:00
jessicamack
dae12ee1b8 Remove incorrectly formatted line from requirements.txt (#14714)
remove git+ line
2023-12-08 11:06:17 -05:00
jessicamack
b091f6cf79 Add django-ansible-base (#14705)
* add django-ansible-base

Signed-off-by: jessicamack <jmack@redhat.com>

* add licenses

* add django-ansible-base

Signed-off-by: jessicamack <jmack@redhat.com>

* add licenses

* apply patch to fix permissions issue

---------

Signed-off-by: jessicamack <jmack@redhat.com>
2023-12-07 11:45:44 -05:00
Don Naro
fe564c5fad Dependabot for docsite requirements (#14670) 2023-12-06 15:11:44 -05:00
Tyler Muir
eb3bc84461 remove unnecessary required flags for saml backend (#14666)
Signed-off-by: Tyler Muir <tylergmuir@gmail.com>
2023-12-06 15:08:54 -05:00
Andrew Austin
6aa2997dce Add TLS certificate auth for HashiCorp Vault (#14534)
* Add TLS certificate auth for HashiCorp Vault

Add support for AWX to authenticate with HashiCorp Vault using
TLS client certificates.

Also updates the documentation for the HashiCorp Vault secret management
plugins to include both the new TLS options and the missing Kubernetes
auth method options.

Signed-off-by: Andrew Austin <aaustin@redhat.com>

* Refactor docker-compose vault for TLS cert auth

Add TLS configuration to the docker-compose Vault configuration and
use that method by default in vault plumbing.

This ensures that the result of bringing up the docker-compose stack
with vault enabled and running the plumb-vault playbook is a fully
working credential retrieval setup using TLS client cert authentication.

Signed-off-by: Andrew Austin <aaustin@redhat.com>

* Remove incorrect trailing space

Co-authored-by: Hao Liu <44379968+TheRealHaoLiu@users.noreply.github.com>

* Make vault init idempotent

- improve error handling for vault_initialization
- ignore error if vault cert auth is already configured
- removed unused register

* Add VAULT_TLS option

Make TLS for HashiCorp Vault optional and configurable via VAULT_TLS env var

* Add retries for vault init

Sometimes it takes longer for vault to fully come up, and init will fail

---------

Signed-off-by: Andrew Austin <aaustin@redhat.com>
Co-authored-by: Hao Liu <44379968+TheRealHaoLiu@users.noreply.github.com>
Co-authored-by: Hao Liu <haoli@redhat.com>
2023-12-06 19:12:15 +00:00
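A minimal sketch of the TLS login flow this adds, assuming Vault's cert auth method at its default 'cert' mount (the real plugin builds the URL from its auth_path input and wraps the cert material in CertFiles, as the diff below shows):

    import requests

    def vault_cert_login(url, cert_path, key_path, role=None, cacert=None):
        sess = requests.Session()
        sess.cert = (cert_path, key_path)  # client cert presented during the TLS handshake
        payload = {'name': role} if role else {}
        resp = sess.post(f'{url}/v1/auth/cert/login', json=payload, verify=cacert or True, timeout=30)
        resp.raise_for_status()
        return resp.json()['auth']['client_token']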
Don Naro
dd00bbba42 separate tox calls in readthedocs config (#14673) 2023-12-06 17:17:12 +00:00
Rick Elrod
fe6bac6d9e [CI] Reduce GHA timeouts from 6h default (#14704)
* [CI] Reduce GHA timeouts from 6h default

The goal here is to never interfere with a real run (so most of the
timeout-minutes values seem rather high) but to avoid having 6h long
runs if something goes crazy and never ends.

Signed-off-by: Rick Elrod <rick@elrod.me>

* Do bash hackery instead

Signed-off-by: Rick Elrod <rick@elrod.me>

---------

Signed-off-by: Rick Elrod <rick@elrod.me>
2023-12-06 14:43:45 +00:00
jainnikhil30
87abbd4b10 Fix the bulk Job Launch Integration test in awx collection (#14702)
* fix the integration tests
2023-12-06 18:50:33 +05:30
100 changed files with 1776 additions and 813 deletions


@@ -43,10 +43,14 @@ runs:
- name: Update default AWX password
shell: bash
run: |
- while [[ "$(curl -s -o /dev/null -w ''%{http_code}'' -k https://localhost:8043/api/v2/ping/)" != "200" ]]
- do
- echo "Waiting for AWX..."
- sleep 5
+ SECONDS=0
+ while [[ "$(curl -s -o /dev/null -w ''%{http_code}'' -k https://localhost:8043/api/v2/ping/)" != "200" ]]; do
+ if [[ $SECONDS -gt 600 ]]; then
+ echo "Timing out, AWX never came up"
+ exit 1
+ fi
+ echo "Waiting for AWX..."
+ sleep 5
done
echo "AWX is up, updating the password..."
docker exec -i tools_awx_1 sh <<-EOSH

.github/dependabot.yml (new file)

@@ -0,0 +1,10 @@
version: 2
updates:
- package-ecosystem: "pip"
directory: "docs/docsite/"
schedule:
interval: "weekly"
open-pull-requests-limit: 2
labels:
- "docs"
- "dependencies"


@@ -11,6 +11,7 @@ jobs:
common-tests:
name: ${{ matrix.tests.name }}
runs-on: ubuntu-latest
timeout-minutes: 60
permissions:
packages: write
contents: read
@@ -49,6 +50,7 @@ jobs:
dev-env:
runs-on: ubuntu-latest
timeout-minutes: 60
steps:
- uses: actions/checkout@v3
@@ -63,6 +65,7 @@ jobs:
awx-operator:
runs-on: ubuntu-latest
timeout-minutes: 60
steps:
- name: Checkout awx
uses: actions/checkout@v3
@@ -112,6 +115,7 @@ jobs:
collection-sanity:
name: awx_collection sanity
runs-on: ubuntu-latest
timeout-minutes: 30
strategy:
fail-fast: false
steps:
@@ -131,6 +135,7 @@ jobs:
collection-integration:
name: awx_collection integration
runs-on: ubuntu-latest
timeout-minutes: 60
strategy:
fail-fast: false
matrix:
@@ -182,6 +187,7 @@ jobs:
collection-integration-coverage-combine:
name: combine awx_collection integration coverage
runs-on: ubuntu-latest
timeout-minutes: 10
needs:
- collection-integration
strategy:


@@ -12,6 +12,7 @@ jobs:
push:
if: endsWith(github.repository, '/awx') || startsWith(github.ref, 'refs/heads/release_')
runs-on: ubuntu-latest
timeout-minutes: 60
permissions:
packages: write
contents: read


@@ -6,6 +6,7 @@ jobs:
docsite-build:
name: docsite test build
runs-on: ubuntu-latest
timeout-minutes: 30
steps:
- uses: actions/checkout@v3


@@ -9,6 +9,7 @@ on:
jobs:
push:
runs-on: ubuntu-latest
timeout-minutes: 20
permissions:
packages: write
contents: read


@@ -13,6 +13,7 @@ permissions:
jobs:
triage:
runs-on: ubuntu-latest
timeout-minutes: 20
name: Label Issue
steps:
@@ -26,6 +27,7 @@ jobs:
community:
runs-on: ubuntu-latest
timeout-minutes: 20
name: Label Issue - Community
steps:
- uses: actions/checkout@v3


@@ -14,6 +14,7 @@ permissions:
jobs:
triage:
runs-on: ubuntu-latest
timeout-minutes: 20
name: Label PR
steps:
@@ -25,6 +26,7 @@ jobs:
community:
runs-on: ubuntu-latest
timeout-minutes: 20
name: Label PR - Community
steps:
- uses: actions/checkout@v3


@@ -10,6 +10,7 @@ jobs:
if: github.repository_owner == 'ansible' && endsWith(github.repository, 'awx')
name: Scan PR description for semantic versioning keywords
runs-on: ubuntu-latest
timeout-minutes: 20
permissions:
packages: write
contents: read


@@ -15,6 +15,7 @@ jobs:
promote:
if: endsWith(github.repository, '/awx')
runs-on: ubuntu-latest
timeout-minutes: 90
steps:
- name: Checkout awx
uses: actions/checkout@v3


@@ -23,6 +23,7 @@ jobs:
stage:
if: endsWith(github.repository, '/awx')
runs-on: ubuntu-latest
timeout-minutes: 90
permissions:
packages: write
contents: write


@@ -9,6 +9,7 @@ jobs:
name: Update Dependabot Prs
if: contains(github.event.pull_request.labels.*.name, 'dependencies') && contains(github.event.pull_request.labels.*.name, 'component:ui')
runs-on: ubuntu-latest
timeout-minutes: 20
steps:
- name: Checkout branch


@@ -13,6 +13,7 @@ on:
jobs:
push:
runs-on: ubuntu-latest
timeout-minutes: 60
permissions:
packages: write
contents: read


@@ -10,6 +10,7 @@ build:
3.11
commands:
- pip install --user tox
- - python3 -m tox -e docs
+ - python3 -m tox -e docs --notest -v
+ - python3 -m tox -e docs --skip-pkg-install -q
- mkdir -p _readthedocs/html/
- mv docs/docsite/build/html/* _readthedocs/html/


@@ -22,7 +22,7 @@ recursive-exclude awx/settings local_settings.py*
include tools/scripts/request_tower_configuration.sh
include tools/scripts/request_tower_configuration.ps1
include tools/scripts/automation-controller-service
- include tools/scripts/failure-event-handler
+ include tools/scripts/rsyslog-4xx-recovery
include tools/scripts/awx-python
include awx/playbooks/library/mkfifo.py
include tools/sosreport/*


@@ -43,6 +43,8 @@ PROMETHEUS ?= false
GRAFANA ?= false
# If set to true docker-compose will also start a hashicorp vault instance
VAULT ?= false
# If set to true docker-compose will also start a hashicorp vault instance with TLS enabled
VAULT_TLS ?= false
# If set to true docker-compose will also start a tacacs+ instance
TACACS ?= false
@@ -61,7 +63,7 @@ RECEPTOR_IMAGE ?= quay.io/ansible/receptor:devel
SRC_ONLY_PKGS ?= cffi,pycparser,psycopg,twilio
# These should be upgraded in the AWX and Ansible venv before attempting
# to install the actual requirements
- VENV_BOOTSTRAP ?= pip==21.2.4 setuptools==65.6.3 setuptools_scm[toml]==7.0.5 wheel==0.38.4
+ VENV_BOOTSTRAP ?= pip==21.2.4 setuptools==65.6.3 setuptools_scm[toml]==8.0.4 wheel==0.38.4
NAME ?= awx
@@ -528,13 +530,15 @@ docker-compose-sources: .git/hooks/pre-commit
-e enable_prometheus=$(PROMETHEUS) \
-e enable_grafana=$(GRAFANA) \
-e enable_vault=$(VAULT) \
-e vault_tls=$(VAULT_TLS) \
-e enable_tacacs=$(TACACS) \
$(EXTRA_SOURCES_ANSIBLE_OPTS)
docker-compose: awx/projects docker-compose-sources
ansible-galaxy install --ignore-certs -r tools/docker-compose/ansible/requirements.yml;
ansible-playbook -i tools/docker-compose/inventory tools/docker-compose/ansible/initialize_containers.yml \
- -e enable_vault=$(VAULT);
+ -e enable_vault=$(VAULT) \
+ -e vault_tls=$(VAULT_TLS);
$(DOCKER_COMPOSE) -f tools/docker-compose/_sources/docker-compose.yml $(COMPOSE_OPTS) up $(COMPOSE_UP_OPTS) --remove-orphans
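With the new knob above, a dev stack with a TLS-enabled Vault can be brought up with, for example: make docker-compose VAULT=true VAULT_TLS=true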
docker-compose-credential-plugins: awx/projects docker-compose-sources


@@ -1,450 +0,0 @@
# Copyright (c) 2015 Ansible, Inc.
# All Rights Reserved.
# Python
import re
import json
from functools import reduce
# Django
from django.core.exceptions import FieldError, ValidationError, FieldDoesNotExist
from django.db import models
from django.db.models import Q, CharField, IntegerField, BooleanField, TextField, JSONField
from django.db.models.fields.related import ForeignObjectRel, ManyToManyField, ForeignKey
from django.db.models.functions import Cast
from django.contrib.contenttypes.models import ContentType
from django.contrib.contenttypes.fields import GenericForeignKey
from django.utils.encoding import force_str
from django.utils.translation import gettext_lazy as _
# Django REST Framework
from rest_framework.exceptions import ParseError, PermissionDenied
from rest_framework.filters import BaseFilterBackend
# AWX
from awx.main.utils import get_type_for_model, to_python_boolean
from awx.main.utils.db import get_all_field_names
class TypeFilterBackend(BaseFilterBackend):
"""
Filter on type field now returned with all objects.
"""
def filter_queryset(self, request, queryset, view):
try:
types = None
for key, value in request.query_params.items():
if key == 'type':
if ',' in value:
types = value.split(',')
else:
types = (value,)
if types:
types_map = {}
for ct in ContentType.objects.filter(Q(app_label='main') | Q(app_label='auth', model='user')):
ct_model = ct.model_class()
if not ct_model:
continue
ct_type = get_type_for_model(ct_model)
types_map[ct_type] = ct.pk
model = queryset.model
model_type = get_type_for_model(model)
if 'polymorphic_ctype' in get_all_field_names(model):
types_pks = set([v for k, v in types_map.items() if k in types])
queryset = queryset.filter(polymorphic_ctype_id__in=types_pks)
elif model_type in types:
queryset = queryset
else:
queryset = queryset.none()
return queryset
except FieldError as e:
# Return a 400 for invalid field names.
raise ParseError(*e.args)
def get_fields_from_path(model, path):
"""
Given a Django ORM lookup path (possibly over multiple models)
Returns the fields in the line, and also the revised lookup path
ex., given
model=Organization
path='project__timeout'
returns tuple of fields traversed as well and a corrected path,
for special cases we do substitutions
([<IntegerField for timeout>], 'project__timeout')
"""
# Store of all the fields used to detect repeats
field_list = []
new_parts = []
for name in path.split('__'):
if model is None:
raise ParseError(_('No related model for field {}.').format(name))
# HACK: Make project and inventory source filtering by old field names work for backwards compatibility.
if model._meta.object_name in ('Project', 'InventorySource'):
name = {'current_update': 'current_job', 'last_update': 'last_job', 'last_update_failed': 'last_job_failed', 'last_updated': 'last_job_run'}.get(
name, name
)
if name == 'type' and 'polymorphic_ctype' in get_all_field_names(model):
name = 'polymorphic_ctype'
new_parts.append('polymorphic_ctype__model')
else:
new_parts.append(name)
if name in getattr(model, 'PASSWORD_FIELDS', ()):
raise PermissionDenied(_('Filtering on password fields is not allowed.'))
elif name == 'pk':
field = model._meta.pk
else:
name_alt = name.replace("_", "")
if name_alt in model._meta.fields_map.keys():
field = model._meta.fields_map[name_alt]
new_parts.pop()
new_parts.append(name_alt)
else:
field = model._meta.get_field(name)
if isinstance(field, ForeignObjectRel) and getattr(field.field, '__prevent_search__', False):
raise PermissionDenied(_('Filtering on %s is not allowed.' % name))
elif getattr(field, '__prevent_search__', False):
raise PermissionDenied(_('Filtering on %s is not allowed.' % name))
if field in field_list:
# Field traversed twice, could create infinite JOINs, DoSing Tower
raise ParseError(_('Loops not allowed in filters, detected on field {}.').format(field.name))
field_list.append(field)
model = getattr(field, 'related_model', None)
return field_list, '__'.join(new_parts)
def get_field_from_path(model, path):
"""
Given a Django ORM lookup path (possibly over multiple models)
Returns the last field in the line, and the revised lookup path
ex.
(<IntegerField for timeout>, 'project__timeout')
"""
field_list, new_path = get_fields_from_path(model, path)
return (field_list[-1], new_path)
class FieldLookupBackend(BaseFilterBackend):
"""
Filter using field lookups provided via query string parameters.
"""
RESERVED_NAMES = ('page', 'page_size', 'format', 'order', 'order_by', 'search', 'type', 'host_filter', 'count_disabled', 'no_truncate', 'limit')
SUPPORTED_LOOKUPS = (
'exact',
'iexact',
'contains',
'icontains',
'startswith',
'istartswith',
'endswith',
'iendswith',
'regex',
'iregex',
'gt',
'gte',
'lt',
'lte',
'in',
'isnull',
'search',
)
# A list of fields that we know can be filtered on without the possibility
# of introducing duplicates
NO_DUPLICATES_ALLOW_LIST = (CharField, IntegerField, BooleanField, TextField)
def get_fields_from_lookup(self, model, lookup):
if '__' in lookup and lookup.rsplit('__', 1)[-1] in self.SUPPORTED_LOOKUPS:
path, suffix = lookup.rsplit('__', 1)
else:
path = lookup
suffix = 'exact'
if not path:
raise ParseError(_('Query string field name not provided.'))
# FIXME: Could build up a list of models used across relationships, use
# those lookups combined with request.user.get_queryset(Model) to make
# sure user cannot query using objects he could not view.
field_list, new_path = get_fields_from_path(model, path)
new_lookup = new_path
new_lookup = '__'.join([new_path, suffix])
return field_list, new_lookup
def get_field_from_lookup(self, model, lookup):
'''Method to match return type of single field, if needed.'''
field_list, new_lookup = self.get_fields_from_lookup(model, lookup)
return (field_list[-1], new_lookup)
def to_python_related(self, value):
value = force_str(value)
if value.lower() in ('none', 'null'):
return None
else:
return int(value)
def value_to_python_for_field(self, field, value):
if isinstance(field, models.BooleanField):
return to_python_boolean(value)
elif isinstance(field, (ForeignObjectRel, ManyToManyField, GenericForeignKey, ForeignKey)):
try:
return self.to_python_related(value)
except ValueError:
raise ParseError(_('Invalid {field_name} id: {field_id}').format(field_name=getattr(field, 'name', 'related field'), field_id=value))
else:
return field.to_python(value)
def value_to_python(self, model, lookup, value):
try:
lookup.encode("ascii")
except UnicodeEncodeError:
raise ValueError("%r is not an allowed field name. Must be ascii encodable." % lookup)
field_list, new_lookup = self.get_fields_from_lookup(model, lookup)
field = field_list[-1]
needs_distinct = not all(isinstance(f, self.NO_DUPLICATES_ALLOW_LIST) for f in field_list)
# Type names are stored without underscores internally, but are presented and
# and serialized over the API containing underscores so we remove `_`
# for polymorphic_ctype__model lookups.
if new_lookup.startswith('polymorphic_ctype__model'):
value = value.replace('_', '')
elif new_lookup.endswith('__isnull'):
value = to_python_boolean(value)
elif new_lookup.endswith('__in'):
items = []
if not value:
raise ValueError('cannot provide empty value for __in')
for item in value.split(','):
items.append(self.value_to_python_for_field(field, item))
value = items
elif new_lookup.endswith('__regex') or new_lookup.endswith('__iregex'):
try:
re.compile(value)
except re.error as e:
raise ValueError(e.args[0])
elif new_lookup.endswith('__iexact'):
if not isinstance(field, (CharField, TextField)):
raise ValueError(f'{field.name} is not a text field and cannot be filtered by case-insensitive search')
elif new_lookup.endswith('__search'):
related_model = getattr(field, 'related_model', None)
if not related_model:
raise ValueError('%s is not searchable' % new_lookup[:-8])
new_lookups = []
for rm_field in related_model._meta.fields:
if rm_field.name in ('username', 'first_name', 'last_name', 'email', 'name', 'description', 'playbook'):
new_lookups.append('{}__{}__icontains'.format(new_lookup[:-8], rm_field.name))
return value, new_lookups, needs_distinct
else:
if isinstance(field, JSONField):
new_lookup = new_lookup.replace(field.name, f'{field.name}_as_txt')
value = self.value_to_python_for_field(field, value)
return value, new_lookup, needs_distinct
def filter_queryset(self, request, queryset, view):
try:
# Apply filters specified via query_params. Each entry in the lists
# below is (negate, field, value).
and_filters = []
or_filters = []
chain_filters = []
role_filters = []
search_filters = {}
needs_distinct = False
# Can only have two values: 'AND', 'OR'
# If 'AND' is used, an item must satisfy all conditions to show up in the results.
# If 'OR' is used, an item just needs to satisfy one condition to appear in results.
search_filter_relation = 'OR'
for key, values in request.query_params.lists():
if key in self.RESERVED_NAMES:
continue
# HACK: make `created` available via API for the Django User ORM model
# so it keep compatibility with other objects which exposes the `created` attr.
if queryset.model._meta.object_name == 'User' and key.startswith('created'):
key = key.replace('created', 'date_joined')
# HACK: Make job event filtering by host name mostly work even
# when not capturing job event hosts M2M.
if queryset.model._meta.object_name == 'JobEvent' and key.startswith('hosts__name'):
key = key.replace('hosts__name', 'or__host__name')
or_filters.append((False, 'host__name__isnull', True))
# Custom __int filter suffix (internal use only).
q_int = False
if key.endswith('__int'):
key = key[:-5]
q_int = True
# RBAC filtering
if key == 'role_level':
role_filters.append(values[0])
continue
# Search across related objects.
if key.endswith('__search'):
if values and ',' in values[0]:
search_filter_relation = 'AND'
values = reduce(lambda list1, list2: list1 + list2, [i.split(',') for i in values])
for value in values:
search_value, new_keys, _ = self.value_to_python(queryset.model, key, force_str(value))
assert isinstance(new_keys, list)
search_filters[search_value] = new_keys
# by definition, search *only* joins across relations,
# so it _always_ needs a .distinct()
needs_distinct = True
continue
# Custom chain__ and or__ filters, mutually exclusive (both can
# precede not__).
q_chain = False
q_or = False
if key.startswith('chain__'):
key = key[7:]
q_chain = True
elif key.startswith('or__'):
key = key[4:]
q_or = True
# Custom not__ filter prefix.
q_not = False
if key.startswith('not__'):
key = key[5:]
q_not = True
# Convert value(s) to python and add to the appropriate list.
for value in values:
if q_int:
value = int(value)
value, new_key, distinct = self.value_to_python(queryset.model, key, value)
if distinct:
needs_distinct = True
if '_as_txt' in new_key:
fname = next(item for item in new_key.split('__') if item.endswith('_as_txt'))
queryset = queryset.annotate(**{fname: Cast(fname[:-7], output_field=TextField())})
if q_chain:
chain_filters.append((q_not, new_key, value))
elif q_or:
or_filters.append((q_not, new_key, value))
else:
and_filters.append((q_not, new_key, value))
# Now build Q objects for database query filter.
if and_filters or or_filters or chain_filters or role_filters or search_filters:
args = []
for n, k, v in and_filters:
if n:
args.append(~Q(**{k: v}))
else:
args.append(Q(**{k: v}))
for role_name in role_filters:
if not hasattr(queryset.model, 'accessible_pk_qs'):
raise ParseError(_('Cannot apply role_level filter to this list because its model does not use roles for access control.'))
args.append(Q(pk__in=queryset.model.accessible_pk_qs(request.user, role_name)))
if or_filters:
q = Q()
for n, k, v in or_filters:
if n:
q |= ~Q(**{k: v})
else:
q |= Q(**{k: v})
args.append(q)
if search_filters and search_filter_relation == 'OR':
q = Q()
for term, constrains in search_filters.items():
for constrain in constrains:
q |= Q(**{constrain: term})
args.append(q)
elif search_filters and search_filter_relation == 'AND':
for term, constrains in search_filters.items():
q_chain = Q()
for constrain in constrains:
q_chain |= Q(**{constrain: term})
queryset = queryset.filter(q_chain)
for n, k, v in chain_filters:
if n:
q = ~Q(**{k: v})
else:
q = Q(**{k: v})
queryset = queryset.filter(q)
queryset = queryset.filter(*args)
if needs_distinct:
queryset = queryset.distinct()
return queryset
except (FieldError, FieldDoesNotExist, ValueError, TypeError) as e:
raise ParseError(e.args[0])
except ValidationError as e:
raise ParseError(json.dumps(e.messages, ensure_ascii=False))
class OrderByBackend(BaseFilterBackend):
"""
Filter to apply ordering based on query string parameters.
"""
def filter_queryset(self, request, queryset, view):
try:
order_by = None
for key, value in request.query_params.items():
if key in ('order', 'order_by'):
order_by = value
if ',' in value:
order_by = value.split(',')
else:
order_by = (value,)
default_order_by = self.get_default_ordering(view)
# glue the order by and default order by together so that the default is the backup option
order_by = list(order_by or []) + list(default_order_by or [])
if order_by:
order_by = self._validate_ordering_fields(queryset.model, order_by)
# Special handling of the type field for ordering. In this
# case, we're not sorting exactly on the type field, but
# given the limited number of views with multiple types,
# sorting on polymorphic_ctype.model is effectively the same.
new_order_by = []
if 'polymorphic_ctype' in get_all_field_names(queryset.model):
for field in order_by:
if field == 'type':
new_order_by.append('polymorphic_ctype__model')
elif field == '-type':
new_order_by.append('-polymorphic_ctype__model')
else:
new_order_by.append(field)
else:
for field in order_by:
if field not in ('type', '-type'):
new_order_by.append(field)
queryset = queryset.order_by(*new_order_by)
return queryset
except FieldError as e:
# Return a 400 for invalid field names.
raise ParseError(*e.args)
def get_default_ordering(self, view):
ordering = getattr(view, 'ordering', None)
if isinstance(ordering, str):
return (ordering,)
return ordering
def _validate_ordering_fields(self, model, order_by):
for field_name in order_by:
# strip off the negation prefix `-` if it exists
prefix = ''
path = field_name
if field_name[0] == '-':
prefix = field_name[0]
path = field_name[1:]
try:
field, new_path = get_field_from_path(model, path)
new_path = '{}{}'.format(prefix, new_path)
except (FieldError, FieldDoesNotExist) as e:
raise ParseError(e.args[0])
yield new_path


@@ -30,12 +30,13 @@ from rest_framework.permissions import IsAuthenticated
from rest_framework.renderers import StaticHTMLRenderer
from rest_framework.negotiation import DefaultContentNegotiation
+ from ansible_base.filters.rest_framework.field_lookup_backend import FieldLookupBackend
+ from ansible_base.utils.models import get_all_field_names
# AWX
- from awx.api.filters import FieldLookupBackend
from awx.main.models import UnifiedJob, UnifiedJobTemplate, User, Role, Credential, WorkflowJobTemplateNode, WorkflowApprovalTemplate
from awx.main.access import optimize_queryset
from awx.main.utils import camelcase_to_underscore, get_search_fields, getattrd, get_object_or_400, decrypt_field, get_awx_version
- from awx.main.utils.db import get_all_field_names
from awx.main.utils.licensing import server_product_name
from awx.main.views import ApiErrorView
from awx.api.serializers import ResourceAccessListElementSerializer, CopySerializer


@@ -43,6 +43,8 @@ from rest_framework.utils.serializer_helpers import ReturnList
# Django-Polymorphic
from polymorphic.models import PolymorphicModel
+ from ansible_base.utils.models import get_type_for_model
# AWX
from awx.main.access import get_user_capabilities
from awx.main.constants import ACTIVE_STATES, CENSOR_VALUE
@@ -99,10 +101,9 @@ from awx.main.models import (
CLOUD_INVENTORY_SOURCES,
)
from awx.main.models.base import VERBOSITY_CHOICES, NEW_JOB_TYPE_CHOICES
- from awx.main.models.rbac import get_roles_on_resource, role_summary_fields_generator
+ from awx.main.models.rbac import role_summary_fields_generator, RoleAncestorEntry
from awx.main.fields import ImplicitRoleField
from awx.main.utils import (
- get_type_for_model,
get_model_for_type,
camelcase_to_underscore,
getattrd,
@@ -2201,6 +2202,99 @@ class BulkHostCreateSerializer(serializers.Serializer):
return return_data
class BulkHostDeleteSerializer(serializers.Serializer):
hosts = serializers.ListField(
allow_empty=False,
max_length=100000,
write_only=True,
help_text=_('List of host ids to be deleted, e.g. [105, 130, 131, 200]'),
)
class Meta:
model = Host
fields = ('hosts',)
def validate(self, attrs):
request = self.context.get('request', None)
max_hosts = settings.BULK_HOST_MAX_DELETE
# Validating the number of hosts to be deleted
if len(attrs['hosts']) > max_hosts:
raise serializers.ValidationError(
{
"ERROR": 'Number of hosts exceeds system setting BULK_HOST_MAX_DELETE',
"BULK_HOST_MAX_DELETE": max_hosts,
"Hosts_count": len(attrs['hosts']),
}
)
# Getting list of all host objects, filtered by the list of the hosts to delete
attrs['host_qs'] = Host.objects.get_queryset().filter(pk__in=attrs['hosts']).only('id', 'inventory_id', 'name')
# Converting the queryset data into a dict to reduce the number of queries when
# manipulating the data
attrs['hosts_data'] = attrs['host_qs'].values()
if len(attrs['host_qs']) == 0:
error_hosts = {host: "Hosts do not exist or you lack permission to delete it" for host in attrs['hosts']}
raise serializers.ValidationError({'hosts': error_hosts})
if len(attrs['host_qs']) < len(attrs['hosts']):
hosts_exists = [host['id'] for host in attrs['hosts_data']]
failed_hosts = list(set(attrs['hosts']).difference(hosts_exists))
error_hosts = {host: "Hosts do not exist or you lack permission to delete it" for host in failed_hosts}
raise serializers.ValidationError({'hosts': error_hosts})
# Getting all inventories that the hosts can be in
inv_list = list(set([host['inventory_id'] for host in attrs['hosts_data']]))
# Checking that the user has permission to all inventories
errors = dict()
for inv in Inventory.objects.get_queryset().filter(pk__in=inv_list):
if request and not request.user.is_superuser:
if request.user not in inv.admin_role:
errors[inv.name] = "Lack permissions to delete hosts from this inventory."
if errors != {}:
raise PermissionDenied({"inventories": errors})
# check the inventory type only if the user has permission to it.
errors = dict()
for inv in Inventory.objects.get_queryset().filter(pk__in=inv_list):
if inv.kind != '':
errors[inv.name] = "Hosts can only be deleted from manual inventories."
if errors != {}:
raise serializers.ValidationError({"inventories": errors})
attrs['inventories'] = inv_list
return attrs
def delete(self, validated_data):
result = {"hosts": dict()}
changes = {'deleted_hosts': dict()}
for inventory in validated_data['inventories']:
changes['deleted_hosts'][inventory] = list()
for host in validated_data['hosts_data']:
result["hosts"][host["id"]] = f"The host {host['name']} was deleted"
changes['deleted_hosts'][host["inventory_id"]].append({"host_id": host["id"], "host_name": host["name"]})
try:
validated_data['host_qs'].delete()
except Exception as e:
raise serializers.ValidationError({"detail": _(f"cannot delete hosts, host deletion error {e}")})
request = self.context.get('request', None)
for inventory in validated_data['inventories']:
activity_entry = ActivityStream.objects.create(
operation='update',
object1='inventory',
changes=json.dumps(changes['deleted_hosts'][inventory]),
actor=request.user,
)
activity_entry.inventory.add(inventory)
return result
class GroupTreeSerializer(GroupSerializer):
children = serializers.SerializerMethodField()
@@ -2664,6 +2758,17 @@ class ResourceAccessListElementSerializer(UserSerializer):
if 'summary_fields' not in ret:
ret['summary_fields'] = {}
team_content_type = ContentType.objects.get_for_model(Team)
content_type = ContentType.objects.get_for_model(obj)
def get_roles_on_resource(parent_role):
"Returns a string list of the roles a parent_role has for current obj."
return list(
RoleAncestorEntry.objects.filter(ancestor=parent_role, content_type_id=content_type.id, object_id=obj.id)
.values_list('role_field', flat=True)
.distinct()
)
def format_role_perm(role):
role_dict = {'id': role.id, 'name': role.name, 'description': role.description}
try:
@@ -2679,7 +2784,7 @@ class ResourceAccessListElementSerializer(UserSerializer):
else:
# Singleton roles should not be managed from this view, as per copy/edit rework spec
role_dict['user_capabilities'] = {'unattach': False}
- return {'role': role_dict, 'descendant_roles': get_roles_on_resource(obj, role)}
+ return {'role': role_dict, 'descendant_roles': get_roles_on_resource(role)}
def format_team_role_perm(naive_team_role, permissive_role_ids):
ret = []
@@ -2705,12 +2810,9 @@ class ResourceAccessListElementSerializer(UserSerializer):
else:
# Singleton roles should not be managed from this view, as per copy/edit rework spec
role_dict['user_capabilities'] = {'unattach': False}
- ret.append({'role': role_dict, 'descendant_roles': get_roles_on_resource(obj, team_role)})
+ ret.append({'role': role_dict, 'descendant_roles': get_roles_on_resource(team_role)})
return ret
- team_content_type = ContentType.objects.get_for_model(Team)
- content_type = ContentType.objects.get_for_model(obj)
direct_permissive_role_ids = Role.objects.filter(content_type=content_type, object_id=obj.id).values_list('id', flat=True)
all_permissive_role_ids = Role.objects.filter(content_type=content_type, object_id=obj.id).values_list('ancestors__id', flat=True)


@@ -0,0 +1,22 @@
# Bulk Host Delete
This endpoint allows the client to delete multiple hosts from inventories.
They may do this by providing a list of host IDs to be deleted.
Example:
{
"hosts": [1, 2, 3, 4, 5]
}
Return data:
{
"hosts": {
"1": "The host a1 was deleted",
"2": "The host a2 was deleted",
"3": "The host a3 was deleted",
"4": "The host a4 was deleted",
"5": "The host a5 was deleted",
}
}
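An illustrative client call against this endpoint (host and token are placeholders):

    import requests

    resp = requests.post(
        'https://awx.example.com/api/v2/bulk/host_delete/',
        json={'hosts': [1, 2, 3, 4, 5]},
        headers={'Authorization': 'Bearer <token>'},
    )
    resp.raise_for_status()
    print(resp.json())  # maps each host id to a deletion message, as shown above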


@@ -36,6 +36,7 @@ from awx.api.views import (
from awx.api.views.bulk import (
BulkView,
BulkHostCreateView,
BulkHostDeleteView,
BulkJobLaunchView,
)
@@ -152,6 +153,7 @@ v2_urls = [
re_path(r'^workflow_approvals/', include(workflow_approval_urls)),
re_path(r'^bulk/$', BulkView.as_view(), name='bulk'),
re_path(r'^bulk/host_create/$', BulkHostCreateView.as_view(), name='bulk_host_create'),
re_path(r'^bulk/host_delete/$', BulkHostDeleteView.as_view(), name='bulk_host_delete'),
re_path(r'^bulk/job_launch/$', BulkJobLaunchView.as_view(), name='bulk_job_launch'),
]


@@ -742,8 +742,8 @@ class TeamActivityStreamList(SubListAPIView):
qs = self.request.user.get_queryset(self.model)
return qs.filter(
Q(team=parent)
- | Q(project__in=models.Project.accessible_objects(parent, 'read_role'))
- | Q(credential__in=models.Credential.accessible_objects(parent, 'read_role'))
+ | Q(project__in=models.Project.accessible_objects(parent.member_role, 'read_role'))
+ | Q(credential__in=models.Credential.accessible_objects(parent.member_role, 'read_role'))
)
@@ -1397,7 +1397,7 @@ class OrganizationCredentialList(SubListCreateAPIView):
self.check_parent_access(organization)
user_visible = models.Credential.accessible_objects(self.request.user, 'read_role').all()
- org_set = models.Credential.accessible_objects(organization.admin_role, 'read_role').all()
+ org_set = models.Credential.objects.filter(organization=organization)
if self.request.user.is_superuser or self.request.user.is_system_auditor:
return org_set


@@ -34,6 +34,7 @@ class BulkView(APIView):
'''List top level resources'''
data = OrderedDict()
data['host_create'] = reverse('api:bulk_host_create', request=request)
data['host_delete'] = reverse('api:bulk_host_delete', request=request)
data['job_launch'] = reverse('api:bulk_job_launch', request=request)
return Response(data)
@@ -72,3 +73,20 @@ class BulkHostCreateView(GenericAPIView):
result = serializer.create(serializer.validated_data)
return Response(result, status=status.HTTP_201_CREATED)
return Response(serializer.errors, status=status.HTTP_400_BAD_REQUEST)
class BulkHostDeleteView(GenericAPIView):
permission_classes = [IsAuthenticated]
model = Host
serializer_class = serializers.BulkHostDeleteSerializer
allowed_methods = ['GET', 'POST', 'OPTIONS']
def get(self, request):
return Response({"detail": "Bulk delete hosts with this endpoint"}, status=status.HTTP_200_OK)
def post(self, request):
serializer = serializers.BulkHostDeleteSerializer(data=request.data, context={'request': request})
if serializer.is_valid():
result = serializer.delete(serializer.validated_data)
return Response(result, status=status.HTTP_201_CREATED)
return Response(serializer.errors, status=status.HTTP_400_BAD_REQUEST)


@@ -7,8 +7,10 @@ import json
# Django
from django.db import models
+ from ansible_base.utils.models import prevent_search
# AWX
- from awx.main.models.base import CreatedModifiedModel, prevent_search
+ from awx.main.models.base import CreatedModifiedModel
from awx.main.utils import encrypt_field
from awx.conf import settings_registry


@@ -20,11 +20,12 @@ from rest_framework.exceptions import ParseError, PermissionDenied
# Django OAuth Toolkit
from awx.main.models.oauth import OAuth2Application, OAuth2AccessToken
+ from ansible_base.utils.validation import to_python_boolean
# AWX
from awx.main.utils import (
get_object_or_400,
get_pk_from_dict,
- to_python_boolean,
get_licenser,
)
from awx.main.models import (


@@ -827,6 +827,16 @@ register(
category_slug='bulk',
)
register(
'BULK_HOST_MAX_DELETE',
field_class=fields.IntegerField,
default=250,
label=_('Max number of hosts to allow to be deleted in a single bulk action'),
help_text=_('Max number of hosts to allow to be deleted in a single bulk action'),
category=_('Bulk Actions'),
category_slug='bulk',
)
register(
'UI_NEXT',
field_class=fields.BooleanField,


@@ -41,6 +41,34 @@ base_inputs = {
'secret': True,
'help_text': _('The Secret ID for AppRole Authentication'),
},
{
'id': 'client_cert_public',
'label': _('Client Certificate'),
'type': 'string',
'multiline': True,
'help_text': _(
'The PEM-encoded client certificate used for TLS client authentication.'
' This should include the certificate and any intermediate certificates.'
),
},
{
'id': 'client_cert_private',
'label': _('Client Certificate Key'),
'type': 'string',
'multiline': True,
'secret': True,
'help_text': _('The certificate private key used for TLS client authentication.'),
},
{
'id': 'client_cert_role',
'label': _('TLS Authentication Role'),
'type': 'string',
'multiline': False,
'help_text': _(
'The role configured in Hashicorp Vault for TLS client authentication.'
' If not provided, Hashicorp Vault may assign roles based on the certificate used.'
),
},
{
'id': 'namespace',
'label': _('Namespace name (Vault Enterprise only)'),
@@ -164,8 +192,10 @@ def handle_auth(**kwargs):
token = method_auth(**kwargs, auth_param=approle_auth(**kwargs))
elif kwargs.get('kubernetes_role'):
token = method_auth(**kwargs, auth_param=kubernetes_auth(**kwargs))
+ elif kwargs.get('client_cert_public') and kwargs.get('client_cert_private'):
+ token = method_auth(**kwargs, auth_param=client_cert_auth(**kwargs))
else:
- raise Exception('Either token or AppRole/Kubernetes authentication parameters must be set')
+ raise Exception('Either a token or AppRole, Kubernetes, or TLS authentication parameters must be set')
return token
@@ -181,6 +211,10 @@ def kubernetes_auth(**kwargs):
return {'role': kwargs['kubernetes_role'], 'jwt': jwt}
def client_cert_auth(**kwargs):
return {'name': kwargs.get('client_cert_role')}
def method_auth(**kwargs):
# get auth method specific params
request_kwargs = {'json': kwargs['auth_param'], 'timeout': 30}
@@ -193,13 +227,22 @@ def method_auth(**kwargs):
cacert = kwargs.get('cacert', None)
sess = requests.Session()
# Namespace support
if kwargs.get('namespace'):
sess.headers['X-Vault-Namespace'] = kwargs['namespace']
request_url = '/'.join([url, 'auth', auth_path, 'login']).rstrip('/')
with CertFiles(cacert) as cert:
request_kwargs['verify'] = cert
- resp = sess.post(request_url, **request_kwargs)
+ # TLS client certificate support
+ if kwargs.get('client_cert_public') and kwargs.get('client_cert_private'):
+ # Add client cert to requests Session before making call
+ with CertFiles(kwargs['client_cert_public'], key=kwargs['client_cert_private']) as client_cert:
+ sess.cert = client_cert
+ resp = sess.post(request_url, **request_kwargs)
+ else:
+ # Make call without client certificate
+ resp = sess.post(request_url, **request_kwargs)
resp.raise_for_status()
token = resp.json()['auth']['client_token']
return token


@@ -6,8 +6,10 @@ from django.conf import settings # noqa
from django.db import connection
from django.db.models.signals import pre_delete # noqa
+ from ansible_base.utils.models import prevent_search
# AWX
- from awx.main.models.base import BaseModel, PrimordialModel, prevent_search, accepts_json, CLOUD_INVENTORY_SOURCES, VERBOSITY_CHOICES # noqa
+ from awx.main.models.base import BaseModel, PrimordialModel, accepts_json, CLOUD_INVENTORY_SOURCES, VERBOSITY_CHOICES # noqa
from awx.main.models.unified_jobs import UnifiedJob, UnifiedJobTemplate, StdoutMaxBytesExceeded # noqa
from awx.main.models.organization import Organization, Profile, Team, UserSessionMembership # noqa
from awx.main.models.credential import Credential, CredentialType, CredentialInputSource, ManagedCredentialType, build_safe_env # noqa


@@ -12,9 +12,11 @@ from django.utils.text import Truncator
from django.utils.translation import gettext_lazy as _
from django.core.exceptions import ValidationError
+ from ansible_base.utils.models import prevent_search
# AWX
from awx.api.versioning import reverse
- from awx.main.models.base import prevent_search, AD_HOC_JOB_TYPE_CHOICES, VERBOSITY_CHOICES, VarsDictProperty
+ from awx.main.models.base import AD_HOC_JOB_TYPE_CHOICES, VERBOSITY_CHOICES, VarsDictProperty
from awx.main.models.events import AdHocCommandEvent, UnpartitionedAdHocCommandEvent
from awx.main.models.unified_jobs import UnifiedJob
from awx.main.models.notifications import JobNotificationMixin, NotificationTemplate


@@ -15,7 +15,6 @@ from awx.main.utils import encrypt_field, parse_yaml_or_json
from awx.main.constants import CLOUD_PROVIDERS
__all__ = [
- 'prevent_search',
'VarsDictProperty',
'BaseModel',
'CreatedModifiedModel',
@@ -384,23 +383,6 @@ class NotificationFieldsModel(BaseModel):
notification_templates_started = models.ManyToManyField("NotificationTemplate", blank=True, related_name='%(class)s_notification_templates_for_started')
- def prevent_search(relation):
- """
- Used to mark a model field or relation as "restricted from filtering"
- e.g.,
- class AuthToken(BaseModel):
- user = prevent_search(models.ForeignKey(...))
- sensitive_data = prevent_search(models.CharField(...))
- The flag set by this function is used by
- `awx.api.filters.FieldLookupBackend` to block fields and relations that
- should not be searchable/filterable via search query params
- """
- setattr(relation, '__prevent_search__', True)
- return relation
def accepts_json(relation):
"""
Used to mark a model field as allowing JSON e.g,. JobTemplate.extra_vars


@@ -17,6 +17,8 @@ from django.db.models import Sum, Q
import redis
from solo.models import SingletonModel
+ from ansible_base.utils.models import prevent_search
# AWX
from awx import __version__ as awx_application_version
from awx.main.utils import is_testing
@@ -24,7 +26,7 @@ from awx.api.versioning import reverse
from awx.main.fields import ImplicitRoleField
from awx.main.managers import InstanceManager, UUID_DEFAULT
from awx.main.constants import JOB_FOLDER_PREFIX
- from awx.main.models.base import BaseModel, HasEditsMixin, prevent_search
+ from awx.main.models.base import BaseModel, HasEditsMixin
from awx.main.models.rbac import (
ROLE_SINGLETON_SYSTEM_ADMINISTRATOR,
ROLE_SINGLETON_SYSTEM_AUDITOR,


@@ -25,6 +25,8 @@ from django.db.models import Q
# REST Framework
from rest_framework.exceptions import ParseError
+ from ansible_base.utils.models import prevent_search
# AWX
from awx.api.versioning import reverse
from awx.main.constants import CLOUD_PROVIDERS
@@ -35,7 +37,7 @@ from awx.main.fields import (
OrderedManyToManyField,
)
from awx.main.managers import HostManager, HostMetricActiveManager
- from awx.main.models.base import BaseModel, CommonModelNameNotUnique, VarsDictProperty, CLOUD_INVENTORY_SOURCES, prevent_search, accepts_json
+ from awx.main.models.base import BaseModel, CommonModelNameNotUnique, VarsDictProperty, CLOUD_INVENTORY_SOURCES, accepts_json
from awx.main.models.events import InventoryUpdateEvent, UnpartitionedInventoryUpdateEvent
from awx.main.models.unified_jobs import UnifiedJob, UnifiedJobTemplate
from awx.main.models.mixins import (


@@ -20,13 +20,14 @@ from django.core.exceptions import FieldDoesNotExist
# REST Framework
from rest_framework.exceptions import ParseError
+ from ansible_base.utils.models import prevent_search
# AWX
from awx.api.versioning import reverse
from awx.main.constants import HOST_FACTS_FIELDS
from awx.main.models.base import (
BaseModel,
CreatedModifiedModel,
- prevent_search,
accepts_json,
JOB_TYPE_CHOICES,
NEW_JOB_TYPE_CHOICES,


@@ -9,7 +9,6 @@ import requests
# Django
from django.apps import apps
from django.conf import settings
- from django.contrib.auth.models import User # noqa
from django.contrib.contenttypes.models import ContentType
from django.core.exceptions import ValidationError
from django.db import models
@@ -17,8 +16,9 @@ from django.db.models.query import QuerySet
from django.utils.crypto import get_random_string
from django.utils.translation import gettext_lazy as _
+ from ansible_base.utils.models import prevent_search
# AWX
- from awx.main.models.base import prevent_search
from awx.main.models.rbac import Role, RoleAncestorEntry
from awx.main.utils import parse_yaml_or_json, get_custom_venv_choices, get_licenser, polymorphic
from awx.main.utils.execution_environments import get_default_execution_environment
@@ -64,13 +64,12 @@ class ResourceMixin(models.Model):
@staticmethod
def _accessible_pk_qs(cls, accessor, role_field, content_types=None):
- if type(accessor) == User:
+ if accessor._meta.model_name == 'user':
ancestor_roles = accessor.roles.all()
elif type(accessor) == Role:
ancestor_roles = [accessor]
else:
- accessor_type = ContentType.objects.get_for_model(accessor)
- ancestor_roles = Role.objects.filter(content_type__pk=accessor_type.id, object_id=accessor.id)
+ raise RuntimeError(f'Role filters only valid for users and ancestor role, received {accessor}')
if content_types is None:
ct_kwarg = dict(content_type_id=ContentType.objects.get_for_model(cls).id)


@@ -15,9 +15,11 @@ from django.utils.encoding import smart_str, force_str
from jinja2 import sandbox, ChainableUndefined
from jinja2.exceptions import TemplateSyntaxError, UndefinedError, SecurityError
+ from ansible_base.utils.models import prevent_search
# AWX
from awx.api.versioning import reverse
- from awx.main.models.base import CommonModelNameNotUnique, CreatedModifiedModel, prevent_search
+ from awx.main.models.base import CommonModelNameNotUnique, CreatedModifiedModel
from awx.main.utils import encrypt_field, decrypt_field, set_environ
from awx.main.notifications.email_backend import CustomEmailBackend
from awx.main.notifications.slack_backend import SlackBackend


@@ -15,12 +15,10 @@ from django.utils.translation import gettext_lazy as _
# AWX
from awx.api.versioning import reverse
- from django.contrib.auth.models import User # noqa
__all__ = [
'Role',
'batch_role_ancestor_rebuilding',
- 'get_roles_on_resource',
'ROLE_SINGLETON_SYSTEM_ADMINISTRATOR',
'ROLE_SINGLETON_SYSTEM_AUDITOR',
'role_summary_fields_generator',
@@ -170,16 +168,10 @@ class Role(models.Model):
return reverse('api:role_detail', kwargs={'pk': self.pk}, request=request)
def __contains__(self, accessor):
- if type(accessor) == User:
+ if accessor._meta.model_name == 'user':
return self.ancestors.filter(members=accessor).exists()
- elif accessor.__class__.__name__ == 'Team':
- return self.ancestors.filter(pk=accessor.member_role.id).exists()
- elif type(accessor) == Role:
- return self.ancestors.filter(pk=accessor.pk).exists()
- else:
- accessor_type = ContentType.objects.get_for_model(accessor)
- roles = Role.objects.filter(content_type__pk=accessor_type.id, object_id=accessor.id)
- return self.ancestors.filter(pk__in=roles).exists()
+ raise RuntimeError(f'Role evaluations only valid for users, received {accessor}')
@property
def name(self):
@@ -460,31 +452,6 @@ class RoleAncestorEntry(models.Model):
object_id = models.PositiveIntegerField(null=False)
- def get_roles_on_resource(resource, accessor):
- """
- Returns a string list of the roles a accessor has for a given resource.
- An accessor can be either a User, Role, or an arbitrary resource that
- contains one or more Roles associated with it.
- """
- if type(accessor) == User:
- roles = accessor.roles.all()
- elif type(accessor) == Role:
- roles = [accessor]
- else:
- accessor_type = ContentType.objects.get_for_model(accessor)
- roles = Role.objects.filter(content_type__pk=accessor_type.id, object_id=accessor.id)
- return [
- role_field
- for role_field in RoleAncestorEntry.objects.filter(
- ancestor__in=roles, content_type_id=ContentType.objects.get_for_model(resource).id, object_id=resource.id
- )
- .values_list('role_field', flat=True)
- .distinct()
- ]
def role_summary_fields_generator(content_object, role_field):
global role_descriptions
global role_names


@@ -30,8 +30,10 @@ from rest_framework.exceptions import ParseError
# Django-Polymorphic
from polymorphic.models import PolymorphicModel
+ from ansible_base.utils.models import prevent_search, get_type_for_model
# AWX
- from awx.main.models.base import CommonModelNameNotUnique, PasswordFieldsModel, NotificationFieldsModel, prevent_search
+ from awx.main.models.base import CommonModelNameNotUnique, PasswordFieldsModel, NotificationFieldsModel
from awx.main.dispatch import get_task_queuename
from awx.main.dispatch.control import Control as ControlDispatcher
from awx.main.registrar import activity_stream_registrar
@@ -42,7 +44,6 @@ from awx.main.utils.common import (
_inventory_updates,
copy_model_by_class,
copy_m2m_relationships,
- get_type_for_model,
parse_yaml_or_json,
getattr_dne,
ScheduleDependencyManager,


@@ -23,9 +23,11 @@ from crum import get_current_user
from jinja2 import sandbox
from jinja2.exceptions import TemplateSyntaxError, UndefinedError, SecurityError
+ from ansible_base.utils.models import prevent_search
# AWX
from awx.api.versioning import reverse
- from awx.main.models import prevent_search, accepts_json, UnifiedJobTemplate, UnifiedJob
+ from awx.main.models import accepts_json, UnifiedJobTemplate, UnifiedJob
from awx.main.models.notifications import NotificationTemplate, JobNotificationMixin
from awx.main.models.base import CreatedModifiedModel, VarsDictProperty
from awx.main.models.rbac import ROLE_SINGLETON_SYSTEM_ADMINISTRATOR, ROLE_SINGLETON_SYSTEM_AUDITOR


@@ -39,11 +39,15 @@ class TwilioBackend(AWXBaseEmailBackend, CustomNotificationBase):
logger.error(smart_str(_("Exception connecting to Twilio: {}").format(e)))
for m in messages:
- try:
- connection.messages.create(to=m.to, from_=m.from_email, body=m.subject)
- sent_messages += 1
- except Exception as e:
- logger.error(smart_str(_("Exception sending messages: {}").format(e)))
- if not self.fail_silently:
- raise
+ failed = False
+ for dest in m.to:
+ try:
+ logger.debug(smart_str(_("FROM: {} / TO: {}").format(m.from_email, dest)))
+ connection.messages.create(to=dest, from_=m.from_email, body=m.subject)
+ sent_messages += 1
+ except Exception as e:
+ logger.error(smart_str(_("Exception sending messages: {}").format(e)))
+ failed = True
+ if not self.fail_silently and failed:
+ raise
return sent_messages


@@ -17,6 +17,8 @@ from django.utils.timezone import now as tz_now
from django.conf import settings
from django.contrib.contenttypes.models import ContentType
+ from ansible_base.utils.models import get_type_for_model
# AWX
from awx.main.dispatch.reaper import reap_job
from awx.main.models import (
@@ -34,7 +36,6 @@ from awx.main.models import (
from awx.main.scheduler.dag_workflow import WorkflowDAG
from awx.main.utils.pglock import advisory_lock
from awx.main.utils import (
- get_type_for_model,
ScheduleTaskManager,
ScheduleWorkflowManager,
)


@@ -2,12 +2,12 @@ from unittest import mock
import pytest
import json
+ from ansible_base.utils.models import get_type_for_model
from awx.api.versioning import reverse
from awx.main.models.jobs import JobTemplate, Job
from awx.main.models.activity_stream import ActivityStream
from awx.main.access import JobTemplateAccess
- from awx.main.utils.common import get_type_for_model
@pytest.fixture


@@ -309,3 +309,139 @@ def test_bulk_job_set_all_prompt(job_template, organization, inventory, project,
assert node[0].limit == 'kansas'
assert node[0].skip_tags == 'foobar'
assert node[0].job_tags == 'untagged'
@pytest.mark.django_db
@pytest.mark.parametrize('num_hosts, num_queries', [(1, 70), (10, 150), (25, 250)])
def test_bulk_host_delete_num_queries(organization, inventory, post, get, user, num_hosts, num_queries, django_assert_max_num_queries):
'''
If I am a...
org admin
inventory admin at org level
admin of a particular inventory
superuser
Bulk Host delete should take under a certain number of queries
'''
users_list = setup_admin_users_list(organization, inventory, user)
for u in users_list:
hosts = [{'name': str(uuid4())} for i in range(num_hosts)]
with django_assert_max_num_queries(num_queries):
bulk_host_create_response = post(reverse('api:bulk_host_create'), {'inventory': inventory.id, 'hosts': hosts}, u, expect=201).data
assert len(bulk_host_create_response['hosts']) == len(hosts), f"unexpected number of hosts created for user {u}"
hosts_ids_created = get_inventory_hosts(get, inventory.id, u)
bulk_host_delete_response = post(reverse('api:bulk_host_delete'), {'hosts': hosts_ids_created}, u, expect=201).data
assert len(bulk_host_delete_response['hosts'].keys()) == len(hosts), f"unexpected number of hosts deleted for user {u}"
@pytest.mark.django_db
def test_bulk_host_delete_rbac(organization, inventory, post, get, user):
'''
If I am a...
org admin
inventory admin at org level
admin of a particular inventory
... I can bulk delete hosts
Everyone else cannot
'''
admin_users_list = setup_admin_users_list(organization, inventory, user)
users_list = setup_none_admin_uses_list(organization, inventory, user)
for indx, u in enumerate(admin_users_list):
bulk_host_create_response = post(
reverse('api:bulk_host_create'), {'inventory': inventory.id, 'hosts': [{'name': f'foobar-{indx}'}]}, u, expect=201
).data
assert len(bulk_host_create_response['hosts']) == 1, f"unexpected number of hosts created for user {u}"
assert Host.objects.filter(inventory__id=inventory.id)[0].name == f'foobar-{indx}'
hosts_ids_created = get_inventory_hosts(get, inventory.id, u)
bulk_host_delete_response = post(reverse('api:bulk_host_delete'), {'hosts': hosts_ids_created}, u, expect=201).data
assert len(bulk_host_delete_response['hosts'].keys()) == 1, f"unexpected number of hosts deleted by user {u}"
for indx, create_u in enumerate(admin_users_list):
bulk_host_create_response = post(
reverse('api:bulk_host_create'), {'inventory': inventory.id, 'hosts': [{'name': f'foobar2-{indx}'}]}, create_u, expect=201
).data
print(bulk_host_create_response)
assert bulk_host_create_response['hosts'][0]['name'] == f'foobar2-{indx}'
hosts_ids_created = get_inventory_hosts(get, inventory.id, create_u)
print(f"Try to delete {hosts_ids_created}")
for delete_u in users_list:
bulk_host_delete_response = post(reverse('api:bulk_host_delete'), {'hosts': hosts_ids_created}, delete_u, expect=403).data
assert "Lack permissions to delete hosts from this inventory." in bulk_host_delete_response['inventories'].values()
@pytest.mark.django_db
def test_bulk_host_delete_from_multiple_inv(organization, inventory, post, get, user):
'''
If I am inventory admin at org level
Bulk Host delete should be enabled only on my inventory
'''
num_hosts = 10
inventory.organization = organization
# Create second inventory
inv2 = organization.inventories.create(name="second-test-inv")
inv2.organization = organization
admin2_user = user('inventory2_admin', False)
inv2.admin_role.members.add(admin2_user)
admin_user = user('inventory_admin', False)
inventory.admin_role.members.add(admin_user)
organization.member_role.members.add(admin_user)
organization.member_role.members.add(admin2_user)
hosts = [{'name': str(uuid4())} for i in range(num_hosts)]
hosts2 = [{'name': str(uuid4())} for i in range(num_hosts)]
# create hosts in each of the inventories
bulk_host_create_response = post(reverse('api:bulk_host_create'), {'inventory': inventory.id, 'hosts': hosts}, admin_user, expect=201).data
assert len(bulk_host_create_response['hosts']) == len(hosts), f"unexpected number of hosts created for user {admin_user}"
bulk_host_create_response2 = post(reverse('api:bulk_host_create'), {'inventory': inv2.id, 'hosts': hosts2}, admin2_user, expect=201).data
assert len(bulk_host_create_response2['hosts']) == len(hosts2), f"unexpected number of hosts created for user {admin2_user}"
# get all hosts ids - from both inventories
hosts_ids_created = get_inventory_hosts(get, inventory.id, admin_user)
hosts_ids_created += get_inventory_hosts(get, inv2.id, admin2_user)
expected_error = "Lack permissions to delete hosts from this inventory."
# each inventory admin tries to delete ALL hosts, including the other inventory's; both attempts must be denied
for inv_name, invadmin in zip([inv2.name, inventory.name], [admin_user, admin2_user]):
bulk_host_delete_response = post(reverse('api:bulk_host_delete'), {'hosts': hosts_ids_created}, invadmin, expect=403).data
result_message = bulk_host_delete_response['inventories'][inv_name]
assert result_message == expected_error, f"deleted hosts without permission by user {invadmin}"
def setup_admin_users_list(organization, inventory, user):
inventory.organization = organization
inventory_admin = user('inventory_admin', False)
org_admin = user('org_admin', False)
org_inv_admin = user('org_admin', False)
superuser = user('admin', True)
for u in [org_admin, org_inv_admin, inventory_admin]:
organization.member_role.members.add(u)
organization.admin_role.members.add(org_admin)
organization.inventory_admin_role.members.add(org_inv_admin)
inventory.admin_role.members.add(inventory_admin)
return [inventory_admin, org_inv_admin, superuser, org_admin]
def setup_none_admin_uses_list(organization, inventory, user):
inventory.organization = organization
auditor = user('auditor', False)
member = user('member', False)
use_inv_member = user('member', False)
for u in [auditor, member, use_inv_member]:
organization.member_role.members.add(u)
inventory.use_role.members.add(use_inv_member)
organization.auditor_role.members.add(auditor)
return [auditor, member, use_inv_member]
def get_inventory_hosts(get, inv_id, use_user):
data = get(reverse('api:inventory_hosts_list', kwargs={'pk': inv_id}), use_user, expect=200).data
results = [host['id'] for host in data['results']]
return results

View File

@@ -40,6 +40,26 @@ def test_hashivault_kubernetes_auth():
assert res == expected_res
def test_hashivault_client_cert_auth_explicit_role():
kwargs = {
'client_cert_role': 'test-cert-1',
}
expected_res = {
'name': 'test-cert-1',
}
res = hashivault.client_cert_auth(**kwargs)
assert res == expected_res
def test_hashivault_client_cert_auth_no_role():
kwargs = {}
expected_res = {
'name': None,
}
res = hashivault.client_cert_auth(**kwargs)
assert res == expected_res
def test_hashivault_handle_auth_token():
kwargs = {
'token': 'the_token',
@@ -73,6 +93,22 @@ def test_hashivault_handle_auth_kubernetes():
assert token == 'the_token'
def test_hashivault_handle_auth_client_cert():
kwargs = {
'client_cert_public': "foo",
'client_cert_private': "bar",
'client_cert_role': 'test-cert-1',
}
auth_params = {
'name': 'test-cert-1',
}
with mock.patch.object(hashivault, 'method_auth') as method_mock:
method_mock.return_value = 'the_token'
token = hashivault.handle_auth(**kwargs)
method_mock.assert_called_with(**kwargs, auth_param=auth_params)
assert token == 'the_token'
def test_hashivault_handle_auth_not_enough_args():
with pytest.raises(Exception):
hashivault.handle_auth()

View File

@@ -208,6 +208,6 @@ def test_auto_parenting():
@pytest.mark.django_db
def test_update_parents_keeps_teams(team, project):
project.update_role.parents.add(team.member_role)
assert team.member_role in project.update_role # test prep sanity check
assert list(Project.accessible_objects(team.member_role, 'update_role')) == [project] # test prep sanity check
update_role_parentage_for_instance(project)
assert team.member_role in project.update_role # actual assertion
assert list(Project.accessible_objects(team.member_role, 'update_role')) == [project] # actual assertion

View File

@@ -92,7 +92,7 @@ def test_team_accessible_by(team, user, project):
u = user('team_member', False)
team.member_role.children.add(project.use_role)
assert team in project.read_role
assert list(Project.accessible_objects(team.member_role, 'read_role')) == [project]
assert u not in project.read_role
team.member_role.members.add(u)
@@ -104,7 +104,7 @@ def test_team_accessible_objects(team, user, project):
u = user('team_member', False)
team.member_role.children.add(project.use_role)
assert len(Project.accessible_objects(team, 'read_role')) == 1
assert len(Project.accessible_objects(team.member_role, 'read_role')) == 1
assert not Project.accessible_objects(u, 'read_role')
team.member_role.members.add(u)

View File

@@ -3,15 +3,13 @@
import pytest
# Django
from django.core.exceptions import FieldDoesNotExist
from rest_framework.exceptions import PermissionDenied
from rest_framework.exceptions import PermissionDenied, ParseError
from ansible_base.filters.rest_framework.field_lookup_backend import FieldLookupBackend
from awx.api.filters import FieldLookupBackend, OrderByBackend, get_field_from_path
from awx.main.models import (
AdHocCommand,
ActivityStream,
Credential,
Job,
JobTemplate,
SystemJob,
@@ -20,88 +18,11 @@ from awx.main.models import (
WorkflowJob,
WorkflowJobTemplate,
WorkflowJobOptions,
InventorySource,
JobEvent,
)
from awx.main.models.oauth import OAuth2Application
from awx.main.models.jobs import JobOptions
def test_related():
field_lookup = FieldLookupBackend()
lookup = '__'.join(['inventory', 'organization', 'pk'])
field, new_lookup = field_lookup.get_field_from_lookup(InventorySource, lookup)
print(field)
print(new_lookup)
def test_invalid_filter_key():
field_lookup = FieldLookupBackend()
# FieldDoesNotExist is caught and converted to ParseError by filter_queryset
with pytest.raises(FieldDoesNotExist) as excinfo:
field_lookup.value_to_python(JobEvent, 'event_data.task_action', 'foo')
assert 'has no field named' in str(excinfo)
def test_invalid_field_hop():
with pytest.raises(ParseError) as excinfo:
get_field_from_path(Credential, 'organization__description__user')
assert 'No related model for' in str(excinfo)
def test_invalid_order_by_key():
field_order_by = OrderByBackend()
with pytest.raises(ParseError) as excinfo:
[f for f in field_order_by._validate_ordering_fields(JobEvent, ('event_data.task_action',))]
assert 'has no field named' in str(excinfo)
@pytest.mark.parametrize(u"empty_value", [u'', ''])
def test_empty_in(empty_value):
field_lookup = FieldLookupBackend()
with pytest.raises(ValueError) as excinfo:
field_lookup.value_to_python(JobTemplate, 'project__name__in', empty_value)
assert 'empty value for __in' in str(excinfo.value)
@pytest.mark.parametrize(u"valid_value", [u'foo', u'foo,'])
def test_valid_in(valid_value):
field_lookup = FieldLookupBackend()
value, new_lookup, _ = field_lookup.value_to_python(JobTemplate, 'project__name__in', valid_value)
assert 'foo' in value
def test_invalid_field():
invalid_field = u"ヽヾ"
field_lookup = FieldLookupBackend()
with pytest.raises(ValueError) as excinfo:
field_lookup.value_to_python(WorkflowJobTemplate, invalid_field, 'foo')
assert 'is not an allowed field name. Must be ascii encodable.' in str(excinfo.value)
def test_valid_iexact():
field_lookup = FieldLookupBackend()
value, new_lookup, _ = field_lookup.value_to_python(JobTemplate, 'project__name__iexact', 'foo')
assert 'foo' in value
def test_invalid_iexact():
field_lookup = FieldLookupBackend()
with pytest.raises(ValueError) as excinfo:
field_lookup.value_to_python(Job, 'id__iexact', '1')
assert 'is not a text field and cannot be filtered by case-insensitive search' in str(excinfo.value)
@pytest.mark.parametrize('lookup_suffix', ['', 'contains', 'startswith', 'in'])
@pytest.mark.parametrize('password_field', Credential.PASSWORD_FIELDS)
def test_filter_on_password_field(password_field, lookup_suffix):
field_lookup = FieldLookupBackend()
lookup = '__'.join(filter(None, [password_field, lookup_suffix]))
with pytest.raises(PermissionDenied) as excinfo:
field, new_lookup = field_lookup.get_field_from_lookup(Credential, lookup)
assert 'not allowed' in str(excinfo.value)
@pytest.mark.parametrize(
'model, query',
[
@@ -128,10 +49,3 @@ def test_filter_sensitive_fields_and_relations(model, query):
with pytest.raises(PermissionDenied) as excinfo:
field, new_lookup = field_lookup.get_field_from_lookup(model, query)
assert 'not allowed' in str(excinfo.value)
def test_looping_filters_prohibited():
field_lookup = FieldLookupBackend()
with pytest.raises(ParseError) as loop_exc:
field_lookup.get_field_from_lookup(Job, 'job_events__job__job_events')
assert 'job_events' in str(loop_exc.value)

View File

@@ -12,6 +12,8 @@ from unittest import mock
from rest_framework.exceptions import ParseError
from ansible_base.utils.models import get_type_for_model
from awx.main.utils import common
from awx.api.validators import HostnameRegexValidator
@@ -106,7 +108,7 @@ TEST_MODELS = [
# Cases relied on for scheduler dependent jobs list
@pytest.mark.parametrize('model,name', TEST_MODELS)
def test_get_type_for_model(model, name):
assert common.get_type_for_model(model) == name
assert get_type_for_model(model) == name
def test_get_model_for_invalid_type():

View File

@@ -68,7 +68,9 @@ class mockHost:
@mock.patch('awx.main.utils.filters.get_model', return_value=mockHost())
class TestSmartFilterQueryFromString:
@mock.patch('awx.api.filters.get_fields_from_path', lambda model, path: ([model], path)) # disable field filtering, because a__b isn't a real Host field
@mock.patch(
'ansible_base.filters.rest_framework.field_lookup_backend.get_fields_from_path', lambda model, path: ([model], path)
) # disable field filtering, because a__b isn't a real Host field
@pytest.mark.parametrize(
"filter_string,q_expected",
[

View File

@@ -52,12 +52,10 @@ __all__ = [
'get_awx_http_client_headers',
'get_awx_version',
'update_scm_url',
'get_type_for_model',
'get_model_for_type',
'copy_model_by_class',
'copy_m2m_relationships',
'prefetch_page_capabilities',
'to_python_boolean',
'datetime_hook',
'ignore_inventory_computed_fields',
'ignore_inventory_group_removal',
@@ -110,18 +108,6 @@ def get_object_or_400(klass, *args, **kwargs):
raise ParseError(*e.args)
def to_python_boolean(value, allow_none=False):
value = str(value)
if value.lower() in ('true', '1', 't'):
return True
elif value.lower() in ('false', '0', 'f'):
return False
elif allow_none and value.lower() in ('none', 'null'):
return None
else:
raise ValueError(_(u'Unable to convert "%s" to boolean') % value)
def datetime_hook(d):
new_d = {}
for key, value in d.items():
@@ -569,14 +555,6 @@ def copy_m2m_relationships(obj1, obj2, fields, kwargs=None):
dest_field.add(*list(src_field_value.all().values_list('id', flat=True)))
def get_type_for_model(model):
"""
Return type name for a given model class.
"""
opts = model._meta.concrete_model._meta
return camelcase_to_underscore(opts.object_name)
def get_model_for_type(type_name):
"""
Return model class for a given type name.

View File

@@ -1,27 +1,10 @@
# Copyright (c) 2017 Ansible by Red Hat
# All Rights Reserved.
from itertools import chain
from awx.settings.application_name import set_application_name
from django.conf import settings
def get_all_field_names(model):
# Implements compatibility with _meta.get_all_field_names
# See: https://docs.djangoproject.com/en/1.11/ref/models/meta/#migrating-from-the-old-api
return list(
set(
chain.from_iterable(
(field.name, field.attname) if hasattr(field, 'attname') else (field.name,)
for field in model._meta.get_fields()
# For complete backwards compatibility, you may want to exclude
# GenericForeignKey from the results.
if not (field.many_to_one and field.related_model is None)
)
)
)
def set_connection_name(function):
set_application_name(settings.DATABASES, settings.CLUSTER_HOST_ID, function=function)

View File

@@ -161,7 +161,7 @@ class SmartFilter(object):
else:
# detect loops and restrict access to sensitive fields
# this import is intentional here to avoid a circular import
from awx.api.filters import FieldLookupBackend
from ansible_base.filters.rest_framework.field_lookup_backend import FieldLookupBackend
FieldLookupBackend().get_field_from_lookup(Host, k)
kwargs[k] = v

View File

@@ -11,6 +11,7 @@ from datetime import timedelta
# python-ldap
import ldap
from split_settings.tools import include
DEBUG = True
@@ -131,6 +132,9 @@ BULK_JOB_MAX_LAUNCH = 100
# Maximum number of hosts that can be created in one bulk host create request
BULK_HOST_MAX_CREATE = 100
# Maximum number of hosts that can be deleted in one bulk host delete request
BULK_HOST_MAX_DELETE = 250
SITE_ID = 1
# Make this unique, and don't share it with anybody.
@@ -336,6 +340,7 @@ INSTALLED_APPS = [
'awx.ui',
'awx.sso',
'solo',
'ansible_base',
]
INTERNAL_IPS = ('127.0.0.1',)
@@ -350,12 +355,6 @@ REST_FRAMEWORK = {
'awx.api.authentication.LoggedBasicAuthentication',
),
'DEFAULT_PERMISSION_CLASSES': ('awx.api.permissions.ModelAccessPermission',),
'DEFAULT_FILTER_BACKENDS': (
'awx.api.filters.TypeFilterBackend',
'awx.api.filters.FieldLookupBackend',
'rest_framework.filters.SearchFilter',
'awx.api.filters.OrderByBackend',
),
'DEFAULT_PARSER_CLASSES': ('awx.api.parsers.JSONParser',),
'DEFAULT_RENDERER_CLASSES': ('awx.api.renderers.DefaultJSONRenderer', 'awx.api.renderers.BrowsableAPIRenderer'),
'DEFAULT_METADATA_CLASS': 'awx.api.metadata.Metadata',
@@ -1063,3 +1062,12 @@ CLEANUP_HOST_METRICS_HARD_THRESHOLD = 36 # months
# Host metric summary monthly task - last time of run
HOST_METRIC_SUMMARY_TASK_LAST_TS = None
HOST_METRIC_SUMMARY_TASK_INTERVAL = 7 # days
# django-ansible-base
ANSIBLE_BASE_FEATURES = {'AUTHENTICATION': False, 'SWAGGER': False, 'FILTERING': True}
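# Pull in django-ansible-base's dynamic settings below so the features
# enabled above (here, FILTERING) are configured by the library itself.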
from ansible_base import settings # noqa: E402
settings_file = os.path.join(os.path.dirname(settings.__file__), 'dynamic_settings.py')
include(settings_file)

View File

@@ -1341,7 +1341,6 @@ register(
'SOCIAL_AUTH_SAML_SP_PUBLIC_CERT',
field_class=fields.CharField,
allow_blank=True,
required=True,
validators=[validate_certificate],
label=_('SAML Service Provider Public Certificate'),
help_text=_('Create a keypair to use as a service provider (SP) and include the certificate content here.'),
@@ -1353,7 +1352,6 @@ register(
'SOCIAL_AUTH_SAML_SP_PRIVATE_KEY',
field_class=fields.CharField,
allow_blank=True,
required=True,
validators=[validate_private_key],
label=_('SAML Service Provider Private Key'),
help_text=_('Create a keypair to use as a service provider (SP) and include the private key content here.'),
@@ -1365,7 +1363,6 @@ register(
register(
'SOCIAL_AUTH_SAML_ORG_INFO',
field_class=SAMLOrgInfoField,
required=True,
label=_('SAML Service Provider Organization Info'),
help_text=_('Provide the URL, display name, and the name of your app. Refer to the documentation for example syntax.'),
category=_('SAML'),
@@ -1379,7 +1376,6 @@ register(
'SOCIAL_AUTH_SAML_TECHNICAL_CONTACT',
field_class=SAMLContactField,
allow_blank=True,
required=True,
label=_('SAML Service Provider Technical Contact'),
help_text=_('Provide the name and email address of the technical contact for your service provider. Refer to the documentation for example syntax.'),
category=_('SAML'),
@@ -1391,7 +1387,6 @@ register(
'SOCIAL_AUTH_SAML_SUPPORT_CONTACT',
field_class=SAMLContactField,
allow_blank=True,
required=True,
label=_('SAML Service Provider Support Contact'),
help_text=_('Provide the name and email address of the support contact for your service provider. Refer to the documentation for example syntax.'),
category=_('SAML'),

View File

@@ -20,6 +20,7 @@ import UnsupportedScheduleForm from './UnsupportedScheduleForm';
import parseRuleObj, { UnsupportedRRuleError } from './parseRuleObj';
import buildRuleObj from './buildRuleObj';
import buildRuleSet from './buildRuleSet';
import mergeArraysByCredentialType from './mergeArraysByCredentialType';
const NUM_DAYS_PER_FREQUENCY = {
week: 7,
@@ -350,6 +351,12 @@ function ScheduleForm({
startDate: currentDate,
startTime: time,
timezone: schedule.timezone || now.zoneName,
credentials: mergeArraysByCredentialType(
resourceDefaultCredentials,
credentials
),
labels: originalLabels.current,
instance_groups: originalInstanceGroups.current,
};
if (hasDaysToKeepField) {

View File

@@ -0,0 +1,18 @@
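// Merge the template's default credentials with the schedule's overrides:
// an override replaces any default credential sharing its credential_type,
// and overrides introducing a new type are appended.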
export default function mergeArraysByCredentialType(
defaultCredentials = [],
overrides = []
) {
const mergedArray = [...defaultCredentials];
overrides.forEach((override) => {
const index = mergedArray.findIndex(
(defaultCred) => defaultCred.credential_type === override.credential_type
);
if (index !== -1) {
mergedArray.splice(index, 1);
}
mergedArray.push(override);
});
return mergedArray;
}

View File

@@ -119,7 +119,7 @@ function LoggingEdit() {
...logging.LOG_AGGREGATOR_ENABLED,
help_text: (
<>
{logging.LOG_AGGREGATOR_ENABLED.help_text}
{logging.LOG_AGGREGATOR_ENABLED?.help_text}
{!formik.values.LOG_AGGREGATOR_ENABLED &&
(!formik.values.LOG_AGGREGATOR_HOST ||
!formik.values.LOG_AGGREGATOR_TYPE) && (

View File

@@ -8,6 +8,7 @@ action_groups:
- application
- bulk_job_launch
- bulk_host_create
- bulk_host_delete
- controller_meta
- credential_input_source
- credential

View File

@@ -0,0 +1,65 @@
#!/usr/bin/python
# coding: utf-8 -*-
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {'metadata_version': '1.1', 'status': ['preview'], 'supported_by': 'community'}
DOCUMENTATION = '''
---
module: bulk_host_delete
author: "Avi Layani (@Avilir)"
short_description: Bulk host delete in Automation Platform Controller
description:
- Single-request bulk host deletion in Automation Platform Controller.
- Provides a way to delete many hosts at once from inventories in Controller.
options:
hosts:
description:
- List of host IDs to delete from the inventory.
required: True
type: list
elements: int
extends_documentation_fragment: awx.awx.auth
'''
EXAMPLES = '''
- name: Bulk host delete
bulk_host_delete:
hosts:
- 1
- 2
'''
from ..module_utils.controller_api import ControllerAPIModule
def main():
# Any additional arguments that are not fields of the item can be added here
argument_spec = dict(
hosts=dict(required=True, type='list', elements='int'),
)
# Create a module for ourselves
module = ControllerAPIModule(argument_spec=argument_spec)
# Extract our parameters
hosts = module.params.get('hosts')
# Delete the hosts
result = module.post_endpoint("bulk/host_delete", data={"hosts": hosts})
if result['status_code'] != 201:
module.fail_json(msg="Failed to delete hosts, see response for details", response=result)
module.json_output['changed'] = True
module.exit_json(**module.json_output)
if __name__ == '__main__':
main()

View File

@@ -18,30 +18,55 @@ import pytest
from ansible.module_utils.six import raise_from
from awx.main.tests.functional.conftest import _request
from awx.main.models import Organization, Project, Inventory, JobTemplate, Credential, CredentialType, ExecutionEnvironment, UnifiedJob
from awx.main.tests.functional.conftest import credentialtype_scm, credentialtype_ssh # noqa: F401; pylint: disable=unused-variable
from awx.main.models import (
Organization,
Project,
Inventory,
JobTemplate,
Credential,
CredentialType,
ExecutionEnvironment,
UnifiedJob,
WorkflowJobTemplate,
NotificationTemplate,
Schedule,
)
from django.db import transaction
try:
import tower_cli # noqa
HAS_TOWER_CLI = True
except ImportError:
HAS_TOWER_CLI = False
try:
# Because awxkit will be a directory at the root of this makefile and we are using python3, importing awxkit will work even if it's not installed.
# However, awxkit will not contain api, which causes a stack failure down on line 170 when we try to mock it.
# So here we are importing awxkit.api to prevent that. Then you only get an error on tests for awxkit functionality.
import awxkit.api # noqa
HAS_AWX_KIT = True
except ImportError:
HAS_AWX_KIT = False
HAS_TOWER_CLI = False
HAS_AWX_KIT = False
logger = logging.getLogger('awx.main.tests')
@pytest.fixture(autouse=True)
def awxkit_path_set(monkeypatch):
"""Monkey patch sys.path, insert awxkit source code so that
the package does not need to be installed.
"""
base_folder = os.path.abspath(os.path.join(os.path.dirname(__file__), os.pardir, os.pardir, os.pardir, 'awxkit'))
monkeypatch.syspath_prepend(base_folder)
@pytest.fixture(autouse=True)
def import_awxkit():
global HAS_TOWER_CLI
global HAS_AWX_KIT
try:
import tower_cli # noqa
HAS_TOWER_CLI = True
except ImportError:
HAS_TOWER_CLI = False
try:
import awxkit # noqa
HAS_AWX_KIT = True
except ImportError:
HAS_AWX_KIT = False
def sanitize_dict(din):
"""Sanitize Django response data to purge it of internal types
so it may be used to cast a requests response object
@@ -123,7 +148,7 @@ def run_module(request, collection_import):
sanitize_dict(py_data)
resp._content = bytes(json.dumps(django_response.data), encoding='utf8')
resp.status_code = django_response.status_code
resp.headers = {'X-API-Product-Name': 'AWX', 'X-API-Product-Version': '0.0.1-devel'}
resp.headers = dict(django_response.headers)
if request.config.getoption('verbose') > 0:
logger.info('%s %s by %s, code:%s', method, '/api/' + url.split('/api/')[1], request_user.username, resp.status_code)
@@ -236,10 +261,8 @@ def job_template(project, inventory):
@pytest.fixture
def machine_credential(organization):
ssh_type = CredentialType.defaults['ssh']()
ssh_type.save()
return Credential.objects.create(credential_type=ssh_type, name='machine-cred', inputs={'username': 'test_user', 'password': 'pas4word'})
def machine_credential(credentialtype_ssh, organization): # noqa: F811
return Credential.objects.create(credential_type=credentialtype_ssh, name='machine-cred', inputs={'username': 'test_user', 'password': 'pas4word'})
@pytest.fixture
@@ -253,9 +276,7 @@ def vault_credential(organization):
def kube_credential():
ct = CredentialType.defaults['kubernetes_bearer_token']()
ct.save()
return Credential.objects.create(
credential_type=ct, name='kube-cred', inputs={'host': 'my.cluster', 'bearer_token': 'my-token', 'verify_ssl': False}
)
return Credential.objects.create(credential_type=ct, name='kube-cred', inputs={'host': 'my.cluster', 'bearer_token': 'my-token', 'verify_ssl': False})
@pytest.fixture
@@ -288,3 +309,42 @@ def mock_has_unpartitioned_events():
# We mock this out to circumvent the migration query.
with mock.patch.object(UnifiedJob, 'has_unpartitioned_events', new=False) as _fixture:
yield _fixture
@pytest.fixture
def workflow_job_template(organization, inventory):
return WorkflowJobTemplate.objects.create(name='test-workflow_job_template', organization=organization, inventory=inventory)
@pytest.fixture
def notification_template(organization):
return NotificationTemplate.objects.create(
name='test-notification_template',
organization=organization,
notification_type="webhook",
notification_configuration=dict(
url="http://localhost",
username="",
password="",
headers={
"Test": "Header",
},
),
)
@pytest.fixture
def scm_credential(credentialtype_scm, organization): # noqa: F811
return Credential.objects.create(
credential_type=credentialtype_scm, name='scm-cred', inputs={'username': 'optimus', 'password': 'prime'}, organization=organization
)
@pytest.fixture
def rrule():
return 'DTSTART:20151117T050000Z RRULE:FREQ=DAILY;INTERVAL=1;COUNT=1'
@pytest.fixture
def schedule(job_template, rrule):
return Schedule.objects.create(unified_job_template=job_template, name='test-sched', rrule=rrule)

View File

@@ -45,3 +45,28 @@ def test_bulk_host_create(run_module, admin_user, inventory):
resp_hosts = inventory.hosts.all().values_list('name', flat=True)
for h in hosts:
assert h['name'] in resp_hosts
@pytest.mark.django_db
def test_bulk_host_delete(run_module, admin_user, inventory):
hosts = [dict(name="127.0.0.1"), dict(name="foo.dns.org")]
result = run_module(
'bulk_host_create',
{
'inventory': inventory.name,
'hosts': hosts,
},
admin_user,
)
assert not result.get('failed', False), result.get('msg', result)
assert result.get('changed'), result
resp_hosts_ids = list(inventory.hosts.all().values_list('id', flat=True))
result = run_module(
'bulk_host_delete',
{
'hosts': resp_hosts_ids,
},
admin_user,
)
assert not result.get('failed', False), result.get('msg', result)
assert result.get('changed'), result

View File

@@ -50,6 +50,7 @@ no_endpoint_for_module = [
extra_endpoints = {
'bulk_job_launch': '/api/v2/bulk/job_launch/',
'bulk_host_create': '/api/v2/bulk/host_create/',
'bulk_host_delete': '/api/v2/bulk/host_delete/',
}
# Global module parameters we can ignore

View File

@@ -0,0 +1,154 @@
from __future__ import absolute_import, division, print_function
__metaclass__ = type
import pytest
from awx.main.models.execution_environments import ExecutionEnvironment
from awx.main.models.jobs import JobTemplate
from awx.main.tests.functional.conftest import user, system_auditor # noqa: F401; pylint: disable=unused-import
ASSETS = set([
"users",
"organizations",
"teams",
"credential_types",
"credentials",
"notification_templates",
"projects",
"inventory",
"inventory_sources",
"job_templates",
"workflow_job_templates",
"execution_environments",
"applications",
"schedules",
])
@pytest.fixture
def job_template(project, inventory, organization, machine_credential):
jt = JobTemplate.objects.create(name='test-jt', project=project, inventory=inventory, organization=organization, playbook='helloworld.yml')
jt.credentials.add(machine_credential)
jt.save()
return jt
@pytest.fixture
def execution_environment(organization):
return ExecutionEnvironment.objects.create(name="test-ee", description="test-ee", managed=False, organization=organization)
def find_by(result, name, key, value):
for c in result[name]:
if c[key] == value:
return c
values = [c.get(key, None) for c in result[name]]
raise ValueError(f"Failed to find assets['{name}'][{key}] = '{value}' valid values are {values}")
@pytest.mark.django_db
def test_export(run_module, admin_user):
"""
There should be nothing to export EXCEPT the admin user.
"""
result = run_module('export', dict(all=True), admin_user)
assert not result.get('failed', False), result.get('msg', result)
assets = result['assets']
assert set(result['assets'].keys()) == ASSETS
u = find_by(assets, 'users', 'username', 'admin')
assert u['is_superuser'] is True
all_assets_except_users = {k: v for k, v in assets.items() if k != 'users'}
for k, v in all_assets_except_users.items():
assert v == [], f"Expected resource {k} to be empty. Instead it is {v}"
@pytest.mark.django_db
def test_export_simple(
run_module,
organization,
project,
inventory,
job_template,
scm_credential,
machine_credential,
workflow_job_template,
execution_environment,
notification_template,
rrule,
schedule,
admin_user,
):
"""
TODO: Ensure there aren't _more_ results in each resource than we expect
"""
result = run_module('export', dict(all=True), admin_user)
assert not result.get('failed', False), result.get('msg', result)
assets = result['assets']
u = find_by(assets, 'users', 'username', 'admin')
assert u['is_superuser'] is True
find_by(assets, 'organizations', 'name', 'Default')
r = find_by(assets, 'credentials', 'name', 'scm-cred')
assert r['credential_type']['kind'] == 'scm'
assert r['credential_type']['name'] == 'Source Control'
r = find_by(assets, 'credentials', 'name', 'machine-cred')
assert r['credential_type']['kind'] == 'ssh'
assert r['credential_type']['name'] == 'Machine'
r = find_by(assets, 'job_templates', 'name', 'test-jt')
assert r['natural_key']['organization']['name'] == 'Default'
assert r['inventory']['name'] == 'test-inv'
assert r['project']['name'] == 'test-proj'
find_by(r['related'], 'credentials', 'name', 'machine-cred')
r = find_by(assets, 'inventory', 'name', 'test-inv')
assert r['organization']['name'] == 'Default'
r = find_by(assets, 'projects', 'name', 'test-proj')
assert r['organization']['name'] == 'Default'
r = find_by(assets, 'workflow_job_templates', 'name', 'test-workflow_job_template')
assert r['natural_key']['organization']['name'] == 'Default'
assert r['inventory']['name'] == 'test-inv'
r = find_by(assets, 'execution_environments', 'name', 'test-ee')
assert r['organization']['name'] == 'Default'
r = find_by(assets, 'schedules', 'name', 'test-sched')
assert r['rrule'] == rrule
r = find_by(assets, 'notification_templates', 'name', 'test-notification_template')
assert r['organization']['name'] == 'Default'
assert r['notification_configuration']['url'] == 'http://localhost'
@pytest.mark.django_db
def test_export_system_auditor(run_module, schedule, system_auditor): # noqa: F811
"""
This test illustrates the deficiency of export when run as a non-root user (i.e. system auditor).
The OPTIONS endpoint does NOT return POST for a system auditor. This is bad for the export code
because it relies on crawling the OPTIONS POST response to determine the fields to export.
"""
result = run_module('export', dict(all=True), system_auditor)
assert result.get('failed', False), result.get('msg', result)
assert 'Failed to export assets substring not found' in result['msg'], (
'If you found this error then you have probably fixed a feature! The export code attempts to ascertain the POST fields from the `description` field,'
' but both the API side and the client inference code are lacking.'
)
# r = result['assets']['schedules'][0]
# assert r['natural_key']['name'] == 'test-sched'
# assert 'rrule' not in r, 'If you found this error then you have probably fixed a feature! We WANT rrule to be found in the export schedule payload.'

View File

@@ -0,0 +1,80 @@
---
- name: "Generate a random string for test"
set_fact:
test_id: "{{ lookup('password', '/dev/null chars=ascii_letters length=16') }}"
when: "test_id is not defined"
- name: "Generate a unique name"
set_fact:
bulk_inv_name: "AWX-Collection-tests-bulk_host_create-{{ test_id }}"
- name: "Get our collection package"
controller_meta:
register: "controller_meta"
- name: "Generate the name of our plugin"
set_fact:
plugin_name: "{{ controller_meta.prefix }}.controller_api"
- name: "Create an inventory"
inventory:
name: "{{ bulk_inv_name }}"
organization: "Default"
state: "present"
register: "inventory_result"
- name: "Bulk Host Create"
bulk_host_create:
hosts:
- name: "123.456.789.123"
description: "myhost1"
variables:
food: "carrot"
color: "orange"
- name: "example.dns.gg"
description: "myhost2"
enabled: "false"
inventory: "{{ bulk_inv_name }}"
register: "result"
- assert:
that:
- "result is not failed"
- name: "Get our collection package"
controller_meta:
register: "controller_meta"
- name: "Generate the name of our plugin"
set_fact:
plugin_name: "{{ controller_meta.prefix }}.controller_api"
- name: "Setting the inventory hosts endpoint"
set_fact:
endpoint: "inventories/{{ inventory_result.id }}/hosts/"
- name: "Get hosts information from inventory"
set_fact:
hosts_created: "{{ query(plugin_name, endpoint, return_objects=True) }}"
host_id_list: []
- name: "Extract host IDs from hosts information"
set_fact:
host_id_list: "{{ host_id_list + [item.id] }}"
loop: "{{ hosts_created }}"
- name: "Bulk Host Delete"
bulk_host_delete:
hosts: "{{ host_id_list }}"
register: "result"
- assert:
that:
- "result is not failed"
# cleanup
- name: "Delete inventory"
inventory:
name: "{{ bulk_inv_name }}"
organization: "Default"
state: "absent"

View File

@@ -60,7 +60,7 @@
- result['job_info']['skip_tags'] == "skipbaz"
- result['job_info']['limit'] == "localhost"
- result['job_info']['job_tags'] == "Hello World"
- result['job_info']['inventory'] == {{ inventory_id }}
- result['job_info']['inventory'] == inventory_id | int
- "result['job_info']['extra_vars'] == '{\"animal\": \"bear\", \"food\": \"carrot\"}'"
# cleanup

View File

@@ -16,7 +16,7 @@
- assert:
that:
- "{{ matching_jobs.count }} == 1"
- matching_jobs.count == 1
- name: List failed jobs (which don't exist)
job_list:
@@ -26,7 +26,7 @@
- assert:
that:
- "{{ successful_jobs.count }} == 0"
- successful_jobs.count == 0
- name: Get ALL result pages!
job_list:

View File

@@ -99,7 +99,7 @@
that:
- wait_results is failed
- 'wait_results.status == "canceled"'
- "wait_results.msg == 'Job with id {{ job.id }} failed' or 'Job with id={{ job.id }} failed, error: Job failed.'"
- "'Job with id ~ job.id failed' or 'Job with id= ~ job.id failed, error: Job failed.' is in wait_results.msg"
# workflow wait test
- name: Generate a random string for test

View File

@@ -132,7 +132,7 @@
- name: Get the ID of the first user created and verify that it is correct
assert:
that: "{{ query(plugin_name, 'users', query_params={ 'username' : user_creation_results['results'][0]['item'] }, return_ids=True)[0] }} == {{ user_creation_results['results'][0]['id'] }}"
that: "query(plugin_name, 'users', query_params={ 'username' : user_creation_results['results'][0]['item'] }, return_ids=True)[0] == user_creation_results['results'][0]['id'] | string"
- name: Try to get an ID of someone who does not exist
set_fact:

View File

@@ -11,7 +11,7 @@
- name: Call ruleset with no rules
set_fact:
complex_rule: "{{ query(ruleset_plugin_name, '2022-04-30 10:30:45') }}"
complex_rule: "{{ query(ruleset_plugin_name | string, '2022-04-30 10:30:45') }}"
ignore_errors: True
register: results

View File

@@ -36,7 +36,7 @@
- assert:
that:
- result is failed
- "'Unable to create schedule {{ sched1 }}' in result.msg"
- "'Unable to create schedule '~ sched1 in result.msg"
- name: Create with options that the JT does not support
schedule:
@@ -62,7 +62,7 @@
- assert:
that:
- result is failed
- "'Unable to create schedule {{ sched1 }}' in result.msg"
- "'Unable to create schedule '~ sched1 in result.msg"
- name: Build a real schedule
schedule:

View File

@@ -9,7 +9,7 @@
- name: Test too many params (failure from validation of terms)
debug:
msg: "{{ query(plugin_name, 'none', 'weekly', start_date='2020-4-16 03:45:07') }}"
msg: "{{ query(plugin_name | string, 'none', 'weekly', start_date='2020-4-16 03:45:07') }}"
ignore_errors: true
register: result

View File

@@ -57,7 +57,7 @@
- assert:
that:
- result is failed
- "'Monitoring of Workflow Job - {{ wfjt_name1 }} aborted due to timeout' in result.msg"
- "'Monitoring of Workflow Job - '~ wfjt_name1 ~ ' aborted due to timeout' in result.msg"
- name: Kick off a workflow and wait for it
workflow_launch:

View File

@@ -143,6 +143,26 @@ class BulkHostCreate(CustomAction):
return response
class BulkHostDelete(CustomAction):
action = 'host_delete'
resource = 'bulk'
@property
def options_endpoint(self):
return self.page.endpoint + '{}/'.format(self.action)
def add_arguments(self, parser, resource_options_parser):
options = self.page.connection.options(self.options_endpoint)
if options.ok:
options = options.json()['actions']['POST']
resource_options_parser.options['HOSTDELETEPOST'] = options
resource_options_parser.build_query_arguments(self.action, 'HOSTDELETEPOST')
def perform(self, **kwargs):
response = self.page.get().host_delete.post(kwargs)
return response
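# Usage sketch (an assumption, not verified here): with this action registered,
# the generated CLI should accept something like
#   awx bulk host_delete --hosts '[1, 2, 3]'
# where the --hosts flag comes from the OPTIONS POST fields parsed above.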
class ProjectUpdate(Launchable, CustomAction):
action = 'update'
resource = 'projects'

View File

@@ -270,6 +270,10 @@ class ResourceOptionsParser(object):
if k == 'hosts':
kwargs['type'] = list_of_json_or_yaml
kwargs['required'] = required = True
if method == "host_delete":
if k == 'hosts':
kwargs['type'] = list_of_json_or_yaml
kwargs['required'] = required = True
if method == "job_launch":
if k == 'jobs':
kwargs['type'] = list_of_json_or_yaml

View File

@@ -3,6 +3,7 @@
Bulk API endpoints allow performing bulk operations in a single web request. The following bulk API actions are currently available:
- /api/v2/bulk/job_launch
- /api/v2/bulk/host_create
- /api/v2/bulk/host_delete
Making individual API calls in rapid succession or at high concurrency can overwhelm AWX's ability to serve web requests. When the application's ability to serve is exhausted, clients often receive 504 timeout errors.
@@ -99,3 +100,20 @@ Following is an example of a post request at the /api/v2/bulk/host_create:
The above will add 6 hosts to the inventory.
The maximum number of hosts allowed to be added is controlled by the setting `BULK_HOST_MAX_CREATE`. The default is 100 hosts. Additionally, nginx limits the maximum payload size, a limit you are likely to hit when creating a large number of hosts in one request with variable data attached to them. The maximum payload size is 1MB unless overridden in your nginx config.
## Bulk Host Delete
Provides a feature in the API that allows a single web request to delete multiple hosts from an inventory.
Following is an example of a POST request to /api/v2/bulk/host_delete:
{
"hosts": [3, 4, 5, 6, 7 ,8, 9, 10]
}
The above will delete 8 hosts from the inventory.
The maximum number of hosts allowed to be deleted is controlled by the setting `BULK_HOST_MAX_DELETE`. The default is 250 hosts. Additionally, nginx limits the maximum payload size, a limit you are likely to hit when deleting a large number of hosts in one request. The maximum payload size is 1MB unless overridden in your nginx config.
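As an illustration, a minimal sketch of driving this endpoint with the Python `requests` library (the controller URL, OAuth2 token, and host IDs are placeholders):

import requests

AWX_URL = "https://awx.example.com"  # placeholder controller URL
TOKEN = "my-oauth2-token"            # placeholder OAuth2 token

response = requests.post(
    f"{AWX_URL}/api/v2/bulk/host_delete/",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={"hosts": [3, 4, 5, 6, 7, 8, 9, 10]},
)
response.raise_for_status()  # the endpoint responds with 201 on success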

View File

@@ -265,10 +265,21 @@ When **HashiCorp Vault Secret Lookup** is selected for **Credential Type**, prov
- **CA Certificate**: specify the CA certificate used to verify HashiCorp's server
- **Approle Role_ID**: specify the ID for Approle authentication
- **Approle Secret_ID**: specify the corresponding secret ID for Approle authentication
- **Path to Approle Auth**: specify a path if other than the default path of ``/approle``
- **Client Certificate**: specify a PEM-encoded client certificate when using the TLS auth method including any required intermediate certificates expected by Vault
- **Client Certificate Key**: specify a PEM-encoded certificate private key when using the TLS auth method
- **TLS Authentication Role**: specify the role or certificate name in Vault that corresponds to your client certificate when using the TLS auth method. If it is not provided, Vault will attempt to match the certificate automatically
- **Namespace name**: specify the namespace name (Vault Enterprise only)
- **Kubernetes role**: specify the role name when using Kubernetes authentication
- **Path to Auth**: specify a path if other than the default path of ``/approle``
- **API Version** (required): select v1 for static lookups and v2 for versioned lookups
For more detail about Approle and its fields, refer to the `Vault documentation for Approle Auth Method <https://www.vaultproject.io/docs/auth/approle>`_. Below shows an example of a configured HashiCorp Vault Secret Lookup credential.
For more detail about the Approle auth method and its fields, refer to the `Vault documentation for Approle Auth Method <https://www.vaultproject.io/docs/auth/approle>`_.
For more detail about the Kubernetes auth method and its fields, refer to the `Vault documentation for Kubernetes auth method <https://developer.hashicorp.com/vault/docs/auth/kubernetes>`_.
For more detail about the TLS certificate auth method and its fields, refer to the `Vault documentation for TLS certificates auth method <https://developer.hashicorp.com/vault/docs/auth/cert>`_.
Below shows an example of a configured HashiCorp Vault Secret Lookup credential.
.. image:: ../common/images/credentials-create-hashicorp-kv-credential.png
:alt: Example new HashiCorp Vault Secret lookup dialog
@@ -288,9 +299,18 @@ When **HashiCorp Vault Signed SSH** is selected for **Credential Type**, provide
- **CA Certificate**: specify the CA certificate used to verify HashiCorp's server
- **Approle Role_ID**: specify the ID for Approle authentication
- **Approle Secret_ID**: specify the corresponding secret ID for Approle authentication
- **Path to Approle Auth**: specify a path if other than the default path of ``/approle``
- **Client Certificate**: specify a PEM-encoded client certificate when using the TLS auth method including any required intermediate certificates expected by Vault
- **Client Certificate Key**: specify a PEM-encoded certificate private key when using the TLS auth method
- **TLS Authentication Role**: specify the role or certificate name in Vault that corresponds to your client certificate when using the TLS auth method. If it is not provided, Vault will attempt to match the certificate automatically
- **Namespace name**: specify the namespace name (Vault Enterprise only)
- **Kubernetes role**: specify the role name when using Kubernetes authentication
- **Path to Auth**: specify a path if other than the default path of ``/approle``
For more detail about Approle and its fields, refer to the `Vault documentation for Approle Auth Method <https://www.vaultproject.io/docs/auth/approle>`_.
For more detail about the Approle auth method and its fields, refer to the `Vault documentation for Approle Auth Method <https://www.vaultproject.io/docs/auth/approle>`_.
For more detail about the Kubernetes auth method and its fields, refer to the `Vault documentation for Kubernetes auth method <https://developer.hashicorp.com/vault/docs/auth/kubernetes>`_.
For more detail about the TLS certificate auth method and its fields, refer to the `Vault documentation for TLS certificates auth method <https://developer.hashicorp.com/vault/docs/auth/cert>`_.
Below shows an example of a configured HashiCorp SSH Secrets Engine credential.

View File

@@ -95,13 +95,6 @@ The `singleton` class method is a helper method on the `Role` model that helps i
You may use the `user in some_role` syntax to check and see if the specified
user is a member of the given role, **or** a member of any ancestor role.
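A quick sketch of that check (assuming an existing `user` and an `organization` object):

# `in` evaluates True for direct members and for members of any ancestor role
if user in organization.admin_role:
    ...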
#### `get_roles_on_resource(resource, accessor)`
This is a static method (not bound to a class) that will efficiently return the names
of all roles that the `accessor` (a user or a team) has on a particular resource.
The resource is a Python object for something like an organization, credential, or job template.
Return value is a list of strings like `["admin_role", "execute_role"]`.
### Fields
#### `ImplicitRoleField`

View File

@@ -0,0 +1,168 @@
Apache License
==============
_Version 2.0, January 2004_
_&lt;<http://www.apache.org/licenses/>&gt;_
### Terms and Conditions for use, reproduction, and distribution
#### 1. Definitions
“License” shall mean the terms and conditions for use, reproduction, and
distribution as defined by Sections 1 through 9 of this document.
“Licensor” shall mean the copyright owner or entity authorized by the copyright
owner that is granting the License.
“Legal Entity” shall mean the union of the acting entity and all other entities
that control, are controlled by, or are under common control with that entity.
For the purposes of this definition, “control” means **(i)** the power, direct or
indirect, to cause the direction or management of such entity, whether by
contract or otherwise, or **(ii)** ownership of fifty percent (50%) or more of the
outstanding shares, or **(iii)** beneficial ownership of such entity.
“You” (or “Your”) shall mean an individual or Legal Entity exercising
permissions granted by this License.
“Source” form shall mean the preferred form for making modifications, including
but not limited to software source code, documentation source, and configuration
files.
“Object” form shall mean any form resulting from mechanical transformation or
translation of a Source form, including but not limited to compiled object code,
generated documentation, and conversions to other media types.
“Work” shall mean the work of authorship, whether in Source or Object form, made
available under the License, as indicated by a copyright notice that is included
in or attached to the work (an example is provided in the Appendix below).
“Derivative Works” shall mean any work, whether in Source or Object form, that
is based on (or derived from) the Work and for which the editorial revisions,
annotations, elaborations, or other modifications represent, as a whole, an
original work of authorship. For the purposes of this License, Derivative Works
shall not include works that remain separable from, or merely link (or bind by
name) to the interfaces of, the Work and Derivative Works thereof.
“Contribution” shall mean any work of authorship, including the original version
of the Work and any modifications or additions to that Work or Derivative Works
thereof, that is intentionally submitted to Licensor for inclusion in the Work
by the copyright owner or by an individual or Legal Entity authorized to submit
on behalf of the copyright owner. For the purposes of this definition,
“submitted” means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems, and
issue tracking systems that are managed by, or on behalf of, the Licensor for
the purpose of discussing and improving the Work, but excluding communication
that is conspicuously marked or otherwise designated in writing by the copyright
owner as “Not a Contribution.”
“Contributor” shall mean Licensor and any individual or Legal Entity on behalf
of whom a Contribution has been received by Licensor and subsequently
incorporated within the Work.
#### 2. Grant of Copyright License
Subject to the terms and conditions of this License, each Contributor hereby
grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free,
irrevocable copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the Work and such
Derivative Works in Source or Object form.
#### 3. Grant of Patent License
Subject to the terms and conditions of this License, each Contributor hereby
grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free,
irrevocable (except as stated in this section) patent license to make, have
made, use, offer to sell, sell, import, and otherwise transfer the Work, where
such license applies only to those patent claims licensable by such Contributor
that are necessarily infringed by their Contribution(s) alone or by combination
of their Contribution(s) with the Work to which such Contribution(s) was
submitted. If You institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work or a
Contribution incorporated within the Work constitutes direct or contributory
patent infringement, then any patent licenses granted to You under this License
for that Work shall terminate as of the date such litigation is filed.
#### 4. Redistribution
You may reproduce and distribute copies of the Work or Derivative Works thereof
in any medium, with or without modifications, and in Source or Object form,
provided that You meet the following conditions:
* **(a)** You must give any other recipients of the Work or Derivative Works a copy of
this License; and
* **(b)** You must cause any modified files to carry prominent notices stating that You
changed the files; and
* **(c)** You must retain, in the Source form of any Derivative Works that You distribute,
all copyright, patent, trademark, and attribution notices from the Source form
of the Work, excluding those notices that do not pertain to any part of the
Derivative Works; and
* **(d)** If the Work includes a “NOTICE” text file as part of its distribution, then any
Derivative Works that You distribute must include a readable copy of the
attribution notices contained within such NOTICE file, excluding those notices
that do not pertain to any part of the Derivative Works, in at least one of the
following places: within a NOTICE text file distributed as part of the
Derivative Works; within the Source form or documentation, if provided along
with the Derivative Works; or, within a display generated by the Derivative
Works, if and wherever such third-party notices normally appear. The contents of
the NOTICE file are for informational purposes only and do not modify the
License. You may add Your own attribution notices within Derivative Works that
You distribute, alongside or as an addendum to the NOTICE text from the Work,
provided that such additional attribution notices cannot be construed as
modifying the License.
You may add Your own copyright statement to Your modifications and may provide
additional or different license terms and conditions for use, reproduction, or
distribution of Your modifications, or for any such Derivative Works as a whole,
provided Your use, reproduction, and distribution of the Work otherwise complies
with the conditions stated in this License.
#### 5. Submission of Contributions
Unless You explicitly state otherwise, any Contribution intentionally submitted
for inclusion in the Work by You to the Licensor shall be under the terms and
conditions of this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify the terms of
any separate license agreement you may have executed with Licensor regarding
such Contributions.
#### 6. Trademarks
This License does not grant permission to use the trade names, trademarks,
service marks, or product names of the Licensor, except as required for
reasonable and customary use in describing the origin of the Work and
reproducing the content of the NOTICE file.
#### 7. Disclaimer of Warranty
Unless required by applicable law or agreed to in writing, Licensor provides the
Work (and each Contributor provides its Contributions) on an “AS IS” BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied,
including, without limitation, any warranties or conditions of TITLE,
NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are
solely responsible for determining the appropriateness of using or
redistributing the Work and assume any risks associated with Your exercise of
permissions under this License.
#### 8. Limitation of Liability
In no event and under no legal theory, whether in tort (including negligence),
contract, or otherwise, unless required by applicable law (such as deliberate
and grossly negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special, incidental,
or consequential damages of any character arising as a result of this License or
out of the use or inability to use the Work (including but not limited to
damages for loss of goodwill, work stoppage, computer failure or malfunction, or
any and all other commercial damages or losses), even if such Contributor has
been advised of the possibility of such damages.
#### 9. Accepting Warranty or Additional Liability
While redistributing the Work or Derivative Works thereof, You may choose to
offer, and charge a fee for, acceptance of support, warranty, indemnity, or
other liability obligations and/or rights consistent with this License. However,
in accepting such obligations, You may act only on Your own behalf and on Your
sole responsibility, not on behalf of any other Contributor, and only if You
agree to indemnify, defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason of your
accepting any such warranty or additional liability.

View File

@@ -0,0 +1,24 @@
Copyright (c) Alex Gaynor and individual contributors.
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:
* Redistributions of source code must retain the above copyright notice, this
list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above copyright notice,
this list of conditions and the following disclaimer in the documentation
and/or other materials provided with the distribution.
* The names of its contributors may not be used to endorse or promote products
derived from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR
ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
(INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON
ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

View File

@@ -0,0 +1,30 @@
Copyright © 2011-present, Encode OSS Ltd.
Copyright © 2019-2021, T. Franzel <tfranzel@gmail.com>, Cashlink Technologies GmbH.
Copyright © 2021-present, T. Franzel <tfranzel@gmail.com>.
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:
* Redistributions of source code must retain the above copyright notice, this
list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above copyright notice,
this list of conditions and the following disclaimer in the documentation
and/or other materials provided with the distribution.
* Neither the name of the copyright holder nor the names of its
contributors may be used to endorse or promote products derived from
this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

licenses/inflection.txt
View File

@@ -0,0 +1,19 @@
Copyright (C) 2012-2020 Janne Vanhala
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

20 licenses/tabulate.txt Normal file
View File

@@ -0,0 +1,20 @@
Copyright (c) 2011-2020 Sergey Astanin and contributors
Permission is hereby granted, free of charge, to any person obtaining
a copy of this software and associated documentation files (the
"Software"), to deal in the Software without restriction, including
without limitation the rights to use, copy, modify, merge, publish,
distribute, sublicense, and/or sell copies of the Software, and to
permit persons to whom the Software is furnished to do so, subject to
the following conditions:
The above copyright notice and this permission notice shall be
included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

202 licenses/uritemplate.txt Normal file
View File

@@ -0,0 +1,202 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

View File

@@ -47,8 +47,8 @@ python-tss-sdk>=1.2.1
python-ldap
pyyaml>=6.0.1
receptorctl
social-auth-core[openidconnect]==4.3.0 # see UPGRADE BLOCKERs
social-auth-app-django==5.0.0 # see UPGRADE BLOCKERs
social-auth-core[openidconnect]==4.4.2 # see UPGRADE BLOCKERs
social-auth-app-django==5.4.0 # see UPGRADE BLOCKERs
sqlparse >= 0.4.4 # Required by django https://github.com/ansible/awx/security/dependabot/96
redis
requests

View File

@@ -83,6 +83,7 @@ cryptography==41.0.3
# adal
# autobahn
# azure-keyvault
# django-ansible-base
# jwcrypto
# pyopenssl
# service-identity
@@ -105,23 +106,34 @@ django==4.2.6
# via
# -r /awx_devel/requirements/requirements.in
# channels
# django-ansible-base
# django-auth-ldap
# django-cors-headers
# django-crum
# django-extensions
# django-filter
# django-guid
# django-oauth-toolkit
# django-polymorphic
# django-solo
# djangorestframework
# drf-spectacular
# social-auth-app-django
# via -r /awx_devel/requirements/requirements_git.txt
django-auth-ldap==4.1.0
# via -r /awx_devel/requirements/requirements.in
# via
# -r /awx_devel/requirements/requirements.in
# django-ansible-base
django-cors-headers==3.13.0
# via -r /awx_devel/requirements/requirements.in
django-crum==0.7.9
# via -r /awx_devel/requirements/requirements.in
# via
# -r /awx_devel/requirements/requirements.in
# django-ansible-base
django-extensions==3.2.1
# via -r /awx_devel/requirements/requirements.in
django-filter==23.5
# via django-ansible-base
django-guid==3.2.1
# via -r /awx_devel/requirements/requirements.in
django-oauth-toolkit==1.7.1
@@ -136,11 +148,16 @@ django-solo==2.0.0
django-split-settings==1.0.0
# via -r /awx_devel/requirements/requirements.in
djangorestframework==3.14.0
# via -r /awx_devel/requirements/requirements.in
# via
# -r /awx_devel/requirements/requirements.in
# django-ansible-base
# drf-spectacular
djangorestframework-yaml==2.0.0
# via -r /awx_devel/requirements/requirements.in
docutils==0.19
# via python-daemon
drf-spectacular==0.26.5
# via django-ansible-base
ecdsa==0.18.0
# via python-jose
enum-compat==0.0.3
@@ -179,6 +196,10 @@ incremental==22.10.0
# via twisted
inflect==6.0.2
# via jaraco-text
inflection==0.5.1
# via
# django-ansible-base
# drf-spectacular
irc==20.1.0
# via -r /awx_devel/requirements/requirements.in
isodate==0.6.1
@@ -213,7 +234,9 @@ jmespath==1.0.1
json-log-formatter==0.5.1
# via -r /awx_devel/requirements/requirements.in
jsonschema==4.17.3
# via -r /awx_devel/requirements/requirements.in
# via
# -r /awx_devel/requirements/requirements.in
# drf-spectacular
jwcrypto==1.4.2
# via django-oauth-toolkit
kubernetes==25.3.0
@@ -328,6 +351,7 @@ python-jose==3.3.0
python-ldap==3.4.3
# via
# -r /awx_devel/requirements/requirements.in
# django-ansible-base
# django-auth-ldap
python-string-utils==1.0.0
# via openshift
@@ -335,7 +359,9 @@ python-tss-sdk==1.2.1
# via -r /awx_devel/requirements/requirements.in
python3-openid==3.2.0
# via social-auth-core
# via -r /awx_devel/requirements/requirements_git.txt
# via
# -r /awx_devel/requirements/requirements_git.txt
# django-ansible-base
pytz==2022.6
# via
# djangorestframework
@@ -347,6 +373,7 @@ pyyaml==6.0.1
# -r /awx_devel/requirements/requirements.in
# ansible-runner
# djangorestframework-yaml
# drf-spectacular
# kubernetes
# receptorctl
receptorctl==1.4.2
@@ -384,7 +411,7 @@ service-identity==21.1.0
# via twisted
setuptools-rust==1.5.2
# via -r /awx_devel/requirements/requirements.in
setuptools-scm[toml]==7.0.5
setuptools-scm[toml]==8.0.4
# via -r /awx_devel/requirements/requirements.in
six==1.16.0
# via
@@ -406,9 +433,11 @@ slack-sdk==3.19.4
# via -r /awx_devel/requirements/requirements.in
smmap==5.0.0
# via gitdb
social-auth-app-django==5.0.0
# via -r /awx_devel/requirements/requirements.in
social-auth-core[openidconnect]==4.3.0
social-auth-app-django==5.4.0
# via
# -r /awx_devel/requirements/requirements.in
# django-ansible-base
social-auth-core[openidconnect]==4.4.2
# via
# -r /awx_devel/requirements/requirements.in
# social-auth-app-django
@@ -416,6 +445,8 @@ sqlparse==0.4.4
# via
# -r /awx_devel/requirements/requirements.in
# django
tabulate==0.9.0
# via django-ansible-base
tacacs-plus==1.0
# via -r /awx_devel/requirements/requirements.in
tempora==5.1.0
@@ -439,6 +470,8 @@ typing-extensions==4.4.0
# setuptools-rust
# setuptools-scm
# twisted
uritemplate==4.1.1
# via drf-spectacular
urllib3==1.26.13
# via
# botocore

View File

@@ -5,3 +5,4 @@ git+https://github.com/ansible/ansible-runner.git@devel#egg=ansible-runner
# specifically need https://github.com/robgolding/django-radius/pull/27
git+https://github.com/ansible/django-radius.git@develop#egg=django-radius
git+https://github.com/ansible/python3-saml.git@devel#egg=python3-saml
git+https://github.com/ansible/django-ansible-base.git@devel#egg=django-ansible-base

View File

@@ -55,7 +55,7 @@ main() {
echo ""
NEEDS_HELP=1
;;
esac
esac
if [[ "$NEEDS_HELP" == "1" ]] ; then
echo "This script generates requirements.txt from requirements.in and requirements_git.in"
@@ -76,6 +76,12 @@ main() {
exit
fi
if [[ ! -z "$(tail -c 1 "${requirements_git}")" ]]
then
echo "No newline at end of ${requirements_git}, please add one"
exit
fi
cp -vf requirements.txt "${_tmp}"
cd "${_tmp}"

View File

@@ -226,15 +226,18 @@ RUN ln -sf /awx_devel/{{ template_dest }}/supervisor_rsyslog.conf /etc/superviso
ADD tools/ansible/roles/dockerfile/files/launch_awx_web.sh /usr/bin/launch_awx_web.sh
ADD tools/ansible/roles/dockerfile/files/launch_awx_task.sh /usr/bin/launch_awx_task.sh
ADD tools/ansible/roles/dockerfile/files/launch_awx_rsyslog.sh /usr/bin/launch_awx_rsyslog.sh
ADD tools/scripts/rsyslog-4xx-recovery /usr/bin/rsyslog-4xx-recovery
ADD {{ template_dest }}/supervisor_web.conf /etc/supervisord_web.conf
ADD {{ template_dest }}/supervisor_task.conf /etc/supervisord_task.conf
ADD {{ template_dest }}/supervisor_rsyslog.conf /etc/supervisord_rsyslog.conf
ADD tools/scripts/awx-python /usr/bin/awx-python
{% endif %}
{% if (build_dev|bool) or (kube_dev|bool) %}
RUN echo /awx_devel > /var/lib/awx/venv/awx/lib/python3.9/site-packages/awx.egg-link
ADD tools/docker-compose/awx-manage /usr/local/bin/awx-manage
ADD tools/scripts/awx-python /usr/bin/awx-python
RUN ln -sf /awx_devel/tools/scripts/awx-python /usr/bin/awx-python
RUN ln -sf /awx_devel/tools/scripts/rsyslog-4xx-recovery /usr/bin/rsyslog-4xx-recovery
{% endif %}
# Pre-create things we need to access

View File

@@ -8,13 +8,14 @@ pidfile = /var/run/supervisor/supervisor.rsyslog.pid
[program:awx-rsyslogd]
command = rsyslogd -n -i /var/run/awx-rsyslog/rsyslog.pid -f /var/lib/awx/rsyslog/rsyslog.conf
autorestart = true
startsecs = 30
startsecs = 0
stopasgroup=true
killasgroup=true
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
stderr_logfile=/dev/stderr
stderr_logfile_maxbytes=0
stderr_events_enabled = true
[program:awx-rsyslog-configurer]
{% if kube_dev | bool %}
@@ -59,6 +60,15 @@ stdout_logfile_maxbytes=0
stderr_logfile=/dev/stderr
stderr_logfile_maxbytes=0
[eventlistener:rsyslog-4xx-recovery]
command=rsyslog-4xx-recovery
buffer_size = 100
events=PROCESS_LOG_STDERR
priority=0
autorestart=true
stderr_logfile=/dev/stderr
stderr_logfile_maxbytes=0
[unix_http_server]
file=/var/run/supervisor/supervisor.rsyslog.sock

View File

@@ -2,6 +2,8 @@
- name: Plumb AWX for Vault
hosts: localhost
gather_facts: False
vars:
awx_host: "https://127.0.0.1:8043"
tasks:
- include_role:
name: vault

View File

@@ -30,6 +30,24 @@ ldap_public_key_file: '{{ ldap_cert_dir }}/{{ ldap_public_key_file_name }}'
ldap_private_key_file: '{{ ldap_cert_dir }}/{{ ldap_private_key_file_name }}'
ldap_cert_subject: "/C=US/ST=NC/L=Durham/O=awx/CN="
# Hashicorp Vault
enable_vault: false
vault_tls: false
hashivault_cert_dir: '{{ sources_dest }}/vault_certs'
hashivault_server_cert_subject: "/C=US/ST=NC/L=Durham/O=awx/CN=tools-vault-1"
hashivault_server_cert_extensions:
- "subjectAltName = DNS:tools_vault_1, DNS:localhost"
- "keyUsage = digitalSignature, nonRepudiation"
- "extendedKeyUsage = serverAuth"
hashivault_client_cert_extensions:
- "subjectAltName = DNS:awx-vault-client"
- "keyUsage = digitalSignature, nonRepudiation"
- "extendedKeyUsage = serverAuth, clientAuth"
hashivault_client_cert_subject: "/C=US/ST=NC/L=Durham/O=awx/CN=awx-vault-client"
hashivault_server_public_keyfile: '{{ hashivault_cert_dir }}/server.crt'
hashivault_server_private_keyfile: '{{ hashivault_cert_dir }}/server.key'
hashivault_client_public_keyfile: '{{ hashivault_cert_dir }}/client.crt'
hashivault_client_private_keyfile: '{{ hashivault_cert_dir }}/client.key'
# Metrics
enable_splunk: false
enable_grafana: false

View File

@@ -101,6 +101,10 @@
include_tasks: ldap.yml
when: enable_ldap | bool
- name: Include vault TLS tasks if enabled
include_tasks: vault_tls.yml
when: enable_vault | bool
- name: Render Docker-Compose
template:
src: docker-compose.yml.j2

View File

@@ -0,0 +1,31 @@
---
- name: Create Certificates for HashiCorp Vault
block:
- name: Create Hashicorp Vault cert directory
file:
path: "{{ hashivault_cert_dir }}"
state: directory
- name: Generate vault server certificate
command: 'openssl req -new -newkey rsa:2048 -x509 -days 365 -nodes -out {{ hashivault_server_public_keyfile }} -keyout {{ hashivault_server_private_keyfile }} -subj "{{ hashivault_server_cert_subject }}"{% for ext in hashivault_server_cert_extensions %} -addext "{{ ext }}"{% endfor %}'
args:
creates: "{{ hashivault_server_public_keyfile }}"
- name: Generate vault test client certificate
command: 'openssl req -new -newkey rsa:2048 -x509 -days 365 -nodes -out {{ hashivault_client_public_keyfile }} -keyout {{ hashivault_client_private_keyfile }} -subj "{{ hashivault_client_cert_subject }}"{% for ext in hashivault_client_cert_extensions %} -addext "{{ ext }}"{% endfor %}'
args:
creates: "{{ hashivault_client_public_keyfile }}"
- name: Set mode for vault certificates
ansible.builtin.file:
path: "{{ hashivault_cert_dir }}"
recurse: true
state: directory
mode: 0777
when: vault_tls | bool
- name: Delete Certificates for HashiCorp Vault
file:
path: "{{ hashivault_cert_dir }}"
state: absent
when: vault_tls | bool == false
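
As a quick sanity check of what the openssl commands above produced, the server certificate's subject and subjectAltName entries can be read back with the Python cryptography package; a minimal sketch, assuming the default hashivault_cert_dir layout:

from cryptography import x509
from cryptography.x509.oid import ExtensionOID

with open("server.crt", "rb") as f:  # i.e. {{ hashivault_cert_dir }}/server.crt
    cert = x509.load_pem_x509_certificate(f.read())

# subject should end in CN=tools-vault-1 per hashivault_server_cert_subject
print(cert.subject.rfc4514_string())
# DNS names should include tools_vault_1 and localhost per the extensions list
san = cert.extensions.get_extension_for_oid(ExtensionOID.SUBJECT_ALTERNATIVE_NAME)
print(san.value.get_values_for_type(x509.DNSName))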

View File

@@ -252,7 +252,7 @@ services:
privileged: true
{% endfor %}
{% endif %}
{% if enable_vault|bool %}
{% if enable_vault | bool %}
vault:
image: hashicorp/vault:1.14
container_name: tools_vault_1
@@ -261,10 +261,17 @@ services:
ports:
- "1234:1234"
environment:
{% if vault_tls | bool %}
VAULT_LOCAL_CONFIG: '{"storage": {"file": {"path": "/vault/file"}}, "listener": [{"tcp": { "address": "0.0.0.0:1234", "tls_disable": false, "tls_cert_file": "/vault/tls/server.crt", "tls_key_file": "/vault/tls/server.key"}}], "default_lease_ttl": "168h", "max_lease_ttl": "720h", "ui": true}'
{% else %}
VAULT_LOCAL_CONFIG: '{"storage": {"file": {"path": "/vault/file"}}, "listener": [{"tcp": { "address": "0.0.0.0:1234", "tls_disable": true}}], "default_lease_ttl": "168h", "max_lease_ttl": "720h", "ui": true}'
{% endif %}
cap_add:
- IPC_LOCK
volumes:
{% if vault_tls | bool %}
- '../../docker-compose/_sources/vault_certs:/vault/tls'
{% endif %}
- 'hashicorp_vault_data:/vault/file'
{% endif %}

View File

@@ -1,2 +1,7 @@
---
vault_file: "{{ sources_dest }}/secrets/vault_init.yml"
admin_password_file: "{{ sources_dest }}/secrets/admin_password.yml"
vault_cert_dir: '{{ sources_dest }}/vault_certs'
vault_server_cert: "{{ vault_cert_dir }}/server.crt"
vault_client_cert: "{{ vault_cert_dir }}/client.crt"
vault_client_key: "{{ vault_cert_dir }}/client.key"

View File

@@ -1,4 +1,7 @@
---
- name: Set vault_addr
include_tasks: set_vault_addr.yml
- block:
- name: Start the vault
community.docker.docker_compose:
@@ -12,9 +15,16 @@
command: vault operator init
container: tools_vault_1
env:
VAULT_ADDR: "http://127.0.0.1:1234"
VAULT_ADDR: "{{ vault_addr }}"
VAULT_SKIP_VERIFY: "true"
register: vault_initialization
ignore_errors: true
failed_when:
- vault_initialization.rc != 0
- vault_initialization.stderr.find("Vault is already initialized") == -1
changed_when:
- vault_initialization.rc == 0
retries: 5
delay: 5
- name: Write out initialization file
copy:
@@ -34,21 +44,52 @@
name: vault
tasks_from: unseal.yml
- name: Configure the vault with cert auth
block:
- name: Create a cert auth mount
flowerysong.hvault.write:
path: "sys/auth/cert"
vault_addr: "{{ vault_addr_from_host }}"
validate_certs: false
token: "{{ Initial_Root_Token }}"
data:
type: "cert"
register: vault_auth_cert
failed_when:
- vault_auth_cert.result.errors | default([]) | length > 0
- "'path is already in use at cert/' not in vault_auth_cert.result.errors | default([])"
changed_when:
- vault_auth_cert.result.errors | default([]) | length == 0
- name: Configure client certificate
flowerysong.hvault.write:
path: "auth/cert/certs/awx-client"
vault_addr: "{{ vault_addr_from_host }}"
validate_certs: false
token: "{{ Initial_Root_Token }}"
data:
name: awx-client
certificate: "{{ lookup('ansible.builtin.file', '{{ vault_client_cert }}') }}"
policies:
- root
when: vault_tls | bool
- name: Create an engine
flowerysong.hvault.engine:
path: "my_engine"
type: "kv"
vault_addr: "http://localhost:1234"
vault_addr: "{{ vault_addr_from_host }}"
validate_certs: false
token: "{{ Initial_Root_Token }}"
register: engine
- name: Create a secret
- name: Create a demo secret
flowerysong.hvault.kv:
mount_point: "my_engine/my_root"
key: "my_folder"
value:
my_key: "this_is_the_secret_value"
vault_addr: "http://localhost:1234"
vault_addr: "{{ vault_addr_from_host }}"
validate_certs: false
token: "{{ Initial_Root_Token }}"
always:
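
The two writes above follow Vault's TLS certificate auth flow: mount the method at sys/auth/cert, then register the client certificate under auth/cert/certs/awx-client. A holder of that certificate can then authenticate against the standard auth/cert/login endpoint; a minimal sketch with requests (host, port, and file paths are assumptions taken from this compose setup):

import requests

resp = requests.post(
    "https://localhost:1234/v1/auth/cert/login",
    cert=("client.crt", "client.key"),  # the pair registered as awx-client
    verify="server.crt",                # trust the self-signed server cert
    json={"name": "awx-client"},        # optional: restrict login to one cert role
)
resp.raise_for_status()
print(resp.json()["auth"]["client_token"])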

View File

@@ -1,29 +1,45 @@
---
- name: Set vault_addr
include_tasks: set_vault_addr.yml
- name: Load vault keys
include_vars:
file: "{{ vault_file }}"
- name: Get AWX admin password
include_vars:
file: "{{ admin_password_file }}"
- name: Create a HashiCorp Vault Credential
awx.awx.credential:
credential_type: HashiCorp Vault Secret Lookup
name: Vault Lookup Cred
organization: Default
controller_host: "{{ awx_host }}"
controller_username: admin
controller_password: "{{ admin_password }}"
validate_certs: false
inputs:
api_version: "v1"
cacert: ""
default_auth_path: "approle"
cacert: "{{ lookup('ansible.builtin.file', '{{ vault_server_cert }}', errors='ignore') }}"
default_auth_path: "cert"
kubernetes_role: ""
namespace: ""
role_id: ""
secret_id: ""
client_cert_public: "{{ lookup('ansible.builtin.file', '{{ vault_client_cert }}', errors='ignore') }}"
client_cert_private: "{{ lookup('ansible.builtin.file', '{{ vault_client_key }}', errors='ignore') }}"
token: "{{ Initial_Root_Token }}"
url: "http://tools_vault_1:1234"
url: "{{ vault_addr_from_container }}"
register: vault_cred
- name: Create a custom credential type
awx.awx.credential_type:
name: Vault Custom Cred Type
kind: cloud
controller_host: "{{ awx_host }}"
controller_username: admin
controller_password: "{{ admin_password }}"
validate_certs: false
injectors:
extra_vars:
the_secret_from_vault: "{{ '{{' }} password {{ '}}' }}"
@@ -38,6 +54,11 @@
- name: Create a credential of the custom type
awx.awx.credential:
credential_type: "{{ custom_vault_cred_type.id }}"
controller_host: "{{ awx_host }}"
controller_username: admin
controller_password: "{{ admin_password }}"
validate_certs: false
name: Credential From Vault
inputs: {}
organization: Default
@@ -48,6 +69,11 @@
input_field_name: password
target_credential: "{{ custom_credential.id }}"
source_credential: "{{ vault_cred.id }}"
controller_host: "{{ awx_host }}"
controller_username: admin
controller_password: "{{ admin_password }}"
validate_certs: false
metadata:
auth_path: ""
secret_backend: "my_engine"

View File

@@ -0,0 +1,19 @@
---
- name: Detect if vault cert directory exists
stat:
path: "{{ vault_cert_dir }}"
register: vault_cert_dir_stat
- name: Set vault_addr for http
set_fact:
vault_addr: "http://127.0.0.1:1234"
vault_addr_from_host: "http://localhost:1234"
vault_addr_from_container: "http://tools_vault_1:1234"
when: vault_cert_dir_stat.stat.exists == false
- name: Set vault_addr for https
set_fact:
vault_addr: "https://127.0.0.1:1234"
vault_addr_from_host: "https://localhost:1234"
vault_addr_from_container: "https://tools_vault_1:1234"
when: vault_cert_dir_stat.stat.exists == true
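
Once vault_addr is set, a quick smoke test against Vault's sys/health endpoint shows whether the container answers and whether it is still sealed; a sketch, assuming the TLS case with the self-signed certificate from above:

import requests

resp = requests.get(
    "https://localhost:1234/v1/sys/health",  # vault_addr_from_host
    verify=False,  # or verify="server.crt" to pin the self-signed cert
)
# 200 = initialized and unsealed; 501 = uninitialized; 503 = sealed
print(resp.status_code, "sealed:", resp.json().get("sealed"))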

View File

@@ -1,11 +1,15 @@
---
- name: Set vault_addr
include_tasks: set_vault_addr.yml
- name: Load vault keys
include_vars:
file: "{{ vault_file }}"
- name: Unseal the vault
flowerysong.hvault.seal:
vault_addr: "http://localhost:1234"
vault_addr: "{{ vault_addr_from_host }}"
validate_certs: false
state: unsealed
key: "{{ item }}"
loop:

View File

@@ -8,16 +8,20 @@ command = awx-manage run_dispatcher
autorestart = true
stopasgroup=true
killasgroup=true
stdout_events_enabled = true
stderr_events_enabled = true
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
stderr_logfile=/dev/stderr
stderr_logfile_maxbytes=0
[program:awx-receiver]
command = awx-manage run_callback_receiver
autorestart = true
stopasgroup=true
killasgroup=true
stdout_events_enabled = true
stderr_events_enabled = true
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
stderr_logfile=/dev/stderr
stderr_logfile_maxbytes=0
[program:awx-wsrelay]
command = awx-manage run_wsrelay
@@ -25,8 +29,10 @@ autorestart = true
autorestart = true
stopasgroup=true
killasgroup=true
stdout_events_enabled = true
stderr_events_enabled = true
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
stderr_logfile=/dev/stderr
stderr_logfile_maxbytes=0
[program:awx-ws-heartbeat]
command = awx-manage run_ws_heartbeat
@@ -34,8 +40,10 @@ autorestart = true
autorestart = true
stopasgroup=true
killasgroup=true
stdout_events_enabled = true
stderr_events_enabled = true
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
stderr_logfile=/dev/stderr
stderr_logfile_maxbytes=0
[program:awx-rsyslog-configurer]
command = awx-manage run_rsyslog_configurer
@@ -64,32 +72,40 @@ stopwaitsecs = 1
stopsignal=KILL
stopasgroup=true
killasgroup=true
stdout_events_enabled = true
stderr_events_enabled = true
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
stderr_logfile=/dev/stderr
stderr_logfile_maxbytes=0
[program:awx-daphne]
command = make daphne
autorestart = true
stopasgroup=true
killasgroup=true
stdout_events_enabled = true
stderr_events_enabled = true
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
stderr_logfile=/dev/stderr
stderr_logfile_maxbytes=0
[program:awx-nginx]
command = make nginx
autorestart = true
stopasgroup=true
killasgroup=true
stdout_events_enabled = true
stderr_events_enabled = true
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
stderr_logfile=/dev/stderr
stderr_logfile_maxbytes=0
[program:awx-rsyslogd]
command = rsyslogd -n -i /var/run/awx-rsyslog/rsyslog.pid -f /var/lib/awx/rsyslog/rsyslog.conf
autorestart = true
startsecs=0
stopasgroup=true
killasgroup=true
redirect_stderr=true
stdout_events_enabled = true
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
stderr_logfile=/dev/stderr
stderr_logfile_maxbytes=0
stderr_events_enabled = true
[program:awx-receptor]
@@ -97,8 +113,10 @@ command = receptor --config /etc/receptor/receptor.conf
autorestart = true
stopasgroup=true
killasgroup=true
stdout_events_enabled = true
stderr_events_enabled = true
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
stderr_logfile=/dev/stderr
stderr_logfile_maxbytes=0
[group:tower-processes]
programs=awx-dispatcher,awx-receiver,awx-uwsgi,awx-daphne,awx-nginx,awx-wsrelay,awx-rsyslogd,awx-ws-heartbeat,awx-rsyslog-configurer,awx-cache-clear
@@ -110,14 +128,19 @@ autostart = true
autorestart = true
stopasgroup=true
killasgroup=true
stdout_events_enabled = true
stderr_events_enabled = true
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
stderr_logfile=/dev/stderr
stderr_logfile_maxbytes=0
[eventlistener:superwatcher]
command=stop-supervisor
events=PROCESS_STATE_FATAL
autorestart = true
stderr_logfile=/dev/stdout
[eventlistener:rsyslog-4xx-recovery]
command=/awx_devel/tools/scripts/rsyslog-4xx-recovery
buffer_size = 100
events=PROCESS_LOG_STDERR
priority=0
autorestart=true
stderr_logfile=/dev/stderr
stderr_logfile_maxbytes=0
[unix_http_server]
file=/var/run/supervisor/supervisor.sock

View File

@@ -13,7 +13,7 @@ def write_stdout(s):
def write_stderr(s):
sys.stderr.write(s)
sys.stderr.write(f"[rsyslog-4xx-recovery] {s}")
sys.stderr.flush()
@@ -31,23 +31,6 @@ def main():
except ValueError as e:
write_stderr(str(e))
# now decide what to do based on the event name
if headers["eventname"] == "PROCESS_STATE_FATAL":
headers.update(
dict(
[x.split(":") for x in sys.stdin.read(int(headers["len"])).split()]
)
)
try:
# incoming event that produced PROCESS_STATE_FATAL will have a PID. SIGTERM it!
write_stderr(
f"{datetime.datetime.now(timezone.utc)} - sending SIGTERM to proc={headers} with data={headers}\n"
)
os.kill(headers["pid"], signal.SIGTERM)
except Exception as e:
write_stderr(str(e))
# awx-rsyslog PROCESS_LOG_STDERR handler
if headers["eventname"] == "PROCESS_LOG_STDERR":
# pertinent data about the process that produced PROCESS_LOG_STDERR is in the first line of the data payload, so let's extract it
@@ -66,6 +49,15 @@ def main():
except Exception as e:
write_stderr(str(e))
if "action-0-omhttp queue: need to do hard cancellation" in log_message:
try:
write_stderr(
f"{datetime.datetime.now(timezone.utc)} - sending SIGKILL to proc=[{proc_details['processname']}] with pid=[{int(proc_details['pid'])}] due to log_message=[{log_message}]\n"
)
os.kill(int(proc_details["pid"]), signal.SIGKILL)
except Exception as e:
write_stderr(str(e))
write_stdout("RESULT 2\nOK")
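
For context, rsyslog-4xx-recovery is a supervisor eventlistener: supervisor writes a one-line header for each event on stdin, followed by a payload of len bytes, and the listener must alternate READY and RESULT acknowledgements on stdout, which is why the script above logs only to stderr. A stripped-down sketch of that loop:

import sys

def listen():
    while True:
        sys.stdout.write("READY\n")  # tell supervisor we can take an event
        sys.stdout.flush()
        # header is one line of space-separated key:value tokens
        headers = dict(tok.split(":", 1) for tok in sys.stdin.readline().split())
        payload = sys.stdin.read(int(headers["len"]))
        if headers.get("eventname") == "PROCESS_LOG_STDERR":
            # the first payload line identifies the emitting process (processname, pid, ...)
            proc = dict(tok.split(":", 1) for tok in payload.split("\n", 1)[0].split())
            sys.stderr.write(f"saw stderr from {proc.get('processname')}\n")
            sys.stderr.flush()
        sys.stdout.write("RESULT 2\nOK")  # 2 is len("OK"); listener returns to ACKNOWLEDGED
        sys.stdout.flush()

if __name__ == "__main__":
    listen()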