Compare commits


87 Commits

Author SHA1 Message Date
Jeff Bradberry
6d0a3149f1 Create and register page types for the new RBAC endpoints 2024-06-14 14:48:05 -04:00
Chad Ferman
31a086b11a Add OpenShift Virtualization Inventory source option (#15047)
Co-authored-by: Hao Liu <44379968+TheRealHaoLiu@users.noreply.github.com>
2024-06-14 13:38:37 -04:00
a_nackov
d94f766fcb Fix notification name search (#15231)
Signed-off-by: Adrian Nackov <adrian.nackov@mail.schwarz>
2024-06-13 14:49:54 +00:00
Viktor Varga
a7113549eb Add 'Terraform State' inventory source support for collection (#15258) 2024-06-12 19:22:21 +00:00
Jake Jackson
bfd811f408 Upgrade aiohttp for cve 2024-23829 (#15257) 2024-06-12 19:20:40 +00:00
Jeff Bradberry
030704a9e1 Change all uses of ImplicitRoleField to do on_delete=SET_NULL
This mitigates the problem where, if any Role got deleted for some
unusual reason, the deletion could previously cascade and delete important objects.
2024-06-12 13:08:03 -04:00
Seth Foster
c312d9bce3 Rename setting to allow local resource management (#15269)
rename AWX_DIRECT_SHARED_RESOURCE_MANAGEMENT_ENABLED
to
ALLOW_LOCAL_RESOURCE_MANAGEMENT

- clearer meaning
- drop prefix so the same setting is used across the platform

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2024-06-11 12:50:18 -04:00
Jeff Bradberry
aadcc217eb This should deal correctly with the ancestor list mismatches 2024-06-10 16:36:22 -04:00
Jeff Bradberry
345c1c11e9 Guard against the role field not being populated
when doing the final reset of Role.implicit_parents.
2024-06-10 16:36:22 -04:00
Jeff Bradberry
2c3a7fafc5 Add a new test scenario
to trigger the implicit parent not being in the parents and ancestors lists.
2024-06-10 16:36:22 -04:00
Jeff Bradberry
dbcd32a1d9 Mark and rebuild the implicit_parents field for all affected roles 2024-06-10 16:36:22 -04:00
Jeff Bradberry
d45e258a78 Wait until the end of the fix script to clean up orphaned roles 2024-06-10 16:36:22 -04:00
Jeff Bradberry
d16b69a102 Add output of the update and deletion counts to fix.py 2024-06-10 16:36:22 -04:00
Jeff Bradberry
8b4efbc973 Do not throw away the container of cross-linked parents
Since we use it twice, the second time to get the id field of each.
2024-06-10 16:36:22 -04:00
Jeff Bradberry
4cb061e7db Add a readme file with instructions 2024-06-10 16:36:22 -04:00
Jeff Bradberry
31db6a1447 Fix another instance where a bad resource->Role fk could throw a traceback 2024-06-10 16:36:22 -04:00
Jeff Bradberry
ad9d5904d8 Adjusted foreignkeys.sql for correctness
Some relationships known to be handled by the special mapping sql file
were being caught as false positives.
2024-06-10 16:36:22 -04:00
Jeff Bradberry
b837d549ff Split the foreign key sql script into an 'into' and 'from' portion
Also, make use of up-front defined arrays of the tables involved, for
ease of editing in the future.
2024-06-10 16:36:22 -04:00
Jeff Bradberry
9e22865d2e Filter out the relations within the known topology tables 2024-06-10 16:36:22 -04:00
Jeff Bradberry
ee3e3e1516 First cut at detecting which foreign keys enter and exit the topology tables 2024-06-10 16:36:22 -04:00
Jeff Bradberry
4a8f6e45f8 Move the "test" files into their own directory 2024-06-10 16:36:22 -04:00
Jeff Bradberry
6a317cca1b Remove the role_chain.py module
it wound up being unworkable, and I think ultimately we only need to
check the immediate parentage of each role.
2024-06-10 16:36:22 -04:00
Jeff Bradberry
d67af79451 Attempt to correct any crosslinked parents
I think that rebuild_role_ancestor_list() will then correctly update
all of the affected Role.ancestors.
2024-06-10 16:36:22 -04:00
Jeff Bradberry
fe77fda7b2 Exclude more files in the .gitignore 2024-06-10 16:36:22 -04:00
Jeff Bradberry
f613b76baa Modify the role parent check logic to stay in the roles as much as possible
since the foreign keys to the roles from the resources can make us go
wrong almost immediately.
2024-06-10 16:36:22 -04:00
Jeff Bradberry
054cbe69d7 Exclude the team grant false positives
The results in my test now look correct.
2024-06-10 16:36:22 -04:00
Jeff Bradberry
87e9dcb6d7 Attempt to more thoroughly check the parents of each Role
This version, however, has false positives because Roles become
children of Team.member_role when a Role is granted to a Team.
2024-06-10 16:36:22 -04:00
Jeff Bradberry
c8829b057e First cut at checking the role hierarchy
Checking if parents and implicit_parents are consistent with ancestors.
2024-06-10 16:36:22 -04:00
Jeff Bradberry
a0b376a6ca Set up a scenario where IG.use_role_id points to something no longer there
This is actually happening for one customer, though it seems like it
shouldn't be if the foreign key constraint is set back up properly.
In order to recreate it, I had to add the constraint back with 'NOT
VALID' added on to prevent the check.
2024-06-10 16:36:22 -04:00
Jeff Bradberry
d675207f99 Handle the case where a resource points to a Role which isn't in the db 2024-06-10 16:36:22 -04:00
Jeff Bradberry
20504042c9 Graph out only the parent/child chains from a given Role
Doing the entire graph is too much on any system with real amounts of Roles.
2024-06-10 16:36:22 -04:00
Jeff Bradberry
0e87e97820 Check for a broken ContentType -> model and log and skip
Apparently this has happened to a customer, per Nate Becker.
2024-06-10 16:36:22 -04:00
Jeff Bradberry
1f154742df Make the role_chain.py script emit a Graphviz file
of the Role relationships.
2024-06-10 16:36:22 -04:00
Jeff Bradberry
85fc81aab1 Start a new script that can be used to examine a Role's ancestry 2024-06-10 16:36:22 -04:00
Jeff Bradberry
5cfeeb3e87 Treat resources with null role fks differently
The underlying role should be re-linked, instead of treated as orphaned.
2024-06-10 16:36:22 -04:00
Jeff Bradberry
a8c07b06d8 Set up an enhanced version of Seth's bad role scenario 2024-06-10 16:36:22 -04:00
Jeff Bradberry
53c5feaf6b Set up Seth's bad role scenario 2024-06-10 16:36:22 -04:00
Jeff Bradberry
6f57aaa8f5 When checking reverse links, treat duplicate Roles different from bad ones
Also, null out the generic foreign key on orphaned roles before deleting.
2024-06-10 16:36:22 -04:00
Jeff Bradberry
bea74a401d Attempt to be more efficient about grouping the content types
Also, attempt to rebuild the role ancestors in the fixup script.
2024-06-10 16:36:22 -04:00
Jeff Bradberry
54e85813c8 First full check script
This version emits the first fix-up script as its output.
2024-06-10 16:36:22 -04:00
Jeff Bradberry
b69ed08fe5 Specifically examine the InstanceGroup roles 2024-06-10 16:36:22 -04:00
Jeff Bradberry
de25408a23 Print out details of all of the crosslinked roles 2024-06-10 16:36:22 -04:00
Jeff Bradberry
b17f0a188b Initial check 2024-06-10 16:36:22 -04:00
Hao Liu
fb860d76ce Add receptor work list command to sosreport (#15207) 2024-06-10 19:39:24 +00:00
Artsiom Musin
451f20ce0f Use patch to update users in awx cli 2024-06-10 14:54:10 -04:00
Hao Liu
c1dc0c7b86 Periodically sync from share resource provider (#15264)
* Periodically sync from share resource provider

- add periodic task `periodic_resource_sync`, run once every 15 min
- if `RESOURCE_SERVER` is not configured the sync will not run
- the sync runs on only 1 node

example RESOURCE_SERVER configuration
```
RESOURCE_SERVER = {
    "URL": "<resource server url>",
    "SECRET_KEY": "<resource server auth token>",
    "VALIDATE_HTTPS": <True/False>,
}
RESOURCE_SERVICE_PATH = <resource_service_path>
```
2024-06-10 18:10:57 +00:00
Seth Foster
d65ea2a3d5 Fix race condition when deleting schedules (#15259)
If more than one schedule for a unified job template
is removed at once, a race condition can arise.

example scenario:  delete schedules with ids 7 and 8
- unified job template next_schedule is currently 7
- on delete of schedule 7, update_computed_fields will try to set
next_schedule to 8
- but while this logic is occurring, another transaction
is deleting 8

This leads to a db IntegrityError

The solution here is to call select_for_update() on the
next schedule, so that 8 cannot be deleted until
the transaction for deleting 7 is completed.

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2024-06-09 20:39:18 -04:00
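The locking fix described above can be illustrated outside Django. The following is a rough sketch, not AWX's code: on Postgres, Django's `select_for_update()` makes the competing transaction block until the first one commits, whereas this sqlite3 stand-in uses a zero busy-timeout so the contention surfaces as an immediate error.

```python
import os
import sqlite3
import tempfile

# Two connections simulate the two concurrent schedule-delete transactions.
path = os.path.join(tempfile.mkdtemp(), "schedules.db")
a = sqlite3.connect(path, timeout=0, isolation_level=None)
b = sqlite3.connect(path, timeout=0, isolation_level=None)
a.execute("CREATE TABLE schedule (id INTEGER PRIMARY KEY)")
a.executemany("INSERT INTO schedule VALUES (?)", [(7,), (8,)])

# Transaction A deletes schedule 7 and holds a write lock while it would
# recompute next_schedule (the select_for_update() analogue).
a.execute("BEGIN IMMEDIATE")
a.execute("DELETE FROM schedule WHERE id = 7")

# Transaction B tries to delete schedule 8 concurrently and is refused
# until A finishes -- which is what prevents the IntegrityError.
blocked = False
try:
    b.execute("BEGIN IMMEDIATE")
    b.execute("DELETE FROM schedule WHERE id = 8")
except sqlite3.OperationalError:  # "database is locked"
    blocked = True
a.execute("COMMIT")
```

Postgres serializes the two transactions rather than erroring, so the second delete simply waits for the first to commit.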
Dave
8827ae7554 Replace REMOTE_ADDR with ansible_base.lib.utils.requests.get_remote_host (#15175) 2024-06-06 14:47:04 +01:00
Jeff Bradberry
4915262af1 Do each batch of the HostMetric updates in a transaction
It looks like we can't currently do upserts without dropping to raw
SQL, but if we wrap each batch in a transaction, that should ensure
that each is updated with the correct count.
2024-06-05 14:15:21 -04:00
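The per-batch transaction idea can be sketched with sqlite3; the table and column names below are illustrative, not the actual HostMetric schema.

```python
import sqlite3

conn = sqlite3.connect(":memory:", isolation_level=None)
conn.execute("CREATE TABLE host_metric (hostname TEXT PRIMARY KEY, automated_counter INTEGER)")
conn.executemany("INSERT INTO host_metric VALUES (?, ?)", [("h1", 0), ("h2", 0), ("h3", 0)])

def update_in_batches(rows, batch_size=2):
    # Without true upserts, wrap each batch of updates in its own
    # transaction so every batch lands atomically with correct counts.
    for i in range(0, len(rows), batch_size):
        batch = rows[i:i + batch_size]
        conn.execute("BEGIN")  # one transaction per batch
        try:
            conn.executemany(
                "UPDATE host_metric SET automated_counter = automated_counter + ? WHERE hostname = ?",
                batch,
            )
            conn.execute("COMMIT")
        except sqlite3.Error:
            conn.execute("ROLLBACK")  # a failed batch does not corrupt the others
            raise

update_in_batches([(5, "h1"), (3, "h2"), (1, "h3")])
```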
Seth Foster
d43c91e1a5 Option for dev env to enable ssl for postgres (#15151)
PG_TLS=true make docker-compose

This will add some extra startup commands
for the postgres container to generate a key and
cert to use for postgres connections.
It will also mount in pgssl.conf which has ssl configuration.

This can be useful for debugging issues that only surface
when using ssl postgres connections.
2024-06-05 12:48:08 -04:00
Seth Foster
b470ca32af Prevent modifying shared resources when using platform ingress (#15234)
* Prevent modifying shared resources

Adds a class decorator to prevent modifying shared resources
when gateway is being used.

AWX_DIRECT_SHARED_RESOURCE_MANAGEMENT_ENABLED is the setting
to enable/disable this feature.

Works by overriding these view methods:
- create
- delete
- perform_update

create and delete are overridden to raise a
PermissionDenied exception.

perform_update is overridden to check if any shared
fields are being modified, and raise a PermissionDenied
exception if so.

Additional changes:

Prevent sso conf from registering external authentication related settings if
AWX_DIRECT_SHARED_RESOURCE_MANAGEMENT_ENABLED is False

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
Co-authored-by: Hao Liu <44379968+TheRealHaoLiu@users.noreply.github.com>
2024-06-05 12:44:01 -04:00
Satoe Imaishi
793777bec7 Add cython to VENV_BOOTSTRAP for grpcio (#15256) 2024-06-05 11:04:15 -04:00
Jake Jackson
6dc4a4508d fix cve 2024-24680 (#15250) 2024-06-04 15:44:09 -04:00
Hao Liu
cf09a4220d Repin cython due to https://github.com/yaml/pyyaml/pull/702 (#15248)
* Revert "Unpin cypthon (#15246)"

This reverts commit 659c3b64de.

* Pin grpcio

Avoid cython 3 due to https://github.com/yaml/pyyaml/pull/702

* Delete asyncpg.txt
2024-06-03 19:42:20 +00:00
Hao Liu
659c3b64de Unpin cypthon (#15246)
* Unpin cython

* Remove unused asyncpg

* Remove asyncpg license file
2024-06-03 11:41:56 -04:00
Ethem Cem Özkan
37ad690d09 Add AWS SNS notification support for webhook (#15184)
Support for AWS SNS notifications. SNS is a widespread service used to integrate with other AWS services (e.g. Lambdas). This support unlocks use cases like triggering Lambda functions, especially when AWX is deployed on EKS.

Decisions:

Data Structure
- I preferred using the same structure as Webhook for message body data because it contains all job details. For now, I directly linked to Webhook to avoid duplication, but I am open to suggestions.

AWS authentication
- To support non-AWS-native environments, I added configuration options for the AWS secret key, ID, and session token. When entered, these values are supplied to the underlying boto3 SNS client. If not entered, it falls back to the default authentication chain to support native AWS environments. Properly configured EKS pods are created with temporary credentials that the default authentication chain can pick up automatically.

---------

Signed-off-by: Ethem Cem Ozkan <ethemcem.ozkan@gmail.com>
2024-06-02 02:48:56 +00:00
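A rough sketch of the flow described above. The message-body fields here are hypothetical stand-ins (the real notification reuses the webhook body structure), and the commented-out publish call shows where boto3's default credential chain takes over when no keys are configured.

```python
import json

def build_sns_message(job):
    # Hypothetical job fields for illustration only; the actual AWX body
    # mirrors the webhook notification payload.
    return json.dumps(
        {
            "id": job["id"],
            "name": job["name"],
            "status": job["status"],
            "url": job["url"],
        }
    )

message = build_sns_message(
    {"id": 42, "name": "Demo Job Template", "status": "successful", "url": "https://awx.example.com/#/jobs/42"}
)

# Publishing would then look roughly like:
#   import boto3
#   # With no explicit keys, boto3 falls back to the default credential
#   # chain (env vars, instance/pod credentials on EKS, etc.).
#   boto3.client("sns", region_name="us-east-1").publish(TopicArn=topic_arn, Message=message)
```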
Akira Yokochi
7845ec7e01 Modify the link to terraform_state inventory plugin (#15241)
fix link to terraform_state inventory plugin
2024-06-01 22:36:30 -04:00
Chris Meyers
a15bcf1d55 Add requirements comment 2024-05-31 13:55:17 -04:00
Chris Meyers
7b3fb2c2a8 Add example grafana dashboard
* Per-service log view
2024-05-31 13:55:17 -04:00
Chris Meyers
6df47c8449 Rework which loggers we sent to OTEL
* Send all propagate=False loggers to OTEL AND the awx logger
2024-05-31 13:55:17 -04:00
Chris Meyers
cae42653bf Add recording
* Always output awx logs to a file via otel
* That log file can always be later replayed into a product that
  supports otlp at a later date.
* Useful when you find a problem that you need a time series DB to help
  find and solve.
* Useful if a community member or customer has a problem where a time
  series db would be helpful. You can take a "remote" users log and
  replay it locally for analysis.
2024-05-31 13:55:17 -04:00
Chris Meyers
da46a29f40 Move requirements out of dev and into mainline
* Add new package license files
2024-05-31 13:55:17 -04:00
Chris Meyers
0eb465531c Centralized logging via otel 2024-05-31 13:55:17 -04:00
Hao Liu
d0fe0ed796 Add check_instance_ready management command (#15238)
- throw exception and return 1 if instance not ready
- return 0 if ready
2024-05-31 09:29:40 -04:00
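The exit-code contract above can be sketched as follows. The helper names and the `ready` field are hypothetical; the real implementation is a Django management command that inspects the Instance record.

```python
import sys

def check_instance_ready(instance):
    # Raise if the instance has not registered as ready.
    if not instance.get("ready", False):
        raise RuntimeError(f"Instance {instance.get('hostname')} is not ready")

def main(instance):
    # Mirror the documented contract: exception + exit code 1 if not
    # ready, exit code 0 if ready.
    try:
        check_instance_ready(instance)
    except RuntimeError as exc:
        print(exc, file=sys.stderr)
        return 1
    return 0
```

A wrapper script (or Kubernetes readiness probe) can then key off the return code alone.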
Chris Meyers
ceafa14c9d Use settings fixture in tests
* Otherwise, settings value changes bleed over into other tests.
* Remove django.conf settings import so that we do not accidentally
  forget to use the settings fixture.
2024-05-30 14:10:35 -05:00
Chris Meyers
08e1454098 Make named url work with optional url prefix
* Handle named url sub-resources
* i.e. /api/v2/inventories/my_inventory++Default/hosts/
2024-05-29 12:39:25 -05:00
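The named-URL idea in that example can be sketched minimally: an object is addressed as `<name>++<organization>` instead of by numeric id. AWX's real parsing is considerably more involved (configurable format, nested resources); this only illustrates the separator.

```python
def parse_named_url(identifier):
    # "my_inventory++Default" -> lookup by name within an organization;
    # a plain integer -> ordinary primary-key lookup.
    if "++" in identifier:
        name, org = identifier.split("++", 1)
        return {"name": name, "organization": org}
    return {"pk": int(identifier)}
```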
Harshith u
776b661fb3 use optional api prefix in collection if set as environ vairable (#15205)
* use optional api prefix if set as environ variable

* Different default depending on collection type
2024-05-29 11:54:05 -04:00
Hao Liu
af6ccdbde5 Fix galaxy publishing (#15233)
- switch to the galaxy search API to determine whether the version we want to publish already exists
- switch from a github action variable to an env var for easier copy-and-paste testing
2024-05-28 15:27:34 -04:00
Matthew Jones
559ab3564b Include Kube credentials in the inventory source picker (#15223) 2024-05-28 14:05:24 -04:00
Alan Rominger
208ef0ce25 Update test so that DAB change can merge (#15222) 2024-05-28 11:53:01 -04:00
Alexander Pykavy
c3d9aa54d8 Mention in the docs that you can skip make docker-compose-build (#15149)
Signed-off-by: Alexander Pykavy <aleksandrpykavyj@gmail.com>
2024-05-22 19:33:13 +00:00
irozet12
66efe7198a Wrap long line to fit help window (#14597) (#15169)
Wrap long line to fit description window (#14597)

Co-authored-by: Ирина Розет <irozet@astralinux.ru>
2024-05-22 19:31:03 +00:00
Beni ~HB9HNT
adf930ee42 awxkit: replace deprecated locale.format() with locale.format_string() to fix human output on Python 3.12 (#15170)
Replace deprecated locale.format with locale.format_string

This will be removed in Python 3.12 and will break human output unless fixed.
2024-05-22 19:27:31 +00:00
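The replacement is a drop-in rename, since `locale.format_string()` accepts the same arguments:

```python
import locale

locale.setlocale(locale.LC_ALL, "C")

# locale.format() was deprecated and is removed in Python 3.12;
# locale.format_string() is the supported spelling.
formatted = locale.format_string("%d", 1234567, grouping=True)
# In the "C" locale there is no thousands separator, so grouping is a no-op here;
# under e.g. "en_US.UTF-8" this would render "1,234,567".
```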
Hao Liu
892410477a Fix promote from release event (#15215) 2024-05-22 18:58:11 +00:00
Seth Foster
0d4f653794 Fix up ansible-test sanity checks due to ansible 2.17 release (#15208)
* Fix up ansible sanity checks

* Fix awx-collection test failure

* Add ignore for ansible-test 2.17 

---------

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
Co-authored-by: Hao Liu <44379968+TheRealHaoLiu@users.noreply.github.com>
2024-05-21 15:05:59 -04:00
Alan Rominger
8de8f6dce2 Update a few dev requirements (#15203)
* Update a few dev requirements

* Fix test failures due to upgrade

* Update patterns for mocker usage
2024-05-20 23:37:02 +00:00
Hao Liu
fc9064e27f Allow wsrelay to fail without FATAL (#15191)
We have not identified the root cause of the wsrelay failure, but attempting to make wsrelay restart itself resulted in postgres and redis connection leaks. We were not able to fully identify where the redis connection leak comes from, so we are reverting back to failing; removing `startsecs 30` will prevent wsrelay from going FATAL.
2024-05-20 23:34:12 +00:00
TVo
7de350dc3e Added docs for new RBAC changes (#15150)
* Added docs for new RBAC changes

* Added UI changes with screens and API endpoints with sample commands.

* Update docs/docsite/rst/userguide/rbac.rst

Co-authored-by: Vidya Nambiar <43621546+vidyanambiar@users.noreply.github.com>

* Incorporated review feedback from @vidyanambiar.

---------

Co-authored-by: Vidya Nambiar <43621546+vidyanambiar@users.noreply.github.com>
2024-05-17 20:10:16 -04:00
Michael Anstis
d4bdaad4d8 Fix success_url_allowed_hosts set instantiation (#15196)
Co-authored-by: Michael Anstis <manstis@redhat.com>
2024-05-16 12:08:50 -04:00
Bikouo Aubin
a9b2ffa3e9 Fix terraform backend credential issue (#15141)
fix issue introduced by PR15055
2024-05-15 15:19:18 -04:00
Sean Sullivan
1b8d409043 Add skip authorization option to collection application module (#15190) 2024-05-15 09:29:00 -04:00
dependabot[bot]
da2bccf5a8 Bump jinja2 from 3.1.3 to 3.1.4 in /docs/docsite (#15168)
Bumps [jinja2](https://github.com/pallets/jinja) from 3.1.3 to 3.1.4.
- [Release notes](https://github.com/pallets/jinja/releases)
- [Changelog](https://github.com/pallets/jinja/blob/main/CHANGES.rst)
- [Commits](https://github.com/pallets/jinja/compare/3.1.3...3.1.4)

---
updated-dependencies:
- dependency-name: jinja2
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-05-14 14:26:26 -04:00
Hao Liu
a2f083bd8e Fix podman failure in development environment (#15188)
```
ERRO[0000] path "/var/lib/awx/.config" exists and it is not owned by the current user
```
started to happen with podman 5

it seems that the config files are no longer needed; removing them fixes the problem
2024-05-14 14:18:48 -04:00
Michael Anstis
4d641b6cf5 Support Django logout redirects (#15148)
* Allowed hosts for logout redirects can now be set via the LOGOUT_ALLOWED_HOSTS setting

Authored-by: Michael Anstis <manstis@redhat.com>
Co-authored-by: Hao Liu <44379968+TheRealHaoLiu@users.noreply.github.com>
2024-05-13 13:03:27 -04:00
Elijah DeLee
439c3f0c23 Skip 3 expensive calls for jobs saving in 'waiting' status on UnifiedJob (#15174)
skip update parent logic for 'waiting' on UnifiedJob

by not looking up "status_before" from the previous instance
we save 2 to 3 expensive calls (the self lookup of the old state, the lookup
of the parent, and the update to the parent if allow_simultaneous == False or status == 'waiting')
2024-05-13 10:26:03 -04:00
jessicamack
946bbe3560 Clean up settings file (#15135)
remove unneeded settings
2024-05-10 11:25:15 -04:00
James
20f054d600 Expose websockets on api prefix v2 2024-05-01 10:44:51 -04:00
148 changed files with 6994 additions and 2277 deletions

@@ -29,7 +29,7 @@ jobs:
   - name: Set GitHub Env vars if release event
     if: ${{ github.event_name == 'release' }}
     run: |
-      echo "TAG_NAME=${{ env.TAG_NAME }}" >> $GITHUB_ENV
+      echo "TAG_NAME=${{ github.event.release.tag_name }}" >> $GITHUB_ENV
   - name: Checkout awx
     uses: actions/checkout@v3
@@ -60,15 +60,18 @@ jobs:
       COLLECTION_VERSION: ${{ env.TAG_NAME }}
       COLLECTION_TEMPLATE_VERSION: true
     run: |
+      sudo apt-get install jq
       make build_collection
-      curl_with_redirects=$(curl --head -sLw '%{http_code}' https://galaxy.ansible.com/download/${{ env.collection_namespace }}-awx-${{ env.TAG_NAME }}.tar.gz | tail -1)
-      curl_without_redirects=$(curl --head -sw '%{http_code}' https://galaxy.ansible.com/download/${{ env.collection_namespace }}-awx-${{ env.TAG_NAME }}.tar.gz | tail -1)
-      if [[ "$curl_with_redirects" == "302" ]] || [[ "$curl_without_redirects" == "302" ]]; then
+      count=$(curl -s https://galaxy.ansible.com/api/v3/plugin/ansible/search/collection-versions/\?namespace\=${COLLECTION_NAMESPACE}\&name\=awx\&version\=${COLLECTION_VERSION} | jq .meta.count)
+      if [[ "$count" == "1" ]]; then
        echo "Galaxy release already done";
-      else
+      elif [[ "$count" == "0" ]]; then
        ansible-galaxy collection publish \
          --token=${{ secrets.GALAXY_TOKEN }} \
-         awx_collection_build/${{ env.collection_namespace }}-awx-${{ env.TAG_NAME }}.tar.gz;
+         awx_collection_build/${COLLECTION_NAMESPACE}-awx-${COLLECTION_VERSION}.tar.gz;
+      else
+        echo "Unexpected count from galaxy search: $count";
+        exit 1;
       fi
   - name: Set official pypi info

@@ -11,6 +11,8 @@ ignore: |
   # django template files
   awx/api/templates/instance_install_bundle/**
   .readthedocs.yaml
+  tools/loki
+  tools/otel
 extends: default

@@ -47,8 +47,14 @@ VAULT ?= false
 VAULT_TLS ?= false
 # If set to true docker-compose will also start a tacacs+ instance
 TACACS ?= false
+# If set to true docker-compose will also start an OpenTelemetry Collector instance
+OTEL ?= false
+# If set to true docker-compose will also start a Loki instance
+LOKI ?= false
 # If set to true docker-compose will install editable dependencies
 EDITABLE_DEPENDENCIES ?= false
+# If set to true, use tls for postgres connection
+PG_TLS ?= false
 VENV_BASE ?= /var/lib/awx/venv
@@ -65,7 +71,7 @@ RECEPTOR_IMAGE ?= quay.io/ansible/receptor:devel
 SRC_ONLY_PKGS ?= cffi,pycparser,psycopg,twilio
 # These should be upgraded in the AWX and Ansible venv before attempting
 # to install the actual requirements
-VENV_BOOTSTRAP ?= pip==21.2.4 setuptools==69.0.2 setuptools_scm[toml]==8.0.4 wheel==0.42.0
+VENV_BOOTSTRAP ?= pip==21.2.4 setuptools==69.0.2 setuptools_scm[toml]==8.0.4 wheel==0.42.0 cython==0.29.37
 NAME ?= awx
@@ -535,7 +541,10 @@ docker-compose-sources: .git/hooks/pre-commit
 	-e enable_vault=$(VAULT) \
 	-e vault_tls=$(VAULT_TLS) \
 	-e enable_tacacs=$(TACACS) \
+	-e enable_otel=$(OTEL) \
+	-e enable_loki=$(LOKI) \
 	-e install_editable_dependencies=$(EDITABLE_DEPENDENCIES) \
+	-e pg_tls=$(PG_TLS) \
 	$(EXTRA_SOURCES_ANSIBLE_OPTS)
 docker-compose: awx/projects docker-compose-sources

@@ -33,6 +33,7 @@ from rest_framework.negotiation import DefaultContentNegotiation
 # django-ansible-base
 from ansible_base.rest_filters.rest_framework.field_lookup_backend import FieldLookupBackend
 from ansible_base.lib.utils.models import get_all_field_names
+from ansible_base.lib.utils.requests import get_remote_host
 from ansible_base.rbac.models import RoleEvaluation, RoleDefinition
 from ansible_base.rbac.permission_registry import permission_registry
@@ -93,8 +94,9 @@ class LoggedLoginView(auth_views.LoginView):
     def post(self, request, *args, **kwargs):
         ret = super(LoggedLoginView, self).post(request, *args, **kwargs)
+        ip = get_remote_host(request)  # request.META.get('REMOTE_ADDR', None)
         if request.user.is_authenticated:
-            logger.info(smart_str(u"User {} logged in from {}".format(self.request.user.username, request.META.get('REMOTE_ADDR', None))))
+            logger.info(smart_str(u"User {} logged in from {}".format(self.request.user.username, ip)))
             ret.set_cookie(
                 'userLoggedIn', 'true', secure=getattr(settings, 'SESSION_COOKIE_SECURE', False), samesite=getattr(settings, 'USER_COOKIE_SAMESITE', 'Lax')
             )
@@ -103,12 +105,15 @@ class LoggedLoginView(auth_views.LoginView):
             return ret
         else:
             if 'username' in self.request.POST:
-                logger.warning(smart_str(u"Login failed for user {} from {}".format(self.request.POST.get('username'), request.META.get('REMOTE_ADDR', None))))
+                logger.warning(smart_str(u"Login failed for user {} from {}".format(self.request.POST.get('username'), ip)))
             ret.status_code = 401
         return ret
 class LoggedLogoutView(auth_views.LogoutView):
+    success_url_allowed_hosts = set(settings.LOGOUT_ALLOWED_HOSTS.split(",")) if settings.LOGOUT_ALLOWED_HOSTS else set()
     def dispatch(self, request, *args, **kwargs):
         original_user = getattr(request, 'user', None)
         ret = super(LoggedLogoutView, self).dispatch(request, *args, **kwargs)
@@ -208,11 +213,12 @@ class APIView(views.APIView):
             return response
         if response.status_code >= 400:
+            ip = get_remote_host(request)  # request.META.get('REMOTE_ADDR', None)
             msg_data = {
                 'status_code': response.status_code,
                 'user_name': request.user,
                 'url_path': request.path,
-                'remote_addr': request.META.get('REMOTE_ADDR', None),
+                'remote_addr': ip,
             }
         if type(response.data) is dict:
@@ -5381,7 +5381,7 @@ class NotificationSerializer(BaseSerializer):
     )
     def get_body(self, obj):
-        if obj.notification_type in ('webhook', 'pagerduty'):
+        if obj.notification_type in ('webhook', 'pagerduty', 'awssns'):
             if isinstance(obj.body, dict):
                 if 'body' in obj.body:
                     return obj.body['body']
@@ -5403,9 +5403,9 @@
     def to_representation(self, obj):
         ret = super(NotificationSerializer, self).to_representation(obj)
-        if obj.notification_type == 'webhook':
+        if obj.notification_type in ('webhook', 'awssns'):
             ret.pop('subject')
-        if obj.notification_type not in ('email', 'webhook', 'pagerduty'):
+        if obj.notification_type not in ('email', 'webhook', 'pagerduty', 'awssns'):
             ret.pop('body')
         return ret

@@ -62,6 +62,7 @@ from wsgiref.util import FileWrapper
# django-ansible-base # django-ansible-base
from ansible_base.rbac.models import RoleEvaluation, ObjectRole from ansible_base.rbac.models import RoleEvaluation, ObjectRole
from ansible_base.resource_registry.shared_types import OrganizationType, TeamType, UserType
# AWX # AWX
from awx.main.tasks.system import send_notifications, update_inventory_computed_fields from awx.main.tasks.system import send_notifications, update_inventory_computed_fields
@@ -128,6 +129,7 @@ from awx.api.views.mixin import (
from awx.api.pagination import UnifiedJobEventPagination from awx.api.pagination import UnifiedJobEventPagination
from awx.main.utils import set_environ from awx.main.utils import set_environ
logger = logging.getLogger('awx.api.views') logger = logging.getLogger('awx.api.views')
@@ -710,16 +712,81 @@ class AuthView(APIView):
return Response(data) return Response(data)
def immutablesharedfields(cls):
'''
Class decorator to prevent modifying shared resources when ALLOW_LOCAL_RESOURCE_MANAGEMENT setting is set to False.
Works by overriding these view methods:
- create
- delete
- perform_update
create and delete are overridden to raise a PermissionDenied exception.
perform_update is overridden to check if any shared fields are being modified,
and raise a PermissionDenied exception if so.
'''
# create instead of perform_create because some of our views
# override create instead of perform_create
if hasattr(cls, 'create'):
cls.original_create = cls.create
@functools.wraps(cls.create)
def create_wrapper(*args, **kwargs):
if settings.ALLOW_LOCAL_RESOURCE_MANAGEMENT:
return cls.original_create(*args, **kwargs)
raise PermissionDenied({'detail': _('Creation of this resource is not allowed. Create this resource via the platform ingress.')})
cls.create = create_wrapper
if hasattr(cls, 'delete'):
cls.original_delete = cls.delete
@functools.wraps(cls.delete)
def delete_wrapper(*args, **kwargs):
if settings.ALLOW_LOCAL_RESOURCE_MANAGEMENT:
return cls.original_delete(*args, **kwargs)
raise PermissionDenied({'detail': _('Deletion of this resource is not allowed. Delete this resource via the platform ingress.')})
cls.delete = delete_wrapper
if hasattr(cls, 'perform_update'):
cls.original_perform_update = cls.perform_update
@functools.wraps(cls.perform_update)
def update_wrapper(*args, **kwargs):
if not settings.ALLOW_LOCAL_RESOURCE_MANAGEMENT:
view, serializer = args
instance = view.get_object()
if instance:
if isinstance(instance, models.Organization):
shared_fields = OrganizationType._declared_fields.keys()
elif isinstance(instance, models.User):
shared_fields = UserType._declared_fields.keys()
elif isinstance(instance, models.Team):
shared_fields = TeamType._declared_fields.keys()
attrs = serializer.validated_data
for field in shared_fields:
if field in attrs and getattr(instance, field) != attrs[field]:
raise PermissionDenied({field: _(f"Cannot change shared field '{field}'. Alter this field via the platform ingress.")})
return cls.original_perform_update(*args, **kwargs)
cls.perform_update = update_wrapper
return cls
@immutablesharedfields
class TeamList(ListCreateAPIView): class TeamList(ListCreateAPIView):
model = models.Team model = models.Team
serializer_class = serializers.TeamSerializer serializer_class = serializers.TeamSerializer
@immutablesharedfields
class TeamDetail(RetrieveUpdateDestroyAPIView): class TeamDetail(RetrieveUpdateDestroyAPIView):
model = models.Team model = models.Team
serializer_class = serializers.TeamSerializer serializer_class = serializers.TeamSerializer
@immutablesharedfields
class TeamUsersList(BaseUsersList): class TeamUsersList(BaseUsersList):
model = models.User model = models.User
serializer_class = serializers.UserSerializer serializer_class = serializers.UserSerializer
@@ -1101,6 +1168,7 @@ class ProjectCopy(CopyAPIView):
copy_return_serializer_class = serializers.ProjectSerializer copy_return_serializer_class = serializers.ProjectSerializer
@immutablesharedfields
class UserList(ListCreateAPIView): class UserList(ListCreateAPIView):
model = models.User model = models.User
serializer_class = serializers.UserSerializer serializer_class = serializers.UserSerializer
@@ -1271,7 +1339,16 @@ class UserRolesList(SubListAttachDetachAPIView):
user = get_object_or_400(models.User, pk=self.kwargs['pk']) user = get_object_or_400(models.User, pk=self.kwargs['pk'])
role = get_object_or_400(models.Role, pk=sub_id) role = get_object_or_400(models.Role, pk=sub_id)
credential_content_type = ContentType.objects.get_for_model(models.Credential) content_types = ContentType.objects.get_for_models(models.Organization, models.Team, models.Credential) # dict of {model: content_type}
# Prevent user to be associated with team/org when ALLOW_LOCAL_RESOURCE_MANAGEMENT is False
if not settings.ALLOW_LOCAL_RESOURCE_MANAGEMENT:
for model in [models.Organization, models.Team]:
ct = content_types[model]
if role.content_type == ct and role.role_field in ['member_role', 'admin_role']:
data = dict(msg=_(f"Cannot directly modify user membership to {ct.model}. Direct shared resource management disabled"))
return Response(data, status=status.HTTP_403_FORBIDDEN)
credential_content_type = content_types[models.Credential]
if role.content_type == credential_content_type:
    if 'disassociate' not in request.data and role.content_object.organization and user not in role.content_object.organization.member_role:
        data = dict(msg=_("You cannot grant credential access to a user not in the credentials' organization"))
@@ -1343,6 +1420,7 @@ class UserActivityStreamList(SubListAPIView):
return qs.filter(Q(actor=parent) | Q(user__in=[parent]))
@immutablesharedfields
class UserDetail(RetrieveUpdateDestroyAPIView):
    model = models.User
    serializer_class = serializers.UserSerializer
@@ -4295,7 +4373,15 @@ class RoleUsersList(SubListAttachDetachAPIView):
user = get_object_or_400(models.User, pk=sub_id)
role = self.get_parent_object()
-credential_content_type = ContentType.objects.get_for_model(models.Credential)
+content_types = ContentType.objects.get_for_models(models.Organization, models.Team, models.Credential) # dict of {model: content_type}
if not settings.ALLOW_LOCAL_RESOURCE_MANAGEMENT:
for model in [models.Organization, models.Team]:
ct = content_types[model]
if role.content_type == ct and role.role_field in ['member_role', 'admin_role']:
data = dict(msg=_(f"Cannot directly modify user membership to {ct.model}. Direct shared resource management disabled"))
return Response(data, status=status.HTTP_403_FORBIDDEN)
credential_content_type = content_types[models.Credential]
if role.content_type == credential_content_type:
    if 'disassociate' not in request.data and role.content_object.organization and user not in role.content_object.organization.member_role:
        data = dict(msg=_("You cannot grant credential access to a user not in the credentials' organization"))
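The guard added in the two hunks above can be summarized as: when `ALLOW_LOCAL_RESOURCE_MANAGEMENT` is off, direct attachment of users to organization/team membership or admin roles is rejected, while other role types (for example credential roles) are unaffected. A minimal sketch of that decision, with simplified names that are not the actual AWX code:

```python
# Simplified stand-in for the guard in UserRolesList/RoleUsersList above.
# content_model and role_field are hypothetical plain-string arguments.
def can_attach(content_model, role_field, allow_local_resource_management):
    is_shared_membership = content_model in ("organization", "team") and role_field in ("member_role", "admin_role")
    # Shared-resource membership may only be changed when local management is allowed.
    return allow_local_resource_management or not is_shared_membership

assert can_attach("organization", "member_role", False) is False
assert can_attach("team", "admin_role", False) is False
assert can_attach("credential", "use_role", False) is True
assert can_attach("organization", "member_role", True) is True
```

In the real views the rejected case returns an HTTP 403 response rather than a boolean.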

View File

@@ -53,15 +53,18 @@ from awx.api.serializers import (
CredentialSerializer,
)
from awx.api.views.mixin import RelatedJobsPreventDeleteMixin, OrganizationCountsMixin
from awx.api.views import immutablesharedfields
logger = logging.getLogger('awx.api.views.organization')
@immutablesharedfields
class OrganizationList(OrganizationCountsMixin, ListCreateAPIView):
    model = Organization
    serializer_class = OrganizationSerializer
@immutablesharedfields
class OrganizationDetail(RelatedJobsPreventDeleteMixin, RetrieveUpdateDestroyAPIView):
    model = Organization
    serializer_class = OrganizationSerializer
@@ -104,6 +107,7 @@ class OrganizationInventoriesList(SubListAPIView):
relationship = 'inventories'
@immutablesharedfields
class OrganizationUsersList(BaseUsersList):
    model = User
    serializer_class = UserSerializer
@@ -112,6 +116,7 @@ class OrganizationUsersList(BaseUsersList):
ordering = ('username',)
@immutablesharedfields
class OrganizationAdminsList(BaseUsersList):
    model = User
    serializer_class = UserSerializer
@@ -150,6 +155,7 @@ class OrganizationWorkflowJobTemplatesList(SubListCreateAPIView):
parent_key = 'organization'
@immutablesharedfields
class OrganizationTeamsList(SubListCreateAttachDetachAPIView):
    model = Team
    serializer_class = TeamSerializer

View File

@@ -130,9 +130,9 @@ def test_default_setting(settings, mocker):
settings.registry.register('AWX_SOME_SETTING', field_class=fields.CharField, category=_('System'), category_slug='system', default='DEFAULT')
settings_to_cache = mocker.Mock(**{'order_by.return_value': []})
-with mocker.patch('awx.conf.models.Setting.objects.filter', return_value=settings_to_cache):
+mocker.patch('awx.conf.models.Setting.objects.filter', return_value=settings_to_cache)
assert settings.AWX_SOME_SETTING == 'DEFAULT'
assert settings.cache.get('AWX_SOME_SETTING') == 'DEFAULT'
@pytest.mark.defined_in_file(AWX_SOME_SETTING='DEFAULT')
@@ -146,9 +146,9 @@ def test_setting_is_not_from_setting_file(settings, mocker):
settings.registry.register('AWX_SOME_SETTING', field_class=fields.CharField, category=_('System'), category_slug='system', default='DEFAULT')
settings_to_cache = mocker.Mock(**{'order_by.return_value': []})
-with mocker.patch('awx.conf.models.Setting.objects.filter', return_value=settings_to_cache):
+mocker.patch('awx.conf.models.Setting.objects.filter', return_value=settings_to_cache)
assert settings.AWX_SOME_SETTING == 'DEFAULT'
assert settings.registry.get_setting_field('AWX_SOME_SETTING').defined_in_file is False
def test_empty_setting(settings, mocker):
@@ -156,10 +156,10 @@ def test_empty_setting(settings, mocker):
settings.registry.register('AWX_SOME_SETTING', field_class=fields.CharField, category=_('System'), category_slug='system')
mocks = mocker.Mock(**{'order_by.return_value': mocker.Mock(**{'__iter__': lambda self: iter([]), 'first.return_value': None})})
-with mocker.patch('awx.conf.models.Setting.objects.filter', return_value=mocks):
+mocker.patch('awx.conf.models.Setting.objects.filter', return_value=mocks)
with pytest.raises(AttributeError):
    settings.AWX_SOME_SETTING
assert settings.cache.get('AWX_SOME_SETTING') == SETTING_CACHE_NOTSET
def test_setting_from_db(settings, mocker):
@@ -168,9 +168,9 @@ def test_setting_from_db(settings, mocker):
setting_from_db = mocker.Mock(key='AWX_SOME_SETTING', value='FROM_DB')
mocks = mocker.Mock(**{'order_by.return_value': mocker.Mock(**{'__iter__': lambda self: iter([setting_from_db]), 'first.return_value': setting_from_db})})
-with mocker.patch('awx.conf.models.Setting.objects.filter', return_value=mocks):
+mocker.patch('awx.conf.models.Setting.objects.filter', return_value=mocks)
assert settings.AWX_SOME_SETTING == 'FROM_DB'
assert settings.cache.get('AWX_SOME_SETTING') == 'FROM_DB'
@pytest.mark.defined_in_file(AWX_SOME_SETTING='DEFAULT')
@@ -205,8 +205,8 @@ def test_db_setting_update(settings, mocker):
existing_setting = mocker.Mock(key='AWX_SOME_SETTING', value='FROM_DB')
setting_list = mocker.Mock(**{'order_by.return_value.first.return_value': existing_setting})
-with mocker.patch('awx.conf.models.Setting.objects.filter', return_value=setting_list):
+mocker.patch('awx.conf.models.Setting.objects.filter', return_value=setting_list)
settings.AWX_SOME_SETTING = 'NEW-VALUE'
assert existing_setting.value == 'NEW-VALUE'
existing_setting.save.assert_called_with(update_fields=['value'])
@@ -217,8 +217,8 @@ def test_db_setting_deletion(settings, mocker):
settings.registry.register('AWX_SOME_SETTING', field_class=fields.CharField, category=_('System'), category_slug='system')
existing_setting = mocker.Mock(key='AWX_SOME_SETTING', value='FROM_DB')
-with mocker.patch('awx.conf.models.Setting.objects.filter', return_value=[existing_setting]):
+mocker.patch('awx.conf.models.Setting.objects.filter', return_value=[existing_setting])
del settings.AWX_SOME_SETTING
assert existing_setting.delete.call_count == 1
@@ -283,10 +283,10 @@ def test_sensitive_cache_data_is_encrypted(settings, mocker):
# use its primary key as part of the encryption key
setting_from_db = mocker.Mock(pk=123, key='AWX_ENCRYPTED', value='SECRET!')
mocks = mocker.Mock(**{'order_by.return_value': mocker.Mock(**{'__iter__': lambda self: iter([setting_from_db]), 'first.return_value': setting_from_db})})
-with mocker.patch('awx.conf.models.Setting.objects.filter', return_value=mocks):
+mocker.patch('awx.conf.models.Setting.objects.filter', return_value=mocks)
cache.set('AWX_ENCRYPTED', 'SECRET!')
assert cache.get('AWX_ENCRYPTED') == 'SECRET!'
assert native_cache.get('AWX_ENCRYPTED') == 'FRPERG!'
def test_readonly_sensitive_cache_data_is_encrypted(settings):
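The repeated change in this test file drops the `with` statement because pytest-mock's `mocker.patch` is not a context manager: it starts the patch immediately and undoes it automatically at test teardown. A rough stand-in using the stdlib `unittest.mock` machinery that pytest-mock wraps (the `Config` class is a hypothetical example):

```python
from unittest import mock

class Config:
    VALUE = "original"

# mocker.patch(...) behaves roughly like start() now, stop() at teardown:
patcher = mock.patch.object(Config, "VALUE", "patched")
patcher.start()                   # patch is active from here on
assert Config.VALUE == "patched"  # no `with` block needed
patcher.stop()                    # pytest-mock calls this for you after the test
assert Config.VALUE == "original"
```

Keeping the old `with mocker.patch(...)` form would have treated the returned `MagicMock` as a context manager, so the patch's scope did not match the test's intent.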

View File

@@ -14,7 +14,7 @@ __all__ = [
'STANDARD_INVENTORY_UPDATE_ENV',
]
-CLOUD_PROVIDERS = ('azure_rm', 'ec2', 'gce', 'vmware', 'openstack', 'rhv', 'satellite6', 'controller', 'insights', 'terraform')
+CLOUD_PROVIDERS = ('azure_rm', 'ec2', 'gce', 'vmware', 'openstack', 'rhv', 'satellite6', 'controller', 'insights', 'terraform', 'openshift_virtualization')
PRIVILEGE_ESCALATION_METHODS = [
    ('sudo', _('Sudo')),
    ('su', _('Su')),

View File

@@ -252,7 +252,7 @@ class ImplicitRoleField(models.ForeignKey):
kwargs.setdefault('related_name', '+')
kwargs.setdefault('null', 'True')
kwargs.setdefault('editable', False)
-kwargs.setdefault('on_delete', models.CASCADE)
+kwargs.setdefault('on_delete', models.SET_NULL)
super(ImplicitRoleField, self).__init__(*args, **kwargs)
def deconstruct(self):

View File

@@ -0,0 +1,12 @@
from django.core.management.base import BaseCommand, CommandError
from awx.main.models.ha import Instance
class Command(BaseCommand):
help = 'Check if the task manager instance is ready throw error if not ready, can be use as readiness probe for k8s.'
def handle(self, *args, **options):
if Instance.objects.me().node_state != Instance.States.READY:
raise CommandError('Instance is not ready') # so that return code is not 0
return

View File

@@ -101,8 +101,9 @@ class Command(BaseCommand):
migrating = bool(executor.migration_plan(executor.loader.graph.leaf_nodes()))
connection.close() # Because of async nature, main loop will use new connection, so close this
except Exception as exc:
-    logger.warning(f'Error on startup of run_wsrelay (error: {exc}), retry in 10s...')
-    time.sleep(10)
+    time.sleep(10) # Prevent supervisor from restarting the service too quickly and the service to enter FATAL state
+    # sleeping before logging because logging rely on setting which require database connection...
+    logger.warning(f'Error on startup of run_wsrelay (error: {exc}), slept for 10s...')
return
# In containerized deployments, migrations happen in the task container, # In containerized deployments, migrations happen in the task container,
@@ -121,13 +122,14 @@ class Command(BaseCommand):
return
try:
-    my_hostname = Instance.objects.my_hostname()
+    my_hostname = Instance.objects.my_hostname() # This relies on settings.CLUSTER_HOST_ID which requires database connection
    logger.info('Active instance with hostname {} is registered.'.format(my_hostname))
except RuntimeError as e:
    # the CLUSTER_HOST_ID in the task, and web instance must match and
    # ensure network connectivity between the task and web instance
-    logger.info('Unable to return currently active instance: {}, retry in 5s...'.format(e))
-    time.sleep(5)
+    time.sleep(10) # Prevent supervisor from restarting the service too quickly and the service to enter FATAL state
+    # sleeping before logging because logging rely on setting which require database connection...
+    logger.warning(f"Unable to return currently active instance: {e}, slept for 10s before return.")
    return
if options.get('status'):
@@ -166,12 +168,14 @@ class Command(BaseCommand):
WebsocketsMetricsServer().start()
-while True:
-    try:
-        asyncio.run(WebSocketRelayManager().run())
-    except KeyboardInterrupt:
-        logger.info('Shutting down Websocket Relayer')
-        break
-    except Exception as e:
-        logger.exception('Error in Websocket Relayer, exception: {}. Restarting in 10 seconds'.format(e))
-        time.sleep(10)
+try:
+    logger.info('Starting Websocket Relayer...')
+    websocket_relay_manager = WebSocketRelayManager()
+    asyncio.run(websocket_relay_manager.run())
+except KeyboardInterrupt:
+    logger.info('Terminating Websocket Relayer')
+except BaseException as e: # BaseException is used to catch all exceptions including asyncio.CancelledError
+    time.sleep(10) # Prevent supervisor from restarting the service too quickly and the service to enter FATAL state
+    # sleeping before logging because logging rely on setting which require database connection...
+    logger.warning(f"Encounter error while running Websocket Relayer {e}, slept for 10s...")
+return
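The hunks in this file replace the in-process retry loop with a single attempt that sleeps before exiting, because supervisord treats a program that exits too quickly and too often as FATAL and stops restarting it. A toy stand-in for the pattern (not the AWX code; `start` and `log` are hypothetical callables):

```python
import time

def run_service(start, log, delay=10):
    """Run once; on failure, sleep before returning so the supervisor
    sees a slow exit instead of a rapid crash loop."""
    try:
        start()
    except KeyboardInterrupt:
        log("Terminating service")
    except BaseException as exc:  # also catches asyncio.CancelledError
        time.sleep(delay)         # slow the restart cycle down before returning
        log(f"Service failed: {exc}, slept for {delay}s")

messages = []
# A start() callable that raises immediately, simulating a lost DB connection.
run_service(lambda: (_ for _ in ()).throw(RuntimeError("db down")), messages.append, delay=0)
assert messages == ["Service failed: db down, slept for 0s"]
```

Letting the process exit (and supervisord restart it) after the sleep keeps one restart policy instead of two competing ones.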

View File

@@ -6,7 +6,7 @@ import logging
import threading
import time
import urllib.parse
-from pathlib import Path
+from pathlib import Path, PurePosixPath
from django.conf import settings
from django.contrib.auth import logout
@@ -138,14 +138,36 @@ class URLModificationMiddleware(MiddlewareMixin):
@classmethod
def _convert_named_url(cls, url_path):
-url_units = url_path.split('/')
-# If the identifier is an empty string, it is always invalid.
-if len(url_units) < 6 or url_units[1] != 'api' or url_units[2] not in ['v2'] or not url_units[4]:
-    return url_path
-resource = url_units[3]
-if resource in settings.NAMED_URL_MAPPINGS:
-    url_units[4] = cls._named_url_to_pk(settings.NAMED_URL_GRAPH[settings.NAMED_URL_MAPPINGS[resource]], resource, url_units[4])
-return '/'.join(url_units)
+default_prefix = PurePosixPath('/api/v2/')
+optional_prefix = PurePosixPath(f'/api/{settings.OPTIONAL_API_URLPATTERN_PREFIX}/v2/')
+url_path_original = url_path
+url_path = PurePosixPath(url_path)
+if set(optional_prefix.parts).issubset(set(url_path.parts)):
+    url_prefix = optional_prefix
+elif set(default_prefix.parts).issubset(set(url_path.parts)):
+    url_prefix = default_prefix
+else:
+    return url_path_original
+# Remove prefix
+url_path = PurePosixPath(*url_path.parts[len(url_prefix.parts) :])
+try:
+    resource_path = PurePosixPath(url_path.parts[0])
+    name = url_path.parts[1]
+    url_suffix = PurePosixPath(*url_path.parts[2:]) # remove name and resource
+except IndexError:
+    return url_path_original
+resource = resource_path.parts[0]
+if resource in settings.NAMED_URL_MAPPINGS:
+    pk = PurePosixPath(cls._named_url_to_pk(settings.NAMED_URL_GRAPH[settings.NAMED_URL_MAPPINGS[resource]], resource, name))
+else:
+    return url_path_original
+parts = url_prefix.parts + resource_path.parts + pk.parts + url_suffix.parts
+return PurePosixPath(*parts).as_posix() + '/'
def process_request(self, request):
    old_path = request.path_info
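The rewrite above swaps manual `split('/')` index arithmetic for `PurePosixPath.parts` manipulation. A small self-contained illustration of the prefix check and stripping it relies on (the URL values here are made-up examples):

```python
from pathlib import PurePosixPath

prefix = PurePosixPath('/api/v2/')
path = PurePosixPath('/api/v2/organizations/Default/teams/')

# PurePosixPath normalizes the trailing slash away and exposes components:
assert prefix.parts == ('/', 'api', 'v2')

# The middleware's containment test, then the prefix strip:
assert set(prefix.parts).issubset(set(path.parts))
rest = PurePosixPath(*path.parts[len(prefix.parts):])
assert rest.parts == ('organizations', 'Default', 'teams')
```

`rest.parts[0]` is the resource and `rest.parts[1]` the named-URL identifier, mirroring `resource_path` and `name` in the diff.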

View File

@@ -17,49 +17,49 @@ class Migration(migrations.Migration):
model_name='organization',
name='execute_role',
field=awx.main.fields.ImplicitRoleField(
-null='True', on_delete=django.db.models.deletion.CASCADE, parent_role='admin_role', related_name='+', to='main.Role'
+null='True', on_delete=django.db.models.deletion.SET_NULL, parent_role='admin_role', related_name='+', to='main.Role'
),
),
migrations.AddField(
model_name='organization',
name='job_template_admin_role',
field=awx.main.fields.ImplicitRoleField(
-editable=False, null='True', on_delete=django.db.models.deletion.CASCADE, parent_role='admin_role', related_name='+', to='main.Role'
+editable=False, null='True', on_delete=django.db.models.deletion.SET_NULL, parent_role='admin_role', related_name='+', to='main.Role'
),
),
migrations.AddField(
model_name='organization',
name='credential_admin_role',
field=awx.main.fields.ImplicitRoleField(
-null='True', on_delete=django.db.models.deletion.CASCADE, parent_role='admin_role', related_name='+', to='main.Role'
+null='True', on_delete=django.db.models.deletion.SET_NULL, parent_role='admin_role', related_name='+', to='main.Role'
),
),
migrations.AddField(
model_name='organization',
name='inventory_admin_role',
field=awx.main.fields.ImplicitRoleField(
-null='True', on_delete=django.db.models.deletion.CASCADE, parent_role='admin_role', related_name='+', to='main.Role'
+null='True', on_delete=django.db.models.deletion.SET_NULL, parent_role='admin_role', related_name='+', to='main.Role'
),
),
migrations.AddField(
model_name='organization',
name='project_admin_role',
field=awx.main.fields.ImplicitRoleField(
-null='True', on_delete=django.db.models.deletion.CASCADE, parent_role='admin_role', related_name='+', to='main.Role'
+null='True', on_delete=django.db.models.deletion.SET_NULL, parent_role='admin_role', related_name='+', to='main.Role'
),
),
migrations.AddField(
model_name='organization',
name='workflow_admin_role',
field=awx.main.fields.ImplicitRoleField(
-null='True', on_delete=django.db.models.deletion.CASCADE, parent_role='admin_role', related_name='+', to='main.Role'
+null='True', on_delete=django.db.models.deletion.SET_NULL, parent_role='admin_role', related_name='+', to='main.Role'
),
),
migrations.AddField(
model_name='organization',
name='notification_admin_role',
field=awx.main.fields.ImplicitRoleField(
-null='True', on_delete=django.db.models.deletion.CASCADE, parent_role='admin_role', related_name='+', to='main.Role'
+null='True', on_delete=django.db.models.deletion.SET_NULL, parent_role='admin_role', related_name='+', to='main.Role'
),
),
migrations.AlterField(
@@ -67,7 +67,7 @@ class Migration(migrations.Migration):
name='admin_role',
field=awx.main.fields.ImplicitRoleField(
null='True',
-on_delete=django.db.models.deletion.CASCADE,
+on_delete=django.db.models.deletion.SET_NULL,
parent_role=['singleton:system_administrator', 'organization.credential_admin_role'],
related_name='+',
to='main.Role',
@@ -77,7 +77,7 @@ class Migration(migrations.Migration):
model_name='inventory',
name='admin_role',
field=awx.main.fields.ImplicitRoleField(
-null='True', on_delete=django.db.models.deletion.CASCADE, parent_role='organization.inventory_admin_role', related_name='+', to='main.Role'
+null='True', on_delete=django.db.models.deletion.SET_NULL, parent_role='organization.inventory_admin_role', related_name='+', to='main.Role'
),
),
migrations.AlterField(
@@ -85,7 +85,7 @@ class Migration(migrations.Migration):
name='admin_role',
field=awx.main.fields.ImplicitRoleField(
null='True',
-on_delete=django.db.models.deletion.CASCADE,
+on_delete=django.db.models.deletion.SET_NULL,
parent_role=['organization.project_admin_role', 'singleton:system_administrator'],
related_name='+',
to='main.Role',
@@ -96,7 +96,7 @@ class Migration(migrations.Migration):
name='admin_role',
field=awx.main.fields.ImplicitRoleField(
null='True',
-on_delete=django.db.models.deletion.CASCADE,
+on_delete=django.db.models.deletion.SET_NULL,
parent_role=['singleton:system_administrator', 'organization.workflow_admin_role'],
related_name='+',
to='main.Role',
@@ -107,7 +107,7 @@ class Migration(migrations.Migration):
name='execute_role',
field=awx.main.fields.ImplicitRoleField(
null='True',
-on_delete=django.db.models.deletion.CASCADE,
+on_delete=django.db.models.deletion.SET_NULL,
parent_role=['admin_role', 'organization.execute_role'],
related_name='+',
to='main.Role',
@@ -119,7 +119,7 @@ class Migration(migrations.Migration):
field=awx.main.fields.ImplicitRoleField(
editable=False,
null='True',
-on_delete=django.db.models.deletion.CASCADE,
+on_delete=django.db.models.deletion.SET_NULL,
parent_role=['project.organization.job_template_admin_role', 'inventory.organization.job_template_admin_role'],
related_name='+',
to='main.Role',
@@ -130,7 +130,7 @@ class Migration(migrations.Migration):
name='execute_role',
field=awx.main.fields.ImplicitRoleField(
null='True',
-on_delete=django.db.models.deletion.CASCADE,
+on_delete=django.db.models.deletion.SET_NULL,
parent_role=['admin_role', 'project.organization.execute_role', 'inventory.organization.execute_role'],
related_name='+',
to='main.Role',
@@ -142,7 +142,7 @@ class Migration(migrations.Migration):
field=awx.main.fields.ImplicitRoleField(
editable=False,
null='True',
-on_delete=django.db.models.deletion.CASCADE,
+on_delete=django.db.models.deletion.SET_NULL,
parent_role=[
'admin_role',
'execute_role',

View File

@@ -18,7 +18,7 @@ class Migration(migrations.Migration):
model_name='organization',
name='member_role',
field=awx.main.fields.ImplicitRoleField(
-editable=False, null='True', on_delete=django.db.models.deletion.CASCADE, parent_role=['admin_role'], related_name='+', to='main.Role'
+editable=False, null='True', on_delete=django.db.models.deletion.SET_NULL, parent_role=['admin_role'], related_name='+', to='main.Role'
),
),
migrations.AlterField(
@@ -27,7 +27,7 @@ class Migration(migrations.Migration):
field=awx.main.fields.ImplicitRoleField(
editable=False,
null='True',
-on_delete=django.db.models.deletion.CASCADE,
+on_delete=django.db.models.deletion.SET_NULL,
parent_role=[
'member_role',
'auditor_role',

View File

@@ -36,7 +36,7 @@ class Migration(migrations.Migration):
model_name='organization',
name='approval_role',
field=awx.main.fields.ImplicitRoleField(
-editable=False, null='True', on_delete=django.db.models.deletion.CASCADE, parent_role='admin_role', related_name='+', to='main.Role'
+editable=False, null='True', on_delete=django.db.models.deletion.SET_NULL, parent_role='admin_role', related_name='+', to='main.Role'
),
preserve_default='True',
),
@@ -46,7 +46,7 @@ class Migration(migrations.Migration):
field=awx.main.fields.ImplicitRoleField(
editable=False,
null='True',
-on_delete=django.db.models.deletion.CASCADE,
+on_delete=django.db.models.deletion.SET_NULL,
parent_role=['organization.approval_role', 'admin_role'],
related_name='+',
to='main.Role',
@@ -116,7 +116,7 @@ class Migration(migrations.Migration):
field=awx.main.fields.ImplicitRoleField(
editable=False,
null='True',
-on_delete=django.db.models.deletion.CASCADE,
+on_delete=django.db.models.deletion.SET_NULL,
parent_role=[
'member_role',
'auditor_role',
@@ -139,7 +139,7 @@ class Migration(migrations.Migration):
field=awx.main.fields.ImplicitRoleField(
editable=False,
null='True',
-on_delete=django.db.models.deletion.CASCADE,
+on_delete=django.db.models.deletion.SET_NULL,
parent_role=['singleton:system_auditor', 'organization.auditor_role', 'execute_role', 'admin_role', 'approval_role'],
related_name='+',
to='main.Role',

View File

@@ -80,7 +80,7 @@ class Migration(migrations.Migration):
field=awx.main.fields.ImplicitRoleField(
editable=False,
null='True',
-on_delete=django.db.models.deletion.CASCADE,
+on_delete=django.db.models.deletion.SET_NULL,
parent_role=['organization.job_template_admin_role'],
related_name='+',
to='main.Role',
@@ -92,7 +92,7 @@ class Migration(migrations.Migration):
field=awx.main.fields.ImplicitRoleField(
editable=False,
null='True',
-on_delete=django.db.models.deletion.CASCADE,
+on_delete=django.db.models.deletion.SET_NULL,
parent_role=['admin_role', 'organization.execute_role'],
related_name='+',
to='main.Role',
@@ -104,7 +104,7 @@ class Migration(migrations.Migration):
field=awx.main.fields.ImplicitRoleField(
editable=False,
null='True',
-on_delete=django.db.models.deletion.CASCADE,
+on_delete=django.db.models.deletion.SET_NULL,
parent_role=['organization.auditor_role', 'inventory.organization.auditor_role', 'execute_role', 'admin_role'],
related_name='+',
to='main.Role',

View File

@@ -26,7 +26,7 @@ class Migration(migrations.Migration):
model_name='organization',
name='execution_environment_admin_role',
field=awx.main.fields.ImplicitRoleField(
-editable=False, null='True', on_delete=django.db.models.deletion.CASCADE, parent_role='admin_role', related_name='+', to='main.Role'
+editable=False, null='True', on_delete=django.db.models.deletion.SET_NULL, parent_role='admin_role', related_name='+', to='main.Role'
),
preserve_default='True',
),

View File

@@ -17,7 +17,7 @@ class Migration(migrations.Migration):
field=awx.main.fields.ImplicitRoleField(
editable=False,
null='True',
-on_delete=django.db.models.deletion.CASCADE,
+on_delete=django.db.models.deletion.SET_NULL,
parent_role=[
'member_role',
'auditor_role',

View File

@@ -17,7 +17,7 @@ class Migration(migrations.Migration):
field=awx.main.fields.ImplicitRoleField(
editable=False,
null='True',
-on_delete=django.db.models.deletion.CASCADE,
+on_delete=django.db.models.deletion.SET_NULL,
parent_role=['singleton:system_administrator'],
related_name='+',
to='main.role',
@@ -30,7 +30,7 @@ class Migration(migrations.Migration):
field=awx.main.fields.ImplicitRoleField( field=awx.main.fields.ImplicitRoleField(
editable=False, editable=False,
null='True', null='True',
on_delete=django.db.models.deletion.CASCADE, on_delete=django.db.models.deletion.SET_NULL,
parent_role=['singleton:system_auditor', 'use_role', 'admin_role'], parent_role=['singleton:system_auditor', 'use_role', 'admin_role'],
related_name='+', related_name='+',
to='main.role', to='main.role',
@@ -41,7 +41,7 @@ class Migration(migrations.Migration):
model_name='instancegroup', model_name='instancegroup',
name='use_role', name='use_role',
field=awx.main.fields.ImplicitRoleField( field=awx.main.fields.ImplicitRoleField(
editable=False, null='True', on_delete=django.db.models.deletion.CASCADE, parent_role=['admin_role'], related_name='+', to='main.role' editable=False, null='True', on_delete=django.db.models.deletion.SET_NULL, parent_role=['admin_role'], related_name='+', to='main.role'
), ),
preserve_default='True', preserve_default='True',
), ),

View File

@@ -0,0 +1,51 @@
# Generated by Django 4.2.6 on 2024-05-08 07:29

from django.db import migrations, models


class Migration(migrations.Migration):
    dependencies = [
        ('main', '0192_custom_roles'),
    ]

    operations = [
        migrations.AlterField(
            model_name='notification',
            name='notification_type',
            field=models.CharField(
                choices=[
                    ('awssns', 'AWS SNS'),
                    ('email', 'Email'),
                    ('grafana', 'Grafana'),
                    ('irc', 'IRC'),
                    ('mattermost', 'Mattermost'),
                    ('pagerduty', 'Pagerduty'),
                    ('rocketchat', 'Rocket.Chat'),
                    ('slack', 'Slack'),
                    ('twilio', 'Twilio'),
                    ('webhook', 'Webhook'),
                ],
                max_length=32,
            ),
        ),
        migrations.AlterField(
            model_name='notificationtemplate',
            name='notification_type',
            field=models.CharField(
                choices=[
                    ('awssns', 'AWS SNS'),
                    ('email', 'Email'),
                    ('grafana', 'Grafana'),
                    ('irc', 'IRC'),
                    ('mattermost', 'Mattermost'),
                    ('pagerduty', 'Pagerduty'),
                    ('rocketchat', 'Rocket.Chat'),
                    ('slack', 'Slack'),
                    ('twilio', 'Twilio'),
                    ('webhook', 'Webhook'),
                ],
                max_length=32,
            ),
        ),
    ]

View File

@@ -0,0 +1,61 @@
# Generated by Django 4.2.10 on 2024-06-12 19:59

from django.db import migrations, models


class Migration(migrations.Migration):
    dependencies = [
        ('main', '0193_alter_notification_notification_type_and_more'),
    ]

    operations = [
        migrations.AlterField(
            model_name='inventorysource',
            name='source',
            field=models.CharField(
                choices=[
                    ('file', 'File, Directory or Script'),
                    ('constructed', 'Template additional groups and hostvars at runtime'),
                    ('scm', 'Sourced from a Project'),
                    ('ec2', 'Amazon EC2'),
                    ('gce', 'Google Compute Engine'),
                    ('azure_rm', 'Microsoft Azure Resource Manager'),
                    ('vmware', 'VMware vCenter'),
                    ('satellite6', 'Red Hat Satellite 6'),
                    ('openstack', 'OpenStack'),
                    ('rhv', 'Red Hat Virtualization'),
                    ('controller', 'Red Hat Ansible Automation Platform'),
                    ('insights', 'Red Hat Insights'),
                    ('terraform', 'Terraform State'),
                    ('openshift_virtualization', 'OpenShift Virtualization'),
                ],
                default=None,
                max_length=32,
            ),
        ),
        migrations.AlterField(
            model_name='inventoryupdate',
            name='source',
            field=models.CharField(
                choices=[
                    ('file', 'File, Directory or Script'),
                    ('constructed', 'Template additional groups and hostvars at runtime'),
                    ('scm', 'Sourced from a Project'),
                    ('ec2', 'Amazon EC2'),
                    ('gce', 'Google Compute Engine'),
                    ('azure_rm', 'Microsoft Azure Resource Manager'),
                    ('vmware', 'VMware vCenter'),
                    ('satellite6', 'Red Hat Satellite 6'),
                    ('openstack', 'OpenStack'),
                    ('rhv', 'Red Hat Virtualization'),
                    ('controller', 'Red Hat Ansible Automation Platform'),
                    ('insights', 'Red Hat Insights'),
                    ('terraform', 'Terraform State'),
                    ('openshift_virtualization', 'OpenShift Virtualization'),
                ],
                default=None,
                max_length=32,
            ),
        ),
    ]

View File

@@ -4,11 +4,12 @@ import datetime
 from datetime import timezone
 import logging
 from collections import defaultdict
+import itertools
 import time

 from django.conf import settings
 from django.core.exceptions import ObjectDoesNotExist
-from django.db import models, DatabaseError
+from django.db import models, DatabaseError, transaction
 from django.db.models.functions import Cast
 from django.utils.dateparse import parse_datetime
 from django.utils.text import Truncator
@@ -605,19 +606,23 @@ class JobEvent(BasePlaybookEvent):
     def _update_host_metrics(updated_hosts_list):
         from awx.main.models import HostMetric  # circular import

-        # bulk-create
         current_time = now()
-        HostMetric.objects.bulk_create(
-            [HostMetric(hostname=hostname, last_automation=current_time) for hostname in updated_hosts_list], ignore_conflicts=True, batch_size=100
-        )
-        # bulk-update
-        batch_start, batch_size = 0, 1000
-        while batch_start <= len(updated_hosts_list):
-            batched_host_list = updated_hosts_list[batch_start : (batch_start + batch_size)]
-            HostMetric.objects.filter(hostname__in=batched_host_list).update(
-                last_automation=current_time, automated_counter=models.F('automated_counter') + 1, deleted=False
-            )
-            batch_start += batch_size
+
+        # FUTURE:
+        # - Hand-rolled implementation of itertools.batched(), introduced in Python 3.12. Replace.
+        # - Ability to do ORM upserts *may* have been introduced in Django 5.0.
+        #   See the entry about `create_defaults` in https://docs.djangoproject.com/en/5.0/releases/5.0/#models.
+        #   Hopefully this will be fully ready for batch use by 5.2 LTS.
+        args = [iter(updated_hosts_list)] * 500
+        for hosts in itertools.zip_longest(*args):
+            with transaction.atomic():
+                HostMetric.objects.bulk_create(
+                    [HostMetric(hostname=hostname, last_automation=current_time) for hostname in hosts if hostname is not None], ignore_conflicts=True
+                )
+                HostMetric.objects.filter(hostname__in=hosts).update(
+                    last_automation=current_time, automated_counter=models.F('automated_counter') + 1, deleted=False
+                )

     @property
     def job_verbosity(self):

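(The `zip_longest` construction above is the "hand-rolled implementation of `itertools.batched()`" that the FUTURE comment promises to replace. A minimal standalone sketch of that pattern, with a hypothetical `batched_compat` name, outside Django:)

```python
import itertools

def batched_compat(iterable, n):
    # Pre-3.12 stand-in for itertools.batched(): n copies of the SAME iterator
    # fed to zip_longest yield successive chunks of n items. zip_longest pads
    # the final chunk with None, which must be filtered out -- mirroring the
    # `if hostname is not None` guard in the hunk above.
    args = [iter(iterable)] * n
    for batch in itertools.zip_longest(*args):
        yield [item for item in batch if item is not None]

hosts = [f"host-{i}" for i in range(7)]
batches = list(batched_compat(hosts, 3))
# two full batches of 3, then a final batch of 1
```

(Note the None-filter assumes the iterable never legitimately yields None, which holds for hostnames.)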
View File

@@ -933,6 +933,7 @@ class InventorySourceOptions(BaseModel):
         ('controller', _('Red Hat Ansible Automation Platform')),
         ('insights', _('Red Hat Insights')),
         ('terraform', _('Terraform State')),
+        ('openshift_virtualization', _('OpenShift Virtualization')),
     ]

     # From the options of the Django management base command
@@ -1042,7 +1043,7 @@ class InventorySourceOptions(BaseModel):
     def cloud_credential_validation(source, cred):
         if not source:
             return None
-        if cred and source not in ('custom', 'scm'):
+        if cred and source not in ('custom', 'scm', 'openshift_virtualization'):
             # If a credential was provided, it's important that it matches
             # the actual inventory source being used (Amazon requires Amazon
             # credentials; Rackspace requires Rackspace credentials; etc...)
@@ -1051,12 +1052,14 @@ class InventorySourceOptions(BaseModel):
             # Allow an EC2 source to omit the credential. If Tower is running on
             # an EC2 instance with an IAM Role assigned, boto will use credentials
             # from the instance metadata instead of those explicitly provided.
-            elif source in CLOUD_PROVIDERS and source != 'ec2':
+            elif source in CLOUD_PROVIDERS and source not in ['ec2', 'openshift_virtualization']:
                 return _('Credential is required for a cloud source.')
             elif source == 'custom' and cred and cred.credential_type.kind in ('scm', 'ssh', 'insights', 'vault'):
                 return _('Credentials of type machine, source control, insights and vault are disallowed for custom inventory sources.')
             elif source == 'scm' and cred and cred.credential_type.kind in ('insights', 'vault'):
                 return _('Credentials of type insights and vault are disallowed for scm inventory sources.')
+            elif source == 'openshift_virtualization' and cred and cred.credential_type.kind != 'kubernetes':
+                return _('A credential of type kubernetes is required for openshift_virtualization inventory sources.')
         return None

     def get_cloud_credential(self):
@@ -1660,7 +1663,7 @@ class terraform(PluginFileInjector):
         credential = inventory_update.get_cloud_credential()
         private_data = {'credentials': {}}

-        gce_cred = credential.get_input('gce_credentials')
+        gce_cred = credential.get_input('gce_credentials', default=None)
         if gce_cred:
             private_data['credentials'][credential] = gce_cred
         return private_data
@@ -1669,7 +1672,7 @@ class terraform(PluginFileInjector):
         env = super(terraform, self).get_plugin_env(inventory_update, private_data_dir, private_data_files)
         credential = inventory_update.get_cloud_credential()
         cred_data = private_data_files['credentials']
-        if cred_data[credential]:
+        if credential in cred_data:
             env['GOOGLE_BACKEND_CREDENTIALS'] = to_container_path(cred_data[credential], private_data_dir)
         return env
@@ -1693,6 +1696,16 @@ class insights(PluginFileInjector):
     use_fqcn = True


+class openshift_virtualization(PluginFileInjector):
+    plugin_name = 'kubevirt'
+    base_injector = 'template'
+    namespace = 'kubevirt'
+    collection = 'core'
+    downstream_namespace = 'redhat'
+    downstream_collection = 'openshift_virtualization'
+    use_fqcn = True
+
+
 class constructed(PluginFileInjector):
     plugin_name = 'constructed'
     namespace = 'ansible'

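(The new `openshift_virtualization` branch in `cloud_credential_validation` only accepts credentials of kind `kubernetes`. A tiny sketch of just that rule, with a hypothetical helper name, detached from the surrounding elif chain:)

```python
def check_openshift_virt_credential(cred_kind):
    # Mirrors the new validation branch: for an openshift_virtualization
    # source, any credential kind other than 'kubernetes' is rejected.
    if cred_kind != 'kubernetes':
        return 'A credential of type kubernetes is required for openshift_virtualization inventory sources.'
    return None  # no validation error
```

(The real method also handles the no-credential case via the `CLOUD_PROVIDERS` exemption, which this sketch omits.)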
View File

@@ -31,6 +31,7 @@ from awx.main.notifications.mattermost_backend import MattermostBackend
 from awx.main.notifications.grafana_backend import GrafanaBackend
 from awx.main.notifications.rocketchat_backend import RocketChatBackend
 from awx.main.notifications.irc_backend import IrcBackend
+from awx.main.notifications.awssns_backend import AWSSNSBackend

 logger = logging.getLogger('awx.main.models.notifications')

@@ -40,6 +41,7 @@ __all__ = ['NotificationTemplate', 'Notification']
 class NotificationTemplate(CommonModelNameNotUnique):

     NOTIFICATION_TYPES = [
+        ('awssns', _('AWS SNS'), AWSSNSBackend),
         ('email', _('Email'), CustomEmailBackend),
         ('slack', _('Slack'), SlackBackend),
         ('twilio', _('Twilio'), TwilioBackend),

View File

@@ -17,7 +17,7 @@ from collections import OrderedDict

 # Django
 from django.conf import settings
-from django.db import models, connection
+from django.db import models, connection, transaction
 from django.core.exceptions import NON_FIELD_ERRORS
 from django.utils.translation import gettext_lazy as _
 from django.utils.timezone import now
@@ -273,7 +273,14 @@ class UnifiedJobTemplate(PolymorphicModel, CommonModelNameNotUnique, ExecutionEn
         if new_next_schedule:
             if new_next_schedule.pk == self.next_schedule_id and new_next_schedule.next_run == self.next_job_run:
                 return  # no-op, common for infrequent schedules
-            self.next_schedule = new_next_schedule
+
+            # If in a transaction, use select_for_update to lock the next schedule row, which
+            # prevents a race condition if new_next_schedule is deleted elsewhere during this transaction
+            if transaction.get_autocommit():
+                self.next_schedule = related_schedules.first()
+            else:
+                self.next_schedule = related_schedules.select_for_update().first()
+
             self.next_job_run = new_next_schedule.next_run
             self.save(update_fields=['next_schedule', 'next_job_run'])
@@ -823,7 +830,7 @@ class UnifiedJob(
                     update_fields.append(key)
         if parent_instance:
-            if self.status in ('pending', 'waiting', 'running'):
+            if self.status in ('pending', 'running'):
                 if parent_instance.current_job != self:
                     parent_instance_set('current_job', self)
             # Update parent with all the 'good' states of it's child
@@ -860,7 +867,7 @@ class UnifiedJob(
         # If this job already exists in the database, retrieve a copy of
         # the job in its prior state.
         # If update_fields are given without status, then that indicates no change
-        if self.pk and ((not update_fields) or ('status' in update_fields)):
+        if self.status != 'waiting' and self.pk and ((not update_fields) or ('status' in update_fields)):
             self_before = self.__class__.objects.get(pk=self.pk)
             if self_before.status != self.status:
                 status_before = self_before.status
@@ -902,7 +909,8 @@ class UnifiedJob(
                 update_fields.append('elapsed')

         # Ensure that the job template information is current.
-        if self.unified_job_template != self._get_parent_instance():
+        # unless status is 'waiting', because this happens in large batches at end of task manager runs and is blocking
+        if self.status != 'waiting' and self.unified_job_template != self._get_parent_instance():
             self.unified_job_template = self._get_parent_instance()
             if 'unified_job_template' not in update_fields:
                 update_fields.append('unified_job_template')
@@ -915,8 +923,9 @@ class UnifiedJob(
         # Okay; we're done. Perform the actual save.
         result = super(UnifiedJob, self).save(*args, **kwargs)

-        # If status changed, update the parent instance.
-        if self.status != status_before:
+        # If status changed, update the parent instance
+        # unless status is 'waiting', because this happens in large batches at end of task manager runs and is blocking
+        if self.status != status_before and self.status != 'waiting':
             # Update parent outside of the transaction for Job w/ allow_simultaneous=True
             # This dodges lock contention at the expense of the foreign key not being
             # completely correct.

View File

@@ -0,0 +1,70 @@
# Copyright (c) 2016 Ansible, Inc.
# All Rights Reserved.

import json
import logging

import boto3
from botocore.exceptions import ClientError

from awx.main.notifications.base import AWXBaseEmailBackend
from awx.main.notifications.custom_notification_base import CustomNotificationBase

logger = logging.getLogger('awx.main.notifications.awssns_backend')
WEBSOCKET_TIMEOUT = 30


class AWSSNSBackend(AWXBaseEmailBackend, CustomNotificationBase):
    init_parameters = {
        "aws_region": {"label": "AWS Region", "type": "string", "default": ""},
        "aws_access_key_id": {"label": "Access Key ID", "type": "string", "default": ""},
        "aws_secret_access_key": {"label": "Secret Access Key", "type": "password", "default": ""},
        "aws_session_token": {"label": "Session Token", "type": "password", "default": ""},
        "sns_topic_arn": {"label": "SNS Topic ARN", "type": "string", "default": ""},
    }
    recipient_parameter = "sns_topic_arn"
    sender_parameter = None

    DEFAULT_BODY = "{{ job_metadata }}"
    default_messages = CustomNotificationBase.job_metadata_messages

    def __init__(self, aws_region, aws_access_key_id, aws_secret_access_key, aws_session_token, fail_silently=False, **kwargs):
        session = boto3.session.Session()
        client_config = {"service_name": 'sns'}
        if aws_region:
            client_config["region_name"] = aws_region
        if aws_secret_access_key:
            client_config["aws_secret_access_key"] = aws_secret_access_key
        if aws_access_key_id:
            client_config["aws_access_key_id"] = aws_access_key_id
        if aws_session_token:
            client_config["aws_session_token"] = aws_session_token
        self.client = session.client(**client_config)
        super(AWSSNSBackend, self).__init__(fail_silently=fail_silently)

    def _sns_publish(self, topic_arn, message):
        self.client.publish(TopicArn=topic_arn, Message=message, MessageAttributes={})

    def format_body(self, body):
        if isinstance(body, str):
            try:
                body = json.loads(body)
            except json.JSONDecodeError:
                pass
        if isinstance(body, dict):
            # convert dict body to json string
            body = json.dumps(body)
        return body

    def send_messages(self, messages):
        sent_messages = 0
        for message in messages:
            sns_topic_arn = str(message.recipients()[0])
            try:
                self._sns_publish(topic_arn=sns_topic_arn, message=message.body)
                sent_messages += 1
            except ClientError as error:
                if not self.fail_silently:
                    raise error
        return sent_messages

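(The `format_body` normalization in the new SNS backend is pure Python and can be exercised standalone. A sketch of the same logic outside the class, showing how string, JSON-string, and dict payloads are each handled:)

```python
import json

def format_body(body):
    # Mirrors AWSSNSBackend.format_body: a string is parsed as JSON when
    # possible; anything that is (or parsed to) a dict is re-serialized,
    # so SNS always receives either a JSON string or the raw text.
    if isinstance(body, str):
        try:
            body = json.loads(body)
        except json.JSONDecodeError:
            pass
    if isinstance(body, dict):
        body = json.dumps(body)
    return body
```

(The parse-then-dump round trip also canonicalizes whitespace in JSON-string input.)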
View File

@@ -32,3 +32,15 @@ class CustomNotificationBase(object):
             "denied": {"message": DEFAULT_APPROVAL_DENIED_MSG, "body": None},
         },
     }
+
+    job_metadata_messages = {
+        "started": {"body": "{{ job_metadata }}"},
+        "success": {"body": "{{ job_metadata }}"},
+        "error": {"body": "{{ job_metadata }}"},
+        "workflow_approval": {
+            "running": {"body": '{"body": "The approval node \\"{{ approval_node_name }}\\" needs review. This node can be viewed at: {{ workflow_url }}"}'},
+            "approved": {"body": '{"body": "The approval node \\"{{ approval_node_name }}\\" was approved. {{ workflow_url }}"}'},
+            "timed_out": {"body": '{"body": "The approval node \\"{{ approval_node_name }}\\" has timed out. {{ workflow_url }}"}'},
+            "denied": {"body": '{"body": "The approval node \\"{{ approval_node_name }}\\" was denied. {{ workflow_url }}"}'},
+        },
+    }

View File

@@ -27,17 +27,7 @@ class WebhookBackend(AWXBaseEmailBackend, CustomNotificationBase):
     sender_parameter = None

     DEFAULT_BODY = "{{ job_metadata }}"
-    default_messages = {
-        "started": {"body": DEFAULT_BODY},
-        "success": {"body": DEFAULT_BODY},
-        "error": {"body": DEFAULT_BODY},
-        "workflow_approval": {
-            "running": {"body": '{"body": "The approval node \\"{{ approval_node_name }}\\" needs review. This node can be viewed at: {{ workflow_url }}"}'},
-            "approved": {"body": '{"body": "The approval node \\"{{ approval_node_name }}\\" was approved. {{ workflow_url }}"}'},
-            "timed_out": {"body": '{"body": "The approval node \\"{{ approval_node_name }}\\" has timed out. {{ workflow_url }}"}'},
-            "denied": {"body": '{"body": "The approval node \\"{{ approval_node_name }}\\" was denied. {{ workflow_url }}"}'},
-        },
-    }
+    default_messages = CustomNotificationBase.job_metadata_messages

     def __init__(self, http_method, headers, disable_ssl_verification=False, fail_silently=False, username=None, password=None, **kwargs):
         self.http_method = http_method

View File

@@ -63,6 +63,10 @@ websocket_urlpatterns = [
     re_path(r'api/websocket/$', consumers.EventConsumer.as_asgi()),
     re_path(r'websocket/$', consumers.EventConsumer.as_asgi()),
 ]

+if settings.OPTIONAL_API_URLPATTERN_PREFIX:
+    websocket_urlpatterns.append(re_path(r'api/{}/v2/websocket/$'.format(settings.OPTIONAL_API_URLPATTERN_PREFIX), consumers.EventConsumer.as_asgi()))
+
 websocket_relay_urlpatterns = [
     re_path(r'websocket/relay/$', consumers.RelayConsumer.as_asgi()),
 ]

View File

@@ -36,6 +36,9 @@ import ansible_runner.cleanup
 # dateutil
 from dateutil.parser import parse as parse_date

+# django-ansible-base
+from ansible_base.resource_registry.tasks.sync import SyncExecutor
+
 # AWX
 from awx import __version__ as awx_application_version
 from awx.main.access import access_registry
@@ -964,3 +967,17 @@ def deep_copy_model_obj(model_module, model_name, obj_pk, new_obj_pk, user_pk, p
     permission_check_func(creater, copy_mapping.values())

     if isinstance(new_obj, Inventory):
         update_inventory_computed_fields.delay(new_obj.id)
+
+
+@task(queue=get_task_queuename)
+def periodic_resource_sync():
+    if not getattr(settings, 'RESOURCE_SERVER', None):
+        logger.debug("Skipping periodic resource_sync, RESOURCE_SERVER not configured")
+        return
+
+    with advisory_lock('periodic_resource_sync', wait=False) as acquired:
+        if acquired is False:
+            logger.debug("Not running periodic_resource_sync, another task holds lock")
+            return
+        SyncExecutor().run()

View File

@@ -0,0 +1,5 @@
{
    "K8S_AUTH_HOST": "https://foo.invalid",
    "K8S_AUTH_API_KEY": "fooo",
    "K8S_AUTH_VERIFY_SSL": "False"
}

View File

@@ -9,8 +9,8 @@ def test_user_role_view_access(rando, inventory, mocker, post):
     role_pk = inventory.admin_role.pk
     data = {"id": role_pk}
     mock_access = mocker.MagicMock(can_attach=mocker.MagicMock(return_value=False))
-    with mocker.patch('awx.main.access.RoleAccess', return_value=mock_access):
-        post(url=reverse('api:user_roles_list', kwargs={'pk': rando.pk}), data=data, user=rando, expect=403)
+    mocker.patch('awx.main.access.RoleAccess', return_value=mock_access)
+    post(url=reverse('api:user_roles_list', kwargs={'pk': rando.pk}), data=data, user=rando, expect=403)
     mock_access.can_attach.assert_called_once_with(inventory.admin_role, rando, 'members', data, skip_sub_obj_read_check=False)
@@ -21,8 +21,8 @@ def test_team_role_view_access(rando, team, inventory, mocker, post):
     role_pk = inventory.admin_role.pk
     data = {"id": role_pk}
     mock_access = mocker.MagicMock(can_attach=mocker.MagicMock(return_value=False))
-    with mocker.patch('awx.main.access.RoleAccess', return_value=mock_access):
-        post(url=reverse('api:team_roles_list', kwargs={'pk': team.pk}), data=data, user=rando, expect=403)
+    mocker.patch('awx.main.access.RoleAccess', return_value=mock_access)
+    post(url=reverse('api:team_roles_list', kwargs={'pk': team.pk}), data=data, user=rando, expect=403)
     mock_access.can_attach.assert_called_once_with(inventory.admin_role, team, 'member_role.parents', data, skip_sub_obj_read_check=False)
@@ -33,8 +33,8 @@ def test_role_team_view_access(rando, team, inventory, mocker, post):
     role_pk = inventory.admin_role.pk
     data = {"id": team.pk}
     mock_access = mocker.MagicMock(return_value=False, __name__='mocked')
-    with mocker.patch('awx.main.access.RoleAccess.can_attach', mock_access):
-        post(url=reverse('api:role_teams_list', kwargs={'pk': role_pk}), data=data, user=rando, expect=403)
+    mocker.patch('awx.main.access.RoleAccess.can_attach', mock_access)
+    post(url=reverse('api:role_teams_list', kwargs={'pk': role_pk}), data=data, user=rando, expect=403)
     mock_access.assert_called_once_with(inventory.admin_role, team, 'member_role.parents', data, skip_sub_obj_read_check=False)

View File

@@ -0,0 +1,66 @@
import pytest

from awx.api.versioning import reverse
from awx.main.models import Organization


@pytest.mark.django_db
class TestImmutableSharedFields:
    @pytest.fixture(autouse=True)
    def configure_settings(self, settings):
        settings.ALLOW_LOCAL_RESOURCE_MANAGEMENT = False

    def test_create_raises_permission_denied(self, admin_user, post):
        orgA = Organization.objects.create(name='orgA')
        resp = post(
            url=reverse('api:team_list'),
            data={'name': 'teamA', 'organization': orgA.id},
            user=admin_user,
            expect=403,
        )
        assert "Creation of this resource is not allowed" in resp.data['detail']

    def test_perform_delete_raises_permission_denied(self, admin_user, delete):
        orgA = Organization.objects.create(name='orgA')
        team = orgA.teams.create(name='teamA')
        resp = delete(
            url=reverse('api:team_detail', kwargs={'pk': team.id}),
            user=admin_user,
            expect=403,
        )
        assert "Deletion of this resource is not allowed" in resp.data['detail']

    def test_perform_update(self, admin_user, patch):
        orgA = Organization.objects.create(name='orgA')
        team = orgA.teams.create(name='teamA')
        # allow patching non-shared fields
        patch(
            url=reverse('api:team_detail', kwargs={'pk': team.id}),
            data={"description": "can change this field"},
            user=admin_user,
            expect=200,
        )
        orgB = Organization.objects.create(name='orgB')
        # prevent patching shared fields
        resp = patch(url=reverse('api:team_detail', kwargs={'pk': team.id}), data={"organization": orgB.id}, user=admin_user, expect=403)
        assert "Cannot change shared field" in resp.data['organization']

    @pytest.mark.parametrize(
        'role',
        ['admin_role', 'member_role'],
    )
    @pytest.mark.parametrize('resource', ['organization', 'team'])
    def test_prevent_assigning_member_to_organization_or_team(self, admin_user, post, resource, role):
        orgA = Organization.objects.create(name='orgA')
        if resource == 'organization':
            role = getattr(orgA, role)
        elif resource == 'team':
            teamA = orgA.teams.create(name='teamA')
            role = getattr(teamA, role)
        resp = post(
            url=reverse('api:user_roles_list', kwargs={'pk': admin_user.id}),
            data={'id': role.id},
            user=admin_user,
            expect=403,
        )
        assert f"Cannot directly modify user membership to {resource}." in resp.data['msg']

View File

@@ -131,11 +131,11 @@ def test_job_ignore_unprompted_vars(runtime_data, job_template_prompts, post, ad
mock_job = mocker.MagicMock(spec=Job, id=968, **runtime_data) mock_job = mocker.MagicMock(spec=Job, id=968, **runtime_data)
with mocker.patch.object(JobTemplate, 'create_unified_job', return_value=mock_job): mocker.patch.object(JobTemplate, 'create_unified_job', return_value=mock_job)
with mocker.patch('awx.api.serializers.JobSerializer.to_representation'): mocker.patch('awx.api.serializers.JobSerializer.to_representation')
response = post(reverse('api:job_template_launch', kwargs={'pk': job_template.pk}), runtime_data, admin_user, expect=201) response = post(reverse('api:job_template_launch', kwargs={'pk': job_template.pk}), runtime_data, admin_user, expect=201)
assert JobTemplate.create_unified_job.called assert JobTemplate.create_unified_job.called
assert JobTemplate.create_unified_job.call_args == () assert JobTemplate.create_unified_job.call_args == ()
# Check that job is serialized correctly # Check that job is serialized correctly
job_id = response.data['job'] job_id = response.data['job']
@@ -167,12 +167,12 @@ def test_job_accept_prompted_vars(runtime_data, job_template_prompts, post, admi
mock_job = mocker.MagicMock(spec=Job, id=968, **runtime_data) mock_job = mocker.MagicMock(spec=Job, id=968, **runtime_data)
with mocker.patch.object(JobTemplate, 'create_unified_job', return_value=mock_job): mocker.patch.object(JobTemplate, 'create_unified_job', return_value=mock_job)
with mocker.patch('awx.api.serializers.JobSerializer.to_representation'): mocker.patch('awx.api.serializers.JobSerializer.to_representation')
response = post(reverse('api:job_template_launch', kwargs={'pk': job_template.pk}), runtime_data, admin_user, expect=201) response = post(reverse('api:job_template_launch', kwargs={'pk': job_template.pk}), runtime_data, admin_user, expect=201)
assert JobTemplate.create_unified_job.called assert JobTemplate.create_unified_job.called
called_with = data_to_internal(runtime_data) called_with = data_to_internal(runtime_data)
JobTemplate.create_unified_job.assert_called_with(**called_with) JobTemplate.create_unified_job.assert_called_with(**called_with)
job_id = response.data['job'] job_id = response.data['job']
assert job_id == 968 assert job_id == 968
@@ -187,11 +187,11 @@ def test_job_accept_empty_tags(job_template_prompts, post, admin_user, mocker):
mock_job = mocker.MagicMock(spec=Job, id=968) mock_job = mocker.MagicMock(spec=Job, id=968)
with mocker.patch.object(JobTemplate, 'create_unified_job', return_value=mock_job): mocker.patch.object(JobTemplate, 'create_unified_job', return_value=mock_job)
with mocker.patch('awx.api.serializers.JobSerializer.to_representation'): mocker.patch('awx.api.serializers.JobSerializer.to_representation')
post(reverse('api:job_template_launch', kwargs={'pk': job_template.pk}), {'job_tags': '', 'skip_tags': ''}, admin_user, expect=201) post(reverse('api:job_template_launch', kwargs={'pk': job_template.pk}), {'job_tags': '', 'skip_tags': ''}, admin_user, expect=201)
assert JobTemplate.create_unified_job.called assert JobTemplate.create_unified_job.called
assert JobTemplate.create_unified_job.call_args == ({'job_tags': '', 'skip_tags': ''},) assert JobTemplate.create_unified_job.call_args == ({'job_tags': '', 'skip_tags': ''},)
mock_job.signal_start.assert_called_once() mock_job.signal_start.assert_called_once()
@@ -203,14 +203,14 @@ def test_slice_timeout_forks_need_int(job_template_prompts, post, admin_user, mo
 mock_job = mocker.MagicMock(spec=Job, id=968)
-with mocker.patch.object(JobTemplate, 'create_unified_job', return_value=mock_job):
-with mocker.patch('awx.api.serializers.JobSerializer.to_representation'):
+mocker.patch.object(JobTemplate, 'create_unified_job', return_value=mock_job)
+mocker.patch('awx.api.serializers.JobSerializer.to_representation')
 response = post(
 reverse('api:job_template_launch', kwargs={'pk': job_template.pk}), {'timeout': '', 'job_slice_count': '', 'forks': ''}, admin_user, expect=400
 )
 assert 'forks' in response.data and response.data['forks'][0] == 'A valid integer is required.'
 assert 'job_slice_count' in response.data and response.data['job_slice_count'][0] == 'A valid integer is required.'
 assert 'timeout' in response.data and response.data['timeout'][0] == 'A valid integer is required.'
 @pytest.mark.django_db
@@ -244,12 +244,12 @@ def test_job_accept_prompted_vars_null(runtime_data, job_template_prompts_null,
 mock_job = mocker.MagicMock(spec=Job, id=968, **runtime_data)
-with mocker.patch.object(JobTemplate, 'create_unified_job', return_value=mock_job):
-with mocker.patch('awx.api.serializers.JobSerializer.to_representation'):
+mocker.patch.object(JobTemplate, 'create_unified_job', return_value=mock_job)
+mocker.patch('awx.api.serializers.JobSerializer.to_representation')
 response = post(reverse('api:job_template_launch', kwargs={'pk': job_template.pk}), runtime_data, rando, expect=201)
 assert JobTemplate.create_unified_job.called
 expected_call = data_to_internal(runtime_data)
 assert JobTemplate.create_unified_job.call_args == (expected_call,)
 job_id = response.data['job']
 assert job_id == 968
@@ -641,18 +641,18 @@ def test_job_launch_unprompted_vars_with_survey(mocker, survey_spec_factory, job
 job_template.survey_spec = survey_spec_factory('survey_var')
 job_template.save()
-with mocker.patch('awx.main.access.BaseAccess.check_license'):
+mocker.patch('awx.main.access.BaseAccess.check_license')
 mock_job = mocker.MagicMock(spec=Job, id=968, extra_vars={"job_launch_var": 3, "survey_var": 4})
-with mocker.patch.object(JobTemplate, 'create_unified_job', return_value=mock_job):
-with mocker.patch('awx.api.serializers.JobSerializer.to_representation', return_value={}):
+mocker.patch.object(JobTemplate, 'create_unified_job', return_value=mock_job)
+mocker.patch('awx.api.serializers.JobSerializer.to_representation', return_value={})
 response = post(
 reverse('api:job_template_launch', kwargs={'pk': job_template.pk}),
 dict(extra_vars={"job_launch_var": 3, "survey_var": 4}),
 admin_user,
 expect=201,
 )
 assert JobTemplate.create_unified_job.called
 assert JobTemplate.create_unified_job.call_args == ({'extra_vars': {'survey_var': 4}},)
 job_id = response.data['job']
 assert job_id == 968
@@ -670,22 +670,22 @@ def test_callback_accept_prompted_extra_var(mocker, survey_spec_factory, job_tem
 job_template.survey_spec = survey_spec_factory('survey_var')
 job_template.save()
-with mocker.patch('awx.main.access.BaseAccess.check_license'):
+mocker.patch('awx.main.access.BaseAccess.check_license')
 mock_job = mocker.MagicMock(spec=Job, id=968, extra_vars={"job_launch_var": 3, "survey_var": 4})
-with mocker.patch.object(UnifiedJobTemplate, 'create_unified_job', return_value=mock_job):
-with mocker.patch('awx.api.serializers.JobSerializer.to_representation', return_value={}):
-with mocker.patch('awx.api.views.JobTemplateCallback.find_matching_hosts', return_value=[host]):
+mocker.patch.object(UnifiedJobTemplate, 'create_unified_job', return_value=mock_job)
+mocker.patch('awx.api.serializers.JobSerializer.to_representation', return_value={})
+mocker.patch('awx.api.views.JobTemplateCallback.find_matching_hosts', return_value=[host])
 post(
 reverse('api:job_template_callback', kwargs={'pk': job_template.pk}),
 dict(extra_vars={"job_launch_var": 3, "survey_var": 4}, host_config_key="foo"),
 admin_user,
 expect=201,
 format='json',
 )
 assert UnifiedJobTemplate.create_unified_job.called
 call_args = UnifiedJobTemplate.create_unified_job.call_args[1]
 call_args.pop('_eager_fields', None) # internal purposes
 assert call_args == {'extra_vars': {'survey_var': 4, 'job_launch_var': 3}, 'limit': 'single-host'}
 mock_job.signal_start.assert_called_once()
@@ -697,22 +697,22 @@ def test_callback_ignore_unprompted_extra_var(mocker, survey_spec_factory, job_t
 job_template.host_config_key = "foo"
 job_template.save()
-with mocker.patch('awx.main.access.BaseAccess.check_license'):
+mocker.patch('awx.main.access.BaseAccess.check_license')
 mock_job = mocker.MagicMock(spec=Job, id=968, extra_vars={"job_launch_var": 3, "survey_var": 4})
-with mocker.patch.object(UnifiedJobTemplate, 'create_unified_job', return_value=mock_job):
-with mocker.patch('awx.api.serializers.JobSerializer.to_representation', return_value={}):
-with mocker.patch('awx.api.views.JobTemplateCallback.find_matching_hosts', return_value=[host]):
+mocker.patch.object(UnifiedJobTemplate, 'create_unified_job', return_value=mock_job)
+mocker.patch('awx.api.serializers.JobSerializer.to_representation', return_value={})
+mocker.patch('awx.api.views.JobTemplateCallback.find_matching_hosts', return_value=[host])
 post(
 reverse('api:job_template_callback', kwargs={'pk': job_template.pk}),
 dict(extra_vars={"job_launch_var": 3, "survey_var": 4}, host_config_key="foo"),
 admin_user,
 expect=201,
 format='json',
 )
 assert UnifiedJobTemplate.create_unified_job.called
 call_args = UnifiedJobTemplate.create_unified_job.call_args[1]
 call_args.pop('_eager_fields', None) # internal purposes
 assert call_args == {'limit': 'single-host'}
 mock_job.signal_start.assert_called_once()
@@ -725,9 +725,9 @@ def test_callback_find_matching_hosts(mocker, get, job_template_prompts, admin_u
 job_template.save()
 host_with_alias = Host(name='localhost', inventory=job_template.inventory)
 host_with_alias.save()
-with mocker.patch('awx.main.access.BaseAccess.check_license'):
+mocker.patch('awx.main.access.BaseAccess.check_license')
 r = get(reverse('api:job_template_callback', kwargs={'pk': job_template.pk}), user=admin_user, expect=200)
 assert tuple(r.data['matching_hosts']) == ('localhost',)
 @pytest.mark.django_db
@@ -738,6 +738,6 @@ def test_callback_extra_var_takes_priority_over_host_name(mocker, get, job_templ
 job_template.save()
 host_with_alias = Host(name='localhost', variables={'ansible_host': 'foobar'}, inventory=job_template.inventory)
 host_with_alias.save()
-with mocker.patch('awx.main.access.BaseAccess.check_license'):
+mocker.patch('awx.main.access.BaseAccess.check_license')
 r = get(reverse('api:job_template_callback', kwargs={'pk': job_template.pk}), user=admin_user, expect=200)
 assert not r.data['matching_hosts']
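The recurring change in the hunks above swaps `with mocker.patch(...):` blocks for plain `mocker.patch(...)` calls: pytest-mock's `mocker` fixture returns the `MagicMock` directly and undoes every patch at test teardown, so no context manager is needed. A minimal sketch of that lifecycle using only stdlib `unittest.mock` (the `Service` class is a hypothetical stand-in for the patched AWX classes):

```python
from unittest import mock


class Service:
    """Hypothetical stand-in for the classes patched in these tests."""

    def fetch(self):
        return "real"


# `mocker.patch.object(...)` hands back the MagicMock and the fixture calls
# stop() for you at teardown; patcher.start()/stop() shows the same mechanics.
patcher = mock.patch.object(Service, "fetch", return_value="stubbed")
mocked = patcher.start()  # what `mocker.patch.object(...)` returns
try:
    assert Service().fetch() == "stubbed"
    mocked.assert_called_once()
finally:
    patcher.stop()  # the `mocker` fixture does this automatically

assert Service().fetch() == "real"  # patch reverted
```

Note the one exception further down: where a patch is genuinely scoped to part of a test, the diff keeps a context manager but uses `mock.patch(...)`, which does support `with`.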


@@ -165,8 +165,8 @@ class TestAccessListCapabilities:
 def test_access_list_direct_access_capability(self, inventory, rando, get, mocker, mock_access_method):
 inventory.admin_role.members.add(rando)
-with mocker.patch.object(access_registry[Role], 'can_unattach', mock_access_method):
+mocker.patch.object(access_registry[Role], 'can_unattach', mock_access_method)
 response = get(reverse('api:inventory_access_list', kwargs={'pk': inventory.id}), rando)
 mock_access_method.assert_called_once_with(inventory.admin_role, rando, 'members', **self.extra_kwargs)
 self._assert_one_in_list(response.data)
@@ -174,8 +174,8 @@ class TestAccessListCapabilities:
 assert direct_access_list[0]['role']['user_capabilities']['unattach'] == 'foobar'
 def test_access_list_indirect_access_capability(self, inventory, organization, org_admin, get, mocker, mock_access_method):
-with mocker.patch.object(access_registry[Role], 'can_unattach', mock_access_method):
+mocker.patch.object(access_registry[Role], 'can_unattach', mock_access_method)
 response = get(reverse('api:inventory_access_list', kwargs={'pk': inventory.id}), org_admin)
 mock_access_method.assert_called_once_with(organization.admin_role, org_admin, 'members', **self.extra_kwargs)
 self._assert_one_in_list(response.data, sublist='indirect_access')
@@ -185,8 +185,8 @@ class TestAccessListCapabilities:
 def test_access_list_team_direct_access_capability(self, inventory, team, team_member, get, mocker, mock_access_method):
 team.member_role.children.add(inventory.admin_role)
-with mocker.patch.object(access_registry[Role], 'can_unattach', mock_access_method):
+mocker.patch.object(access_registry[Role], 'can_unattach', mock_access_method)
 response = get(reverse('api:inventory_access_list', kwargs={'pk': inventory.id}), team_member)
 mock_access_method.assert_called_once_with(inventory.admin_role, team.member_role, 'parents', **self.extra_kwargs)
 self._assert_one_in_list(response.data)
@@ -198,8 +198,8 @@ class TestAccessListCapabilities:
 def test_team_roles_unattach(mocker, team, team_member, inventory, mock_access_method, get):
 team.member_role.children.add(inventory.admin_role)
-with mocker.patch.object(access_registry[Role], 'can_unattach', mock_access_method):
+mocker.patch.object(access_registry[Role], 'can_unattach', mock_access_method)
 response = get(reverse('api:team_roles_list', kwargs={'pk': team.id}), team_member)
 # Did we assess whether team_member can remove team's permission to the inventory?
 mock_access_method.assert_called_once_with(inventory.admin_role, team.member_role, 'parents', skip_sub_obj_read_check=True, data={})
@@ -212,8 +212,8 @@ def test_user_roles_unattach(mocker, organization, alice, bob, mock_access_metho
 organization.member_role.members.add(alice)
 organization.member_role.members.add(bob)
-with mocker.patch.object(access_registry[Role], 'can_unattach', mock_access_method):
+mocker.patch.object(access_registry[Role], 'can_unattach', mock_access_method)
 response = get(reverse('api:user_roles_list', kwargs={'pk': alice.id}), bob)
 # Did we assess whether bob can remove alice's permission to the inventory?
 mock_access_method.assert_called_once_with(organization.member_role, alice, 'members', skip_sub_obj_read_check=True, data={})


@@ -43,9 +43,9 @@ def run_command(name, *args, **options):
 ],
 )
 def test_update_password_command(mocker, username, password, expected, changed):
-with mocker.patch.object(UpdatePassword, 'update_password', return_value=changed):
+mocker.patch.object(UpdatePassword, 'update_password', return_value=changed)
 result, stdout, stderr = run_command('update_password', username=username, password=password)
 if result is None:
 assert stdout == expected
 else:
 assert str(result) == expected


@@ -21,13 +21,13 @@ class TestComputedFields:
 def test_computed_fields_normal_use(self, mocker, inventory):
 job = Job.objects.create(name='fake-job', inventory=inventory)
 with immediate_on_commit():
-with mocker.patch.object(update_inventory_computed_fields, 'delay'):
+mocker.patch.object(update_inventory_computed_fields, 'delay')
 job.delete()
 update_inventory_computed_fields.delay.assert_called_once_with(inventory.id)
 def test_disable_computed_fields(self, mocker, inventory):
 job = Job.objects.create(name='fake-job', inventory=inventory)
 with disable_computed_fields():
-with mocker.patch.object(update_inventory_computed_fields, 'delay'):
+mocker.patch.object(update_inventory_computed_fields, 'delay')
 job.delete()
 update_inventory_computed_fields.delay.assert_not_called()


@@ -21,13 +21,13 @@ def test_multi_group_basic_job_launch(instance_factory, controlplane_instance_gr
 j2 = create_job(objects2.job_template)
 with mock.patch('awx.main.models.Job.task_impact', new_callable=mock.PropertyMock) as mock_task_impact:
 mock_task_impact.return_value = 500
-with mocker.patch("awx.main.scheduler.TaskManager.start_task"):
+mocker.patch("awx.main.scheduler.TaskManager.start_task")
 TaskManager().schedule()
 TaskManager.start_task.assert_has_calls([mock.call(j1, ig1, i1), mock.call(j2, ig2, i2)])
 @pytest.mark.django_db
-def test_multi_group_with_shared_dependency(instance_factory, controlplane_instance_group, mocker, instance_group_factory, job_template_factory):
+def test_multi_group_with_shared_dependency(instance_factory, controlplane_instance_group, instance_group_factory, job_template_factory):
 i1 = instance_factory("i1")
 i2 = instance_factory("i2")
 ig1 = instance_group_factory("ig1", instances=[i1])
@@ -50,7 +50,7 @@ def test_multi_group_with_shared_dependency(instance_factory, controlplane_insta
 objects2 = job_template_factory('jt2', organization=objects1.organization, project=p, inventory='inv2', credential='cred2')
 objects2.job_template.instance_groups.add(ig2)
 j2 = create_job(objects2.job_template, dependencies_processed=False)
-with mocker.patch("awx.main.scheduler.TaskManager.start_task"):
+with mock.patch("awx.main.scheduler.TaskManager.start_task"):
 DependencyManager().schedule()
 TaskManager().schedule()
 pu = p.project_updates.first()
@@ -73,10 +73,10 @@ def test_workflow_job_no_instancegroup(workflow_job_template_factory, controlpla
 wfj = wfjt.create_unified_job()
 wfj.status = "pending"
 wfj.save()
-with mocker.patch("awx.main.scheduler.TaskManager.start_task"):
+mocker.patch("awx.main.scheduler.TaskManager.start_task")
 TaskManager().schedule()
 TaskManager.start_task.assert_called_once_with(wfj, None, None)
 assert wfj.instance_group is None
 @pytest.mark.django_db


@@ -16,9 +16,9 @@ def test_single_job_scheduler_launch(hybrid_instance, controlplane_instance_grou
 instance = controlplane_instance_group.instances.all()[0]
 objects = job_template_factory('jt', organization='org1', project='proj', inventory='inv', credential='cred')
 j = create_job(objects.job_template)
-with mocker.patch("awx.main.scheduler.TaskManager.start_task"):
+mocker.patch("awx.main.scheduler.TaskManager.start_task")
 TaskManager().schedule()
 TaskManager.start_task.assert_called_once_with(j, controlplane_instance_group, instance)
 @pytest.mark.django_db


@@ -46,6 +46,8 @@ def generate_fake_var(element):
 def credential_kind(source):
 """Given the inventory source kind, return expected credential kind"""
+if source == 'openshift_virtualization':
+return 'kubernetes_bearer_token'
 return source.replace('ec2', 'aws')


@@ -1,8 +1,6 @@
 # -*- coding: utf-8 -*-
 import pytest
-from django.conf import settings
 from awx.api.versioning import reverse
 from awx.main.middleware import URLModificationMiddleware
 from awx.main.models import ( # noqa
@@ -121,7 +119,7 @@ def test_notification_template(get, admin_user):
 @pytest.mark.django_db
-def test_instance(get, admin_user):
+def test_instance(get, admin_user, settings):
 test_instance = Instance.objects.create(uuid=settings.SYSTEM_UUID, hostname="localhost", capacity=100)
 url = reverse('api:instance_detail', kwargs={'pk': test_instance.pk})
 response = get(url, user=admin_user, expect=200)
@@ -205,3 +203,65 @@ def test_403_vs_404(get):
 get(f'/api/v2/users/{cindy.pk}/', expect=401)
 get('/api/v2/users/cindy/', expect=404)
+@pytest.mark.django_db
+class TestConvertNamedUrl:
+@pytest.mark.parametrize(
+"url",
+(
+"/api/",
+"/api/v2/",
+"/api/v2/hosts/",
+"/api/v2/hosts/1/",
+"/api/v2/organizations/1/inventories/",
+"/api/foo/",
+"/api/foo/v2/",
+"/api/foo/v2/organizations/",
+"/api/foo/v2/organizations/1/",
+"/api/foo/v2/organizations/1/inventories/",
+"/api/foobar/",
+"/api/foobar/v2/",
+"/api/foobar/v2/organizations/",
+"/api/foobar/v2/organizations/1/",
+"/api/foobar/v2/organizations/1/inventories/",
+"/api/foobar/v2/organizations/1/inventories/",
+),
+)
+def test_noop(self, url, settings):
+settings.OPTIONAL_API_URLPATTERN_PREFIX = ''
+assert URLModificationMiddleware._convert_named_url(url) == url
+settings.OPTIONAL_API_URLPATTERN_PREFIX = 'foo'
+assert URLModificationMiddleware._convert_named_url(url) == url
+def test_named_org(self):
+test_org = Organization.objects.create(name='test_org')
+assert URLModificationMiddleware._convert_named_url('/api/v2/organizations/test_org/') == f'/api/v2/organizations/{test_org.pk}/'
+def test_named_org_optional_api_urlpattern_prefix_interaction(self, settings):
+settings.OPTIONAL_API_URLPATTERN_PREFIX = 'bar'
+test_org = Organization.objects.create(name='test_org')
+assert URLModificationMiddleware._convert_named_url('/api/bar/v2/organizations/test_org/') == f'/api/bar/v2/organizations/{test_org.pk}/'
+@pytest.mark.parametrize("prefix", ['', 'bar'])
+def test_named_org_not_found(self, prefix, settings):
+settings.OPTIONAL_API_URLPATTERN_PREFIX = prefix
+if prefix:
+prefix += '/'
+assert URLModificationMiddleware._convert_named_url(f'/api/{prefix}v2/organizations/does-not-exist/') == f'/api/{prefix}v2/organizations/0/'
+@pytest.mark.parametrize("prefix", ['', 'bar'])
+def test_named_sub_resource(self, prefix, settings):
+settings.OPTIONAL_API_URLPATTERN_PREFIX = prefix
+test_org = Organization.objects.create(name='test_org')
+if prefix:
+prefix += '/'
+assert (
+URLModificationMiddleware._convert_named_url(f'/api/{prefix}v2/organizations/test_org/inventories/')
+== f'/api/{prefix}v2/organizations/{test_org.pk}/inventories/'
+)


@@ -187,7 +187,7 @@ def test_remove_role_from_user(role, post, admin):
 @pytest.mark.django_db
-@override_settings(ANSIBLE_BASE_ALLOW_TEAM_ORG_ADMIN=True)
+@override_settings(ANSIBLE_BASE_ALLOW_TEAM_ORG_ADMIN=True, ANSIBLE_BASE_ALLOW_TEAM_ORG_MEMBER=True)
 def test_get_teams_roles_list(get, team, organization, admin):
 team.member_role.children.add(organization.admin_role)
 url = reverse('api:team_roles_list', kwargs={'pk': team.id})


@@ -76,15 +76,15 @@ class TestJobTemplateSerializerGetRelated:
 class TestJobTemplateSerializerGetSummaryFields:
 def test_survey_spec_exists(self, test_get_summary_fields, mocker, job_template):
 job_template.survey_spec = {'name': 'blah', 'description': 'blah blah'}
-with mocker.patch.object(JobTemplateSerializer, '_recent_jobs') as mock_rj:
+mock_rj = mocker.patch.object(JobTemplateSerializer, '_recent_jobs')
 mock_rj.return_value = []
 test_get_summary_fields(JobTemplateSerializer, job_template, 'survey')
 def test_survey_spec_absent(self, get_summary_fields_mock_and_run, mocker, job_template):
 job_template.survey_spec = None
-with mocker.patch.object(JobTemplateSerializer, '_recent_jobs') as mock_rj:
+mock_rj = mocker.patch.object(JobTemplateSerializer, '_recent_jobs')
 mock_rj.return_value = []
 summary = get_summary_fields_mock_and_run(JobTemplateSerializer, job_template)
 assert 'survey' not in summary
 def test_copy_edit_standard(self, mocker, job_template_factory):
@@ -107,10 +107,10 @@ class TestJobTemplateSerializerGetSummaryFields:
 view.kwargs = {}
 serializer.context['view'] = view
-with mocker.patch("awx.api.serializers.role_summary_fields_generator", return_value='Can eat pie'):
-with mocker.patch("awx.main.access.JobTemplateAccess.can_change", return_value='foobar'):
-with mocker.patch("awx.main.access.JobTemplateAccess.can_copy", return_value='foo'):
+mocker.patch("awx.api.serializers.role_summary_fields_generator", return_value='Can eat pie')
+mocker.patch("awx.main.access.JobTemplateAccess.can_change", return_value='foobar')
+mocker.patch("awx.main.access.JobTemplateAccess.can_copy", return_value='foo')
 response = serializer.get_summary_fields(jt_obj)
 assert response['user_capabilities']['copy'] == 'foo'
 assert response['user_capabilities']['edit'] == 'foobar'
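The `with mocker.patch.object(...) as mock_rj:` → `mock_rj = mocker.patch.object(...)` rewrite above works because pytest-mock returns the same `MagicMock` either way, and it can be configured after the patch is in place. A sketch with stdlib `unittest.mock` (the `Serializer` class is a hypothetical stand-in for `JobTemplateSerializer`):

```python
from unittest import mock


class Serializer:
    """Hypothetical stand-in for JobTemplateSerializer."""

    def _recent_jobs(self):
        return ["a-real-job"]


# patcher.start() returns the replacement MagicMock -- the same object
# `mocker.patch.object(...)` hands back -- so return_value can be set
# after patching, exactly as the rewritten tests do.
patcher = mock.patch.object(Serializer, "_recent_jobs")
mock_rj = patcher.start()
mock_rj.return_value = []  # configured post-patch
try:
    assert Serializer()._recent_jobs() == []
finally:
    patcher.stop()

assert Serializer()._recent_jobs() == ["a-real-job"]  # original restored
```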


@@ -189,8 +189,8 @@ class TestWorkflowJobTemplateNodeSerializerSurveyPasswords:
 serializer = WorkflowJobTemplateNodeSerializer()
 wfjt = WorkflowJobTemplate.objects.create(name='fake-wfjt')
 serializer.instance = WorkflowJobTemplateNode(workflow_job_template=wfjt, unified_job_template=jt, extra_data={'var1': '$encrypted$foooooo'})
-with mocker.patch('awx.main.models.mixins.decrypt_value', return_value='foo'):
+mocker.patch('awx.main.models.mixins.decrypt_value', return_value='foo')
 attrs = serializer.validate({'unified_job_template': jt, 'workflow_job_template': wfjt, 'extra_data': {'var1': '$encrypted$'}})
 assert 'survey_passwords' in attrs
 assert 'var1' in attrs['survey_passwords']
 assert attrs['extra_data']['var1'] == '$encrypted$foooooo'


@@ -191,16 +191,16 @@ class TestResourceAccessList:
 def test_parent_access_check_failed(self, mocker, mock_organization):
 mock_access = mocker.MagicMock(__name__='for logger', return_value=False)
-with mocker.patch('awx.main.access.BaseAccess.can_read', mock_access):
+mocker.patch('awx.main.access.BaseAccess.can_read', mock_access)
 with pytest.raises(PermissionDenied):
 self.mock_view(parent=mock_organization).check_permissions(self.mock_request())
 mock_access.assert_called_once_with(mock_organization)
 def test_parent_access_check_worked(self, mocker, mock_organization):
 mock_access = mocker.MagicMock(__name__='for logger', return_value=True)
-with mocker.patch('awx.main.access.BaseAccess.can_read', mock_access):
+mocker.patch('awx.main.access.BaseAccess.can_read', mock_access)
 self.mock_view(parent=mock_organization).check_permissions(self.mock_request())
 mock_access.assert_called_once_with(mock_organization)
 def test_related_search_reverse_FK_field():


@@ -66,7 +66,7 @@ class TestJobTemplateLabelList:
 mock_request = mock.MagicMock()
 super(JobTemplateLabelList, view).unattach(mock_request, None, None)
-assert mixin_unattach.called_with(mock_request, None, None)
+mixin_unattach.assert_called_with(mock_request, None, None)
 class TestInventoryInventorySourcesUpdate:
@@ -108,15 +108,16 @@ class TestInventoryInventorySourcesUpdate:
 mock_request = mocker.MagicMock()
 mock_request.user.can_access.return_value = can_access
-with mocker.patch.object(InventoryInventorySourcesUpdate, 'get_object', return_value=obj):
-with mocker.patch.object(InventoryInventorySourcesUpdate, 'get_serializer_context', return_value=None):
-with mocker.patch('awx.api.serializers.InventoryUpdateDetailSerializer') as serializer_class:
-serializer = serializer_class.return_value
-serializer.to_representation.return_value = {}
-view = InventoryInventorySourcesUpdate()
-response = view.post(mock_request)
-assert response.data == expected
+mocker.patch.object(InventoryInventorySourcesUpdate, 'get_object', return_value=obj)
+mocker.patch.object(InventoryInventorySourcesUpdate, 'get_serializer_context', return_value=None)
+serializer_class = mocker.patch('awx.api.serializers.InventoryUpdateDetailSerializer')
+serializer = serializer_class.return_value
+serializer.to_representation.return_value = {}
+
+view = InventoryInventorySourcesUpdate()
+response = view.post(mock_request)
+assert response.data == expected
 class TestSurveySpecValidation:

View File

@@ -155,35 +155,35 @@ def test_node_getter_and_setters():
class TestWorkflowJobCreate:
    def test_create_no_prompts(self, wfjt_node_no_prompts, workflow_job_unit, mocker):
        mock_create = mocker.MagicMock()
-       with mocker.patch('awx.main.models.WorkflowJobNode.objects.create', mock_create):
+       mocker.patch('awx.main.models.WorkflowJobNode.objects.create', mock_create)
        wfjt_node_no_prompts.create_workflow_job_node(workflow_job=workflow_job_unit)
        mock_create.assert_called_once_with(
            all_parents_must_converge=False,
            extra_data={},
            survey_passwords={},
            char_prompts=wfjt_node_no_prompts.char_prompts,
            inventory=None,
            unified_job_template=wfjt_node_no_prompts.unified_job_template,
            workflow_job=workflow_job_unit,
            identifier=mocker.ANY,
            execution_environment=None,
        )

    def test_create_with_prompts(self, wfjt_node_with_prompts, workflow_job_unit, credential, mocker):
        mock_create = mocker.MagicMock()
-       with mocker.patch('awx.main.models.WorkflowJobNode.objects.create', mock_create):
+       mocker.patch('awx.main.models.WorkflowJobNode.objects.create', mock_create)
        wfjt_node_with_prompts.create_workflow_job_node(workflow_job=workflow_job_unit)
        mock_create.assert_called_once_with(
            all_parents_must_converge=False,
            extra_data={},
            survey_passwords={},
            char_prompts=wfjt_node_with_prompts.char_prompts,
            inventory=wfjt_node_with_prompts.inventory,
            unified_job_template=wfjt_node_with_prompts.unified_job_template,
            workflow_job=workflow_job_unit,
            identifier=mocker.ANY,
            execution_environment=None,
        )


@pytest.mark.django_db

View File

@@ -0,0 +1,26 @@
from unittest import mock
from django.core.mail.message import EmailMessage
import awx.main.notifications.awssns_backend as awssns_backend
def test_send_messages():
    with mock.patch('awx.main.notifications.awssns_backend.AWSSNSBackend._sns_publish') as sns_publish_mock:
        aws_region = 'us-east-1'
        sns_topic = f"arn:aws:sns:{aws_region}:111111111111:topic-mock"
        backend = awssns_backend.AWSSNSBackend(aws_region=aws_region, aws_access_key_id=None, aws_secret_access_key=None, aws_session_token=None)
        message = EmailMessage(
            'test subject',
            {'body': 'test body'},
            [],
            [
                sns_topic,
            ],
        )
        sent_messages = backend.send_messages(
            [
                message,
            ]
        )
        sns_publish_mock.assert_called_once_with(topic_arn=sns_topic, message=message.body)
        assert sent_messages == 1

View File
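The backend's `_sns_publish` method is not shown in this diff; the test above only asserts it is called with a topic ARN and the rendered message body. A minimal sketch of such a helper with a dependency-injected, boto3-style client (`sns_publish` and its signature are hypothetical, not the actual backend API):

```python
from unittest import mock


def sns_publish(client, topic_arn, message):
    # Hypothetical helper mirroring what an _sns_publish method might do:
    # publish the rendered notification body to the given SNS topic using a
    # boto3-style client (boto3's SNS client exposes publish(TopicArn=..., Message=...)).
    return client.publish(TopicArn=topic_arn, Message=message)


# Injecting the client makes the helper testable without AWS credentials,
# the same way the test above replaces _sns_publish with a MagicMock.
fake_client = mock.MagicMock()
sns_publish(fake_client, 'arn:aws:sns:us-east-1:111111111111:topic-mock', 'test body')
fake_client.publish.assert_called_once_with(
    TopicArn='arn:aws:sns:us-east-1:111111111111:topic-mock', Message='test body'
)
```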

@@ -137,10 +137,10 @@ def test_send_notifications_not_list():
def test_send_notifications_job_id(mocker):
-   with mocker.patch('awx.main.models.UnifiedJob.objects.get'):
+   mocker.patch('awx.main.models.UnifiedJob.objects.get')
    system.send_notifications([], job_id=1)
    assert UnifiedJob.objects.get.called
    assert UnifiedJob.objects.get.called_with(id=1)


@mock.patch('awx.main.models.UnifiedJob.objects.get')

View File

@@ -7,15 +7,15 @@ def test_produce_supervisor_command(mocker):
    mock_process = mocker.MagicMock()
    mock_process.communicate = communicate_mock
    Popen_mock = mocker.MagicMock(return_value=mock_process)
-   with mocker.patch.object(reload.subprocess, 'Popen', Popen_mock):
+   mocker.patch.object(reload.subprocess, 'Popen', Popen_mock)
    reload.supervisor_service_command("restart")

    reload.subprocess.Popen.assert_called_once_with(
        [
            'supervisorctl',
            'restart',
            'tower-processes:*',
        ],
        stderr=-1,
        stdin=-1,
        stdout=-1,
    )

View File

@@ -2,9 +2,11 @@
# All Rights Reserved.

# Python
+import base64
import logging
import sys
import traceback
+import os
from datetime import datetime

# Django
@@ -15,6 +17,15 @@ from django.utils.encoding import force_str
# AWX
from awx.main.exceptions import PostRunError
# OTEL
from opentelemetry._logs import set_logger_provider
from opentelemetry.exporter.otlp.proto.grpc._log_exporter import OTLPLogExporter as OTLPGrpcLogExporter
from opentelemetry.exporter.otlp.proto.http._log_exporter import OTLPLogExporter as OTLPHttpLogExporter
from opentelemetry.sdk._logs import LoggerProvider, LoggingHandler
from opentelemetry.sdk._logs.export import BatchLogRecordProcessor
from opentelemetry.sdk.resources import Resource
class RSysLogHandler(logging.handlers.SysLogHandler):
    append_nul = False
@@ -133,3 +144,39 @@ if settings.COLOR_LOGS is True:
        pass
else:
    ColorHandler = logging.StreamHandler
class OTLPHandler(LoggingHandler):
    def __init__(self, endpoint=None, protocol='grpc', service_name=None, instance_id=None, auth=None, username=None, password=None):
        if not endpoint:
            raise ValueError("endpoint required")
        if auth == 'basic' and (username is None or password is None):
            raise ValueError("auth type basic requires username and password parameters")
        self.endpoint = endpoint
        self.service_name = service_name or (sys.argv[1] if len(sys.argv) > 1 else (sys.argv[0] or 'unknown_service'))
        self.instance_id = instance_id or os.uname().nodename
        logger_provider = LoggerProvider(
            resource=Resource.create(
                {
                    "service.name": self.service_name,
                    "service.instance.id": self.instance_id,
                }
            ),
        )
        set_logger_provider(logger_provider)
        headers = {}
        if auth == 'basic':
            secret = f'{username}:{password}'
            headers['Authorization'] = "Basic " + base64.b64encode(secret.encode()).decode()
        if protocol == 'grpc':
            otlp_exporter = OTLPGrpcLogExporter(endpoint=self.endpoint, insecure=True, headers=headers)
        elif protocol == 'http':
            otlp_exporter = OTLPHttpLogExporter(endpoint=self.endpoint, headers=headers)
        logger_provider.add_log_record_processor(BatchLogRecordProcessor(otlp_exporter))
        super().__init__(level=logging.NOTSET, logger_provider=logger_provider)

View File
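The `auth='basic'` branch of the handler above builds a standard HTTP Basic Authorization header from the username and password. Isolated for clarity (the helper name is illustrative, not part of the handler's API):

```python
import base64


def basic_auth_header(username, password):
    # Same construction the OTLPHandler uses for auth='basic':
    # "Authorization: Basic base64(username:password)" per RFC 7617.
    secret = f'{username}:{password}'
    return {'Authorization': 'Basic ' + base64.b64encode(secret.encode()).decode()}


header = basic_auth_header('awx', 's3cret')
# Round-trip check: decoding the payload recovers the original credentials.
assert base64.b64decode(header['Authorization'].removeprefix('Basic ')).decode() == 'awx:s3cret'
```

The exporter then sends this dict as extra headers on every OTLP export request.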

@@ -285,8 +285,6 @@ class WebSocketRelayManager(object):
        except asyncio.CancelledError:
            # Handle the case where the task was already cancelled by the time we got here.
            pass
-       except Exception as e:
-           logger.warning(f"Failed to cancel relay connection for {hostname}: {e}")

        del self.relay_connections[hostname]
@@ -297,8 +295,6 @@ class WebSocketRelayManager(object):
            self.stats_mgr.delete_remote_host_stats(hostname)
        except KeyError:
            pass
-       except Exception as e:
-           logger.warning(f"Failed to delete stats for {hostname}: {e}")

    async def run(self):
        event_loop = asyncio.get_running_loop()
@@ -306,7 +302,6 @@ class WebSocketRelayManager(object):
        self.stats_mgr = RelayWebsocketStatsManager(event_loop, self.local_hostname)
        self.stats_mgr.start()

-       # Set up a pg_notify consumer for allowing web nodes to "provision" and "deprovision" themselves gracefully.
        database_conf = deepcopy(settings.DATABASES['default'])
        database_conf['OPTIONS'] = deepcopy(database_conf.get('OPTIONS', {}))
@@ -318,79 +313,54 @@ class WebSocketRelayManager(object):
        if 'PASSWORD' in database_conf:
            database_conf['OPTIONS']['password'] = database_conf.pop('PASSWORD')

-       task = None
-
-       # Managing the async_conn here so that we can close it if we need to restart the connection
-       async_conn = None
+       async_conn = await psycopg.AsyncConnection.connect(
+           dbname=database_conf['NAME'],
+           host=database_conf['HOST'],
+           user=database_conf['USER'],
+           port=database_conf['PORT'],
+           **database_conf.get("OPTIONS", {}),
+       )
+       await async_conn.set_autocommit(True)
+
+       on_ws_heartbeat_task = event_loop.create_task(self.on_ws_heartbeat(async_conn))

        # Establishes a websocket connection to /websocket/relay on all API servers
-       try:
-           while True:
-               if not task or task.done():
-                   try:
-                       # Try to close the connection if it's open
-                       if async_conn:
-                           try:
-                               await async_conn.close()
-                           except Exception as e:
-                               logger.warning(f"Failed to close connection to database for pg_notify: {e}")
-
-                       # and re-establish the connection
-                       async_conn = await psycopg.AsyncConnection.connect(
-                           dbname=database_conf['NAME'],
-                           host=database_conf['HOST'],
-                           user=database_conf['USER'],
-                           port=database_conf['PORT'],
-                           **database_conf.get("OPTIONS", {}),
-                       )
-                       await async_conn.set_autocommit(True)
-
-                       # before creating the task that uses the connection
-                       task = event_loop.create_task(self.on_ws_heartbeat(async_conn), name="on_ws_heartbeat")
-                       logger.info("Creating `on_ws_heartbeat` task in event loop.")
-                   except Exception as e:
-                       logger.warning(f"Failed to connect to database for pg_notify: {e}")
+       while True:
+           if on_ws_heartbeat_task.done():
+               raise Exception("on_ws_heartbeat_task has exited")

            future_remote_hosts = self.known_hosts.keys()
            current_remote_hosts = self.relay_connections.keys()
            deleted_remote_hosts = set(current_remote_hosts) - set(future_remote_hosts)
            new_remote_hosts = set(future_remote_hosts) - set(current_remote_hosts)

            # This loop handles if we get an advertisement from a host we already know about but
            # the advertisement has a different IP than we are currently connected to.
            for hostname, address in self.known_hosts.items():
                if hostname not in self.relay_connections:
                    # We've picked up a new hostname that we don't know about yet.
                    continue
                if address != self.relay_connections[hostname].remote_host:
                    deleted_remote_hosts.add(hostname)
                    new_remote_hosts.add(hostname)

            # Delete any hosts with closed connections
            for hostname, relay_conn in self.relay_connections.items():
                if not relay_conn.connected:
                    deleted_remote_hosts.add(hostname)

            if deleted_remote_hosts:
                logger.info(f"Removing {deleted_remote_hosts} from websocket broadcast list")
                await asyncio.gather(*[self.cleanup_offline_host(h) for h in deleted_remote_hosts])

            if new_remote_hosts:
                logger.info(f"Adding {new_remote_hosts} to websocket broadcast list")

            for h in new_remote_hosts:
                stats = self.stats_mgr.new_remote_host_stats(h)
                relay_connection = WebsocketRelayConnection(name=self.local_hostname, stats=stats, remote_host=self.known_hosts[h])
                relay_connection.start()
                self.relay_connections[h] = relay_connection

            await asyncio.sleep(settings.BROADCAST_WEBSOCKET_NEW_INSTANCE_POLL_RATE_SECONDS)
-       finally:
-           if async_conn:
-               logger.info("Shutting down db connection for wsrelay.")
-               try:
-                   await async_conn.close()
-               except Exception as e:
-                   logger.info(f"Failed to close connection to database for pg_notify: {e}")

View File
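The reconciliation logic in the relay manager above is plain set arithmetic over hostnames: hosts advertised but not yet connected get added, hosts connected but no longer advertised get dropped. A standalone sketch of that core step (the `diff_hosts` helper is illustrative, not a function in the AWX codebase):

```python
def diff_hosts(known_hosts, connected_hosts):
    # Same set arithmetic as WebSocketRelayManager.run(): iterating a dict
    # yields its keys, so this works directly on the hostname->address maps.
    new_remote_hosts = set(known_hosts) - set(connected_hosts)
    deleted_remote_hosts = set(connected_hosts) - set(known_hosts)
    return new_remote_hosts, deleted_remote_hosts


known = {'node-a': '10.0.0.1', 'node-b': '10.0.0.2'}   # advertised via pg_notify
connected = {'node-b': '10.0.0.2', 'node-c': '10.0.0.3'}  # current relay connections
new, gone = diff_hosts(known, connected)
assert new == {'node-a'} and gone == {'node-c'}
```

The extra loop in the real code then moves a host into *both* sets when its advertised address changes, which forces a disconnect-and-reconnect to the new IP.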

@@ -114,6 +114,7 @@ MEDIA_ROOT = os.path.join(BASE_DIR, 'public', 'media')
MEDIA_URL = '/media/'

LOGIN_URL = '/api/login/'
+LOGOUT_ALLOWED_HOSTS = None

# Absolute filesystem path to the directory to host projects (with playbooks).
# This directory should not be web-accessible.
@@ -491,6 +492,7 @@ CELERYBEAT_SCHEDULE = {
    'cleanup_images': {'task': 'awx.main.tasks.system.cleanup_images_and_files', 'schedule': timedelta(hours=3)},
    'cleanup_host_metrics': {'task': 'awx.main.tasks.host_metrics.cleanup_host_metrics', 'schedule': timedelta(hours=3, minutes=30)},
    'host_metric_summary_monthly': {'task': 'awx.main.tasks.host_metrics.host_metric_summary_monthly', 'schedule': timedelta(hours=4)},
+   'periodic_resource_sync': {'task': 'awx.main.tasks.system.periodic_resource_sync', 'schedule': timedelta(minutes=15)},
}

# Django Caching Configuration
@@ -655,6 +657,10 @@ AWX_ANSIBLE_CALLBACK_PLUGINS = ""
# Automatically remove nodes that have missed their heartbeats after some time
AWX_AUTO_DEPROVISION_INSTANCES = False

+# If False, do not allow creation of resources that are shared with the platform ingress
+# e.g. organizations, teams, and users
+ALLOW_LOCAL_RESOURCE_MANAGEMENT = True

# Enable Pendo on the UI, possible values are 'off', 'anonymous', and 'detailed'
# Note: This setting may be overridden by database settings.
PENDO_TRACKING_STATE = "off"
@@ -777,6 +783,11 @@ INSIGHTS_EXCLUDE_EMPTY_GROUPS = False
TERRAFORM_INSTANCE_ID_VAR = 'id'
TERRAFORM_EXCLUDE_EMPTY_GROUPS = True

+# ------------------------
+# OpenShift Virtualization
+# ------------------------
+OPENSHIFT_VIRTUALIZATION_EXCLUDE_EMPTY_GROUPS = True

# ---------------------
# ----- Custom -----
# ---------------------
@@ -879,6 +890,7 @@ LOGGING = {
            'address': '/var/run/awx-rsyslog/rsyslog.sock',
            'filters': ['external_log_enabled', 'dynamic_level_filter', 'guid'],
        },
+       'otel': {'class': 'logging.NullHandler'},
    },
    'loggers': {
        'django': {'handlers': ['console']},
@@ -1160,9 +1172,6 @@ ANSIBLE_BASE_ROLE_SYSTEM_ACTIVATED = True
# Permissions a user will get when creating a new item
ANSIBLE_BASE_CREATOR_DEFAULTS = ['change', 'delete', 'execute', 'use', 'adhoc', 'approve', 'update', 'view']

-# This is a stopgap, will delete after resource registry integration
-ANSIBLE_BASE_SERVICE_PREFIX = "awx"

# Temporary, for old roles API compatibility, save child permissions at organization level
ANSIBLE_BASE_CACHE_PARENT_PERMISSIONS = True
@@ -1176,6 +1185,3 @@ ANSIBLE_BASE_ALLOW_SINGLETON_ROLES_API = False # Do not allow creating user-def
# system username for django-ansible-base
SYSTEM_USERNAME = None

-# Use AWX base view, to give 401 on unauthenticated requests
-ANSIBLE_BASE_CUSTOM_VIEW_PARENT = 'awx.api.generics.APIView'

File diff suppressed because it is too large

View File
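The new `ALLOW_LOCAL_RESOURCE_MANAGEMENT` setting gates whether organizations, teams, and users can be created locally rather than through the platform. A minimal sketch of how such a flag can guard resource creation (the function and its signature are hypothetical, not AWX's actual validation hook):

```python
def check_local_create_allowed(settings, resource_type):
    # Hypothetical guard: shared resource types may only be created locally
    # when ALLOW_LOCAL_RESOURCE_MANAGEMENT is enabled (its default).
    shared_types = {'organization', 'team', 'user'}
    if resource_type in shared_types and not settings.get('ALLOW_LOCAL_RESOURCE_MANAGEMENT', True):
        raise PermissionError(f'{resource_type} is managed by the platform, not locally')
    return True


# Non-shared resources are unaffected by the flag.
assert check_local_create_allowed({'ALLOW_LOCAL_RESOURCE_MANAGEMENT': False}, 'credential')
```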

@@ -7,18 +7,18 @@ from django.core.cache import cache
def test_ldap_default_settings(mocker):
    from_db = mocker.Mock(**{'order_by.return_value': []})
-   with mocker.patch('awx.conf.models.Setting.objects.filter', return_value=from_db):
+   mocker.patch('awx.conf.models.Setting.objects.filter', return_value=from_db)
    settings = LDAPSettings()
    assert settings.ORGANIZATION_MAP == {}
    assert settings.TEAM_MAP == {}


def test_ldap_default_network_timeout(mocker):
    cache.clear()  # clearing cache avoids picking up stray default for OPT_REFERRALS
    from_db = mocker.Mock(**{'order_by.return_value': []})
-   with mocker.patch('awx.conf.models.Setting.objects.filter', return_value=from_db):
+   mocker.patch('awx.conf.models.Setting.objects.filter', return_value=from_db)
    settings = LDAPSettings()
    assert settings.CONNECTION_OPTIONS[ldap.OPT_NETWORK_TIMEOUT] == 30


def test_ldap_filter_validator():

View File

@@ -62,7 +62,7 @@ function CredentialLookup({
        ? { credential_type: credentialTypeId }
        : {};
      const typeKindParams = credentialTypeKind
-       ? { credential_type__kind: credentialTypeKind }
+       ? { credential_type__kind__in: credentialTypeKind }
        : {};
      const typeNamespaceParams = credentialTypeNamespace
        ? { credential_type__namespace: credentialTypeNamespace }
@@ -125,7 +125,7 @@ function CredentialLookup({
        ? { credential_type: credentialTypeId }
        : {};
      const typeKindParams = credentialTypeKind
-       ? { credential_type__kind: credentialTypeKind }
+       ? { credential_type__kind__in: credentialTypeKind }
        : {};
      const typeNamespaceParams = credentialTypeNamespace
        ? { credential_type__namespace: credentialTypeNamespace }

View File

@@ -190,6 +190,7 @@ function NotificationList({
          name: t`Notification type`,
          key: 'or__notification_type',
          options: [
+           ['awssns', t`AWS SNS`],
            ['email', t`Email`],
            ['grafana', t`Grafana`],
            ['hipchat', t`Hipchat`],

View File

@@ -12,7 +12,7 @@ const Inner = styled.div`
  border-radius: 2px;
  color: white;
  left: 10px;
- max-width: 300px;
+ max-width: 500px;
  padding: 5px 10px;
  position: absolute;
  top: 10px;

View File

@@ -12,6 +12,7 @@ const GridDL = styled.dl`
  column-gap: 15px;
  display: grid;
  grid-template-columns: max-content;
+ overflow-wrap: anywhere;
  row-gap: 0px;

  dt {
    grid-column-start: 1;

View File

@@ -56,6 +56,10 @@ describe('<InventorySourceAdd />', () => {
          ['satellite6', 'Red Hat Satellite 6'],
          ['openstack', 'OpenStack'],
          ['rhv', 'Red Hat Virtualization'],
+         [
+           'openshift_virtualization',
+           'Red Hat OpenShift Virtualization',
+         ],
          ['controller', 'Red Hat Ansible Automation Platform'],
        ],
      },
}, },

View File

@@ -22,7 +22,9 @@ const ansibleDocUrls = {
  constructed:
    'https://docs.ansible.com/ansible/latest/collections/ansible/builtin/constructed_inventory.html',
  terraform:
-   'https://github.com/ansible-collections/cloud.terraform/blob/stable-statefile-inventory/plugins/inventory/terraform_state.py',
+   'https://github.com/ansible-collections/cloud.terraform/blob/main/docs/cloud.terraform.terraform_state_inventory.rst',
+ openshift_virtualization:
+   'https://kubevirt.io/kubevirt.core/latest/plugins/kubevirt.html',
};

const getInventoryHelpTextStrings = () => ({
@@ -121,7 +123,7 @@ const getInventoryHelpTextStrings = () => ({
            <br />
            {value && (
              <div>
                {t`If you want the Inventory Source to update on launch , click on Update on Launch,
                and also go to `}
                <Link to={`/projects/${value.id}/details`}> {value.name} </Link>
                {t`and click on Update Revision on Launch.`}
@@ -140,7 +142,7 @@ const getInventoryHelpTextStrings = () => ({
            <br />
            {value && (
              <div>
                {t`If you want the Inventory Source to update on launch , click on Update on Launch,
                and also go to `}
                <Link to={`/projects/${value.id}/details`}> {value.name} </Link>
                {t`and click on Update Revision on Launch`}

View File

@@ -26,6 +26,7 @@ import {
  TerraformSubForm,
  VMwareSubForm,
  VirtualizationSubForm,
+ OpenShiftVirtualizationSubForm,
} from './InventorySourceSubForms';

const buildSourceChoiceOptions = (options) => {
@@ -231,6 +232,15 @@ const InventorySourceFormFields = ({
                sourceOptions={sourceOptions}
              />
            ),
+           openshift_virtualization: (
+             <OpenShiftVirtualizationSubForm
+               autoPopulateCredential={
+                 !source?.id ||
+                 source?.source !== 'openshift_virtualization'
+               }
+               sourceOptions={sourceOptions}
+             />
+           ),
          }[sourceField.value]
        }
      </FormColumnLayout>

View File

@@ -0,0 +1,64 @@
import React, { useCallback } from 'react';
import { useField, useFormikContext } from 'formik';
import { t } from '@lingui/macro';
import { useConfig } from 'contexts/Config';
import getDocsBaseUrl from 'util/getDocsBaseUrl';
import CredentialLookup from 'components/Lookup/CredentialLookup';
import { required } from 'util/validators';
import {
OptionsField,
VerbosityField,
EnabledVarField,
EnabledValueField,
HostFilterField,
SourceVarsField,
} from './SharedFields';
import getHelpText from '../Inventory.helptext';
const OpenShiftVirtualizationSubForm = ({ autoPopulateCredential }) => {
const helpText = getHelpText();
const { setFieldValue, setFieldTouched } = useFormikContext();
const [credentialField, credentialMeta, credentialHelpers] =
useField('credential');
const config = useConfig();
const handleCredentialUpdate = useCallback(
(value) => {
setFieldValue('credential', value);
setFieldTouched('credential', true, false);
},
[setFieldValue, setFieldTouched]
);
const docsBaseUrl = getDocsBaseUrl(config);
return (
<>
<CredentialLookup
credentialTypeNamespace="kubernetes_bearer_token"
label={t`Credential`}
helperTextInvalid={credentialMeta.error}
isValid={!credentialMeta.touched || !credentialMeta.error}
onBlur={() => credentialHelpers.setTouched()}
onChange={handleCredentialUpdate}
value={credentialField.value}
required
autoPopulate={autoPopulateCredential}
validate={required(t`Select a value for this field`)}
/>
<VerbosityField />
<HostFilterField />
<EnabledVarField />
<EnabledValueField />
<OptionsField />
<SourceVarsField
popoverContent={helpText.sourceVars(
docsBaseUrl,
'openshift_virtualization'
)}
/>
</>
);
};
export default OpenShiftVirtualizationSubForm;

View File

@@ -0,0 +1,65 @@
import React from 'react';
import { act } from 'react-dom/test-utils';
import { Formik } from 'formik';
import { CredentialsAPI } from 'api';
import { mountWithContexts } from '../../../../../testUtils/enzymeHelpers';
import VirtualizationSubForm from './VirtualizationSubForm';
jest.mock('../../../../api');
const initialValues = {
credential: null,
overwrite: false,
overwrite_vars: false,
source_path: '',
source_project: null,
source_script: null,
source_vars: '---\n',
update_cache_timeout: 0,
update_on_launch: true,
verbosity: 1,
};
describe('<VirtualizationSubForm />', () => {
let wrapper;
beforeEach(async () => {
CredentialsAPI.read.mockResolvedValue({
data: { count: 0, results: [] },
});
await act(async () => {
wrapper = mountWithContexts(
<Formik initialValues={initialValues}>
<VirtualizationSubForm />
</Formik>
);
});
});
afterAll(() => {
jest.clearAllMocks();
});
test('should render subform fields', () => {
expect(wrapper.find('FormGroup[label="Credential"]')).toHaveLength(1);
expect(wrapper.find('FormGroup[label="Verbosity"]')).toHaveLength(1);
expect(wrapper.find('FormGroup[label="Update options"]')).toHaveLength(1);
expect(
wrapper.find('FormGroup[label="Cache timeout (seconds)"]')
).toHaveLength(1);
expect(
wrapper.find('VariablesField[label="Source variables"]')
).toHaveLength(1);
});
test('should make expected api calls', () => {
expect(CredentialsAPI.read).toHaveBeenCalledTimes(1);
expect(CredentialsAPI.read).toHaveBeenCalledWith({
credential_type__namespace: 'rhv',
order_by: 'name',
page: 1,
page_size: 5,
});
});
});

View File

@@ -87,7 +87,7 @@ const SCMSubForm = ({ autoPopulateProject }) => {
        />
      )}
      <CredentialLookup
-       credentialTypeKind="cloud"
+       credentialTypeKind="cloud,kubernetes"
        label={t`Credential`}
        value={credentialField.value}
        onChange={handleCredentialUpdate}

View File

@@ -9,3 +9,4 @@ export { default as ControllerSubForm } from './ControllerSubForm';
export { default as TerraformSubForm } from './TerraformSubForm';
export { default as VMwareSubForm } from './VMwareSubForm';
export { default as VirtualizationSubForm } from './VirtualizationSubForm';
+export { default as OpenShiftVirtualizationSubForm } from './OpenShiftVirtualizationSubForm';

View File

@@ -138,6 +138,25 @@ function NotificationTemplateDetail({ template, defaultMessages }) {
            }
            dataCy="nt-detail-type"
          />
{template.notification_type === 'awssns' && (
<>
<Detail
label={t`AWS Region`}
value={configuration.aws_region}
dataCy="nt-detail-aws-region"
/>
<Detail
label={t`Access Key ID`}
value={configuration.aws_access_key_id}
dataCy="nt-detail-aws-access-key-id"
/>
<Detail
label={t`SNS Topic ARN`}
value={configuration.sns_topic_arn}
dataCy="nt-detail-sns-topic-arn"
/>
</>
)}
          {template.notification_type === 'email' && (
            <>
              <Detail
@@ -455,8 +474,8 @@ function NotificationTemplateDetail({ template, defaultMessages }) {
}

function CustomMessageDetails({ messages, defaults, type }) {
- const showMessages = type !== 'webhook';
- const showBodies = ['email', 'pagerduty', 'webhook'].includes(type);
+ const showMessages = !['awssns', 'webhook'].includes(type);
+ const showBodies = ['email', 'pagerduty', 'webhook', 'awssns'].includes(type);

  return (
    <>
<> <>

View File

@@ -120,7 +120,7 @@ function NotificationTemplatesList() {
      toolbarSearchColumns={[
        {
          name: t`Name`,
-         key: 'name',
+         key: 'name__icontains',
          isDefault: true,
        },
        {
@@ -131,6 +131,7 @@ function NotificationTemplatesList() {
          name: t`Notification type`,
          key: 'or__notification_type',
          options: [
+           ['awssns', t`AWS SNS`],
            ['email', t`Email`],
            ['grafana', t`Grafana`],
            ['hipchat', t`Hipchat`],

View File

@@ -1,5 +1,6 @@
/* eslint-disable-next-line import/prefer-default-export */
export const NOTIFICATION_TYPES = {
+ awssns: 'AWS SNS',
  email: 'Email',
  grafana: 'Grafana',
  irc: 'IRC',

View File

@@ -11,8 +11,8 @@ import getDocsBaseUrl from 'util/getDocsBaseUrl';
function CustomMessagesSubForm({ defaultMessages, type }) {
  const [useCustomField, , useCustomHelpers] = useField('useCustomMessages');
- const showMessages = type !== 'webhook';
- const showBodies = ['email', 'pagerduty', 'webhook'].includes(type);
+ const showMessages = !['webhook', 'awssns'].includes(type);
+ const showBodies = ['email', 'pagerduty', 'webhook', 'awssns'].includes(type);
  const { setFieldValue } = useFormikContext();
  const config = useConfig();

View File

@@ -78,6 +78,7 @@ function NotificationTemplateFormFields({ defaultMessages, template }) {
 label: t`Choose a Notification Type`,
 isDisabled: true,
 },
+{ value: 'awssns', key: 'awssns', label: t`AWS SNS` },
 { value: 'email', key: 'email', label: t`E-mail` },
 { value: 'grafana', key: 'grafana', label: 'Grafana' },
 { value: 'irc', key: 'irc', label: 'IRC' },

View File

@@ -29,6 +29,7 @@ import Popover from '../../../components/Popover/Popover';
 import getHelpText from './Notifications.helptext';
 const TypeFields = {
+awssns: AWSSNSFields,
 email: EmailFields,
 grafana: GrafanaFields,
 irc: IRCFields,
@@ -58,6 +59,44 @@ TypeInputsSubForm.propTypes = {
 export default TypeInputsSubForm;
+function AWSSNSFields() {
+return (
+<>
+<FormField
+id="awssns-aws-region"
+label={t`AWS Region`}
+name="notification_configuration.aws_region"
+type="text"
+isRequired
+/>
+<FormField
+id="awssns-aws-access-key-id"
+label={t`Access Key ID`}
+name="notification_configuration.aws_access_key_id"
+type="text"
+/>
+<PasswordField
+id="awssns-aws-secret-access-key"
+label={t`Secret Access Key`}
+name="notification_configuration.aws_secret_access_key"
+/>
+<PasswordField
+id="awssns-aws-session-token"
+label={t`Session Token`}
+name="notification_configuration.aws_session_token"
+/>
+<FormField
+id="awssns-sns-topic-arn"
+label={t`SNS Topic ARN`}
+name="notification_configuration.sns_topic_arn"
+type="text"
+validate={required(null)}
+isRequired
+/>
+</>
+);
+}
 function EmailFields() {
 const helpText = getHelpText();
 return (

View File

@@ -203,6 +203,39 @@
 }
 }
 },
+"awssns": {
+"started": {
+"body": "{{ job_metadata }}"
+},
+"success": {
+"body": "{{ job_metadata }}"
+},
+"error": {
+"body": "{{ job_metadata }}"
+},
+"workflow_approval": {
+"running": {
+"body": {
+"body": "The approval node \"{{ approval_node_name }}\" needs review. This node can be viewed at: {{ workflow_url }}"
+}
+},
+"approved": {
+"body": {
+"body": "The approval node \"{{ approval_node_name }}\" was approved. {{ workflow_url }}"
+}
+},
+"timed_out": {
+"body": {
+"body": "The approval node \"{{ approval_node_name }}\" has timed out. {{ workflow_url }}"
+}
+},
+"denied": {
+"body": {
+"body": "The approval node \"{{ approval_node_name }}\" was denied. {{ workflow_url }}"
+}
+}
+}
+},
 "mattermost": {
 "started": {
 "message": "{{ job_friendly_name }} #{{ job.id }} '{{ job.name }}' {{ job.status }}: {{ url }}",
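The `awssns` message bodies above are Jinja-style templates (`{{ job_metadata }}`, `{{ workflow_url }}`, and so on). As a rough, illustrative stand-in for how such placeholders get filled in (AWX's actual rendering uses a sandboxed Jinja2 environment, not this regex):

```python
import re

def render_body(template, context):
    """Tiny stand-in for template rendering: swap {{ name }} placeholders
    for values from context, leaving unknown names untouched."""
    def repl(match):
        name = match.group(1)
        return str(context[name]) if name in context else match.group(0)
    return re.sub(r'\{\{\s*(\w+)\s*\}\}', repl, template)

body = 'The approval node "{{ approval_node_name }}" needs review. This node can be viewed at: {{ workflow_url }}'
print(render_body(body, {'approval_node_name': 'deploy-gate', 'workflow_url': 'https://awx.example.com/#/workflows/42'}))
```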

View File

@@ -1,4 +1,11 @@
 const typeFieldNames = {
+awssns: [
+'aws_region',
+'aws_access_key_id',
+'aws_secret_access_key',
+'aws_session_token',
+'sns_topic_arn',
+],
 email: [
 'username',
 'password',

View File

@@ -374,6 +374,7 @@ export const CredentialType = shape({
 });
 export const NotificationType = oneOf([
+'awssns',
 'email',
 'grafana',
 'irc',

View File

@@ -17,7 +17,7 @@ import time
 import re
 from json import loads, dumps
 from os.path import isfile, expanduser, split, join, exists, isdir
-from os import access, R_OK, getcwd, environ
+from os import access, R_OK, getcwd, environ, getenv
 try:
@@ -107,7 +107,7 @@ class ControllerModule(AnsibleModule):
 # Perform magic depending on whether controller_oauthtoken is a string or a dict
 if self.params.get('controller_oauthtoken'):
 token_param = self.params.get('controller_oauthtoken')
-if type(token_param) is dict:
+if isinstance(token_param, dict):
 if 'token' in token_param:
 self.oauth_token = self.params.get('controller_oauthtoken')['token']
 else:
@@ -148,9 +148,10 @@ class ControllerModule(AnsibleModule):
 # Make sure we start with /api/vX
 if not endpoint.startswith("/"):
 endpoint = "/{0}".format(endpoint)
-prefix = self.url_prefix.rstrip("/")
-if not endpoint.startswith(prefix + "/api/"):
-endpoint = prefix + "/api/v2{0}".format(endpoint)
+hostname_prefix = self.url_prefix.rstrip("/")
+api_path = self.api_path()
+if not endpoint.startswith(hostname_prefix + api_path):
+endpoint = hostname_prefix + f"{api_path}v2{endpoint}"
 if not endpoint.endswith('/') and '?' not in endpoint:
 endpoint = "{0}/".format(endpoint)
@@ -215,7 +216,7 @@ class ControllerModule(AnsibleModule):
 try:
 config_data = yaml.load(config_string, Loader=yaml.SafeLoader)
 # If this is an actual ini file, yaml will return the whole thing as a string instead of a dict
-if type(config_data) is not dict:
+if not isinstance(config_data, dict):
 raise AssertionError("The yaml config file is not properly formatted as a dict.")
 try_config_parsing = False
@@ -257,7 +258,7 @@ class ControllerModule(AnsibleModule):
 if honorred_setting in config_data:
 # Veriffy SSL must be a boolean
 if honorred_setting == 'verify_ssl':
-if type(config_data[honorred_setting]) is str:
+if isinstance(config_data[honorred_setting], str):
 setattr(self, honorred_setting, strtobool(config_data[honorred_setting]))
 else:
 setattr(self, honorred_setting, bool(config_data[honorred_setting]))
@@ -603,6 +604,14 @@ class ControllerAPIModule(ControllerModule):
 status_code = response.status
 return {'status_code': status_code, 'json': response_json}
+def api_path(self):
+default_api_path = "/api/"
+if self._COLLECTION_TYPE != "awx":
+default_api_path = "/api/controller/"
+prefix = getenv('CONTROLLER_OPTIONAL_API_URLPATTERN_PREFIX', default_api_path)
+return prefix
 def authenticate(self, **kwargs):
 if self.username and self.password:
 # Attempt to get a token from /api/v2/tokens/ by giving it our username/password combo
@@ -613,7 +622,7 @@ class ControllerAPIModule(ControllerModule):
 "scope": "write",
 }
 # Preserve URL prefix
-endpoint = self.url_prefix.rstrip('/') + '/api/v2/tokens/'
+endpoint = self.url_prefix.rstrip('/') + f'{self.api_path()}v2/tokens/'
 # Post to the tokens endpoint with baisc auth to try and get a token
 api_token_url = (self.url._replace(path=endpoint)).geturl()
@@ -1002,7 +1011,7 @@ class ControllerAPIModule(ControllerModule):
 if self.authenticated and self.oauth_token_id:
 # Attempt to delete our current token from /api/v2/tokens/
 # Post to the tokens endpoint with baisc auth to try and get a token
-endpoint = self.url_prefix.rstrip('/') + '/api/v2/tokens/{0}/'.format(self.oauth_token_id)
+endpoint = self.url_prefix.rstrip('/') + f'{self.api_path()}v2/tokens/{self.oauth_token_id}/'
 api_token_url = (self.url._replace(path=endpoint, query=None)).geturl()  # in error cases, fail_json exists before exception handling
 try:
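The `api_path()` method added above lets the same module code target both AWX (`/api/`) and downstream controller builds (`/api/controller/`), with the `CONTROLLER_OPTIONAL_API_URLPATTERN_PREFIX` environment variable overriding both. A standalone sketch of that resolution logic, where `collection_type` is an illustrative stand-in for the class attribute `_COLLECTION_TYPE`:

```python
import os

def resolve_api_path(collection_type, env=None):
    """Mirror of the api_path() logic: the awx collection keeps /api/, other
    collection builds default to /api/controller/, and the environment
    variable overrides both."""
    if env is None:
        env = os.environ
    default = '/api/' if collection_type == 'awx' else '/api/controller/'
    return env.get('CONTROLLER_OPTIONAL_API_URLPATTERN_PREFIX', default)

# Endpoint construction as in authenticate():
url_prefix = '/'
endpoint = url_prefix.rstrip('/') + f"{resolve_api_path('awx', {})}v2/tokens/"
print(endpoint)  # /api/v2/tokens/
```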

View File

@@ -163,7 +163,7 @@ def main():
 for arg in ['job_type', 'limit', 'forks', 'verbosity', 'extra_vars', 'become_enabled', 'diff_mode']:
 if module.params.get(arg):
 # extra_var can receive a dict or a string, if a dict covert it to a string
-if arg == 'extra_vars' and type(module.params.get(arg)) is not str:
+if arg == 'extra_vars' and not isinstance(module.params.get(arg), str):
 post_data[arg] = json.dumps(module.params.get(arg))
 else:
 post_data[arg] = module.params.get(arg)
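The `type(...) is not str` to `not isinstance(..., str)` changes throughout this diff follow the usual Python idiom: `isinstance` also accepts subclasses, which identity checks on `type()` reject. A small sketch of the `extra_vars` handling (function and class names are illustrative):

```python
import json

def prepare_extra_vars(value):
    """extra_vars may be a dict or a string; non-strings are serialized to
    JSON, mirroring the module change above."""
    if not isinstance(value, str):
        return json.dumps(value)
    return value

class TaggedStr(str):
    """A str subclass: type(x) is str is False for it, isinstance() still passes."""
    pass

print(prepare_extra_vars({'a': 1}))          # {"a": 1}
print(prepare_extra_vars(TaggedStr('a=1')))  # a=1
```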

View File

@@ -121,6 +121,7 @@ def main():
 client_type = module.params.get('client_type')
 organization = module.params.get('organization')
 redirect_uris = module.params.get('redirect_uris')
+skip_authorization = module.params.get('skip_authorization')
 state = module.params.get('state')
 # Attempt to look up the related items the user specified (these will fail the module if not found)
@@ -146,6 +147,8 @@ def main():
 application_fields['description'] = description
 if redirect_uris is not None:
 application_fields['redirect_uris'] = ' '.join(redirect_uris)
+if skip_authorization is not None:
+application_fields['skip_authorization'] = skip_authorization
 response = module.create_or_update_if_needed(application, application_fields, endpoint='applications', item_type='application', auto_exit=False)
 if 'client_id' in response:

View File

@@ -56,7 +56,7 @@ import logging
 # In this module we don't use EXPORTABLE_RESOURCES, we just want to validate that our installed awxkit has import/export
 try:
-from awxkit.api.pages.api import EXPORTABLE_RESOURCES  # noqa
+from awxkit.api.pages.api import EXPORTABLE_RESOURCES  # noqa: F401; pylint: disable=unused-import
 HAS_EXPORTABLE_RESOURCES = True
 except ImportError:

View File

@@ -42,7 +42,8 @@ options:
 source:
 description:
 - The source to use for this group.
-choices: [ "scm", "ec2", "gce", "azure_rm", "vmware", "satellite6", "openstack", "rhv", "controller", "insights" ]
+choices: [ "scm", "ec2", "gce", "azure_rm", "vmware", "satellite6", "openstack", "rhv", "controller", "insights", "terraform",
+"openshift_virtualization" ]
 type: str
 source_path:
 description:
@@ -170,7 +171,22 @@ def main():
 #
 # How do we handle manual and file? The controller does not seem to be able to activate them
 #
-source=dict(choices=["scm", "ec2", "gce", "azure_rm", "vmware", "satellite6", "openstack", "rhv", "controller", "insights"]),
+source=dict(
+choices=[
+"scm",
+"ec2",
+"gce",
+"azure_rm",
+"vmware",
+"satellite6",
+"openstack",
+"rhv",
+"controller",
+"insights",
+"terraform",
+"openshift_virtualization",
+]
+),
 source_path=dict(),
 source_vars=dict(type='dict'),
 enabled_var=dict(),

View File

@@ -50,6 +50,7 @@ options:
 description:
 - The type of notification to be sent.
 choices:
+- 'awssns'
 - 'email'
 - 'grafana'
 - 'irc'
@@ -219,7 +220,7 @@ def main():
 copy_from=dict(),
 description=dict(),
 organization=dict(),
-notification_type=dict(choices=['email', 'grafana', 'irc', 'mattermost', 'pagerduty', 'rocketchat', 'slack', 'twilio', 'webhook']),
+notification_type=dict(choices=['awssns', 'email', 'grafana', 'irc', 'mattermost', 'pagerduty', 'rocketchat', 'slack', 'twilio', 'webhook']),
 notification_configuration=dict(type='dict'),
 messages=dict(type='dict'),
 state=dict(choices=['present', 'absent', 'exists'], default='present'),
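On the module side, only the `choices` list gates the new type; the `notification_configuration` dict is passed through as-is. A hedged sketch of what validating an `awssns` configuration could look like (the field list is taken from this diff; the validator itself is illustrative, not AWX code):

```python
# Configuration keys for the awssns notification type, as added in this diff
AWSSNS_FIELDS = {'aws_region', 'aws_access_key_id', 'aws_secret_access_key',
                 'aws_session_token', 'sns_topic_arn'}
REQUIRED = {'aws_region', 'sns_topic_arn'}  # marked isRequired in the UI form above

def check_awssns_config(config):
    """Return a list of problems with an awssns notification_configuration."""
    problems = [f"unknown key: {k}" for k in sorted(config.keys() - AWSSNS_FIELDS)]
    problems += [f"missing required key: {k}" for k in sorted(REQUIRED - config.keys())]
    return problems

print(check_awssns_config({'aws_region': 'us-east-1',
                           'sns_topic_arn': 'arn:aws:sns:us-east-1:123456789012:alerts'}))  # []
```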

View File

@@ -19,7 +19,7 @@ from ansible.module_utils.six import raise_from
 from ansible_base.rbac.models import RoleDefinition, DABPermission
 from awx.main.tests.functional.conftest import _request
-from awx.main.tests.functional.conftest import credentialtype_scm, credentialtype_ssh  # noqa: F401; pylint: disable=unused-variable
+from awx.main.tests.functional.conftest import credentialtype_scm, credentialtype_ssh  # noqa: F401; pylint: disable=unused-import
 from awx.main.models import (
 Organization,
 Project,

View File

@@ -0,0 +1 @@
+plugins/modules/export.py validate-modules:nonexistent-parameter-documented # needs awxkit to construct argspec

View File

@@ -317,7 +317,10 @@ class ApiV2(base.Base):
 if asset['natural_key']['type'] == 'project' and 'local_path' in post_data and _page['scm_type'] == post_data['scm_type']:
 del post_data['local_path']
-_page = _page.put(post_data)
+if asset['natural_key']['type'] == 'user':
+_page = _page.patch(**post_data)
+else:
+_page = _page.put(post_data)
 changed = True
 except (exc.Common, AssertionError) as e:
 identifier = asset.get("name", None) or asset.get("username", None) or asset.get("hostname", None)
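Switching user assets from PUT to PATCH matters because PUT replaces the whole writable representation (which for users would mean re-sending every field), while PATCH updates only the supplied fields. A minimal sketch of the difference in semantics:

```python
def apply_put(resource, payload):
    """PUT semantics: the payload replaces the writable representation."""
    return dict(payload)

def apply_patch(resource, payload):
    """PATCH semantics: only the supplied fields change."""
    merged = dict(resource)
    merged.update(payload)
    return merged

user = {'username': 'alice', 'password': '***', 'is_superuser': False}
print(apply_patch(user, {'is_superuser': True}))  # other fields survive
print(apply_put(user, {'is_superuser': True}))    # everything else is dropped
```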

View File

@@ -11,7 +11,7 @@ from . import page
 job_results = ('any', 'error', 'success')
-notification_types = ('email', 'irc', 'pagerduty', 'slack', 'twilio', 'webhook', 'mattermost', 'grafana', 'rocketchat')
+notification_types = ('awssns', 'email', 'irc', 'pagerduty', 'slack', 'twilio', 'webhook', 'mattermost', 'grafana', 'rocketchat')
 class NotificationTemplate(HasCopy, HasCreate, base.Base):
@@ -58,7 +58,10 @@ class NotificationTemplate(HasCopy, HasCreate, base.Base):
 if payload.notification_configuration == {}:
 services = config.credentials.notification_services
-if notification_type == 'email':
+if notification_type == 'awssns':
+fields = ('aws_region', 'aws_access_key_id', 'aws_secret_access_key', 'aws_session_token', 'sns_topic_arn')
+cred = services.awssns
+elif notification_type == 'email':
 fields = ('host', 'username', 'password', 'port', 'use_ssl', 'use_tls', 'sender', 'recipients')
 cred = services.email
 elif notification_type == 'irc':

View File

@@ -0,0 +1,48 @@
+import logging
+
+# from awxkit.api.mixins import DSAdapter, HasCreate, HasCopy
+# from awxkit.api.pages import (
+#     Credential,
+#     Organization,
+# )
+from awxkit.api.resources import resources
+
+# from awxkit.utils import random_title, PseudoNamespace, filter_by_class
+from . import base
+from . import page
+
+log = logging.getLogger(__name__)
+
+
+class RoleTeamAssignment(base.Base):
+    NATURAL_KEY = ('team', 'content_object', 'role_definition')
+
+
+page.register_page(
+    [resources.role_team_assignment, (resources.role_definition_team_assignments, 'post'), (resources.role_team_assignments, 'post')], RoleTeamAssignment
+)
+
+
+class RoleUserAssignment(base.Base):
+    NATURAL_KEY = ('user', 'content_object', 'role_definition')
+
+
+page.register_page(
+    [resources.role_user_assignment, (resources.role_definition_user_assignments, 'post'), (resources.role_user_assignments, 'post')], RoleUserAssignment
+)
+
+
+class RoleTeamAssignments(page.PageList, RoleTeamAssignment):
+    pass
+
+
+page.register_page([resources.role_definition_team_assignments, resources.role_team_assignments], RoleTeamAssignments)
+
+
+class RoleUserAssignments(page.PageList, RoleUserAssignment):
+    pass
+
+
+page.register_page([resources.role_definition_user_assignments, resources.role_user_assignments], RoleUserAssignments)
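The `register_page` calls above map resource URL patterns, optionally paired with an HTTP method, to page classes. A toy version of that registry pattern (names are illustrative; the real awxkit registry also resolves paths against the patterns as regexes):

```python
# Toy registry mapping (pattern, method) keys to page classes
_registry = {}

def register_page(patterns, page_class):
    for entry in patterns:
        pattern, method = entry if isinstance(entry, tuple) else (entry, None)
        _registry[(pattern, method)] = page_class

def lookup(pattern, method=None):
    # method-specific registration wins; fall back to the method-agnostic one
    return _registry.get((pattern, method)) or _registry.get((pattern, None))

class RoleUserAssignment:
    pass

class RoleUserAssignments:
    pass

register_page([r'role_user_assignments/\d+/', (r'role_user_assignments/', 'post')], RoleUserAssignment)
register_page([r'role_user_assignments/'], RoleUserAssignments)

print(lookup(r'role_user_assignments/', 'post').__name__)  # RoleUserAssignment
print(lookup(r'role_user_assignments/').__name__)          # RoleUserAssignments
```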

View File

@@ -0,0 +1,30 @@
+import logging
+
+# from awxkit.api.mixins import DSAdapter, HasCreate, HasCopy
+# from awxkit.api.pages import (
+#     Credential,
+#     Organization,
+# )
+from awxkit.api.resources import resources
+
+# from awxkit.utils import random_title, PseudoNamespace, filter_by_class
+from . import base
+from . import page
+
+log = logging.getLogger(__name__)
+
+
+class RoleDefinition(base.Base):
+    NATURAL_KEY = ('name',)
+
+
+page.register_page([resources.role_definition, (resources.role_definitions, 'post')], RoleDefinition)
+
+
+class RoleDefinitions(page.PageList, RoleDefinition):
+    pass
+
+
+page.register_page([resources.role_definitions], RoleDefinitions)

View File

@@ -197,6 +197,14 @@ class Resources(object):
 _related_users = r'\w+/\d+/users/'
 _related_workflow_job_templates = r'\w+/\d+/workflow_job_templates/'
 _role = r'roles/\d+/'
+_role_definition = r'role_definitions/\d+/'
+_role_definitions = r'role_definitions/'
+_role_definition_team_assignments = r'role_definitions/\d+/team_assignments/'
+_role_definition_user_assignments = r'role_definitions/\d+/user_assignments/'
+_role_team_assignment = r'role_team_assignments/\d+/'
+_role_team_assignments = r'role_team_assignments/'
+_role_user_assignment = r'role_user_assignments/\d+/'
+_role_user_assignments = r'role_user_assignments/'
 _roles = 'roles/'
 _roles_related_teams = r'roles/\d+/teams/'
 _schedule = r'schedules/\d+/'
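Each `Resources` attribute above is a regular expression matched against API paths. A quick check of the new role-definition patterns with `re.fullmatch`:

```python
import re

# New role-definition resource patterns added in this diff
patterns = {
    'role_definition': r'role_definitions/\d+/',
    'role_definitions': r'role_definitions/',
    'role_definition_user_assignments': r'role_definitions/\d+/user_assignments/',
    'role_user_assignment': r'role_user_assignments/\d+/',
}

def matches(name, path):
    return re.fullmatch(patterns[name], path) is not None

print(matches('role_definition', 'role_definitions/7/'))                                    # True
print(matches('role_definition_user_assignments', 'role_definitions/7/user_assignments/'))  # True
print(matches('role_definition', 'role_definitions/'))                                      # False
```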

View File

@@ -185,7 +185,7 @@ def format_human(output, fmt):
 def format_num(v):
 try:
-return locale.format("%.*f", (0, int(v)), True)
+return locale.format_string("%.*f", (0, int(v)), True)
 except (ValueError, TypeError):
 if isinstance(v, (list, dict)):
 return json.dumps(v)
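`locale.format` was deprecated since Python 3.7 and removed in 3.12; `locale.format_string` takes the same `(format, val, grouping)` arguments, so this is a drop-in change:

```python
import locale

# locale.format() is gone in Python 3.12; format_string() accepts the same
# (format, val, grouping) arguments used in format_num() above.
locale.setlocale(locale.LC_ALL, 'C')  # assumption: portable C locale (grouping is a no-op here)
print(locale.format_string("%.*f", (0, int("42")), True))  # prints "42"
```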

View File

@@ -23,7 +23,7 @@ idna==3.4
 # via requests
 imagesize==1.4.1
 # via sphinx
-jinja2==3.1.3
+jinja2==3.1.4
 # via
 # -r requirements.in
 # sphinx

Binary files not shown: nine images added (48 to 95 KiB each).

Some files were not shown because too many files have changed in this diff.