Compare commits


123 Commits

Author SHA1 Message Date
Seth Foster
94e5795dfc Prevent assigning credential to user of other org (#15296)
Utilizes the `validate_role_assignment` callback
from dab (see dab PR #490) to prevent granting credential
access to a user of another organization.

This logic will work for role_user_assignments
and role_team_assignments endpoints.

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2024-07-02 21:05:22 +00:00
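A rough sketch of what such a model-level callback can look like, assuming DAB invokes `validate_role_assignment` on the object being assigned; the organization-membership check below is illustrative, not the exact AWX implementation.

```
# Illustrative only: DAB (see dab PR #490) calls this hook on the model when
# a role assignment is created; the membership lookup shown is an assumption.
from rest_framework.exceptions import ValidationError

def validate_role_assignment(self, actor, role_definition):
    # self is the Credential being assigned; actor is a user or team
    if self.organization and actor._meta.model_name == 'user':
        if not self.organization.member_role.members.filter(pk=actor.pk).exists():
            raise ValidationError({'detail': 'Cannot assign credential to a user of another organization.'})
```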
Alan Rominger
c4688d6298 Add in missing read permissions for organization audit role (#15318)
* Add in missing read permissions for organization audit role

* Add missing audit permission, special case name handling
2024-07-02 15:20:40 -04:00
TVo
6763badea3 Added new OpenShift Virtualization inventory source to docs. (#15299)
* Added new OpenShift Virtualization inventory source to docs.

* Incorporated review feedback from @fosterseth and @TheRealHaoLiu.

* Fixed link to correct kubevirt.core.kubevirt documentation.
2024-07-01 11:47:39 -06:00
Hao Liu
2c4ad6ef0f Add better 403 error message for Job template create (#15307)
* Add better 403 error message for Job template create

To create a Job template you need access to projects and inventory

---------

Co-authored-by: Chris Meyers <chris.meyers.fsu@gmail.com>
2024-07-01 15:02:07 +00:00
Hao Liu
37f44d7214 Add better error message for wfjt create 403 (#15309) 2024-07-01 10:50:49 -04:00
Alan Rominger
98bbc836a6 Fix server error from DAB ValidationError with strings (#15312) 2024-07-01 10:11:22 -04:00
Alan Rominger
b59aff50dc Update ExecutionEnvironment model so object-level roles work with DAB RBAC system (#15289)
* Add initial test for deletion of stale permission

* Delete existing EE view permission

* Hypothetically complete update of EE model permissions setup

* Tests passing locally

* Issue with user_capabilities was a test bug, fixed
2024-06-28 16:09:42 -04:00
Alan Rominger
a70b0c1ddc Do not use cache in github image build action (#15308)
* Do not use cache in actual image build action

* Add cache args to kube prod builds
2024-06-28 09:52:59 -04:00
Alan Rominger
db72c9d5b8 Fix permissions that come from an external auditor role (#15291)
* Add tests for external auditor

* Add assertion for unified JTs which fails

* Fix UJT listing bug

* Add test for ad hoc commands just to be sure
2024-06-27 15:57:39 -04:00
jamesmarshall24
4e0d19914f LISTENER_DATABASES clobbers DATABASES OPTIONS (#15306)
Do not overwrite DATABASES OPTIONS with LISTENER_DATABASES
2024-06-27 13:26:30 -04:00
Hao Liu
6f2307f50e Add TASK_MANAGER_LOCK_TIMEOUT (#15300)
* Add TASK_MANAGER_LOCK_TIMEOUT

`TASK_MANAGER_LOCK_TIMEOUT` controls the `idle_in_transaction_session_timeout` and `idle_session_timeout` configuration for task manager database connections and locks

The goal is to prevent the situation where the task instance that holds the lock becomes unresponsive, preventing other instances from running the task manager

* Add session timeout to periodic scheduler and all sub task manager locks
2024-06-27 09:42:41 -04:00
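A minimal sketch of how such a timeout maps onto the PostgreSQL session settings named above, assuming the setting is given in seconds; the helper is hypothetical, not the actual AWX code.

```
# Hypothetical helper: apply the timeout to the current database session.
# PostgreSQL takes these values in milliseconds; idle_session_timeout
# requires PostgreSQL 14+.
from django.db import connection

def apply_task_manager_timeouts(timeout_seconds):
    timeout_ms = int(timeout_seconds) * 1000
    with connection.cursor() as cursor:
        # If the session holding the task manager lock goes idle, PostgreSQL
        # terminates it and the lock is released for other instances.
        cursor.execute(f"SET idle_in_transaction_session_timeout = {timeout_ms}")
        cursor.execute(f"SET idle_session_timeout = {timeout_ms}")
```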
Alan Rominger
dbc2215bb6 Make attached user models adhere to new API assignments (#15298) 2024-06-26 23:00:25 -04:00
Hao Liu
7c08b29827 Temporary workaround for CI failure (#15305)
Workaround
```
ERROR awx/main/tests/functional/test_licenses.py - pip._vendor.distlib.DistlibException: Unable to locate finder for 'pip._vendor.distlib'
```
2024-06-26 15:29:22 -04:00
TVo
407194d320 Added troubleshooting and tips and tricks content (#15212)
* Added troubleshooting and tips and tricks content

* Added troubleshooting and tips and tricks content

* Moved DNS host entry override info to the customize pod spec section of the CG chapter.

* Added troubleshooting and tips and tricks content

* Moved DNS host entry override info to the customize pod spec section of the CG chapter.

* Update docs/docsite/rst/administration/containers_instance_groups.rst

Co-authored-by: Seth Foster <fosterseth@users.noreply.github.com>

* Update docs/docsite/rst/administration/containers_instance_groups.rst

Co-authored-by: Seth Foster <fosterseth@users.noreply.github.com>

* Update docs/docsite/rst/administration/containers_instance_groups.rst

Co-authored-by: Sandra McCann <samccann@redhat.com>

* Incorp'd review feedback from @fosterseth and @samccann

* Update docs/docsite/rst/administration/containers_instance_groups.rst

Co-authored-by: Sandra McCann <samccann@redhat.com>

* Final revisions based on @fosterseth's inputs.

---------

Co-authored-by: Seth Foster <fosterseth@users.noreply.github.com>
Co-authored-by: Sandra McCann <samccann@redhat.com>
2024-06-24 12:17:31 -06:00
Alan Rominger
853af295d9 Various RBAC fixes related to managed RoleDefinitions (#15287)
* Add migration testing for certain managed roles

* Fix managed role bugs

* Add more tests

* Fix another bug with org workflow admin role reference

* Add test because another issue is fixed

* Mark reason for test

* Remove internal markers

* Reword failure message

Co-authored-by: Seth Foster <fosterseth@users.noreply.github.com>

---------

Co-authored-by: Seth Foster <fosterseth@users.noreply.github.com>
2024-06-21 09:29:34 -04:00
Alan Rominger
4738c8333a Fix object-level permission bugs with DAB RBAC system (#15284)
* Fix object-level permission bugs with DAB RBAC system

* Fix NT organization change regression

* Mark tests to AAP number
2024-06-20 16:34:34 -04:00
Seth Foster
13dcea0afd Check for admin_role in role_check.py (#15283)
Script was falsely identifying cross-linked
parents. It needs to check parent roles whose
content type is Team and whose role_field is
member_role OR admin_role.

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2024-06-20 14:04:04 -04:00
Chris Meyers
bc2d339981 Clarify the search for a proxy 2024-06-18 16:41:45 -04:00
Chris Meyers
bef9ef10bb Rename delete
* Include a bit of context in the name of the delete function. The
  prepended HTTP_ string may be unexpected if Django's header
  transformation isn't top of mind.
2024-06-18 16:41:45 -04:00
Chris Meyers
8645fe5c57 Add support for x-trusted-proxy
* Increase the surface area of the set of headers that the proxy list
  feature looks at for the remote proxy IF x-trusted-proxy is valid.
2024-06-18 16:41:45 -04:00
Chris Meyers
b93aa20362 Revert "Trust proxy headers for host provision callback"
This reverts commit 49e3971cd577127705fc0fd1d3b4ab7e3a3c3c2b.
2024-06-18 16:41:45 -04:00
Chris Meyers
4bbfc8a946 Tests for trust proxy and existing explicit proxy
* Integration tests to exercise the interaction of the two features.
2024-06-18 16:41:45 -04:00
Chris Meyers
2c8eef413b Trust proxy headers for host provision callback
* Do not remove special header list if request is from a trusted proxy.
* Continue to remove headers if the request is from a non-trusted proxy.
2024-06-18 16:41:45 -04:00
Alan Rominger
d5bad1a533 Pass the Makefile python exe to ansible-playbook (#15282) 2024-06-18 13:03:01 -04:00
Alan Rominger
f6c0effcb2 Use public methods to reference registered models (#15277) 2024-06-17 11:45:44 -04:00
Chad Ferman
31a086b11a Add OpenShift Virtualization Inventory source option (#15047)
Co-authored-by: Hao Liu <44379968+TheRealHaoLiu@users.noreply.github.com>
2024-06-14 13:38:37 -04:00
a_nackov
d94f766fcb Fix notification name search (#15231)
Signed-off-by: Adrian Nackov <adrian.nackov@mail.schwarz>
2024-06-13 14:49:54 +00:00
Viktor Varga
a7113549eb Add 'Terraform State' inventory source support for collection (#15258) 2024-06-12 19:22:21 +00:00
Jake Jackson
bfd811f408 Upgrade aiohttp for cve 2024-23829 (#15257) 2024-06-12 19:20:40 +00:00
Jeff Bradberry
030704a9e1 Change all uses of ImplicitRoleField to do on_delete=SET_NULL
This mitigates the problem where, if any Role got deleted for some
weird reason, it could previously cascade-delete important objects.
2024-06-12 13:08:03 -04:00
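A minimal sketch of the field change, assuming `ImplicitRoleField` subclasses Django's `ForeignKey`; the constructor shown is illustrative.

```
# Illustrative sketch: force SET_NULL so that deleting a Role nulls the
# reference instead of cascade-deleting the resource that points at it.
from django.db import models

class ImplicitRoleField(models.ForeignKey):
    def __init__(self, *args, **kwargs):
        kwargs.setdefault('to', 'main.Role')
        kwargs['null'] = True  # SET_NULL requires a nullable column
        kwargs['on_delete'] = models.SET_NULL
        super().__init__(*args, **kwargs)
```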
Seth Foster
c312d9bce3 Rename setting to allow local resource management (#15269)
rename AWX_DIRECT_SHARED_RESOURCE_MANAGEMENT_ENABLED
to
ALLOW_LOCAL_RESOURCE_MANAGEMENT

- clearer meaning
- drop prefix so the same setting is used across the platform

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2024-06-11 12:50:18 -04:00
Jeff Bradberry
aadcc217eb This should deal correctly with the ancestor list mismatches 2024-06-10 16:36:22 -04:00
Jeff Bradberry
345c1c11e9 Guard against the role field not being populated
when doing the final reset of Role.implicit_parents.
2024-06-10 16:36:22 -04:00
Jeff Bradberry
2c3a7fafc5 Add a new test scenario
to trigger the implicit parent not being in the parents and ancestors lists.
2024-06-10 16:36:22 -04:00
Jeff Bradberry
dbcd32a1d9 Mark and rebuild the implicit_parents field for all affected roles 2024-06-10 16:36:22 -04:00
Jeff Bradberry
d45e258a78 Wait until the end of the fix script to clean up orphaned roles 2024-06-10 16:36:22 -04:00
Jeff Bradberry
d16b69a102 Add output of the update and deletion counts to fix.py 2024-06-10 16:36:22 -04:00
Jeff Bradberry
8b4efbc973 Do not throw away the container of cross-linked parents
Since we use it twice, the second time to get the id field of each.
2024-06-10 16:36:22 -04:00
Jeff Bradberry
4cb061e7db Add a readme file with instructions 2024-06-10 16:36:22 -04:00
Jeff Bradberry
31db6a1447 Fix another instance where a bad resource->Role fk could throw a traceback 2024-06-10 16:36:22 -04:00
Jeff Bradberry
ad9d5904d8 Adjusted foreignkeys.sql for correctness
Some relationships known to be handled by the special mapping sql file
were being caught as false positives.
2024-06-10 16:36:22 -04:00
Jeff Bradberry
b837d549ff Split the foreign key sql script into an 'into' and 'from' portion
Also, make use of up-front defined arrays of the tables involved, for
ease of editing in the future.
2024-06-10 16:36:22 -04:00
Jeff Bradberry
9e22865d2e Filter out the relations within the known topology tables 2024-06-10 16:36:22 -04:00
Jeff Bradberry
ee3e3e1516 First cut at detecting which foreign keys enter and exit the topology tables 2024-06-10 16:36:22 -04:00
Jeff Bradberry
4a8f6e45f8 Move the "test" files into their own directory 2024-06-10 16:36:22 -04:00
Jeff Bradberry
6a317cca1b Remove the role_chain.py module
it wound up being unworkable, and I think ultimately we only need to
check the immediate parentage of each role.
2024-06-10 16:36:22 -04:00
Jeff Bradberry
d67af79451 Attempt to correct any crosslinked parents
I think that rebuild_role_ancestor_list() will then correctly update
all of the affected Role.ancestors.
2024-06-10 16:36:22 -04:00
Jeff Bradberry
fe77fda7b2 Exclude more files in the .gitignore 2024-06-10 16:36:22 -04:00
Jeff Bradberry
f613b76baa Modify the role parent check logic to stay in the roles as much as possible
since the foreign keys to the roles from the resources can make us go
wrong almost immediately.
2024-06-10 16:36:22 -04:00
Jeff Bradberry
054cbe69d7 Exclude the team grant false positives
The results in my test now look correct.
2024-06-10 16:36:22 -04:00
Jeff Bradberry
87e9dcb6d7 Attempt to more thoroughly check the parents of each Role
This version, however, has false positives because Roles become
children of Team.member_role when a Role is granted to a Team.
2024-06-10 16:36:22 -04:00
Jeff Bradberry
c8829b057e First cut at checking the role hierarchy
Checking if parents and implicit_parents are consistent with ancestors.
2024-06-10 16:36:22 -04:00
Jeff Bradberry
a0b376a6ca Set up a scenario where IG.use_role_id points to something no longer there
This is actually happening for one customer, though it seems like it
shouldn't be if the foreign key constraint is set back up properly.
In order to recreate it, I had to add the constraint back with 'NOT
VALID' added on to prevent the check.
2024-06-10 16:36:22 -04:00
Jeff Bradberry
d675207f99 Handle the case where a resource points to a Role which isn't in the db 2024-06-10 16:36:22 -04:00
Jeff Bradberry
20504042c9 Graph out only the parent/child chains from a given Role
Doing the entire graph is too much on any system with real amounts of Roles.
2024-06-10 16:36:22 -04:00
Jeff Bradberry
0e87e97820 Check for a broken ContentType -> model and log and skip
Apparently this has happened to a customer, per Nate Becker.
2024-06-10 16:36:22 -04:00
Jeff Bradberry
1f154742df Make the role_chain.py script emit a Graphviz file
of the Role relationships.
2024-06-10 16:36:22 -04:00
Jeff Bradberry
85fc81aab1 Start a new script that can be used to examine a Role's ancestry 2024-06-10 16:36:22 -04:00
Jeff Bradberry
5cfeeb3e87 Treat resources with null role fks differently
The underlying role should be re-linked, instead of treated as orphaned.
2024-06-10 16:36:22 -04:00
Jeff Bradberry
a8c07b06d8 Set up an enhanced version of Seth's bad role scenario 2024-06-10 16:36:22 -04:00
Jeff Bradberry
53c5feaf6b Set up Seth's bad role scenario 2024-06-10 16:36:22 -04:00
Jeff Bradberry
6f57aaa8f5 When checking reverse links, treat duplicate Roles differently from bad ones
Also, null out the generic foreign key on orphaned roles before deleting.
2024-06-10 16:36:22 -04:00
Jeff Bradberry
bea74a401d Attempt to be more efficient about grouping the content types
Also, attempt to rebuild the role ancestors in the fixup script.
2024-06-10 16:36:22 -04:00
Jeff Bradberry
54e85813c8 First full check script
This version emits the first fix-up script as its output.
2024-06-10 16:36:22 -04:00
Jeff Bradberry
b69ed08fe5 Specifically examine the InstanceGroup roles 2024-06-10 16:36:22 -04:00
Jeff Bradberry
de25408a23 Print out details of all of the crosslinked roles 2024-06-10 16:36:22 -04:00
Jeff Bradberry
b17f0a188b Initial check 2024-06-10 16:36:22 -04:00
Hao Liu
fb860d76ce Add receptor work list command to sosreport (#15207) 2024-06-10 19:39:24 +00:00
Artsiom Musin
451f20ce0f Use patch to update users in awx cli 2024-06-10 14:54:10 -04:00
Hao Liu
c1dc0c7b86 Periodically sync from share resource provider (#15264)
* Periodically sync from share resource provider

- add periodic task `periodic_resource_sync` that runs once every 15 min
- if `RESOURCE_SERVER` is not configured the sync will not run
- the sync runs on only 1 node at a time

example RESOURCE_SERVER configuration
```
RESOURCE_SERVER = {
    "URL": "<resource server url>",
    "SECRET_KEY": "<resource server auth token>",
    "VALIDATE_HTTPS": <True/False>,
}
RESOURCE_SERVICE_PATH = <resource_service_path>
```
2024-06-10 18:10:57 +00:00
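A sketch of the gating described above, assuming a PostgreSQL advisory lock is what restricts the sync to one node; the lock key and `sync_shared_resources()` helper are hypothetical.

```
from django.conf import settings
from django.db import connection

RESOURCE_SYNC_LOCK_ID = 54321  # hypothetical advisory-lock key

def periodic_resource_sync():
    if not getattr(settings, 'RESOURCE_SERVER', None):
        return  # no resource server configured, nothing to sync
    with connection.cursor() as cursor:
        # Only one node at a time can hold the advisory lock
        cursor.execute("SELECT pg_try_advisory_lock(%s)", [RESOURCE_SYNC_LOCK_ID])
        if not cursor.fetchone()[0]:
            return  # another node is already syncing
        try:
            sync_shared_resources()  # hypothetical helper doing the actual sync
        finally:
            cursor.execute("SELECT pg_advisory_unlock(%s)", [RESOURCE_SYNC_LOCK_ID])
```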
Seth Foster
d65ea2a3d5 Fix race condition when deleting schedules (#15259)
If more than one schedule for a unified job template
is removed at once, a race condition can arise.

example scenario:  delete schedules with ids 7 and 8
- unified job template next_schedule is currently 7
- on delete of schedule 7, update_computed_fields will try to set
next_schedule to 8
- but while this logic is occurring, another transaction
is deleting 8

This leads to a db IntegrityError

The solution here is to call select_for_update() on the
next schedule, so that 8 cannot be deleted until
the transaction for deleting 7 is completed.

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2024-06-09 20:39:18 -04:00
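A minimal sketch of the locking described above, assuming the Django ORM; model and field names follow the commit message, the query details are illustrative.

```
from django.db import transaction

def delete_schedule(schedule):
    with transaction.atomic():
        ujt = schedule.unified_job_template
        schedule.delete()
        # Lock the candidate next schedule so a concurrent transaction
        # cannot delete it before this transaction commits
        next_schedule = (
            ujt.schedules.select_for_update()
            .filter(enabled=True)
            .order_by('next_run')
            .first()
        )
        ujt.next_schedule = next_schedule
        ujt.save(update_fields=['next_schedule'])
```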
Dave
8827ae7554 Replace REMOTE_ADDR with ansible_base.lib.utils.requests.get_remote_host (#15175) 2024-06-06 14:47:04 +01:00
Jeff Bradberry
4915262af1 Do each batch of the HostMetric updates in a transaction
It looks like we can't do upserts currently without dropping to raw
SQL, but if we wrap each batch in a transaction, that should ensure
that each is updated with the correct count.
2024-06-05 14:15:21 -04:00
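A sketch of the per-batch transactions described above, assuming the Django ORM; the counter field name is an assumption.

```
from django.db import transaction
from awx.main.models import HostMetric  # assumed import path

def update_host_metrics(batches):
    for batch in batches:
        # Each batch commits atomically, so its counts land together even
        # though a true upsert would require raw SQL
        with transaction.atomic():
            for hostname, count in batch:
                HostMetric.objects.update_or_create(
                    hostname=hostname,
                    defaults={'automated_counter': count},  # assumed field name
                )
```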
Seth Foster
d43c91e1a5 Option for dev env to enable ssl for postgres (#15151)
PG_TLS=true make docker-compose

This will add some extra startup commands
for the postgres container to generate a key and
cert to use for postgres connections.
It will also mount in pgssl.conf which has ssl configuration.

This can be useful for debugging issues that only surface
when using ssl postgres connections.
2024-06-05 12:48:08 -04:00
Seth Foster
b470ca32af Prevent modifying shared resources when using platform ingress (#15234)
* Prevent modifying shared resources

Adds a class decorator to prevent modifying shared resources
when gateway is being used.

AWX_DIRECT_SHARED_RESOURCE_MANAGEMENT_ENABLED is the setting
to enable/disable this feature.

Works by overriding these view methods:
- create
- delete
- perform_update

create and delete are overridden to raise a
PermissionDenied exception.

perform_update is overridden to check if any shared
fields are being modified, and raise a PermissionDenied
exception if so.

Additional changes:

Prevent sso conf from registering external authentication related settings if
AWX_DIRECT_SHARED_RESOURCE_MANAGEMENT_ENABLED is False

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
Co-authored-by: Hao Liu <44379968+TheRealHaoLiu@users.noreply.github.com>
2024-06-05 12:44:01 -04:00
Satoe Imaishi
793777bec7 Add cython to VENV_BOOTSTRAP for grpcio (#15256) 2024-06-05 11:04:15 -04:00
Jake Jackson
6dc4a4508d fix cve 2024-24680 (#15250) 2024-06-04 15:44:09 -04:00
Hao Liu
cf09a4220d Repin cython due to https://github.com/yaml/pyyaml/pull/702 (#15248)
* Revert "Unpin cython (#15246)"

This reverts commit 659c3b64de.

* Pin grpcio

Avoid cython 3 due to https://github.com/yaml/pyyaml/pull/702

* Delete asyncpg.txt
2024-06-03 19:42:20 +00:00
Hao Liu
659c3b64de Unpin cython (#15246)
* Unpin cython

* Remove unused asyncpg

* Remove asyncpg license file
2024-06-03 11:41:56 -04:00
Ethem Cem Özkan
37ad690d09 Add AWS SNS notification support for webhook (#15184)
Support for AWS SNS notifications. SNS is a widespread service that is used to integrate with other AWS services (e.g. Lambdas). This support would unlock use cases like triggering Lambda functions, especially when AWX is deployed on EKS.

Decisions:

Data Structure
- I preferred using the same structure as Webhook for message body data because it contains all job details. For now, I directly linked to Webhook to avoid duplication, but I am open to suggestions.

AWS authentication
- To support non-AWS native environments, I added configuration options for AWS secret key, ID, and session tokens. When entered, these values are supplied to the underlying boto3 SNS client. If not entered, it falls back to the default authentication chain to support the native AWS environment. Properly configured EKS pods are created with temporary credentials that the default authentication chain can pick up automatically.

---------

Signed-off-by: Ethem Cem Ozkan <ethemcem.ozkan@gmail.com>
2024-06-02 02:48:56 +00:00
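A rough sketch of the credential fallback described above using boto3; the parameter names are illustrative, not the actual notification backend fields.

```
import boto3

def get_sns_client(region=None, key_id=None, secret_key=None, session_token=None):
    if key_id and secret_key:
        # Explicit credentials for non-AWS-native environments
        return boto3.client(
            'sns',
            region_name=region,
            aws_access_key_id=key_id,
            aws_secret_access_key=secret_key,
            aws_session_token=session_token,
        )
    # Fall back to boto3's default credential chain, which picks up the
    # temporary credentials of a properly configured EKS pod
    return boto3.client('sns', region_name=region)
```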
Akira Yokochi
7845ec7e01 Modify the link to terraform_state inventory plugin (#15241)
fix link to terraform_state inventory plugin
2024-06-01 22:36:30 -04:00
Chris Meyers
a15bcf1d55 Add requirements comment 2024-05-31 13:55:17 -04:00
Chris Meyers
7b3fb2c2a8 Add example grafana dashboard
* Per-service log view
2024-05-31 13:55:17 -04:00
Chris Meyers
6df47c8449 Rework which loggers we send to OTEL
* Send all propagate=False loggers to OTEL AND the awx logger
2024-05-31 13:55:17 -04:00
Chris Meyers
cae42653bf Add recording
* Always output awx logs to a file via otel
* That log file can always be replayed into a product that
  supports otlp at a later date.
* Useful when you find a problem that you need a time series DB to help
  find and solve.
* Useful if a community member or customer has a problem where a time
  series db would be helpful. You can take a "remote" user's log and
  replay it locally for analysis.
2024-05-31 13:55:17 -04:00
Chris Meyers
da46a29f40 Move requirements out of dev and into mainline
* Add new package license files
2024-05-31 13:55:17 -04:00
Chris Meyers
0eb465531c Centralized logging via otel 2024-05-31 13:55:17 -04:00
Hao Liu
d0fe0ed796 Add check_instance_ready management command (#15238)
- throw exception and return 1 if instance not ready
- return 0 if ready
2024-05-31 09:29:40 -04:00
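A minimal sketch of a command with this exit behavior, assuming Django's management-command machinery; the readiness check itself is illustrative.

```
from django.core.management.base import BaseCommand, CommandError

class Command(BaseCommand):
    help = "Exit 0 if this instance is ready, otherwise exit 1."

    def handle(self, *args, **options):
        from awx.main.models import Instance  # assumed import
        instance = Instance.objects.me()
        if instance.node_state != 'ready':  # illustrative readiness check
            # CommandError makes manage.py exit with status 1
            raise CommandError(f"Instance {instance.hostname} is not ready")
        self.stdout.write("Instance is ready")
```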
Chris Meyers
ceafa14c9d Use settings fixture in tests
* Otherwise, settings value changes bleed over into other tests.
* Remove django.conf settings import so that we do not accidentally
  forget to use the settings fixture.
2024-05-30 14:10:35 -05:00
Chris Meyers
08e1454098 Make named url work with optional url prefix
* Handle named url sub-resources
* i.e. /api/v2/inventories/my_inventory++Default/hosts/
2024-05-29 12:39:25 -05:00
Harshith u
776b661fb3 use optional api prefix in collection if set as environ variable (#15205)
* use optional api prefix if set as environ variable

* Different default depending on collection type
2024-05-29 11:54:05 -04:00
Hao Liu
af6ccdbde5 Fix galaxy publishing (#15233)
- switch to galaxy search API for determining if the version we want to publish already exists
- switch from github action variable to env var for easier copy and paste testing
2024-05-28 15:27:34 -04:00
Matthew Jones
559ab3564b Include Kube credentials in the inventory source picker (#15223) 2024-05-28 14:05:24 -04:00
Alan Rominger
208ef0ce25 Update test so that DAB change can merge (#15222) 2024-05-28 11:53:01 -04:00
Alexander Pykavy
c3d9aa54d8 Mention in the docs that you can skip make docker-compose-build (#15149)
Signed-off-by: Alexander Pykavy <aleksandrpykavyj@gmail.com>
2024-05-22 19:33:13 +00:00
irozet12
66efe7198a Wrap long line to fit help window (#14597) (#15169)
Wrap long line to fit description window (#14597)

Co-authored-by: Ирина Розет <irozet@astralinux.ru>
2024-05-22 19:31:03 +00:00
Beni ~HB9HNT
adf930ee42 awxkit: replace deprecated locale.format() with locale.format_string() to fix human output on Python 3.12 (#15170)
Replace deprecated locale.format with locale.format_string

locale.format() was removed in Python 3.12 and breaks human output unless fixed.
2024-05-22 19:27:31 +00:00
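For reference, the replacement in a nutshell; `locale.format_string()` has long been available and accepts the same `grouping` argument.

```
import locale

locale.setlocale(locale.LC_ALL, '')  # use the environment's locale

# locale.format("%d", 1234567, grouping=True)   # removed in Python 3.12
print(locale.format_string("%d", 1234567, grouping=True))  # e.g. '1,234,567'
```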
Hao Liu
892410477a Fix promote from release event (#15215) 2024-05-22 18:58:11 +00:00
Seth Foster
0d4f653794 Fix up ansible-test sanity checks due to ansible 2.17 release (#15208)
* Fix up ansible sanity checks

* Fix awx-collection test failure

* Add ignore for ansible-test 2.17 

---------

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
Co-authored-by: Hao Liu <44379968+TheRealHaoLiu@users.noreply.github.com>
2024-05-21 15:05:59 -04:00
Alan Rominger
8de8f6dce2 Update a few dev requirements (#15203)
* Update a few dev requirements

* Fix test failures due to upgrade

* Update patterns for mocker usage
2024-05-20 23:37:02 +00:00
Hao Liu
fc9064e27f Allow wsrelay to fail without FATAL (#15191)
We have not identified the root cause of the wsrelay failure, but attempting to make wsrelay restart itself resulted in a postgres and redis connection leak. We were not able to fully identify where the redis connection leak comes from, so we are reverting back to failing; removing startsecs 30 will prevent wsrelay from going FATAL
2024-05-20 23:34:12 +00:00
TVo
7de350dc3e Added docs for new RBAC changes (#15150)
* Added docs for new RBAC changes

* Added UI changes with screens and API endpoints with sample commands.

* Update docs/docsite/rst/userguide/rbac.rst

Co-authored-by: Vidya Nambiar <43621546+vidyanambiar@users.noreply.github.com>

* Incorporated review feedback from @vidyanambiar.

---------

Co-authored-by: Vidya Nambiar <43621546+vidyanambiar@users.noreply.github.com>
2024-05-17 20:10:16 -04:00
Michael Anstis
d4bdaad4d8 Fix success_url_allowed_hosts set instantiation (#15196)
Co-authored-by: Michael Anstis <manstis@redhat.com>
2024-05-16 12:08:50 -04:00
Bikouo Aubin
a9b2ffa3e9 Fix terraform backend credential issue (#15141)
fix issue introduced by PR15055
2024-05-15 15:19:18 -04:00
Sean Sullivan
1b8d409043 Add skip authorization option to collection application module (#15190) 2024-05-15 09:29:00 -04:00
dependabot[bot]
da2bccf5a8 Bump jinja2 from 3.1.3 to 3.1.4 in /docs/docsite (#15168)
Bumps [jinja2](https://github.com/pallets/jinja) from 3.1.3 to 3.1.4.
- [Release notes](https://github.com/pallets/jinja/releases)
- [Changelog](https://github.com/pallets/jinja/blob/main/CHANGES.rst)
- [Commits](https://github.com/pallets/jinja/compare/3.1.3...3.1.4)

---
updated-dependencies:
- dependency-name: jinja2
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-05-14 14:26:26 -04:00
Hao Liu
a2f083bd8e Fix podman failure in development environment (#15188)
```
ERRO[0000] path "/var/lib/awx/.config" exists and it is not owned by the current user
```
This started to happen with podman 5.

It seems the config files are no longer needed; removing them fixes the problem.
2024-05-14 14:18:48 -04:00
Michael Anstis
4d641b6cf5 Support Django logout redirects (#15148)
* Allowed hosts for logout redirects can now be set via the LOGOUT_ALLOWED_HOSTS setting

Authored-by: Michael Anstis <manstis@redhat.com>
Co-authored-by: Hao Liu <44379968+TheRealHaoLiu@users.noreply.github.com>
2024-05-13 13:03:27 -04:00
Elijah DeLee
439c3f0c23 Skip 3 expensive calls for jobs saving in 'waiting' status on UnifiedJob (#15174)
skip update parent logic for 'waiting' on UnifiedJob

by not looking up "status_before" from the previous instance
we save 2 to 3 expensive calls (the self lookup of the old state, the lookup
of the parent, and the update to the parent if allow_simultaneous == False or status == 'waiting')
2024-05-13 10:26:03 -04:00
jessicamack
946bbe3560 Clean up settings file (#15135)
remove unneeded settings
2024-05-10 11:25:15 -04:00
James
20f054d600 Expose websockets on api prefix v2 2024-05-01 10:44:51 -04:00
Alan Rominger
918d5b3565 Do some aesthetic adjustments to role presentation fields (#15153)
* Do some aesthetic adjustments to role presentation fields

* Correctly test managed setup

* Minor migration adjustments
2024-04-29 17:11:10 -04:00
Hao Liu
158314af50 Delete deprecated Cypress UI e2e_test.yml (#15155)
Delete e2e_test.yml

Remove because it's no longer being maintained
2024-04-29 12:58:10 -04:00
Seth Foster
4754819a09 awx modules wait on event processing finished (#15152)
This change makes "wait: true" for jobs and syncs
look at event_processing_finished instead of
the finished field.

Right now there is a race condition where
a module might try to delete an inventory, but the events
for an inventory sync have not yet finished. We have a
RelatedJobsPreventDeleteMixin that checks for this condition.

Bulk jobs don't have event_processing_finished, so we just
use the finished field in that case.
2024-04-26 17:33:34 -04:00
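A sketch of the wait logic described above; the REST client is hypothetical, and the field fallback follows the commit message.

```
import time

def wait_for_job(client, job_id, interval=1.0):
    # Poll until the job's events are fully processed (hypothetical client)
    while True:
        job = client.get(f'/api/v2/jobs/{job_id}/').json()
        if 'event_processing_finished' in job:
            done = job['event_processing_finished']
        else:
            done = job['finished']  # bulk jobs lack event_processing_finished
        if done:
            return job
        time.sleep(interval)
```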
Seth Foster
78fc23138a Pin openssl 3.0.7 (#15147)
followup to PR #15142

This commit pins openssl in the awx image,
not just the builder image.
2024-04-26 12:29:22 -04:00
Alan Rominger
014534bfa5 Upgrade DRF (#15144)
* Upgrade DRF

* Fix failures caused by DRF upgrade
2024-04-25 15:37:08 -04:00
Seth Foster
2502e7c7d8 Temporarily downgrade openssl (#15142)
openssl 3.2.0 has incompatibility issues with
the libpq version we are using, and causes
some C runtime errors:
"double free or corruption (out)"

see awx issue #15136

also this issue

github.com/conan-io/conan-center-index/pull/22615

once the libpq libraries on centos stream9 are
updated with the patch, we can unpin openssl

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2024-04-25 14:01:03 -04:00
Jeff Bradberry
fb237e3834 Stop pre-caching every resource in the system upon import
If we don't have something in the cache when we call
get_by_natural_key, do an actual filtered query for it and cache the
results.  We'll get more overall API calls this way, but they'll be
smaller and will happen while we are importing, not upfront.
2024-04-24 17:04:40 -04:00
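A minimal sketch of the lazy lookup described above; the client and cache shapes are hypothetical.

```
def get_by_natural_key(self, natural_key):
    key = tuple(sorted(natural_key.items()))
    if key not in self._cache:
        # Do a filtered query for just this resource and cache the result,
        # instead of pre-caching every resource in the system upfront
        results = self.client.list_resources(**natural_key)  # hypothetical call
        self._cache[key] = results[0] if results else None
    return self._cache[key]
```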
irozet12
e4646ae611 Add help message for expiration tokens (#15076) (#15077)
Co-authored-by: Ирина Розет <irozet@astralinux.ru>
2024-04-24 19:58:09 +00:00
Bruno Sanchez
7dc77546f4 Adding CSRF Validation for schemas (#15027)
* Adding CSRF Validation for schemas

* Changing retrieve of scheme to avoid importing new library

* check if CSRF_TRUSTED_ORIGINS exists before accessing it

---------

Signed-off-by: Bruno Sanchez <brsanche@redhat.com>
2024-04-24 15:47:03 -04:00
Michael Tipton
f5f85666c8 Add ability to set SameSite policy for userLoggedIn cookie (#15100)
* Add ability to set SameSite policy for userLoggedIn cookie

* reformat line for linter
2024-04-24 15:44:31 -04:00
Alan Rominger
47a061eb39 Fix and test data migration error from DAB RBAC (#15138)
* Fix and test data migration error from DAB RBAC

* Fix up migration test

* Fix custom method bug

* Fix another fat fingered bug
2024-04-24 15:14:03 -04:00
Alan Rominger
c760577855 Adjust test for stricter DAB user view permission enforcement (#15130) 2024-04-23 15:21:06 -04:00
188 changed files with 8119 additions and 2585 deletions

View File

@@ -2,6 +2,7 @@
name: Build/Push Development Images
env:
LC_ALL: "C.UTF-8" # prevent ERROR: Ansible could not initialize the preferred locale: unsupported locale setting
DOCKER_CACHE: "--no-cache" # using the cache will not rebuild git requirements and other things
on:
workflow_dispatch:
push:

View File

@@ -1,75 +0,0 @@
---
name: E2E Tests
env:
LC_ALL: "C.UTF-8" # prevent ERROR: Ansible could not initialize the preferred locale: unsupported locale setting
on:
pull_request_target:
types: [labeled]
jobs:
e2e-test:
if: contains(github.event.pull_request.labels.*.name, 'qe:e2e')
runs-on: ubuntu-latest
timeout-minutes: 40
permissions:
packages: write
contents: read
strategy:
matrix:
job: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24]
steps:
- uses: actions/checkout@v3
- uses: ./.github/actions/run_awx_devel
id: awx
with:
build-ui: true
github-token: ${{ secrets.GITHUB_TOKEN }}
- name: Pull awx_cypress_base image
run: |
docker pull quay.io/awx/awx_cypress_base:latest
- name: Checkout test project
uses: actions/checkout@v3
with:
repository: ${{ github.repository_owner }}/tower-qa
ssh-key: ${{ secrets.QA_REPO_KEY }}
path: tower-qa
ref: devel
- name: Build cypress
run: |
cd ${{ secrets.E2E_PROJECT }}/ui-tests/awx-pf-tests
docker build -t awx-pf-tests .
- name: Run E2E tests
env:
CYPRESS_RECORD_KEY: ${{ secrets.CYPRESS_RECORD_KEY }}
run: |
export COMMIT_INFO_BRANCH=$GITHUB_HEAD_REF
export COMMIT_INFO_AUTHOR=$GITHUB_ACTOR
export COMMIT_INFO_SHA=$GITHUB_SHA
export COMMIT_INFO_REMOTE=$GITHUB_REPOSITORY_OWNER
cd ${{ secrets.E2E_PROJECT }}/ui-tests/awx-pf-tests
AWX_IP=${{ steps.awx.outputs.ip }}
printenv > .env
echo "Executing tests:"
docker run \
--network '_sources_default' \
--ipc=host \
--env-file=.env \
-e CYPRESS_baseUrl="https://$AWX_IP:8043" \
-e CYPRESS_AWX_E2E_USERNAME=admin \
-e CYPRESS_AWX_E2E_PASSWORD='password' \
-e COMMAND="npm run cypress-concurrently-gha" \
-v /dev/shm:/dev/shm \
-v $PWD:/e2e \
-w /e2e \
awx-pf-tests run --project .
- uses: ./.github/actions/upload_awx_devel_logs
if: always()
with:
log-filename: e2e-${{ matrix.job }}.log

View File

@@ -29,7 +29,7 @@ jobs:
- name: Set GitHub Env vars if release event
if: ${{ github.event_name == 'release' }}
run: |
echo "TAG_NAME=${{ env.TAG_NAME }}" >> $GITHUB_ENV
echo "TAG_NAME=${{ github.event.release.tag_name }}" >> $GITHUB_ENV
- name: Checkout awx
uses: actions/checkout@v3
@@ -60,15 +60,18 @@ jobs:
COLLECTION_VERSION: ${{ env.TAG_NAME }}
COLLECTION_TEMPLATE_VERSION: true
run: |
sudo apt-get install jq
make build_collection
curl_with_redirects=$(curl --head -sLw '%{http_code}' https://galaxy.ansible.com/download/${{ env.collection_namespace }}-awx-${{ env.TAG_NAME }}.tar.gz | tail -1)
curl_without_redirects=$(curl --head -sw '%{http_code}' https://galaxy.ansible.com/download/${{ env.collection_namespace }}-awx-${{ env.TAG_NAME }}.tar.gz | tail -1)
if [[ "$curl_with_redirects" == "302" ]] || [[ "$curl_without_redirects" == "302" ]]; then
count=$(curl -s https://galaxy.ansible.com/api/v3/plugin/ansible/search/collection-versions/\?namespace\=${COLLECTION_NAMESPACE}\&name\=awx\&version\=${COLLECTION_VERSION} | jq .meta.count)
if [[ "$count" == "1" ]]; then
echo "Galaxy release already done";
else
elif [[ "$count" == "0" ]]; then
ansible-galaxy collection publish \
--token=${{ secrets.GALAXY_TOKEN }} \
awx_collection_build/${{ env.collection_namespace }}-awx-${{ env.TAG_NAME }}.tar.gz;
awx_collection_build/${COLLECTION_NAMESPACE}-awx-${COLLECTION_VERSION}.tar.gz;
else
echo "Unexpected count from galaxy search: $count";
exit 1;
fi
- name: Set official pypi info

View File

@@ -11,6 +11,8 @@ ignore: |
# django template files
awx/api/templates/instance_install_bundle/**
.readthedocs.yaml
tools/loki
tools/otel
extends: default

View File

@@ -47,8 +47,14 @@ VAULT ?= false
VAULT_TLS ?= false
# If set to true docker-compose will also start a tacacs+ instance
TACACS ?= false
# If set to true docker-compose will also start an OpenTelemetry Collector instance
OTEL ?= false
# If set to true docker-compose will also start a Loki instance
LOKI ?= false
# If set to true docker-compose will install editable dependencies
EDITABLE_DEPENDENCIES ?= false
# If set to true, use tls for postgres connection
PG_TLS ?= false
VENV_BASE ?= /var/lib/awx/venv
@@ -57,6 +63,11 @@ DEV_DOCKER_OWNER ?= ansible
DEV_DOCKER_OWNER_LOWER = $(shell echo $(DEV_DOCKER_OWNER) | tr A-Z a-z)
DEV_DOCKER_TAG_BASE ?= ghcr.io/$(DEV_DOCKER_OWNER_LOWER)
DEVEL_IMAGE_NAME ?= $(DEV_DOCKER_TAG_BASE)/awx_devel:$(COMPOSE_TAG)
IMAGE_KUBE_DEV=$(DEV_DOCKER_TAG_BASE)/awx_kube_devel:$(COMPOSE_TAG)
IMAGE_KUBE=$(DEV_DOCKER_TAG_BASE)/awx:$(COMPOSE_TAG)
# Common command to use for running ansible-playbook
ANSIBLE_PLAYBOOK ?= ansible-playbook -e ansible_python_interpreter=$(PYTHON)
RECEPTOR_IMAGE ?= quay.io/ansible/receptor:devel
@@ -65,7 +76,7 @@ RECEPTOR_IMAGE ?= quay.io/ansible/receptor:devel
SRC_ONLY_PKGS ?= cffi,pycparser,psycopg,twilio
# These should be upgraded in the AWX and Ansible venv before attempting
# to install the actual requirements
VENV_BOOTSTRAP ?= pip==21.2.4 setuptools==69.0.2 setuptools_scm[toml]==8.0.4 wheel==0.42.0
VENV_BOOTSTRAP ?= pip==21.2.4 setuptools==69.0.2 setuptools_scm[toml]==8.0.4 wheel==0.42.0 cython==0.29.37
NAME ?= awx
@@ -80,6 +91,18 @@ I18N_FLAG_FILE = .i18n_built
## PLATFORMS defines the target platforms the manager image is built for, to provide support for multiple architectures
PLATFORMS ?= linux/amd64,linux/arm64 # linux/ppc64le,linux/s390x
# Set up cache variables for image builds, allowing control over whether the cache is used, ex:
# DOCKER_CACHE=--no-cache make docker-compose-build
ifeq ($(DOCKER_CACHE),)
DOCKER_DEVEL_CACHE_FLAG=--cache-from=$(DEVEL_IMAGE_NAME)
DOCKER_KUBE_DEV_CACHE_FLAG=--cache-from=$(IMAGE_KUBE_DEV)
DOCKER_KUBE_CACHE_FLAG=--cache-from=$(IMAGE_KUBE)
else
DOCKER_DEVEL_CACHE_FLAG=$(DOCKER_CACHE)
DOCKER_KUBE_DEV_CACHE_FLAG=$(DOCKER_CACHE)
DOCKER_KUBE_CACHE_FLAG=$(DOCKER_CACHE)
endif
.PHONY: awx-link clean clean-tmp clean-venv requirements requirements_dev \
develop refresh adduser migrate dbchange \
receiver test test_unit test_coverage coverage_html \
@@ -362,7 +385,7 @@ symlink_collection:
ln -s $(shell pwd)/awx_collection $(COLLECTION_INSTALL)
awx_collection_build: $(shell find awx_collection -type f)
ansible-playbook -i localhost, awx_collection/tools/template_galaxy.yml \
$(ANSIBLE_PLAYBOOK) -i localhost, awx_collection/tools/template_galaxy.yml \
-e collection_package=$(COLLECTION_PACKAGE) \
-e collection_namespace=$(COLLECTION_NAMESPACE) \
-e collection_version=$(COLLECTION_VERSION) \
@@ -516,10 +539,10 @@ endif
docker-compose-sources: .git/hooks/pre-commit
@if [ $(MINIKUBE_CONTAINER_GROUP) = true ]; then\
ansible-playbook -i tools/docker-compose/inventory -e minikube_setup=$(MINIKUBE_SETUP) tools/docker-compose-minikube/deploy.yml; \
$(ANSIBLE_PLAYBOOK) -i tools/docker-compose/inventory -e minikube_setup=$(MINIKUBE_SETUP) tools/docker-compose-minikube/deploy.yml; \
fi;
ansible-playbook -i tools/docker-compose/inventory tools/docker-compose/ansible/sources.yml \
$(ANSIBLE_PLAYBOOK) -i tools/docker-compose/inventory tools/docker-compose/ansible/sources.yml \
-e awx_image=$(DEV_DOCKER_TAG_BASE)/awx_devel \
-e awx_image_tag=$(COMPOSE_TAG) \
-e receptor_image=$(RECEPTOR_IMAGE) \
@@ -535,12 +558,15 @@ docker-compose-sources: .git/hooks/pre-commit
-e enable_vault=$(VAULT) \
-e vault_tls=$(VAULT_TLS) \
-e enable_tacacs=$(TACACS) \
-e enable_otel=$(OTEL) \
-e enable_loki=$(LOKI) \
-e install_editable_dependencies=$(EDITABLE_DEPENDENCIES) \
-e pg_tls=$(PG_TLS) \
$(EXTRA_SOURCES_ANSIBLE_OPTS)
docker-compose: awx/projects docker-compose-sources
ansible-galaxy install --ignore-certs -r tools/docker-compose/ansible/requirements.yml;
ansible-playbook -i tools/docker-compose/inventory tools/docker-compose/ansible/initialize_containers.yml \
$(ANSIBLE_PLAYBOOK) -i tools/docker-compose/inventory tools/docker-compose/ansible/initialize_containers.yml \
-e enable_vault=$(VAULT) \
-e vault_tls=$(VAULT_TLS) \
-e enable_ldap=$(LDAP); \
@@ -583,7 +609,7 @@ docker-compose-container-group-clean:
.PHONY: Dockerfile.dev
## Generate Dockerfile.dev for awx_devel image
Dockerfile.dev: tools/ansible/roles/dockerfile/templates/Dockerfile.j2
ansible-playbook tools/ansible/dockerfile.yml \
$(ANSIBLE_PLAYBOOK) tools/ansible/dockerfile.yml \
-e dockerfile_name=Dockerfile.dev \
-e build_dev=True \
-e receptor_image=$(RECEPTOR_IMAGE)
@@ -594,8 +620,7 @@ docker-compose-build: Dockerfile.dev
-f Dockerfile.dev \
-t $(DEVEL_IMAGE_NAME) \
--build-arg BUILDKIT_INLINE_CACHE=1 \
--cache-from=$(DEV_DOCKER_TAG_BASE)/awx_devel:$(COMPOSE_TAG) .
$(DOCKER_DEVEL_CACHE_FLAG) .
.PHONY: docker-compose-buildx
## Build awx_devel image for docker compose development environment for multiple architectures
@@ -605,7 +630,7 @@ docker-compose-buildx: Dockerfile.dev
- docker buildx build \
--push \
--build-arg BUILDKIT_INLINE_CACHE=1 \
--cache-from=$(DEV_DOCKER_TAG_BASE)/awx_devel:$(COMPOSE_TAG) \
$(DOCKER_DEVEL_CACHE_FLAG) \
--platform=$(PLATFORMS) \
--tag $(DEVEL_IMAGE_NAME) \
-f Dockerfile.dev .
@@ -658,7 +683,7 @@ version-for-buildyml:
.PHONY: Dockerfile
## Generate Dockerfile for awx image
Dockerfile: tools/ansible/roles/dockerfile/templates/Dockerfile.j2
ansible-playbook tools/ansible/dockerfile.yml \
$(ANSIBLE_PLAYBOOK) tools/ansible/dockerfile.yml \
-e receptor_image=$(RECEPTOR_IMAGE) \
-e headless=$(HEADLESS)
@@ -668,7 +693,8 @@ awx-kube-build: Dockerfile
--build-arg VERSION=$(VERSION) \
--build-arg SETUPTOOLS_SCM_PRETEND_VERSION=$(VERSION) \
--build-arg HEADLESS=$(HEADLESS) \
-t $(DEV_DOCKER_TAG_BASE)/awx:$(COMPOSE_TAG) .
$(DOCKER_KUBE_CACHE_FLAG) \
-t $(IMAGE_KUBE) .
## Build multi-arch awx image for deployment on Kubernetes environment.
awx-kube-buildx: Dockerfile
@@ -680,7 +706,8 @@ awx-kube-buildx: Dockerfile
--build-arg SETUPTOOLS_SCM_PRETEND_VERSION=$(VERSION) \
--build-arg HEADLESS=$(HEADLESS) \
--platform=$(PLATFORMS) \
--tag $(DEV_DOCKER_TAG_BASE)/awx:$(COMPOSE_TAG) \
$(DOCKER_KUBE_CACHE_FLAG) \
--tag $(IMAGE_KUBE) \
-f Dockerfile .
- docker buildx rm awx-kube-buildx
@@ -688,7 +715,7 @@ awx-kube-buildx: Dockerfile
.PHONY: Dockerfile.kube-dev
## Generate Docker.kube-dev for awx_kube_devel image
Dockerfile.kube-dev: tools/ansible/roles/dockerfile/templates/Dockerfile.j2
ansible-playbook tools/ansible/dockerfile.yml \
$(ANSIBLE_PLAYBOOK) tools/ansible/dockerfile.yml \
-e dockerfile_name=Dockerfile.kube-dev \
-e kube_dev=True \
-e template_dest=_build_kube_dev \
@@ -698,8 +725,8 @@ Dockerfile.kube-dev: tools/ansible/roles/dockerfile/templates/Dockerfile.j2
awx-kube-dev-build: Dockerfile.kube-dev
DOCKER_BUILDKIT=1 docker build -f Dockerfile.kube-dev \
--build-arg BUILDKIT_INLINE_CACHE=1 \
--cache-from=$(DEV_DOCKER_TAG_BASE)/awx_kube_devel:$(COMPOSE_TAG) \
-t $(DEV_DOCKER_TAG_BASE)/awx_kube_devel:$(COMPOSE_TAG) .
$(DOCKER_KUBE_DEV_CACHE_FLAG) \
-t $(IMAGE_KUBE_DEV) .
## Build and push multi-arch awx_kube_devel image for development on local Kubernetes environment.
awx-kube-dev-buildx: Dockerfile.kube-dev
@@ -708,14 +735,14 @@ awx-kube-dev-buildx: Dockerfile.kube-dev
- docker buildx build \
--push \
--build-arg BUILDKIT_INLINE_CACHE=1 \
--cache-from=$(DEV_DOCKER_TAG_BASE)/awx_kube_devel:$(COMPOSE_TAG) \
$(DOCKER_KUBE_DEV_CACHE_FLAG) \
--platform=$(PLATFORMS) \
--tag $(DEV_DOCKER_TAG_BASE)/awx_kube_devel:$(COMPOSE_TAG) \
--tag $(IMAGE_KUBE_DEV) \
-f Dockerfile.kube-dev .
- docker buildx rm awx-kube-dev-buildx
kind-dev-load: awx-kube-dev-build
$(KIND_BIN) load docker-image $(DEV_DOCKER_TAG_BASE)/awx_kube_devel:$(COMPOSE_TAG)
$(KIND_BIN) load docker-image $(IMAGE_KUBE_DEV)
# Translation TASKS
# --------------------------------------

View File

@@ -33,8 +33,10 @@ from rest_framework.negotiation import DefaultContentNegotiation
# django-ansible-base
from ansible_base.rest_filters.rest_framework.field_lookup_backend import FieldLookupBackend
from ansible_base.lib.utils.models import get_all_field_names
from ansible_base.lib.utils.requests import get_remote_host
from ansible_base.rbac.models import RoleEvaluation, RoleDefinition
from ansible_base.rbac.permission_registry import permission_registry
from ansible_base.jwt_consumer.common.util import validate_x_trusted_proxy_header
# AWX
from awx.main.models import UnifiedJob, UnifiedJobTemplate, User, Role, Credential, WorkflowJobTemplateNode, WorkflowApprovalTemplate
@@ -42,6 +44,7 @@ from awx.main.models.rbac import give_creator_permissions
from awx.main.access import optimize_queryset
from awx.main.utils import camelcase_to_underscore, get_search_fields, getattrd, get_object_or_400, decrypt_field, get_awx_version
from awx.main.utils.licensing import server_product_name
from awx.main.utils.proxy import is_proxy_in_headers, delete_headers_starting_with_http
from awx.main.views import ApiErrorView
from awx.api.serializers import ResourceAccessListElementSerializer, CopySerializer
from awx.api.versioning import URLPathVersioning
@@ -93,20 +96,26 @@ class LoggedLoginView(auth_views.LoginView):
def post(self, request, *args, **kwargs):
ret = super(LoggedLoginView, self).post(request, *args, **kwargs)
ip = get_remote_host(request) # request.META.get('REMOTE_ADDR', None)
if request.user.is_authenticated:
logger.info(smart_str(u"User {} logged in from {}".format(self.request.user.username, request.META.get('REMOTE_ADDR', None))))
ret.set_cookie('userLoggedIn', 'true', secure=getattr(settings, 'SESSION_COOKIE_SECURE', False))
logger.info(smart_str(u"User {} logged in from {}".format(self.request.user.username, ip)))
ret.set_cookie(
'userLoggedIn', 'true', secure=getattr(settings, 'SESSION_COOKIE_SECURE', False), samesite=getattr(settings, 'USER_COOKIE_SAMESITE', 'Lax')
)
ret.setdefault('X-API-Session-Cookie-Name', getattr(settings, 'SESSION_COOKIE_NAME', 'awx_sessionid'))
return ret
else:
if 'username' in self.request.POST:
logger.warning(smart_str(u"Login failed for user {} from {}".format(self.request.POST.get('username'), request.META.get('REMOTE_ADDR', None))))
logger.warning(smart_str(u"Login failed for user {} from {}".format(self.request.POST.get('username'), ip)))
ret.status_code = 401
return ret
class LoggedLogoutView(auth_views.LogoutView):
success_url_allowed_hosts = set(settings.LOGOUT_ALLOWED_HOSTS.split(",")) if settings.LOGOUT_ALLOWED_HOSTS else set()
def dispatch(self, request, *args, **kwargs):
original_user = getattr(request, 'user', None)
ret = super(LoggedLogoutView, self).dispatch(request, *args, **kwargs)
@@ -146,22 +155,23 @@ class APIView(views.APIView):
Store the Django REST Framework Request object as an attribute on the
normal Django request, store time the request started.
"""
remote_headers = ['REMOTE_ADDR', 'REMOTE_HOST']
self.time_started = time.time()
if getattr(settings, 'SQL_DEBUG', False):
self.queries_before = len(connection.queries)
if 'HTTP_X_TRUSTED_PROXY' in request.environ:
if validate_x_trusted_proxy_header(request.environ['HTTP_X_TRUSTED_PROXY']):
remote_headers = settings.REMOTE_HOST_HEADERS
else:
logger.warning("Request appeared to be a trusted upstream proxy but failed to provide a matching shared secret.")
# If there are any custom headers in REMOTE_HOST_HEADERS, make sure
# they respect the allowed proxy list
if all(
[
settings.PROXY_IP_ALLOWED_LIST,
request.environ.get('REMOTE_ADDR') not in settings.PROXY_IP_ALLOWED_LIST,
request.environ.get('REMOTE_HOST') not in settings.PROXY_IP_ALLOWED_LIST,
]
):
for custom_header in settings.REMOTE_HOST_HEADERS:
if custom_header.startswith('HTTP_'):
request.environ.pop(custom_header, None)
if settings.PROXY_IP_ALLOWED_LIST:
if not is_proxy_in_headers(self.request, settings.PROXY_IP_ALLOWED_LIST, remote_headers):
delete_headers_starting_with_http(request, settings.REMOTE_HOST_HEADERS)
drf_request = super(APIView, self).initialize_request(request, *args, **kwargs)
request.drf_request = drf_request
@@ -206,17 +216,21 @@ class APIView(views.APIView):
return response
if response.status_code >= 400:
ip = get_remote_host(request) # request.META.get('REMOTE_ADDR', None)
msg_data = {
'status_code': response.status_code,
'user_name': request.user,
'url_path': request.path,
'remote_addr': request.META.get('REMOTE_ADDR', None),
'remote_addr': ip,
}
if type(response.data) is dict:
msg_data['error'] = response.data.get('error', response.status_text)
elif type(response.data) is list:
msg_data['error'] = ", ".join(list(map(lambda x: x.get('error', response.status_text), response.data)))
if len(response.data) > 0 and isinstance(response.data[0], str):
msg_data['error'] = str(response.data[0])
else:
msg_data['error'] = ", ".join(list(map(lambda x: x.get('error', response.status_text), response.data)))
else:
msg_data['error'] = response.status_text

View File

@@ -5381,7 +5381,7 @@ class NotificationSerializer(BaseSerializer):
)
def get_body(self, obj):
if obj.notification_type in ('webhook', 'pagerduty'):
if obj.notification_type in ('webhook', 'pagerduty', 'awssns'):
if isinstance(obj.body, dict):
if 'body' in obj.body:
return obj.body['body']
@@ -5403,9 +5403,9 @@ class NotificationSerializer(BaseSerializer):
def to_representation(self, obj):
ret = super(NotificationSerializer, self).to_representation(obj)
if obj.notification_type == 'webhook':
if obj.notification_type in ('webhook', 'awssns'):
ret.pop('subject')
if obj.notification_type not in ('email', 'webhook', 'pagerduty'):
if obj.notification_type not in ('email', 'webhook', 'pagerduty', 'awssns'):
ret.pop('body')
return ret

View File

@@ -61,7 +61,9 @@ import pytz
from wsgiref.util import FileWrapper
# django-ansible-base
from ansible_base.lib.utils.requests import get_remote_hosts
from ansible_base.rbac.models import RoleEvaluation, ObjectRole
from ansible_base.resource_registry.shared_types import OrganizationType, TeamType, UserType
# AWX
from awx.main.tasks.system import send_notifications, update_inventory_computed_fields
@@ -128,6 +130,7 @@ from awx.api.views.mixin import (
from awx.api.pagination import UnifiedJobEventPagination
from awx.main.utils import set_environ
logger = logging.getLogger('awx.api.views')
@@ -710,16 +713,81 @@ class AuthView(APIView):
return Response(data)
def immutablesharedfields(cls):
'''
Class decorator to prevent modifying shared resources when ALLOW_LOCAL_RESOURCE_MANAGEMENT setting is set to False.
Works by overriding these view methods:
- create
- delete
- perform_update
create and delete are overridden to raise a PermissionDenied exception.
perform_update is overridden to check if any shared fields are being modified,
and raise a PermissionDenied exception if so.
'''
# create instead of perform_create because some of our views
# override create instead of perform_create
if hasattr(cls, 'create'):
cls.original_create = cls.create
@functools.wraps(cls.create)
def create_wrapper(*args, **kwargs):
if settings.ALLOW_LOCAL_RESOURCE_MANAGEMENT:
return cls.original_create(*args, **kwargs)
raise PermissionDenied({'detail': _('Creation of this resource is not allowed. Create this resource via the platform ingress.')})
cls.create = create_wrapper
if hasattr(cls, 'delete'):
cls.original_delete = cls.delete
@functools.wraps(cls.delete)
def delete_wrapper(*args, **kwargs):
if settings.ALLOW_LOCAL_RESOURCE_MANAGEMENT:
return cls.original_delete(*args, **kwargs)
raise PermissionDenied({'detail': _('Deletion of this resource is not allowed. Delete this resource via the platform ingress.')})
cls.delete = delete_wrapper
if hasattr(cls, 'perform_update'):
cls.original_perform_update = cls.perform_update
@functools.wraps(cls.perform_update)
def update_wrapper(*args, **kwargs):
if not settings.ALLOW_LOCAL_RESOURCE_MANAGEMENT:
view, serializer = args
instance = view.get_object()
if instance:
if isinstance(instance, models.Organization):
shared_fields = OrganizationType._declared_fields.keys()
elif isinstance(instance, models.User):
shared_fields = UserType._declared_fields.keys()
elif isinstance(instance, models.Team):
shared_fields = TeamType._declared_fields.keys()
attrs = serializer.validated_data
for field in shared_fields:
if field in attrs and getattr(instance, field) != attrs[field]:
raise PermissionDenied({field: _(f"Cannot change shared field '{field}'. Alter this field via the platform ingress.")})
return cls.original_perform_update(*args, **kwargs)
cls.perform_update = update_wrapper
return cls
@immutablesharedfields
class TeamList(ListCreateAPIView):
model = models.Team
serializer_class = serializers.TeamSerializer
@immutablesharedfields
class TeamDetail(RetrieveUpdateDestroyAPIView):
model = models.Team
serializer_class = serializers.TeamSerializer
@immutablesharedfields
class TeamUsersList(BaseUsersList):
model = models.User
serializer_class = serializers.UserSerializer
@@ -1101,6 +1169,7 @@ class ProjectCopy(CopyAPIView):
copy_return_serializer_class = serializers.ProjectSerializer
@immutablesharedfields
class UserList(ListCreateAPIView):
model = models.User
serializer_class = serializers.UserSerializer
@@ -1271,7 +1340,16 @@ class UserRolesList(SubListAttachDetachAPIView):
user = get_object_or_400(models.User, pk=self.kwargs['pk'])
role = get_object_or_400(models.Role, pk=sub_id)
credential_content_type = ContentType.objects.get_for_model(models.Credential)
content_types = ContentType.objects.get_for_models(models.Organization, models.Team, models.Credential) # dict of {model: content_type}
# Prevent user to be associated with team/org when ALLOW_LOCAL_RESOURCE_MANAGEMENT is False
if not settings.ALLOW_LOCAL_RESOURCE_MANAGEMENT:
for model in [models.Organization, models.Team]:
ct = content_types[model]
if role.content_type == ct and role.role_field in ['member_role', 'admin_role']:
data = dict(msg=_(f"Cannot directly modify user membership to {ct.model}. Direct shared resource management disabled"))
return Response(data, status=status.HTTP_403_FORBIDDEN)
credential_content_type = content_types[models.Credential]
if role.content_type == credential_content_type:
if 'disassociate' not in request.data and role.content_object.organization and user not in role.content_object.organization.member_role:
data = dict(msg=_("You cannot grant credential access to a user not in the credentials' organization"))
@@ -1343,6 +1421,7 @@ class UserActivityStreamList(SubListAPIView):
return qs.filter(Q(actor=parent) | Q(user__in=[parent]))
@immutablesharedfields
class UserDetail(RetrieveUpdateDestroyAPIView):
model = models.User
serializer_class = serializers.UserSerializer
@@ -2313,6 +2392,14 @@ class JobTemplateList(ListCreateAPIView):
serializer_class = serializers.JobTemplateSerializer
always_allow_superuser = False
def check_permissions(self, request):
if request.method == 'POST':
can_access, messages = request.user.can_access_with_errors(self.model, 'add', request.data)
if not can_access:
self.permission_denied(request, message=messages)
super(JobTemplateList, self).check_permissions(request)
class JobTemplateDetail(RelatedJobsPreventDeleteMixin, RetrieveUpdateDestroyAPIView):
model = models.JobTemplate
@@ -2692,12 +2779,7 @@ class JobTemplateCallback(GenericAPIView):
host for the current request.
"""
# Find the list of remote host names/IPs to check.
remote_hosts = set()
for header in settings.REMOTE_HOST_HEADERS:
for value in self.request.META.get(header, '').split(','):
value = value.strip()
if value:
remote_hosts.add(value)
remote_hosts = set(get_remote_hosts(self.request))
# Add the reverse lookup of IP addresses.
for rh in list(remote_hosts):
try:
@@ -3037,6 +3119,14 @@ class WorkflowJobTemplateList(ListCreateAPIView):
serializer_class = serializers.WorkflowJobTemplateSerializer
always_allow_superuser = False
def check_permissions(self, request):
if request.method == 'POST':
can_access, messages = request.user.can_access_with_errors(self.model, 'add', request.data)
if not can_access:
self.permission_denied(request, message=messages)
super(WorkflowJobTemplateList, self).check_permissions(request)
class WorkflowJobTemplateDetail(RelatedJobsPreventDeleteMixin, RetrieveUpdateDestroyAPIView):
model = models.WorkflowJobTemplate
@@ -4295,7 +4385,15 @@ class RoleUsersList(SubListAttachDetachAPIView):
user = get_object_or_400(models.User, pk=sub_id)
role = self.get_parent_object()
credential_content_type = ContentType.objects.get_for_model(models.Credential)
content_types = ContentType.objects.get_for_models(models.Organization, models.Team, models.Credential) # dict of {model: content_type}
if not settings.ALLOW_LOCAL_RESOURCE_MANAGEMENT:
for model in [models.Organization, models.Team]:
ct = content_types[model]
if role.content_type == ct and role.role_field in ['member_role', 'admin_role']:
data = dict(msg=_(f"Cannot directly modify user membership to {ct.model}. Direct shared resource management disabled"))
return Response(data, status=status.HTTP_403_FORBIDDEN)
credential_content_type = content_types[models.Credential]
if role.content_type == credential_content_type:
if 'disassociate' not in request.data and role.content_object.organization and user not in role.content_object.organization.member_role:
data = dict(msg=_("You cannot grant credential access to a user not in the credentials' organization"))

View File

@@ -53,15 +53,18 @@ from awx.api.serializers import (
CredentialSerializer,
)
from awx.api.views.mixin import RelatedJobsPreventDeleteMixin, OrganizationCountsMixin
from awx.api.views import immutablesharedfields
logger = logging.getLogger('awx.api.views.organization')
@immutablesharedfields
class OrganizationList(OrganizationCountsMixin, ListCreateAPIView):
model = Organization
serializer_class = OrganizationSerializer
@immutablesharedfields
class OrganizationDetail(RelatedJobsPreventDeleteMixin, RetrieveUpdateDestroyAPIView):
model = Organization
serializer_class = OrganizationSerializer
@@ -104,6 +107,7 @@ class OrganizationInventoriesList(SubListAPIView):
relationship = 'inventories'
@immutablesharedfields
class OrganizationUsersList(BaseUsersList):
model = User
serializer_class = UserSerializer
@@ -112,6 +116,7 @@ class OrganizationUsersList(BaseUsersList):
ordering = ('username',)
@immutablesharedfields
class OrganizationAdminsList(BaseUsersList):
model = User
serializer_class = UserSerializer
@@ -150,6 +155,7 @@ class OrganizationWorkflowJobTemplatesList(SubListCreateAPIView):
parent_key = 'organization'
@immutablesharedfields
class OrganizationTeamsList(SubListCreateAttachDetachAPIView):
model = Team
serializer_class = TeamSerializer

View File

@@ -61,6 +61,10 @@ class StringListBooleanField(ListField):
def to_representation(self, value):
try:
if isinstance(value, str):
# https://github.com/encode/django-rest-framework/commit/a180bde0fd965915718b070932418cabc831cee1
# DRF changed truthy and falsy lists to be capitalized
value = value.lower()
if isinstance(value, (list, tuple)):
return super(StringListBooleanField, self).to_representation(value)
elif value in BooleanField.TRUE_VALUES:
@@ -78,6 +82,8 @@ class StringListBooleanField(ListField):
def to_internal_value(self, data):
try:
if isinstance(data, str):
data = data.lower()
if isinstance(data, (list, tuple)):
return super(StringListBooleanField, self).to_internal_value(data)
elif data in BooleanField.TRUE_VALUES:

View File

@@ -130,9 +130,9 @@ def test_default_setting(settings, mocker):
settings.registry.register('AWX_SOME_SETTING', field_class=fields.CharField, category=_('System'), category_slug='system', default='DEFAULT')
settings_to_cache = mocker.Mock(**{'order_by.return_value': []})
with mocker.patch('awx.conf.models.Setting.objects.filter', return_value=settings_to_cache):
assert settings.AWX_SOME_SETTING == 'DEFAULT'
assert settings.cache.get('AWX_SOME_SETTING') == 'DEFAULT'
mocker.patch('awx.conf.models.Setting.objects.filter', return_value=settings_to_cache)
assert settings.AWX_SOME_SETTING == 'DEFAULT'
assert settings.cache.get('AWX_SOME_SETTING') == 'DEFAULT'
@pytest.mark.defined_in_file(AWX_SOME_SETTING='DEFAULT')
@@ -146,9 +146,9 @@ def test_setting_is_not_from_setting_file(settings, mocker):
settings.registry.register('AWX_SOME_SETTING', field_class=fields.CharField, category=_('System'), category_slug='system', default='DEFAULT')
settings_to_cache = mocker.Mock(**{'order_by.return_value': []})
with mocker.patch('awx.conf.models.Setting.objects.filter', return_value=settings_to_cache):
assert settings.AWX_SOME_SETTING == 'DEFAULT'
assert settings.registry.get_setting_field('AWX_SOME_SETTING').defined_in_file is False
mocker.patch('awx.conf.models.Setting.objects.filter', return_value=settings_to_cache)
assert settings.AWX_SOME_SETTING == 'DEFAULT'
assert settings.registry.get_setting_field('AWX_SOME_SETTING').defined_in_file is False
def test_empty_setting(settings, mocker):
@@ -156,10 +156,10 @@ def test_empty_setting(settings, mocker):
settings.registry.register('AWX_SOME_SETTING', field_class=fields.CharField, category=_('System'), category_slug='system')
mocks = mocker.Mock(**{'order_by.return_value': mocker.Mock(**{'__iter__': lambda self: iter([]), 'first.return_value': None})})
with mocker.patch('awx.conf.models.Setting.objects.filter', return_value=mocks):
with pytest.raises(AttributeError):
settings.AWX_SOME_SETTING
assert settings.cache.get('AWX_SOME_SETTING') == SETTING_CACHE_NOTSET
mocker.patch('awx.conf.models.Setting.objects.filter', return_value=mocks)
with pytest.raises(AttributeError):
settings.AWX_SOME_SETTING
assert settings.cache.get('AWX_SOME_SETTING') == SETTING_CACHE_NOTSET
def test_setting_from_db(settings, mocker):
@@ -168,9 +168,9 @@ def test_setting_from_db(settings, mocker):
setting_from_db = mocker.Mock(key='AWX_SOME_SETTING', value='FROM_DB')
mocks = mocker.Mock(**{'order_by.return_value': mocker.Mock(**{'__iter__': lambda self: iter([setting_from_db]), 'first.return_value': setting_from_db})})
with mocker.patch('awx.conf.models.Setting.objects.filter', return_value=mocks):
assert settings.AWX_SOME_SETTING == 'FROM_DB'
assert settings.cache.get('AWX_SOME_SETTING') == 'FROM_DB'
mocker.patch('awx.conf.models.Setting.objects.filter', return_value=mocks)
assert settings.AWX_SOME_SETTING == 'FROM_DB'
assert settings.cache.get('AWX_SOME_SETTING') == 'FROM_DB'
@pytest.mark.defined_in_file(AWX_SOME_SETTING='DEFAULT')
@@ -205,8 +205,8 @@ def test_db_setting_update(settings, mocker):
existing_setting = mocker.Mock(key='AWX_SOME_SETTING', value='FROM_DB')
setting_list = mocker.Mock(**{'order_by.return_value.first.return_value': existing_setting})
with mocker.patch('awx.conf.models.Setting.objects.filter', return_value=setting_list):
settings.AWX_SOME_SETTING = 'NEW-VALUE'
mocker.patch('awx.conf.models.Setting.objects.filter', return_value=setting_list)
settings.AWX_SOME_SETTING = 'NEW-VALUE'
assert existing_setting.value == 'NEW-VALUE'
existing_setting.save.assert_called_with(update_fields=['value'])
@@ -217,8 +217,8 @@ def test_db_setting_deletion(settings, mocker):
settings.registry.register('AWX_SOME_SETTING', field_class=fields.CharField, category=_('System'), category_slug='system')
existing_setting = mocker.Mock(key='AWX_SOME_SETTING', value='FROM_DB')
with mocker.patch('awx.conf.models.Setting.objects.filter', return_value=[existing_setting]):
del settings.AWX_SOME_SETTING
mocker.patch('awx.conf.models.Setting.objects.filter', return_value=[existing_setting])
del settings.AWX_SOME_SETTING
assert existing_setting.delete.call_count == 1
@@ -283,10 +283,10 @@ def test_sensitive_cache_data_is_encrypted(settings, mocker):
# use its primary key as part of the encryption key
setting_from_db = mocker.Mock(pk=123, key='AWX_ENCRYPTED', value='SECRET!')
mocks = mocker.Mock(**{'order_by.return_value': mocker.Mock(**{'__iter__': lambda self: iter([setting_from_db]), 'first.return_value': setting_from_db})})
with mocker.patch('awx.conf.models.Setting.objects.filter', return_value=mocks):
cache.set('AWX_ENCRYPTED', 'SECRET!')
assert cache.get('AWX_ENCRYPTED') == 'SECRET!'
assert native_cache.get('AWX_ENCRYPTED') == 'FRPERG!'
mocker.patch('awx.conf.models.Setting.objects.filter', return_value=mocks)
cache.set('AWX_ENCRYPTED', 'SECRET!')
assert cache.get('AWX_ENCRYPTED') == 'SECRET!'
assert native_cache.get('AWX_ENCRYPTED') == 'FRPERG!'
def test_readonly_sensitive_cache_data_is_encrypted(settings):
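
The repeated change in this file drops the `with` statement around mocker.patch: pytest-mock applies the patch immediately and reverts it at test teardown, and recent pytest-mock versions raise if the returned mock is used as a context manager. A hedged sketch of the adopted pattern (test body is illustrative):

def test_filter_is_mocked(mocker):
    mocked = mocker.patch('awx.conf.models.Setting.objects.filter', return_value=[])
    # the patch is active for the rest of the test and undone automatically at teardown
    assert mocked.call_count == 0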

View File

@@ -598,7 +598,7 @@ class InstanceGroupAccess(BaseAccess):
- a superuser
- admin role on the Instance group
I can add/delete Instance Groups:
- a superuser (system administrator)
- a superuser (system administrator), because these are not org-scoped
I can use Instance Groups when I have:
- use_role on the instance group
"""
@@ -627,7 +627,7 @@ class InstanceGroupAccess(BaseAccess):
def can_delete(self, obj):
if obj.name in [settings.DEFAULT_EXECUTION_QUEUE_NAME, settings.DEFAULT_CONTROL_PLANE_QUEUE_NAME]:
return False
return self.user.is_superuser
return self.user.has_obj_perm(obj, 'delete')
class UserAccess(BaseAccess):
@@ -1387,12 +1387,11 @@ class TeamAccess(BaseAccess):
class ExecutionEnvironmentAccess(BaseAccess):
"""
I can see an execution environment when:
- I'm a superuser
- I'm a member of the same organization
- it is a global ExecutionEnvironment
- I can see its organization
- It is a global ExecutionEnvironment
I can create/change an execution environment when:
- I'm a superuser
- I'm an admin for the organization(s)
- I have an organization or object role that gives access
"""
model = ExecutionEnvironment
@@ -1416,7 +1415,7 @@ class ExecutionEnvironmentAccess(BaseAccess):
raise PermissionDenied
if settings.ANSIBLE_BASE_ROLE_SYSTEM_ACTIVATED:
if not self.user.has_obj_perm(obj, 'change'):
raise PermissionDenied
return False
else:
if self.user not in obj.organization.execution_environment_admin_role:
raise PermissionDenied
@@ -1424,7 +1423,7 @@ class ExecutionEnvironmentAccess(BaseAccess):
new_org = get_object_from_data('organization', Organization, data, obj=obj)
if not new_org or self.user not in new_org.execution_environment_admin_role:
return False
return self.check_related('organization', Organization, data, obj=obj, mandatory=True, role_field='execution_environment_admin_role')
return self.check_related('organization', Organization, data, obj=obj, role_field='execution_environment_admin_role')
def can_delete(self, obj):
if obj.managed:
@@ -1596,6 +1595,8 @@ class JobTemplateAccess(NotificationAttachMixin, UnifiedCredentialsMixin, BaseAc
inventory = get_value(Inventory, 'inventory')
if inventory:
if self.user not in inventory.use_role:
if self.save_messages:
self.messages['inventory'] = [_('You do not have use permission on Inventory')]
return False
if not self.check_related('execution_environment', ExecutionEnvironment, data, role_field='read_role'):
@@ -1604,11 +1605,16 @@ class JobTemplateAccess(NotificationAttachMixin, UnifiedCredentialsMixin, BaseAc
project = get_value(Project, 'project')
# If the user has admin access to the project (as an org admin), should
# be able to proceed without additional checks.
if project:
return self.user in project.use_role
else:
if not project:
return False
if self.user not in project.use_role:
if self.save_messages:
self.messages['project'] = [_('You do not have use permission on Project')]
return False
return True
@check_superuser
def can_copy_related(self, obj):
"""
@@ -2092,11 +2098,20 @@ class WorkflowJobTemplateAccess(NotificationAttachMixin, BaseAccess):
if not data: # So the browseable API will work
return Organization.accessible_objects(self.user, 'workflow_admin_role').exists()
return bool(
self.check_related('organization', Organization, data, role_field='workflow_admin_role', mandatory=True)
and self.check_related('inventory', Inventory, data, role_field='use_role')
and self.check_related('execution_environment', ExecutionEnvironment, data, role_field='read_role')
)
if not self.check_related('organization', Organization, data, role_field='workflow_admin_role', mandatory=True):
if data.get('organization', None) is None:
self.messages['organization'] = [_('An organization is required to create a workflow job template for a normal user')]
return False
if not self.check_related('inventory', Inventory, data, role_field='use_role'):
self.messages['inventory'] = [_('You do not have use_role to the inventory')]
return False
if not self.check_related('execution_environment', ExecutionEnvironment, data, role_field='read_role'):
self.messages['execution_environment'] = [_('You do not have read_role to the execution environment')]
return False
return True
def can_copy(self, obj):
if self.save_messages:
@@ -2628,7 +2643,7 @@ class ScheduleAccess(UnifiedCredentialsMixin, BaseAccess):
class NotificationTemplateAccess(BaseAccess):
"""
I can see/use a notification_template if I have permission to
Run standard logic from DAB RBAC
"""
model = NotificationTemplate
@@ -2649,10 +2664,7 @@ class NotificationTemplateAccess(BaseAccess):
@check_superuser
def can_change(self, obj, data):
if obj.organization is None:
# only superusers are allowed to edit orphan notification templates
return False
return self.check_related('organization', Organization, data, obj=obj, role_field='notification_admin_role', mandatory=True)
return self.user.has_obj_perm(obj, 'change') and self.check_related('organization', Organization, data, obj=obj, role_field='notification_admin_role')
def can_admin(self, obj, data):
return self.can_change(obj, data)
@@ -2662,9 +2674,7 @@ class NotificationTemplateAccess(BaseAccess):
@check_superuser
def can_start(self, obj, validate_license=True):
if obj.organization is None:
return False
return self.user in obj.organization.notification_admin_role
return self.can_change(obj, None)
class NotificationAccess(BaseAccess):

View File

@@ -2,6 +2,7 @@
import logging
# Django
from django.core.checks import Error
from django.utils.translation import gettext_lazy as _
# Django REST Framework
@@ -954,3 +955,27 @@ def logging_validate(serializer, attrs):
register_validate('logging', logging_validate)
def csrf_trusted_origins_validate(serializer, attrs):
if not serializer.instance or not hasattr(serializer.instance, 'CSRF_TRUSTED_ORIGINS'):
return attrs
if 'CSRF_TRUSTED_ORIGINS' not in attrs:
return attrs
errors = []
for origin in attrs['CSRF_TRUSTED_ORIGINS']:
if "://" not in origin:
errors.append(
Error(
"As of Django 4.0, the values in the CSRF_TRUSTED_ORIGINS "
"setting must start with a scheme (usually http:// or "
"https://) but found %s. See the release notes for details." % origin,
)
)
if errors:
error_messages = [error.msg for error in errors]
raise serializers.ValidationError(_('\n'.join(error_messages)))
return attrs
register_validate('system', csrf_trusted_origins_validate)
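
For reference, the shape the validator enforces (values are illustrative):

CSRF_TRUSTED_ORIGINS = [
    'https://awx.example.com',  # accepted: includes a scheme
    # 'awx.example.com',        # rejected: no '://', triggers the Django 4.0 error text above
]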

View File

@@ -14,7 +14,7 @@ __all__ = [
'STANDARD_INVENTORY_UPDATE_ENV',
]
CLOUD_PROVIDERS = ('azure_rm', 'ec2', 'gce', 'vmware', 'openstack', 'rhv', 'satellite6', 'controller', 'insights', 'terraform')
CLOUD_PROVIDERS = ('azure_rm', 'ec2', 'gce', 'vmware', 'openstack', 'rhv', 'satellite6', 'controller', 'insights', 'terraform', 'openshift_virtualization')
PRIVILEGE_ESCALATION_METHODS = [
('sudo', _('Sudo')),
('su', _('Su')),

View File

@@ -102,7 +102,8 @@ def create_listener_connection():
# Apply overrides specifically for the listener connection
for k, v in settings.LISTENER_DATABASES.get('default', {}).items():
conf[k] = v
if k != 'OPTIONS':
conf[k] = v
for k, v in settings.LISTENER_DATABASES.get('default', {}).get('OPTIONS', {}).items():
conf['OPTIONS'][k] = v
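
A standalone sketch of the merge semantics this fix introduces: top-level keys still override, but OPTIONS is merged key-by-key instead of being clobbered (dict values are illustrative):

conf = {'NAME': 'awx', 'OPTIONS': {'sslmode': 'require'}}
overrides = {'NAME': 'awx_listener', 'OPTIONS': {'keepalives': 1}}
for k, v in overrides.items():
    if k != 'OPTIONS':
        conf[k] = v  # plain keys override as before
for k, v in overrides.get('OPTIONS', {}).items():
    conf['OPTIONS'][k] = v  # OPTIONS entries merge instead of replacing the whole dict
assert conf == {'NAME': 'awx_listener', 'OPTIONS': {'sslmode': 'require', 'keepalives': 1}}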

View File

@@ -252,7 +252,7 @@ class ImplicitRoleField(models.ForeignKey):
kwargs.setdefault('related_name', '+')
kwargs.setdefault('null', 'True')
kwargs.setdefault('editable', False)
kwargs.setdefault('on_delete', models.CASCADE)
kwargs.setdefault('on_delete', models.SET_NULL)
super(ImplicitRoleField, self).__init__(*args, **kwargs)
def deconstruct(self):
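
In Django terms, the new default means deleting the referenced Role nulls the pointer rather than cascading the delete; a minimal illustrative field definition (not an actual AWX model):

from django.db import models

class ExampleResource(models.Model):
    # With SET_NULL (and null=True), deleting the Role leaves this row intact
    # with admin_role_id = NULL; CASCADE would have deleted the row as well.
    admin_role = models.ForeignKey('main.Role', null=True, editable=False,
                                   related_name='+', on_delete=models.SET_NULL)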

View File

@@ -0,0 +1,12 @@
from django.core.management.base import BaseCommand, CommandError
from awx.main.models.ha import Instance
class Command(BaseCommand):
help = 'Check if the instance is ready; raises an error if not ready, so it can be used as a k8s readiness probe.'
def handle(self, *args, **options):
if Instance.objects.me().node_state != Instance.States.READY:
raise CommandError('Instance is not ready') # so that return code is not 0
return
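
A hedged usage sketch; the command's file name is not shown above, so 'check_instance_ready' below is a hypothetical name:

from django.core.management import call_command

# Hypothetical command name. A CommandError propagates as a non-zero exit code,
# which is exactly what a k8s exec readinessProbe keys off of.
call_command('check_instance_ready')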

View File

@@ -101,8 +101,9 @@ class Command(BaseCommand):
migrating = bool(executor.migration_plan(executor.loader.graph.leaf_nodes()))
connection.close() # Because of async nature, main loop will use new connection, so close this
except Exception as exc:
logger.warning(f'Error on startup of run_wsrelay (error: {exc}), retry in 10s...')
time.sleep(10)
time.sleep(10)  # Sleep first so supervisor does not restart the service too quickly and move it into the FATAL state
# sleep before logging because logging relies on settings, which require a database connection...
logger.warning(f'Error on startup of run_wsrelay (error: {exc}), slept for 10s...')
return
# In containerized deployments, migrations happen in the task container,
@@ -121,13 +122,14 @@ class Command(BaseCommand):
return
try:
my_hostname = Instance.objects.my_hostname()
my_hostname = Instance.objects.my_hostname() # This relies on settings.CLUSTER_HOST_ID which requires database connection
logger.info('Active instance with hostname {} is registered.'.format(my_hostname))
except RuntimeError as e:
# the CLUSTER_HOST_ID in the task, and web instance must match and
# ensure network connectivity between the task and web instance
logger.info('Unable to return currently active instance: {}, retry in 5s...'.format(e))
time.sleep(5)
time.sleep(10)  # Sleep first so supervisor does not restart the service too quickly and move it into the FATAL state
# sleep before logging because logging relies on settings, which require a database connection...
logger.warning(f"Unable to return currently active instance: {e}, slept for 10s before return.")
return
if options.get('status'):
@@ -166,12 +168,14 @@ class Command(BaseCommand):
WebsocketsMetricsServer().start()
while True:
try:
asyncio.run(WebSocketRelayManager().run())
except KeyboardInterrupt:
logger.info('Shutting down Websocket Relayer')
break
except Exception as e:
logger.exception('Error in Websocket Relayer, exception: {}. Restarting in 10 seconds'.format(e))
time.sleep(10)
try:
logger.info('Starting Websocket Relayer...')
websocket_relay_manager = WebSocketRelayManager()
asyncio.run(websocket_relay_manager.run())
except KeyboardInterrupt:
logger.info('Terminating Websocket Relayer')
except BaseException as e: # BaseException is used to catch all exceptions including asyncio.CancelledError
time.sleep(10)  # Sleep first so supervisor does not restart the service too quickly and move it into the FATAL state
# sleep before logging because logging relies on settings, which require a database connection...
logger.warning(f"Encountered an error while running the Websocket Relayer: {e}, slept for 10s...")
return
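
The recurring pattern in these hunks, in isolation: sleep before logging and returning, because a fast exit loop drives supervisord to mark the program FATAL, and logging itself goes through settings that need a database connection. A minimal sketch (start_service is any caller-supplied startup callable):

import logging
import time

logger = logging.getLogger('example')

def run_service_once(start_service):
    try:
        start_service()  # startup step that may fail while the database is unavailable
    except Exception as exc:
        time.sleep(10)  # back off first so supervisor does not flip the program to FATAL
        # log only after sleeping; logging relies on settings that require a database connection
        logger.warning('startup failed: %s, slept for 10s', exc)
        return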

View File

@@ -6,7 +6,7 @@ import logging
import threading
import time
import urllib.parse
from pathlib import Path
from pathlib import Path, PurePosixPath
from django.conf import settings
from django.contrib.auth import logout
@@ -138,14 +138,36 @@ class URLModificationMiddleware(MiddlewareMixin):
@classmethod
def _convert_named_url(cls, url_path):
url_units = url_path.split('/')
# If the identifier is an empty string, it is always invalid.
if len(url_units) < 6 or url_units[1] != 'api' or url_units[2] not in ['v2'] or not url_units[4]:
return url_path
resource = url_units[3]
default_prefix = PurePosixPath('/api/v2/')
optional_prefix = PurePosixPath(f'/api/{settings.OPTIONAL_API_URLPATTERN_PREFIX}/v2/')
url_path_original = url_path
url_path = PurePosixPath(url_path)
if set(optional_prefix.parts).issubset(set(url_path.parts)):
url_prefix = optional_prefix
elif set(default_prefix.parts).issubset(set(url_path.parts)):
url_prefix = default_prefix
else:
return url_path_original
# Remove prefix
url_path = PurePosixPath(*url_path.parts[len(url_prefix.parts) :])
try:
resource_path = PurePosixPath(url_path.parts[0])
name = url_path.parts[1]
url_suffix = PurePosixPath(*url_path.parts[2:]) # remove name and resource
except IndexError:
return url_path_original
resource = resource_path.parts[0]
if resource in settings.NAMED_URL_MAPPINGS:
url_units[4] = cls._named_url_to_pk(settings.NAMED_URL_GRAPH[settings.NAMED_URL_MAPPINGS[resource]], resource, url_units[4])
return '/'.join(url_units)
pk = PurePosixPath(cls._named_url_to_pk(settings.NAMED_URL_GRAPH[settings.NAMED_URL_MAPPINGS[resource]], resource, name))
else:
return url_path_original
parts = url_prefix.parts + resource_path.parts + pk.parts + url_suffix.parts
return PurePosixPath(*parts).as_posix() + '/'
def process_request(self, request):
old_path = request.path_info
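
How the PurePosixPath handling splits a named URL, sketched standalone (the ++-delimited identifier is illustrative):

from pathlib import PurePosixPath

path = PurePosixPath('/api/v2/inventories/Demo Inventory++Default/hosts/')
prefix = PurePosixPath('/api/v2/')
rest = PurePosixPath(*path.parts[len(prefix.parts):])
resource, name = rest.parts[0], rest.parts[1]
suffix = PurePosixPath(*rest.parts[2:])
assert (resource, name, str(suffix)) == ('inventories', 'Demo Inventory++Default', 'hosts')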

View File

@@ -17,49 +17,49 @@ class Migration(migrations.Migration):
model_name='organization',
name='execute_role',
field=awx.main.fields.ImplicitRoleField(
null='True', on_delete=django.db.models.deletion.CASCADE, parent_role='admin_role', related_name='+', to='main.Role'
null='True', on_delete=django.db.models.deletion.SET_NULL, parent_role='admin_role', related_name='+', to='main.Role'
),
),
migrations.AddField(
model_name='organization',
name='job_template_admin_role',
field=awx.main.fields.ImplicitRoleField(
editable=False, null='True', on_delete=django.db.models.deletion.CASCADE, parent_role='admin_role', related_name='+', to='main.Role'
editable=False, null='True', on_delete=django.db.models.deletion.SET_NULL, parent_role='admin_role', related_name='+', to='main.Role'
),
),
migrations.AddField(
model_name='organization',
name='credential_admin_role',
field=awx.main.fields.ImplicitRoleField(
null='True', on_delete=django.db.models.deletion.CASCADE, parent_role='admin_role', related_name='+', to='main.Role'
null='True', on_delete=django.db.models.deletion.SET_NULL, parent_role='admin_role', related_name='+', to='main.Role'
),
),
migrations.AddField(
model_name='organization',
name='inventory_admin_role',
field=awx.main.fields.ImplicitRoleField(
null='True', on_delete=django.db.models.deletion.CASCADE, parent_role='admin_role', related_name='+', to='main.Role'
null='True', on_delete=django.db.models.deletion.SET_NULL, parent_role='admin_role', related_name='+', to='main.Role'
),
),
migrations.AddField(
model_name='organization',
name='project_admin_role',
field=awx.main.fields.ImplicitRoleField(
null='True', on_delete=django.db.models.deletion.CASCADE, parent_role='admin_role', related_name='+', to='main.Role'
null='True', on_delete=django.db.models.deletion.SET_NULL, parent_role='admin_role', related_name='+', to='main.Role'
),
),
migrations.AddField(
model_name='organization',
name='workflow_admin_role',
field=awx.main.fields.ImplicitRoleField(
null='True', on_delete=django.db.models.deletion.CASCADE, parent_role='admin_role', related_name='+', to='main.Role'
null='True', on_delete=django.db.models.deletion.SET_NULL, parent_role='admin_role', related_name='+', to='main.Role'
),
),
migrations.AddField(
model_name='organization',
name='notification_admin_role',
field=awx.main.fields.ImplicitRoleField(
null='True', on_delete=django.db.models.deletion.CASCADE, parent_role='admin_role', related_name='+', to='main.Role'
null='True', on_delete=django.db.models.deletion.SET_NULL, parent_role='admin_role', related_name='+', to='main.Role'
),
),
migrations.AlterField(
@@ -67,7 +67,7 @@ class Migration(migrations.Migration):
name='admin_role',
field=awx.main.fields.ImplicitRoleField(
null='True',
on_delete=django.db.models.deletion.CASCADE,
on_delete=django.db.models.deletion.SET_NULL,
parent_role=['singleton:system_administrator', 'organization.credential_admin_role'],
related_name='+',
to='main.Role',
@@ -77,7 +77,7 @@ class Migration(migrations.Migration):
model_name='inventory',
name='admin_role',
field=awx.main.fields.ImplicitRoleField(
null='True', on_delete=django.db.models.deletion.CASCADE, parent_role='organization.inventory_admin_role', related_name='+', to='main.Role'
null='True', on_delete=django.db.models.deletion.SET_NULL, parent_role='organization.inventory_admin_role', related_name='+', to='main.Role'
),
),
migrations.AlterField(
@@ -85,7 +85,7 @@ class Migration(migrations.Migration):
name='admin_role',
field=awx.main.fields.ImplicitRoleField(
null='True',
on_delete=django.db.models.deletion.CASCADE,
on_delete=django.db.models.deletion.SET_NULL,
parent_role=['organization.project_admin_role', 'singleton:system_administrator'],
related_name='+',
to='main.Role',
@@ -96,7 +96,7 @@ class Migration(migrations.Migration):
name='admin_role',
field=awx.main.fields.ImplicitRoleField(
null='True',
on_delete=django.db.models.deletion.CASCADE,
on_delete=django.db.models.deletion.SET_NULL,
parent_role=['singleton:system_administrator', 'organization.workflow_admin_role'],
related_name='+',
to='main.Role',
@@ -107,7 +107,7 @@ class Migration(migrations.Migration):
name='execute_role',
field=awx.main.fields.ImplicitRoleField(
null='True',
on_delete=django.db.models.deletion.CASCADE,
on_delete=django.db.models.deletion.SET_NULL,
parent_role=['admin_role', 'organization.execute_role'],
related_name='+',
to='main.Role',
@@ -119,7 +119,7 @@ class Migration(migrations.Migration):
field=awx.main.fields.ImplicitRoleField(
editable=False,
null='True',
on_delete=django.db.models.deletion.CASCADE,
on_delete=django.db.models.deletion.SET_NULL,
parent_role=['project.organization.job_template_admin_role', 'inventory.organization.job_template_admin_role'],
related_name='+',
to='main.Role',
@@ -130,7 +130,7 @@ class Migration(migrations.Migration):
name='execute_role',
field=awx.main.fields.ImplicitRoleField(
null='True',
on_delete=django.db.models.deletion.CASCADE,
on_delete=django.db.models.deletion.SET_NULL,
parent_role=['admin_role', 'project.organization.execute_role', 'inventory.organization.execute_role'],
related_name='+',
to='main.Role',
@@ -142,7 +142,7 @@ class Migration(migrations.Migration):
field=awx.main.fields.ImplicitRoleField(
editable=False,
null='True',
on_delete=django.db.models.deletion.CASCADE,
on_delete=django.db.models.deletion.SET_NULL,
parent_role=[
'admin_role',
'execute_role',

View File

@@ -18,7 +18,7 @@ class Migration(migrations.Migration):
model_name='organization',
name='member_role',
field=awx.main.fields.ImplicitRoleField(
editable=False, null='True', on_delete=django.db.models.deletion.CASCADE, parent_role=['admin_role'], related_name='+', to='main.Role'
editable=False, null='True', on_delete=django.db.models.deletion.SET_NULL, parent_role=['admin_role'], related_name='+', to='main.Role'
),
),
migrations.AlterField(
@@ -27,7 +27,7 @@ class Migration(migrations.Migration):
field=awx.main.fields.ImplicitRoleField(
editable=False,
null='True',
on_delete=django.db.models.deletion.CASCADE,
on_delete=django.db.models.deletion.SET_NULL,
parent_role=[
'member_role',
'auditor_role',

View File

@@ -36,7 +36,7 @@ class Migration(migrations.Migration):
model_name='organization',
name='approval_role',
field=awx.main.fields.ImplicitRoleField(
editable=False, null='True', on_delete=django.db.models.deletion.CASCADE, parent_role='admin_role', related_name='+', to='main.Role'
editable=False, null='True', on_delete=django.db.models.deletion.SET_NULL, parent_role='admin_role', related_name='+', to='main.Role'
),
preserve_default='True',
),
@@ -46,7 +46,7 @@ class Migration(migrations.Migration):
field=awx.main.fields.ImplicitRoleField(
editable=False,
null='True',
on_delete=django.db.models.deletion.CASCADE,
on_delete=django.db.models.deletion.SET_NULL,
parent_role=['organization.approval_role', 'admin_role'],
related_name='+',
to='main.Role',
@@ -116,7 +116,7 @@ class Migration(migrations.Migration):
field=awx.main.fields.ImplicitRoleField(
editable=False,
null='True',
on_delete=django.db.models.deletion.CASCADE,
on_delete=django.db.models.deletion.SET_NULL,
parent_role=[
'member_role',
'auditor_role',
@@ -139,7 +139,7 @@ class Migration(migrations.Migration):
field=awx.main.fields.ImplicitRoleField(
editable=False,
null='True',
on_delete=django.db.models.deletion.CASCADE,
on_delete=django.db.models.deletion.SET_NULL,
parent_role=['singleton:system_auditor', 'organization.auditor_role', 'execute_role', 'admin_role', 'approval_role'],
related_name='+',
to='main.Role',

View File

@@ -80,7 +80,7 @@ class Migration(migrations.Migration):
field=awx.main.fields.ImplicitRoleField(
editable=False,
null='True',
on_delete=django.db.models.deletion.CASCADE,
on_delete=django.db.models.deletion.SET_NULL,
parent_role=['organization.job_template_admin_role'],
related_name='+',
to='main.Role',
@@ -92,7 +92,7 @@ class Migration(migrations.Migration):
field=awx.main.fields.ImplicitRoleField(
editable=False,
null='True',
on_delete=django.db.models.deletion.CASCADE,
on_delete=django.db.models.deletion.SET_NULL,
parent_role=['admin_role', 'organization.execute_role'],
related_name='+',
to='main.Role',
@@ -104,7 +104,7 @@ class Migration(migrations.Migration):
field=awx.main.fields.ImplicitRoleField(
editable=False,
null='True',
on_delete=django.db.models.deletion.CASCADE,
on_delete=django.db.models.deletion.SET_NULL,
parent_role=['organization.auditor_role', 'inventory.organization.auditor_role', 'execute_role', 'admin_role'],
related_name='+',
to='main.Role',

View File

@@ -26,7 +26,7 @@ class Migration(migrations.Migration):
model_name='organization',
name='execution_environment_admin_role',
field=awx.main.fields.ImplicitRoleField(
editable=False, null='True', on_delete=django.db.models.deletion.CASCADE, parent_role='admin_role', related_name='+', to='main.Role'
editable=False, null='True', on_delete=django.db.models.deletion.SET_NULL, parent_role='admin_role', related_name='+', to='main.Role'
),
preserve_default='True',
),

View File

@@ -17,7 +17,7 @@ class Migration(migrations.Migration):
field=awx.main.fields.ImplicitRoleField(
editable=False,
null='True',
on_delete=django.db.models.deletion.CASCADE,
on_delete=django.db.models.deletion.SET_NULL,
parent_role=[
'member_role',
'auditor_role',

View File

@@ -17,7 +17,7 @@ class Migration(migrations.Migration):
field=awx.main.fields.ImplicitRoleField(
editable=False,
null='True',
on_delete=django.db.models.deletion.CASCADE,
on_delete=django.db.models.deletion.SET_NULL,
parent_role=['singleton:system_administrator'],
related_name='+',
to='main.role',
@@ -30,7 +30,7 @@ class Migration(migrations.Migration):
field=awx.main.fields.ImplicitRoleField(
editable=False,
null='True',
on_delete=django.db.models.deletion.CASCADE,
on_delete=django.db.models.deletion.SET_NULL,
parent_role=['singleton:system_auditor', 'use_role', 'admin_role'],
related_name='+',
to='main.role',
@@ -41,7 +41,7 @@ class Migration(migrations.Migration):
model_name='instancegroup',
name='use_role',
field=awx.main.fields.ImplicitRoleField(
editable=False, null='True', on_delete=django.db.models.deletion.CASCADE, parent_role=['admin_role'], related_name='+', to='main.role'
editable=False, null='True', on_delete=django.db.models.deletion.SET_NULL, parent_role=['admin_role'], related_name='+', to='main.role'
),
preserve_default='True',
),

View File

@@ -4,7 +4,6 @@ from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('main', '0189_inbound_hop_nodes'),
]

View File

@@ -0,0 +1,51 @@
# Generated by Django 4.2.6 on 2024-05-08 07:29
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('main', '0192_custom_roles'),
]
operations = [
migrations.AlterField(
model_name='notification',
name='notification_type',
field=models.CharField(
choices=[
('awssns', 'AWS SNS'),
('email', 'Email'),
('grafana', 'Grafana'),
('irc', 'IRC'),
('mattermost', 'Mattermost'),
('pagerduty', 'Pagerduty'),
('rocketchat', 'Rocket.Chat'),
('slack', 'Slack'),
('twilio', 'Twilio'),
('webhook', 'Webhook'),
],
max_length=32,
),
),
migrations.AlterField(
model_name='notificationtemplate',
name='notification_type',
field=models.CharField(
choices=[
('awssns', 'AWS SNS'),
('email', 'Email'),
('grafana', 'Grafana'),
('irc', 'IRC'),
('mattermost', 'Mattermost'),
('pagerduty', 'Pagerduty'),
('rocketchat', 'Rocket.Chat'),
('slack', 'Slack'),
('twilio', 'Twilio'),
('webhook', 'Webhook'),
],
max_length=32,
),
),
]

View File

@@ -0,0 +1,61 @@
# Generated by Django 4.2.10 on 2024-06-12 19:59
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('main', '0193_alter_notification_notification_type_and_more'),
]
operations = [
migrations.AlterField(
model_name='inventorysource',
name='source',
field=models.CharField(
choices=[
('file', 'File, Directory or Script'),
('constructed', 'Template additional groups and hostvars at runtime'),
('scm', 'Sourced from a Project'),
('ec2', 'Amazon EC2'),
('gce', 'Google Compute Engine'),
('azure_rm', 'Microsoft Azure Resource Manager'),
('vmware', 'VMware vCenter'),
('satellite6', 'Red Hat Satellite 6'),
('openstack', 'OpenStack'),
('rhv', 'Red Hat Virtualization'),
('controller', 'Red Hat Ansible Automation Platform'),
('insights', 'Red Hat Insights'),
('terraform', 'Terraform State'),
('openshift_virtualization', 'OpenShift Virtualization'),
],
default=None,
max_length=32,
),
),
migrations.AlterField(
model_name='inventoryupdate',
name='source',
field=models.CharField(
choices=[
('file', 'File, Directory or Script'),
('constructed', 'Template additional groups and hostvars at runtime'),
('scm', 'Sourced from a Project'),
('ec2', 'Amazon EC2'),
('gce', 'Google Compute Engine'),
('azure_rm', 'Microsoft Azure Resource Manager'),
('vmware', 'VMware vCenter'),
('satellite6', 'Red Hat Satellite 6'),
('openstack', 'OpenStack'),
('rhv', 'Red Hat Virtualization'),
('controller', 'Red Hat Ansible Automation Platform'),
('insights', 'Red Hat Insights'),
('terraform', 'Terraform State'),
('openshift_virtualization', 'OpenShift Virtualization'),
],
default=None,
max_length=32,
),
),
]

View File

@@ -0,0 +1,26 @@
# Generated by Django 4.2.6 on 2024-06-20 15:55
from django.db import migrations
def delete_execution_environment_read_role(apps, schema_editor):
permission_classes = [apps.get_model('auth', 'Permission'), apps.get_model('dab_rbac', 'DABPermission')]
for permission_cls in permission_classes:
ee_read_perm = permission_cls.objects.filter(codename='view_executionenvironment').first()
if ee_read_perm:
ee_read_perm.delete()
class Migration(migrations.Migration):
dependencies = [
('main', '0194_alter_inventorysource_source_and_more'),
]
operations = [
migrations.AlterModelOptions(
name='executionenvironment',
options={'default_permissions': ('add', 'change', 'delete'), 'ordering': ('-created',)},
),
migrations.RunPython(delete_execution_environment_read_role, migrations.RunPython.noop),
]

View File

@@ -134,12 +134,22 @@ def get_permissions_for_role(role_field, children_map, apps):
# more special cases for those same above special org-level roles
if role_field.name == 'auditor_role':
for codename in ('view_notificationtemplate', 'view_executionenvironment'):
perm_list.append(Permission.objects.get(codename=codename))
perm_list.append(Permission.objects.get(codename='view_notificationtemplate'))
return perm_list
def model_class(ct, apps):
"""
You cannot use model methods in migrations, so this duplicates
what ContentType.model_class does, using the current apps registry
"""
try:
return apps.get_model(ct.app_label, ct.model)
except LookupError:
return None
def migrate_to_new_rbac(apps, schema_editor):
"""
This method moves the assigned permissions from the old rbac.py models
@@ -197,7 +207,7 @@ def migrate_to_new_rbac(apps, schema_editor):
role_definition = managed_definitions[permissions]
else:
action = role.role_field.rsplit('_', 1)[0] # remove the _field ending of the name
role_definition_name = f'{role.content_type.model_class().__name__} {action.title()}'
role_definition_name = f'{model_class(role.content_type, apps).__name__} {action.title()}'
description = role_descriptions[role.role_field]
if type(description) == dict:
@@ -264,7 +274,13 @@ def setup_managed_role_definitions(apps, schema_editor):
"""
Idempotent method to create or sync the managed role definitions
"""
to_create = settings.ANSIBLE_BASE_ROLE_PRECREATE
to_create = {
'object_admin': '{cls.__name__} Admin',
'org_admin': 'Organization Admin',
'org_audit': 'Organization Audit',
'org_children': 'Organization {cls.__name__} Admin',
'special': '{cls.__name__} {action}',
}
ContentType = apps.get_model('contenttypes', 'ContentType')
Permission = apps.get_model('dab_rbac', 'DABPermission')
@@ -274,14 +290,15 @@ def setup_managed_role_definitions(apps, schema_editor):
managed_role_definitions = []
org_perms = set()
for cls in permission_registry._registry:
for cls in permission_registry.all_registered_models:
ct = ContentType.objects.get_for_model(cls)
cls_name = cls._meta.model_name
object_perms = set(Permission.objects.filter(content_type=ct))
# Special case for InstanceGroup, which has an organization field but is not an organization child object
if cls._meta.model_name != 'instancegroup':
if cls_name != 'instancegroup':
org_perms.update(object_perms)
if 'object_admin' in to_create and cls != Organization:
if 'object_admin' in to_create and cls_name != 'organization':
indiv_perms = object_perms.copy()
add_perms = [perm for perm in indiv_perms if perm.codename.startswith('add_')]
if add_perms:
@@ -294,7 +311,7 @@ def setup_managed_role_definitions(apps, schema_editor):
)
)
if 'org_children' in to_create and cls != Organization:
if 'org_children' in to_create and (cls_name not in ('organization', 'instancegroup', 'team')):
org_child_perms = object_perms.copy()
org_child_perms.add(Permission.objects.get(codename='view_organization'))
@@ -311,7 +328,8 @@ def setup_managed_role_definitions(apps, schema_editor):
if 'special' in to_create:
special_perms = []
for perm in object_perms:
if perm.codename.split('_')[0] not in ('add', 'change', 'update', 'delete', 'view'):
# Organization auditor is handled separately
if perm.codename.split('_')[0] not in ('add', 'change', 'delete', 'view', 'audit'):
special_perms.append(perm)
for perm in special_perms:
action = perm.codename.split('_')[0]
@@ -337,6 +355,19 @@ def setup_managed_role_definitions(apps, schema_editor):
)
)
if 'org_audit' in to_create:
audit_permissions = [perm for perm in org_perms if perm.codename.startswith('view_')]
audit_permissions.append(Permission.objects.get(codename='audit_organization'))
managed_role_definitions.append(
get_or_create_managed(
to_create['org_audit'].format(cls=Organization),
'Has permission to view all objects inside of a single organization',
org_ct,
audit_permissions,
RoleDefinition,
)
)
unexpected_role_definitions = RoleDefinition.objects.filter(managed=True).exclude(pk__in=[rd.pk for rd in managed_role_definitions])
for role_definition in unexpected_role_definitions:
logger.info(f'Deleting old managed role definition {role_definition.name}, pk={role_definition.pk}')
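
For reference, how the inlined to_create name templates expand (the class below is a stand-in for a registered model):

class JobTemplate:
    pass

templates = {'object_admin': '{cls.__name__} Admin', 'org_children': 'Organization {cls.__name__} Admin'}
assert templates['object_admin'].format(cls=JobTemplate) == 'JobTemplate Admin'
assert templates['org_children'].format(cls=JobTemplate) == 'Organization JobTemplate Admin'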

View File

@@ -176,17 +176,17 @@ pre_delete.connect(cleanup_created_modified_by, sender=User)
@property
def user_get_organizations(user):
return Organization.objects.filter(member_role__members=user)
return Organization.access_qs(user, 'member')
@property
def user_get_admin_of_organizations(user):
return Organization.objects.filter(admin_role__members=user)
return Organization.access_qs(user, 'change')
@property
def user_get_auditor_of_organizations(user):
return Organization.objects.filter(auditor_role__members=user)
return Organization.access_qs(user, 'audit')
@property

View File

@@ -21,6 +21,10 @@ from django.conf import settings
from django.utils.encoding import force_str
from django.utils.functional import cached_property
from django.utils.timezone import now
from django.contrib.auth.models import User
# DRF
from rest_framework.serializers import ValidationError as DRFValidationError
# AWX
from awx.api.versioning import reverse
@@ -41,6 +45,7 @@ from awx.main.models.rbac import (
ROLE_SINGLETON_SYSTEM_ADMINISTRATOR,
ROLE_SINGLETON_SYSTEM_AUDITOR,
)
from awx.main.models import Team, Organization
from awx.main.utils import encrypt_field
from . import injectors as builtin_injectors
@@ -315,6 +320,15 @@ class Credential(PasswordFieldsModel, CommonModelNameNotUnique, ResourceMixin):
else:
raise ValueError('{} is not a dynamic input field'.format(field_name))
def validate_role_assignment(self, actor, role_definition):
if isinstance(actor, User):
if actor.is_superuser or Organization.access_qs(actor, 'change').filter(id=self.organization.id).exists():
return
if isinstance(actor, Team):
if actor.organization == self.organization:
return
raise DRFValidationError({'detail': _(f"You cannot grant credential access to a {actor._meta.object_name} not in the credentials' organization")})
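
Hedged illustration of when this hook fires: the DAB role assignment endpoints call validate_role_assignment on the target object before creating the grant, so a request like the one below gets a 400 when the user belongs to another organization (ids are hypothetical):

# POST /api/v2/role_user_assignments/
payload = {'role_definition': 5, 'object_id': 42, 'user': 7}
# -> 400 with the "not in the credentials' organization" detail message above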
class CredentialType(CommonModelNameNotUnique):
"""

View File

@@ -4,11 +4,12 @@ import datetime
from datetime import timezone
import logging
from collections import defaultdict
import itertools
import time
from django.conf import settings
from django.core.exceptions import ObjectDoesNotExist
from django.db import models, DatabaseError
from django.db import models, DatabaseError, transaction
from django.db.models.functions import Cast
from django.utils.dateparse import parse_datetime
from django.utils.text import Truncator
@@ -605,19 +606,23 @@ class JobEvent(BasePlaybookEvent):
def _update_host_metrics(updated_hosts_list):
from awx.main.models import HostMetric # circular import
# bulk-create
current_time = now()
HostMetric.objects.bulk_create(
[HostMetric(hostname=hostname, last_automation=current_time) for hostname in updated_hosts_list], ignore_conflicts=True, batch_size=100
)
# bulk-update
batch_start, batch_size = 0, 1000
while batch_start <= len(updated_hosts_list):
batched_host_list = updated_hosts_list[batch_start : (batch_start + batch_size)]
HostMetric.objects.filter(hostname__in=batched_host_list).update(
last_automation=current_time, automated_counter=models.F('automated_counter') + 1, deleted=False
)
batch_start += batch_size
# FUTURE:
# - Hand-rolled implementation of itertools.batched(), introduced in Python 3.12. Replace.
# - Ability to do ORM upserts *may* have been introduced in Django 5.0.
# See the entry about `create_defaults` in https://docs.djangoproject.com/en/5.0/releases/5.0/#models.
# Hopefully this will be fully ready for batch use by 5.2 LTS.
args = [iter(updated_hosts_list)] * 500
for hosts in itertools.zip_longest(*args):
with transaction.atomic():
HostMetric.objects.bulk_create(
[HostMetric(hostname=hostname, last_automation=current_time) for hostname in hosts if hostname is not None], ignore_conflicts=True
)
HostMetric.objects.filter(hostname__in=hosts).update(
last_automation=current_time, automated_counter=models.F('automated_counter') + 1, deleted=False
)
@property
def job_verbosity(self):
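
The batching idiom used in _update_host_metrics, isolated: passing n references to a single iterator into zip_longest yields fixed-size batches, with the final batch padded by None (hence the `if hostname is not None` filter above):

import itertools

items = list(range(7))
args = [iter(items)] * 3  # three references to the *same* iterator
batches = [[x for x in chunk if x is not None] for chunk in itertools.zip_longest(*args)]
assert batches == [[0, 1, 2], [3, 4, 5], [6]]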

View File

@@ -1,6 +1,8 @@
from django.db import models
from django.utils.translation import gettext_lazy as _
from rest_framework.exceptions import ValidationError
from awx.api.versioning import reverse
from awx.main.models.base import CommonModel
from awx.main.validators import validate_container_image_name
@@ -12,6 +14,8 @@ __all__ = ['ExecutionEnvironment']
class ExecutionEnvironment(CommonModel):
class Meta:
ordering = ('-created',)
# Remove view permission, as a temporary solution, defer to organization read permission
default_permissions = ('add', 'change', 'delete')
PULL_CHOICES = [
('always', _("Always pull container before running.")),
@@ -53,3 +57,16 @@ class ExecutionEnvironment(CommonModel):
def get_absolute_url(self, request=None):
return reverse('api:execution_environment_detail', kwargs={'pk': self.pk}, request=request)
def validate_role_assignment(self, actor, role_definition):
if self.managed:
raise ValidationError({'object_id': _('Can not assign object roles to managed Execution Environments')})
if self.organization_id is None:
raise ValidationError({'object_id': _('Can not assign object roles to global Execution Environments')})
if actor._meta.model_name == 'user' and (not actor.has_obj_perm(self.organization, 'view')):
raise ValidationError({'user': _('User must have view permission to Execution Environment organization')})
if actor._meta.model_name == 'team':
organization_cls = self._meta.get_field('organization').related_model
if self.organization not in organization_cls.access_qs(actor, 'view'):
raise ValidationError({'team': _('Team must have view permission to Execution Environment organization')})

View File

@@ -933,6 +933,7 @@ class InventorySourceOptions(BaseModel):
('controller', _('Red Hat Ansible Automation Platform')),
('insights', _('Red Hat Insights')),
('terraform', _('Terraform State')),
('openshift_virtualization', _('OpenShift Virtualization')),
]
# From the options of the Django management base command
@@ -1042,7 +1043,7 @@ class InventorySourceOptions(BaseModel):
def cloud_credential_validation(source, cred):
if not source:
return None
if cred and source not in ('custom', 'scm'):
if cred and source not in ('custom', 'scm', 'openshift_virtualization'):
# If a credential was provided, it's important that it matches
# the actual inventory source being used (Amazon requires Amazon
# credentials; Rackspace requires Rackspace credentials; etc...)
@@ -1051,12 +1052,14 @@ class InventorySourceOptions(BaseModel):
# Allow an EC2 source to omit the credential. If Tower is running on
# an EC2 instance with an IAM Role assigned, boto will use credentials
# from the instance metadata instead of those explicitly provided.
elif source in CLOUD_PROVIDERS and source != 'ec2':
elif source in CLOUD_PROVIDERS and source not in ['ec2', 'openshift_virtualization']:
return _('Credential is required for a cloud source.')
elif source == 'custom' and cred and cred.credential_type.kind in ('scm', 'ssh', 'insights', 'vault'):
return _('Credentials of type machine, source control, insights and vault are disallowed for custom inventory sources.')
elif source == 'scm' and cred and cred.credential_type.kind in ('insights', 'vault'):
return _('Credentials of type insights and vault are disallowed for scm inventory sources.')
elif source == 'openshift_virtualization' and cred and cred.credential_type.kind != 'kubernetes':
return _('A credential of type kubernetes is required for openshift_virtualization inventory sources.')
return None
def get_cloud_credential(self):
@@ -1660,7 +1663,7 @@ class terraform(PluginFileInjector):
credential = inventory_update.get_cloud_credential()
private_data = {'credentials': {}}
gce_cred = credential.get_input('gce_credentials')
gce_cred = credential.get_input('gce_credentials', default=None)
if gce_cred:
private_data['credentials'][credential] = gce_cred
return private_data
@@ -1669,7 +1672,7 @@ class terraform(PluginFileInjector):
env = super(terraform, self).get_plugin_env(inventory_update, private_data_dir, private_data_files)
credential = inventory_update.get_cloud_credential()
cred_data = private_data_files['credentials']
if cred_data[credential]:
if credential in cred_data:
env['GOOGLE_BACKEND_CREDENTIALS'] = to_container_path(cred_data[credential], private_data_dir)
return env
@@ -1693,6 +1696,16 @@ class insights(PluginFileInjector):
use_fqcn = True
class openshift_virtualization(PluginFileInjector):
plugin_name = 'kubevirt'
base_injector = 'template'
namespace = 'kubevirt'
collection = 'core'
downstream_namespace = 'redhat'
downstream_collection = 'openshift_virtualization'
use_fqcn = True
class constructed(PluginFileInjector):
plugin_name = 'constructed'
namespace = 'ansible'

View File

@@ -31,6 +31,7 @@ from awx.main.notifications.mattermost_backend import MattermostBackend
from awx.main.notifications.grafana_backend import GrafanaBackend
from awx.main.notifications.rocketchat_backend import RocketChatBackend
from awx.main.notifications.irc_backend import IrcBackend
from awx.main.notifications.awssns_backend import AWSSNSBackend
logger = logging.getLogger('awx.main.models.notifications')
@@ -40,6 +41,7 @@ __all__ = ['NotificationTemplate', 'Notification']
class NotificationTemplate(CommonModelNameNotUnique):
NOTIFICATION_TYPES = [
('awssns', _('AWS SNS'), AWSSNSBackend),
('email', _('Email'), CustomEmailBackend),
('slack', _('Slack'), SlackBackend),
('twilio', _('Twilio'), TwilioBackend),

View File

@@ -10,6 +10,9 @@ import re
# django-rest-framework
from rest_framework.serializers import ValidationError
# crum to impersonate users
from crum import impersonate
# Django
from django.db import models, transaction, connection
from django.db.models.signals import m2m_changed
@@ -553,17 +556,22 @@ def get_role_definition(role):
return
f = obj._meta.get_field(role.role_field)
action_name = f.name.rsplit("_", 1)[0]
rd_name = f'{type(obj).__name__} {action_name.title()} Compat'
model_print = type(obj).__name__
rd_name = f'{model_print} {action_name.title()} Compat'
perm_list = get_role_codenames(role)
defaults = {'content_type_id': role.content_type_id}
try:
rd, created = RoleDefinition.objects.get_or_create(name=rd_name, permissions=perm_list, defaults=defaults)
except ValidationError:
# This is a tricky case - practically speaking, users should not be allowed to create team roles
# or roles that include the team member permission.
# If we need to create this for compatibility purposes then we will create it as a managed non-editable role
defaults['managed'] = True
rd, created = RoleDefinition.objects.get_or_create(name=rd_name, permissions=perm_list, defaults=defaults)
defaults = {
'content_type_id': role.content_type_id,
'description': f'Has {action_name.title()} permission to {model_print} for backwards API compatibility',
}
with impersonate(None):
try:
rd, created = RoleDefinition.objects.get_or_create(name=rd_name, permissions=perm_list, defaults=defaults)
except ValidationError:
# This is a tricky case - practically speaking, users should not be allowed to create team roles
# or roles that include the team member permission.
# If we need to create this for compatibility purposes then we will create it as a managed non-editable role
defaults['managed'] = True
rd, created = RoleDefinition.objects.get_or_create(name=rd_name, permissions=perm_list, defaults=defaults)
return rd
@@ -583,14 +591,20 @@ def get_role_from_object_role(object_role):
role_name = role_name.lower()
model_cls = apps.get_model('main', target_model_name)
target_model_name = get_type_for_model(model_cls)
# exception cases completely specific to one model naming convention
if target_model_name == 'notification_template':
target_model_name = 'notification' # total exception
target_model_name = 'notification'
elif target_model_name == 'workflow_job_template':
target_model_name = 'workflow'
role_name = f'{target_model_name}_admin_role'
elif rd.name.endswith(' Admin'):
# cases like "project-admin"
role_name = 'admin_role'
elif rd.name == 'Organization Audit':
role_name = 'auditor_role'
else:
model_name, role_name = rd.name.split()
role_name = role_name.lower()
role_name += '_role'

View File

@@ -17,7 +17,7 @@ from collections import OrderedDict
# Django
from django.conf import settings
from django.db import models, connection
from django.db import models, connection, transaction
from django.core.exceptions import NON_FIELD_ERRORS
from django.utils.translation import gettext_lazy as _
from django.utils.timezone import now
@@ -31,6 +31,7 @@ from rest_framework.exceptions import ParseError
from polymorphic.models import PolymorphicModel
from ansible_base.lib.utils.models import prevent_search, get_type_for_model
from ansible_base.rbac import permission_registry
# AWX
from awx.main.models.base import CommonModelNameNotUnique, PasswordFieldsModel, NotificationFieldsModel
@@ -197,9 +198,7 @@ class UnifiedJobTemplate(PolymorphicModel, CommonModelNameNotUnique, ExecutionEn
@classmethod
def _submodels_with_roles(cls):
ujt_classes = [c for c in cls.__subclasses__() if c._meta.model_name not in ['inventorysource', 'systemjobtemplate']]
ct_dict = ContentType.objects.get_for_models(*ujt_classes)
return [ct.id for ct in ct_dict.values()]
return [c for c in cls.__subclasses__() if permission_registry.is_registered(c)]
@classmethod
def accessible_pk_qs(cls, accessor, role_field):
@@ -215,8 +214,16 @@ class UnifiedJobTemplate(PolymorphicModel, CommonModelNameNotUnique, ExecutionEn
action = to_permissions[role_field]
# Special condition for super auditor
role_subclasses = cls._submodels_with_roles()
role_cts = ContentType.objects.get_for_models(*role_subclasses).values()
all_codenames = {f'{action}_{cls._meta.model_name}' for cls in role_subclasses}
if not (all_codenames - accessor.singleton_permissions()):
qs = cls.objects.filter(polymorphic_ctype__in=role_cts)
return qs.values_list('id', flat=True)
return (
RoleEvaluation.objects.filter(role__in=accessor.has_roles.all(), codename__startswith=action, content_type_id__in=cls._submodels_with_roles())
RoleEvaluation.objects.filter(role__in=accessor.has_roles.all(), codename__in=all_codenames, content_type_id__in=[ct.id for ct in role_cts])
.values_list('object_id')
.distinct()
)
@@ -273,7 +280,14 @@ class UnifiedJobTemplate(PolymorphicModel, CommonModelNameNotUnique, ExecutionEn
if new_next_schedule:
if new_next_schedule.pk == self.next_schedule_id and new_next_schedule.next_run == self.next_job_run:
return # no-op, common for infrequent schedules
self.next_schedule = new_next_schedule
# If in a transaction, use select_for_update to lock the next schedule row, which
# prevents a race condition if new_next_schedule is deleted elsewhere during this transaction
if transaction.get_autocommit():
self.next_schedule = related_schedules.first()
else:
self.next_schedule = related_schedules.select_for_update().first()
self.next_job_run = new_next_schedule.next_run
self.save(update_fields=['next_schedule', 'next_job_run'])
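
The row-locking idea in isolation (hedged sketch; the query is illustrative): inside a transaction, select_for_update() makes a concurrent delete of the schedule row block until this transaction finishes:

from django.db import transaction
from awx.main.models import Schedule

with transaction.atomic():
    sched = Schedule.objects.filter(enabled=True).select_for_update().first()
    # a concurrent DELETE of this row now waits on the lock until commit/rollback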
@@ -823,7 +837,7 @@ class UnifiedJob(
update_fields.append(key)
if parent_instance:
if self.status in ('pending', 'waiting', 'running'):
if self.status in ('pending', 'running'):
if parent_instance.current_job != self:
parent_instance_set('current_job', self)
# Update parent with all the 'good' states of it's child
@@ -860,7 +874,7 @@ class UnifiedJob(
# If this job already exists in the database, retrieve a copy of
# the job in its prior state.
# If update_fields are given without status, then that indicates no change
if self.pk and ((not update_fields) or ('status' in update_fields)):
if self.status != 'waiting' and self.pk and ((not update_fields) or ('status' in update_fields)):
self_before = self.__class__.objects.get(pk=self.pk)
if self_before.status != self.status:
status_before = self_before.status
@@ -902,7 +916,8 @@ class UnifiedJob(
update_fields.append('elapsed')
# Ensure that the job template information is current.
if self.unified_job_template != self._get_parent_instance():
# unless status is 'waiting', because this happens in large batches at end of task manager runs and is blocking
if self.status != 'waiting' and self.unified_job_template != self._get_parent_instance():
self.unified_job_template = self._get_parent_instance()
if 'unified_job_template' not in update_fields:
update_fields.append('unified_job_template')
@@ -915,8 +930,9 @@ class UnifiedJob(
# Okay; we're done. Perform the actual save.
result = super(UnifiedJob, self).save(*args, **kwargs)
# If status changed, update the parent instance.
if self.status != status_before:
# If status changed, update the parent instance
# unless status is 'waiting', because this happens in large batches at end of task manager runs and is blocking
if self.status != status_before and self.status != 'waiting':
# Update parent outside of the transaction for Job w/ allow_simultaneous=True
# This dodges lock contention at the expense of the foreign key not being
# completely correct.

View File

@@ -0,0 +1,70 @@
# Copyright (c) 2016 Ansible, Inc.
# All Rights Reserved.
import json
import logging
import boto3
from botocore.exceptions import ClientError
from awx.main.notifications.base import AWXBaseEmailBackend
from awx.main.notifications.custom_notification_base import CustomNotificationBase
logger = logging.getLogger('awx.main.notifications.awssns_backend')
WEBSOCKET_TIMEOUT = 30
class AWSSNSBackend(AWXBaseEmailBackend, CustomNotificationBase):
init_parameters = {
"aws_region": {"label": "AWS Region", "type": "string", "default": ""},
"aws_access_key_id": {"label": "Access Key ID", "type": "string", "default": ""},
"aws_secret_access_key": {"label": "Secret Access Key", "type": "password", "default": ""},
"aws_session_token": {"label": "Session Token", "type": "password", "default": ""},
"sns_topic_arn": {"label": "SNS Topic ARN", "type": "string", "default": ""},
}
recipient_parameter = "sns_topic_arn"
sender_parameter = None
DEFAULT_BODY = "{{ job_metadata }}"
default_messages = CustomNotificationBase.job_metadata_messages
def __init__(self, aws_region, aws_access_key_id, aws_secret_access_key, aws_session_token, fail_silently=False, **kwargs):
session = boto3.session.Session()
client_config = {"service_name": 'sns'}
if aws_region:
client_config["region_name"] = aws_region
if aws_secret_access_key:
client_config["aws_secret_access_key"] = aws_secret_access_key
if aws_access_key_id:
client_config["aws_access_key_id"] = aws_access_key_id
if aws_session_token:
client_config["aws_session_token"] = aws_session_token
self.client = session.client(**client_config)
super(AWSSNSBackend, self).__init__(fail_silently=fail_silently)
def _sns_publish(self, topic_arn, message):
self.client.publish(TopicArn=topic_arn, Message=message, MessageAttributes={})
def format_body(self, body):
if isinstance(body, str):
try:
body = json.loads(body)
except json.JSONDecodeError:
pass
if isinstance(body, dict):
body = json.dumps(body)  # convert a dict body to a JSON string
return body
def send_messages(self, messages):
sent_messages = 0
for message in messages:
sns_topic_arn = str(message.recipients()[0])
try:
self._sns_publish(topic_arn=sns_topic_arn, message=message.body)
sent_messages += 1
except ClientError as error:
if not self.fail_silently:
raise error
return sent_messages
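
A hedged construction sketch mirroring init_parameters above; values are placeholders, not real credentials:

backend = AWSSNSBackend(
    aws_region='us-east-1',
    aws_access_key_id='AKIA-PLACEHOLDER',
    aws_secret_access_key='placeholder-secret',
    aws_session_token=None,
)
# send_messages() publishes each message body to the SNS topic ARN given as the recipient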

View File

@@ -32,3 +32,15 @@ class CustomNotificationBase(object):
"denied": {"message": DEFAULT_APPROVAL_DENIED_MSG, "body": None},
},
}
job_metadata_messages = {
"started": {"body": "{{ job_metadata }}"},
"success": {"body": "{{ job_metadata }}"},
"error": {"body": "{{ job_metadata }}"},
"workflow_approval": {
"running": {"body": '{"body": "The approval node \\"{{ approval_node_name }}\\" needs review. This node can be viewed at: {{ workflow_url }}"}'},
"approved": {"body": '{"body": "The approval node \\"{{ approval_node_name }}\\" was approved. {{ workflow_url }}"}'},
"timed_out": {"body": '{"body": "The approval node \\"{{ approval_node_name }}\\" has timed out. {{ workflow_url }}"}'},
"denied": {"body": '{"body": "The approval node \\"{{ approval_node_name }}\\" was denied. {{ workflow_url }}"}'},
},
}

View File

@@ -27,17 +27,7 @@ class WebhookBackend(AWXBaseEmailBackend, CustomNotificationBase):
sender_parameter = None
DEFAULT_BODY = "{{ job_metadata }}"
default_messages = {
"started": {"body": DEFAULT_BODY},
"success": {"body": DEFAULT_BODY},
"error": {"body": DEFAULT_BODY},
"workflow_approval": {
"running": {"body": '{"body": "The approval node \\"{{ approval_node_name }}\\" needs review. This node can be viewed at: {{ workflow_url }}"}'},
"approved": {"body": '{"body": "The approval node \\"{{ approval_node_name }}\\" was approved. {{ workflow_url }}"}'},
"timed_out": {"body": '{"body": "The approval node \\"{{ approval_node_name }}\\" has timed out. {{ workflow_url }}"}'},
"denied": {"body": '{"body": "The approval node \\"{{ approval_node_name }}\\" was denied. {{ workflow_url }}"}'},
},
}
default_messages = CustomNotificationBase.job_metadata_messages
def __init__(self, http_method, headers, disable_ssl_verification=False, fail_silently=False, username=None, password=None, **kwargs):
self.http_method = http_method

View File

@@ -63,6 +63,10 @@ websocket_urlpatterns = [
re_path(r'api/websocket/$', consumers.EventConsumer.as_asgi()),
re_path(r'websocket/$', consumers.EventConsumer.as_asgi()),
]
if settings.OPTIONAL_API_URLPATTERN_PREFIX:
websocket_urlpatterns.append(re_path(r'api/{}/v2/websocket/$'.format(settings.OPTIONAL_API_URLPATTERN_PREFIX), consumers.EventConsumer.as_asgi()))
websocket_relay_urlpatterns = [
re_path(r'websocket/relay/$', consumers.RelayConsumer.as_asgi()),
]

View File

@@ -138,7 +138,8 @@ class TaskBase:
# Lock
with task_manager_bulk_reschedule():
with advisory_lock(f"{self.prefix}_lock", wait=False) as acquired:
lock_session_timeout_milliseconds = settings.TASK_MANAGER_LOCK_TIMEOUT * 1000 # convert to milliseconds
with advisory_lock(f"{self.prefix}_lock", lock_session_timeout_milliseconds=lock_session_timeout_milliseconds, wait=False) as acquired:
with transaction.atomic():
if acquired is False:
logger.debug(f"Not running {self.prefix} scheduler, another task holds lock")

View File

@@ -36,6 +36,9 @@ import ansible_runner.cleanup
# dateutil
from dateutil.parser import parse as parse_date
# django-ansible-base
from ansible_base.resource_registry.tasks.sync import SyncExecutor
# AWX
from awx import __version__ as awx_application_version
from awx.main.access import access_registry
@@ -712,7 +715,8 @@ def awx_k8s_reaper():
@task(queue=get_task_queuename)
def awx_periodic_scheduler():
- with advisory_lock('awx_periodic_scheduler_lock', wait=False) as acquired:
+ lock_session_timeout_milliseconds = settings.TASK_MANAGER_LOCK_TIMEOUT * 1000
+ with advisory_lock('awx_periodic_scheduler_lock', lock_session_timeout_milliseconds=lock_session_timeout_milliseconds, wait=False) as acquired:
if acquired is False:
logger.debug("Not running periodic scheduler, another task holds lock")
return
@@ -964,3 +968,17 @@ def deep_copy_model_obj(model_module, model_name, obj_pk, new_obj_pk, user_pk, p
permission_check_func(creater, copy_mapping.values())
if isinstance(new_obj, Inventory):
update_inventory_computed_fields.delay(new_obj.id)
@task(queue=get_task_queuename)
def periodic_resource_sync():
if not getattr(settings, 'RESOURCE_SERVER', None):
logger.debug("Skipping periodic resource_sync, RESOURCE_SERVER not configured")
return
with advisory_lock('periodic_resource_sync', wait=False) as acquired:
if acquired is False:
logger.debug("Not running periodic_resource_sync, another task holds lock")
return
SyncExecutor().run()
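
The new task is a no-op unless a resource server is configured; a hypothetical settings fragment (key names follow django-ansible-base's resource registry conventions but are assumptions here):

# Hypothetical settings fragment -- exact keys depend on django-ansible-base
RESOURCE_SERVER = {
    "URL": "https://gateway.example.org",
    "SECRET_KEY": "<service-account-token>",  # placeholder
}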

View File

@@ -0,0 +1,5 @@
{
"K8S_AUTH_HOST": "https://foo.invalid",
"K8S_AUTH_API_KEY": "fooo",
"K8S_AUTH_VERIFY_SSL": "False"
}

View File

@@ -9,8 +9,8 @@ def test_user_role_view_access(rando, inventory, mocker, post):
role_pk = inventory.admin_role.pk
data = {"id": role_pk}
mock_access = mocker.MagicMock(can_attach=mocker.MagicMock(return_value=False))
- with mocker.patch('awx.main.access.RoleAccess', return_value=mock_access):
- post(url=reverse('api:user_roles_list', kwargs={'pk': rando.pk}), data=data, user=rando, expect=403)
+ mocker.patch('awx.main.access.RoleAccess', return_value=mock_access)
+ post(url=reverse('api:user_roles_list', kwargs={'pk': rando.pk}), data=data, user=rando, expect=403)
mock_access.can_attach.assert_called_once_with(inventory.admin_role, rando, 'members', data, skip_sub_obj_read_check=False)
@@ -21,8 +21,8 @@ def test_team_role_view_access(rando, team, inventory, mocker, post):
role_pk = inventory.admin_role.pk
data = {"id": role_pk}
mock_access = mocker.MagicMock(can_attach=mocker.MagicMock(return_value=False))
- with mocker.patch('awx.main.access.RoleAccess', return_value=mock_access):
- post(url=reverse('api:team_roles_list', kwargs={'pk': team.pk}), data=data, user=rando, expect=403)
+ mocker.patch('awx.main.access.RoleAccess', return_value=mock_access)
+ post(url=reverse('api:team_roles_list', kwargs={'pk': team.pk}), data=data, user=rando, expect=403)
mock_access.can_attach.assert_called_once_with(inventory.admin_role, team, 'member_role.parents', data, skip_sub_obj_read_check=False)
@@ -33,8 +33,8 @@ def test_role_team_view_access(rando, team, inventory, mocker, post):
role_pk = inventory.admin_role.pk
data = {"id": team.pk}
mock_access = mocker.MagicMock(return_value=False, __name__='mocked')
- with mocker.patch('awx.main.access.RoleAccess.can_attach', mock_access):
- post(url=reverse('api:role_teams_list', kwargs={'pk': role_pk}), data=data, user=rando, expect=403)
+ mocker.patch('awx.main.access.RoleAccess.can_attach', mock_access)
+ post(url=reverse('api:role_teams_list', kwargs={'pk': role_pk}), data=data, user=rando, expect=403)
mock_access.assert_called_once_with(inventory.admin_role, team, 'member_role.parents', data, skip_sub_obj_read_check=False)
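
The three hunks above, and many more below, apply one mechanical change: pytest-mock's mocker.patch is not meant to be used as a context manager (recent pytest-mock releases raise an error for `with mocker.patch(...)`), and its patches are undone automatically at test teardown anyway. A minimal sketch of the before and after:

import os

def test_mock_pattern(mocker):
    # old style, rejected by newer pytest-mock:
    #     with mocker.patch('os.getcwd', return_value='/tmp'):
    #         assert os.getcwd() == '/tmp'
    # new style -- the patch stays active until the test ends:
    mocker.patch('os.getcwd', return_value='/tmp')
    assert os.getcwd() == '/tmp'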

View File

@@ -30,7 +30,7 @@ def test_idempotent_credential_type_setup():
@pytest.mark.django_db
- def test_create_user_credential_via_credentials_list(post, get, alice, credentialtype_ssh):
+ def test_create_user_credential_via_credentials_list(post, get, alice, credentialtype_ssh, setup_managed_roles):
params = {
'credential_type': 1,
'inputs': {'username': 'someusername'},
@@ -81,7 +81,7 @@ def test_credential_validation_error_with_multiple_owner_fields(post, admin, ali
@pytest.mark.django_db
- def test_create_user_credential_via_user_credentials_list(post, get, alice, credentialtype_ssh):
+ def test_create_user_credential_via_user_credentials_list(post, get, alice, credentialtype_ssh, setup_managed_roles):
params = {
'credential_type': 1,
'inputs': {'username': 'someusername'},

View File

@@ -1,22 +1,30 @@
import pytest
+ from unittest import mock
from awx.api.versioning import reverse
+ from django.test.utils import override_settings
+ from ansible_base.jwt_consumer.common.util import generate_x_trusted_proxy_header
+ from ansible_base.lib.testing.fixtures import rsa_keypair_factory, rsa_keypair # noqa: F401; pylint: disable=unused-import
+ class HeaderTrackingMiddleware(object):
+ def __init__(self):
+ self.environ = {}
+ def process_request(self, request):
+ pass
+ def process_response(self, request, response):
+ self.environ = request.environ
@pytest.mark.django_db
def test_proxy_ip_allowed(get, patch, admin):
url = reverse('api:setting_singleton_detail', kwargs={'category_slug': 'system'})
patch(url, user=admin, data={'REMOTE_HOST_HEADERS': ['HTTP_X_FROM_THE_LOAD_BALANCER', 'REMOTE_ADDR', 'REMOTE_HOST']})
- class HeaderTrackingMiddleware(object):
- environ = {}
- def process_request(self, request):
- pass
- def process_response(self, request, response):
- self.environ = request.environ
# By default, `PROXY_IP_ALLOWED_LIST` is disabled, so custom `REMOTE_HOST_HEADERS`
# should just pass through
middleware = HeaderTrackingMiddleware()
@@ -45,6 +53,51 @@ def test_proxy_ip_allowed(get, patch, admin):
assert middleware.environ['HTTP_X_FROM_THE_LOAD_BALANCER'] == 'some-actual-ip'
@pytest.mark.django_db
class TestTrustedProxyAllowListIntegration:
@pytest.fixture
def url(self, patch, admin):
url = reverse('api:setting_singleton_detail', kwargs={'category_slug': 'system'})
patch(url, user=admin, data={'REMOTE_HOST_HEADERS': ['HTTP_X_FROM_THE_LOAD_BALANCER', 'REMOTE_ADDR', 'REMOTE_HOST']})
patch(url, user=admin, data={'PROXY_IP_ALLOWED_LIST': ['my.proxy.example.org']})
return url
@pytest.fixture
def middleware(self):
return HeaderTrackingMiddleware()
def test_x_trusted_proxy_valid_signature(self, get, admin, rsa_keypair, url, middleware): # noqa: F811
# Headers should NOT get deleted
headers = {
'HTTP_X_TRUSTED_PROXY': generate_x_trusted_proxy_header(rsa_keypair.private),
'HTTP_X_FROM_THE_LOAD_BALANCER': 'some-actual-ip',
}
with mock.patch('ansible_base.jwt_consumer.common.cache.JWTCache.get_key_from_cache', lambda self: None):
with override_settings(ANSIBLE_BASE_JWT_KEY=rsa_keypair.public, PROXY_IP_ALLOWED_LIST=[]):
get(url, user=admin, middleware=middleware, **headers)
assert middleware.environ['HTTP_X_FROM_THE_LOAD_BALANCER'] == 'some-actual-ip'
def test_x_trusted_proxy_invalid_signature(self, get, admin, url, patch, middleware):
# Headers should NOT get deleted
headers = {
'HTTP_X_TRUSTED_PROXY': 'DEAD-BEEF',
'HTTP_X_FROM_THE_LOAD_BALANCER': 'some-actual-ip',
}
with override_settings(PROXY_IP_ALLOWED_LIST=[]):
get(url, user=admin, middleware=middleware, **headers)
assert middleware.environ['HTTP_X_FROM_THE_LOAD_BALANCER'] == 'some-actual-ip'
def test_x_trusted_proxy_invalid_signature_valid_proxy(self, get, admin, url, middleware):
# A valid explicit proxy SHOULD result in sensitive headers NOT being deleted, regardless of the trusted proxy signature results
headers = {
'HTTP_X_TRUSTED_PROXY': 'DEAD-BEEF',
'REMOTE_ADDR': 'my.proxy.example.org',
'HTTP_X_FROM_THE_LOAD_BALANCER': 'some-actual-ip',
}
get(url, user=admin, middleware=middleware, **headers)
assert middleware.environ['HTTP_X_FROM_THE_LOAD_BALANCER'] == 'some-actual-ip'
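
For reference, a hedged sketch of how the signed header these tests send is built; only generate_x_trusted_proxy_header appears in the diff, and the verification side is paraphrased:

from ansible_base.jwt_consumer.common.util import generate_x_trusted_proxy_header

def trusted_proxy_headers(private_key_pem):
    # The header carries a signature that the consuming middleware verifies
    # against the matching public key in settings.ANSIBLE_BASE_JWT_KEY; only
    # then are the REMOTE_HOST_HEADERS values trusted as-is
    return {
        'HTTP_X_TRUSTED_PROXY': generate_x_trusted_proxy_header(private_key_pem),
        'HTTP_X_FROM_THE_LOAD_BALANCER': 'some-actual-ip',
    }
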
@pytest.mark.django_db
class TestDeleteViews:
def test_sublist_delete_permission_check(self, inventory_source, host, rando, delete):

View File

@@ -0,0 +1,66 @@
import pytest
from awx.api.versioning import reverse
from awx.main.models import Organization
@pytest.mark.django_db
class TestImmutableSharedFields:
@pytest.fixture(autouse=True)
def configure_settings(self, settings):
settings.ALLOW_LOCAL_RESOURCE_MANAGEMENT = False
def test_create_raises_permission_denied(self, admin_user, post):
orgA = Organization.objects.create(name='orgA')
resp = post(
url=reverse('api:team_list'),
data={'name': 'teamA', 'organization': orgA.id},
user=admin_user,
expect=403,
)
assert "Creation of this resource is not allowed" in resp.data['detail']
def test_perform_delete_raises_permission_denied(self, admin_user, delete):
orgA = Organization.objects.create(name='orgA')
team = orgA.teams.create(name='teamA')
resp = delete(
url=reverse('api:team_detail', kwargs={'pk': team.id}),
user=admin_user,
expect=403,
)
assert "Deletion of this resource is not allowed" in resp.data['detail']
def test_perform_update(self, admin_user, patch):
orgA = Organization.objects.create(name='orgA')
team = orgA.teams.create(name='teamA')
# allow patching non-shared fields
patch(
url=reverse('api:team_detail', kwargs={'pk': team.id}),
data={"description": "can change this field"},
user=admin_user,
expect=200,
)
orgB = Organization.objects.create(name='orgB')
# prevent patching shared fields
resp = patch(url=reverse('api:team_detail', kwargs={'pk': team.id}), data={"organization": orgB.id}, user=admin_user, expect=403)
assert "Cannot change shared field" in resp.data['organization']
@pytest.mark.parametrize(
'role',
['admin_role', 'member_role'],
)
@pytest.mark.parametrize('resource', ['organization', 'team'])
def test_prevent_assigning_member_to_organization_or_team(self, admin_user, post, resource, role):
orgA = Organization.objects.create(name='orgA')
if resource == 'organization':
role = getattr(orgA, role)
elif resource == 'team':
teamA = orgA.teams.create(name='teamA')
role = getattr(teamA, role)
resp = post(
url=reverse('api:user_roles_list', kwargs={'pk': admin_user.id}),
data={'id': role.id},
user=admin_user,
expect=403,
)
assert f"Cannot directly modify user membership to {resource}." in resp.data['msg']

View File

@@ -32,13 +32,6 @@ def node_type_instance():
return fn
@pytest.fixture
def instance_group(job_factory):
ig = InstanceGroup(name="east")
ig.save()
return ig
@pytest.fixture
def containerized_instance_group(instance_group, kube_credential):
ig = InstanceGroup(name="container")

View File

@@ -131,11 +131,11 @@ def test_job_ignore_unprompted_vars(runtime_data, job_template_prompts, post, ad
mock_job = mocker.MagicMock(spec=Job, id=968, **runtime_data)
- with mocker.patch.object(JobTemplate, 'create_unified_job', return_value=mock_job):
- with mocker.patch('awx.api.serializers.JobSerializer.to_representation'):
- response = post(reverse('api:job_template_launch', kwargs={'pk': job_template.pk}), runtime_data, admin_user, expect=201)
- assert JobTemplate.create_unified_job.called
- assert JobTemplate.create_unified_job.call_args == ()
+ mocker.patch.object(JobTemplate, 'create_unified_job', return_value=mock_job)
+ mocker.patch('awx.api.serializers.JobSerializer.to_representation')
+ response = post(reverse('api:job_template_launch', kwargs={'pk': job_template.pk}), runtime_data, admin_user, expect=201)
+ assert JobTemplate.create_unified_job.called
+ assert JobTemplate.create_unified_job.call_args == ()
# Check that job is serialized correctly
job_id = response.data['job']
@@ -167,12 +167,12 @@ def test_job_accept_prompted_vars(runtime_data, job_template_prompts, post, admi
mock_job = mocker.MagicMock(spec=Job, id=968, **runtime_data)
- with mocker.patch.object(JobTemplate, 'create_unified_job', return_value=mock_job):
- with mocker.patch('awx.api.serializers.JobSerializer.to_representation'):
- response = post(reverse('api:job_template_launch', kwargs={'pk': job_template.pk}), runtime_data, admin_user, expect=201)
- assert JobTemplate.create_unified_job.called
- called_with = data_to_internal(runtime_data)
- JobTemplate.create_unified_job.assert_called_with(**called_with)
+ mocker.patch.object(JobTemplate, 'create_unified_job', return_value=mock_job)
+ mocker.patch('awx.api.serializers.JobSerializer.to_representation')
+ response = post(reverse('api:job_template_launch', kwargs={'pk': job_template.pk}), runtime_data, admin_user, expect=201)
+ assert JobTemplate.create_unified_job.called
+ called_with = data_to_internal(runtime_data)
+ JobTemplate.create_unified_job.assert_called_with(**called_with)
job_id = response.data['job']
assert job_id == 968
@@ -187,11 +187,11 @@ def test_job_accept_empty_tags(job_template_prompts, post, admin_user, mocker):
mock_job = mocker.MagicMock(spec=Job, id=968)
- with mocker.patch.object(JobTemplate, 'create_unified_job', return_value=mock_job):
- with mocker.patch('awx.api.serializers.JobSerializer.to_representation'):
- post(reverse('api:job_template_launch', kwargs={'pk': job_template.pk}), {'job_tags': '', 'skip_tags': ''}, admin_user, expect=201)
- assert JobTemplate.create_unified_job.called
- assert JobTemplate.create_unified_job.call_args == ({'job_tags': '', 'skip_tags': ''},)
+ mocker.patch.object(JobTemplate, 'create_unified_job', return_value=mock_job)
+ mocker.patch('awx.api.serializers.JobSerializer.to_representation')
+ post(reverse('api:job_template_launch', kwargs={'pk': job_template.pk}), {'job_tags': '', 'skip_tags': ''}, admin_user, expect=201)
+ assert JobTemplate.create_unified_job.called
+ assert JobTemplate.create_unified_job.call_args == ({'job_tags': '', 'skip_tags': ''},)
mock_job.signal_start.assert_called_once()
@@ -203,14 +203,14 @@ def test_slice_timeout_forks_need_int(job_template_prompts, post, admin_user, mo
mock_job = mocker.MagicMock(spec=Job, id=968)
- with mocker.patch.object(JobTemplate, 'create_unified_job', return_value=mock_job):
- with mocker.patch('awx.api.serializers.JobSerializer.to_representation'):
- response = post(
- reverse('api:job_template_launch', kwargs={'pk': job_template.pk}), {'timeout': '', 'job_slice_count': '', 'forks': ''}, admin_user, expect=400
- )
- assert 'forks' in response.data and response.data['forks'][0] == 'A valid integer is required.'
- assert 'job_slice_count' in response.data and response.data['job_slice_count'][0] == 'A valid integer is required.'
- assert 'timeout' in response.data and response.data['timeout'][0] == 'A valid integer is required.'
+ mocker.patch.object(JobTemplate, 'create_unified_job', return_value=mock_job)
+ mocker.patch('awx.api.serializers.JobSerializer.to_representation')
+ response = post(
+ reverse('api:job_template_launch', kwargs={'pk': job_template.pk}), {'timeout': '', 'job_slice_count': '', 'forks': ''}, admin_user, expect=400
+ )
+ assert 'forks' in response.data and response.data['forks'][0] == 'A valid integer is required.'
+ assert 'job_slice_count' in response.data and response.data['job_slice_count'][0] == 'A valid integer is required.'
+ assert 'timeout' in response.data and response.data['timeout'][0] == 'A valid integer is required.'
@pytest.mark.django_db
@@ -244,12 +244,12 @@ def test_job_accept_prompted_vars_null(runtime_data, job_template_prompts_null,
mock_job = mocker.MagicMock(spec=Job, id=968, **runtime_data)
- with mocker.patch.object(JobTemplate, 'create_unified_job', return_value=mock_job):
- with mocker.patch('awx.api.serializers.JobSerializer.to_representation'):
- response = post(reverse('api:job_template_launch', kwargs={'pk': job_template.pk}), runtime_data, rando, expect=201)
- assert JobTemplate.create_unified_job.called
- expected_call = data_to_internal(runtime_data)
- assert JobTemplate.create_unified_job.call_args == (expected_call,)
+ mocker.patch.object(JobTemplate, 'create_unified_job', return_value=mock_job)
+ mocker.patch('awx.api.serializers.JobSerializer.to_representation')
+ response = post(reverse('api:job_template_launch', kwargs={'pk': job_template.pk}), runtime_data, rando, expect=201)
+ assert JobTemplate.create_unified_job.called
+ expected_call = data_to_internal(runtime_data)
+ assert JobTemplate.create_unified_job.call_args == (expected_call,)
job_id = response.data['job']
assert job_id == 968
@@ -641,18 +641,18 @@ def test_job_launch_unprompted_vars_with_survey(mocker, survey_spec_factory, job
job_template.survey_spec = survey_spec_factory('survey_var')
job_template.save()
- with mocker.patch('awx.main.access.BaseAccess.check_license'):
- mock_job = mocker.MagicMock(spec=Job, id=968, extra_vars={"job_launch_var": 3, "survey_var": 4})
- with mocker.patch.object(JobTemplate, 'create_unified_job', return_value=mock_job):
- with mocker.patch('awx.api.serializers.JobSerializer.to_representation', return_value={}):
- response = post(
- reverse('api:job_template_launch', kwargs={'pk': job_template.pk}),
- dict(extra_vars={"job_launch_var": 3, "survey_var": 4}),
- admin_user,
- expect=201,
- )
- assert JobTemplate.create_unified_job.called
- assert JobTemplate.create_unified_job.call_args == ({'extra_vars': {'survey_var': 4}},)
+ mocker.patch('awx.main.access.BaseAccess.check_license')
+ mock_job = mocker.MagicMock(spec=Job, id=968, extra_vars={"job_launch_var": 3, "survey_var": 4})
+ mocker.patch.object(JobTemplate, 'create_unified_job', return_value=mock_job)
+ mocker.patch('awx.api.serializers.JobSerializer.to_representation', return_value={})
+ response = post(
+ reverse('api:job_template_launch', kwargs={'pk': job_template.pk}),
+ dict(extra_vars={"job_launch_var": 3, "survey_var": 4}),
+ admin_user,
+ expect=201,
+ )
+ assert JobTemplate.create_unified_job.called
+ assert JobTemplate.create_unified_job.call_args == ({'extra_vars': {'survey_var': 4}},)
job_id = response.data['job']
assert job_id == 968
@@ -670,22 +670,22 @@ def test_callback_accept_prompted_extra_var(mocker, survey_spec_factory, job_tem
job_template.survey_spec = survey_spec_factory('survey_var')
job_template.save()
- with mocker.patch('awx.main.access.BaseAccess.check_license'):
- mock_job = mocker.MagicMock(spec=Job, id=968, extra_vars={"job_launch_var": 3, "survey_var": 4})
- with mocker.patch.object(UnifiedJobTemplate, 'create_unified_job', return_value=mock_job):
- with mocker.patch('awx.api.serializers.JobSerializer.to_representation', return_value={}):
- with mocker.patch('awx.api.views.JobTemplateCallback.find_matching_hosts', return_value=[host]):
- post(
- reverse('api:job_template_callback', kwargs={'pk': job_template.pk}),
- dict(extra_vars={"job_launch_var": 3, "survey_var": 4}, host_config_key="foo"),
- admin_user,
- expect=201,
- format='json',
- )
- assert UnifiedJobTemplate.create_unified_job.called
- call_args = UnifiedJobTemplate.create_unified_job.call_args[1]
- call_args.pop('_eager_fields', None) # internal purposes
- assert call_args == {'extra_vars': {'survey_var': 4, 'job_launch_var': 3}, 'limit': 'single-host'}
+ mocker.patch('awx.main.access.BaseAccess.check_license')
+ mock_job = mocker.MagicMock(spec=Job, id=968, extra_vars={"job_launch_var": 3, "survey_var": 4})
+ mocker.patch.object(UnifiedJobTemplate, 'create_unified_job', return_value=mock_job)
+ mocker.patch('awx.api.serializers.JobSerializer.to_representation', return_value={})
+ mocker.patch('awx.api.views.JobTemplateCallback.find_matching_hosts', return_value=[host])
+ post(
+ reverse('api:job_template_callback', kwargs={'pk': job_template.pk}),
+ dict(extra_vars={"job_launch_var": 3, "survey_var": 4}, host_config_key="foo"),
+ admin_user,
+ expect=201,
+ format='json',
+ )
+ assert UnifiedJobTemplate.create_unified_job.called
+ call_args = UnifiedJobTemplate.create_unified_job.call_args[1]
+ call_args.pop('_eager_fields', None) # internal purposes
+ assert call_args == {'extra_vars': {'survey_var': 4, 'job_launch_var': 3}, 'limit': 'single-host'}
mock_job.signal_start.assert_called_once()
@@ -697,22 +697,22 @@ def test_callback_ignore_unprompted_extra_var(mocker, survey_spec_factory, job_t
job_template.host_config_key = "foo"
job_template.save()
- with mocker.patch('awx.main.access.BaseAccess.check_license'):
- mock_job = mocker.MagicMock(spec=Job, id=968, extra_vars={"job_launch_var": 3, "survey_var": 4})
- with mocker.patch.object(UnifiedJobTemplate, 'create_unified_job', return_value=mock_job):
- with mocker.patch('awx.api.serializers.JobSerializer.to_representation', return_value={}):
- with mocker.patch('awx.api.views.JobTemplateCallback.find_matching_hosts', return_value=[host]):
- post(
- reverse('api:job_template_callback', kwargs={'pk': job_template.pk}),
- dict(extra_vars={"job_launch_var": 3, "survey_var": 4}, host_config_key="foo"),
- admin_user,
- expect=201,
- format='json',
- )
- assert UnifiedJobTemplate.create_unified_job.called
- call_args = UnifiedJobTemplate.create_unified_job.call_args[1]
- call_args.pop('_eager_fields', None) # internal purposes
- assert call_args == {'limit': 'single-host'}
+ mocker.patch('awx.main.access.BaseAccess.check_license')
+ mock_job = mocker.MagicMock(spec=Job, id=968, extra_vars={"job_launch_var": 3, "survey_var": 4})
+ mocker.patch.object(UnifiedJobTemplate, 'create_unified_job', return_value=mock_job)
+ mocker.patch('awx.api.serializers.JobSerializer.to_representation', return_value={})
+ mocker.patch('awx.api.views.JobTemplateCallback.find_matching_hosts', return_value=[host])
+ post(
+ reverse('api:job_template_callback', kwargs={'pk': job_template.pk}),
+ dict(extra_vars={"job_launch_var": 3, "survey_var": 4}, host_config_key="foo"),
+ admin_user,
+ expect=201,
+ format='json',
+ )
+ assert UnifiedJobTemplate.create_unified_job.called
+ call_args = UnifiedJobTemplate.create_unified_job.call_args[1]
+ call_args.pop('_eager_fields', None) # internal purposes
+ assert call_args == {'limit': 'single-host'}
mock_job.signal_start.assert_called_once()
@@ -725,9 +725,9 @@ def test_callback_find_matching_hosts(mocker, get, job_template_prompts, admin_u
job_template.save()
host_with_alias = Host(name='localhost', inventory=job_template.inventory)
host_with_alias.save()
- with mocker.patch('awx.main.access.BaseAccess.check_license'):
- r = get(reverse('api:job_template_callback', kwargs={'pk': job_template.pk}), user=admin_user, expect=200)
- assert tuple(r.data['matching_hosts']) == ('localhost',)
+ mocker.patch('awx.main.access.BaseAccess.check_license')
+ r = get(reverse('api:job_template_callback', kwargs={'pk': job_template.pk}), user=admin_user, expect=200)
+ assert tuple(r.data['matching_hosts']) == ('localhost',)
@pytest.mark.django_db
@@ -738,6 +738,6 @@ def test_callback_extra_var_takes_priority_over_host_name(mocker, get, job_templ
job_template.save()
host_with_alias = Host(name='localhost', variables={'ansible_host': 'foobar'}, inventory=job_template.inventory)
host_with_alias.save()
- with mocker.patch('awx.main.access.BaseAccess.check_license'):
- r = get(reverse('api:job_template_callback', kwargs={'pk': job_template.pk}), user=admin_user, expect=200)
- assert not r.data['matching_hosts']
+ mocker.patch('awx.main.access.BaseAccess.check_license')
+ r = get(reverse('api:job_template_callback', kwargs={'pk': job_template.pk}), user=admin_user, expect=200)
+ assert not r.data['matching_hosts']

View File

@@ -1,4 +1,5 @@
import pytest
from unittest import mock
# AWX
from awx.api.serializers import JobTemplateSerializer
@@ -8,10 +9,15 @@ from awx.main.migrations import _save_password_keys as save_password_keys
# Django
from django.apps import apps
from django.test.utils import override_settings
# DRF
from rest_framework.exceptions import ValidationError
# DAB
from ansible_base.jwt_consumer.common.util import generate_x_trusted_proxy_header
from ansible_base.lib.testing.fixtures import rsa_keypair_factory, rsa_keypair # noqa: F401; pylint: disable=unused-import
@pytest.mark.django_db
@pytest.mark.parametrize(
@@ -369,3 +375,113 @@ def test_job_template_missing_inventory(project, inventory, admin_user, post):
)
assert r.status_code == 400
assert "Cannot start automatically, an inventory is required." in str(r.data)
@pytest.mark.django_db
class TestJobTemplateCallbackProxyIntegration:
"""
Test the interaction of the job template provisioning callback feature and:
settings.PROXY_IP_ALLOWED_LIST
x-trusted-proxy http header
"""
@pytest.fixture
def job_template(self, inventory, project):
jt = JobTemplate.objects.create(name='test-jt', inventory=inventory, project=project, playbook='helloworld.yml', host_config_key='abcd')
return jt
@override_settings(REMOTE_HOST_HEADERS=['HTTP_X_FROM_THE_LOAD_BALANCER', 'REMOTE_ADDR', 'REMOTE_HOST'], PROXY_IP_ALLOWED_LIST=['my.proxy.example.org'])
def test_host_not_found(self, job_template, admin_user, post, rsa_keypair): # noqa: F811
job_template.inventory.hosts.create(name='foobar')
headers = {
'HTTP_X_FROM_THE_LOAD_BALANCER': 'baz',
'REMOTE_HOST': 'baz',
'REMOTE_ADDR': 'baz',
}
r = post(
url=reverse('api:job_template_callback', kwargs={'pk': job_template.pk}), data={'host_config_key': 'abcd'}, user=admin_user, expect=400, **headers
)
assert r.data['msg'] == 'No matching host could be found!'
@pytest.mark.parametrize(
'headers, expected',
(
pytest.param(
{
'HTTP_X_FROM_THE_LOAD_BALANCER': 'foobar',
'REMOTE_HOST': 'my.proxy.example.org',
},
201,
),
pytest.param(
{
'HTTP_X_FROM_THE_LOAD_BALANCER': 'foobar',
'REMOTE_HOST': 'not-my-proxy.org',
},
400,
),
),
)
@override_settings(REMOTE_HOST_HEADERS=['HTTP_X_FROM_THE_LOAD_BALANCER', 'REMOTE_ADDR', 'REMOTE_HOST'], PROXY_IP_ALLOWED_LIST=['my.proxy.example.org'])
def test_proxy_ip_allowed_list(self, job_template, admin_user, post, headers, expected): # noqa: F811
job_template.inventory.hosts.create(name='my.proxy.example.org')
post(
url=reverse('api:job_template_callback', kwargs={'pk': job_template.pk}),
data={'host_config_key': 'abcd'},
user=admin_user,
expect=expected,
**headers
)
@override_settings(REMOTE_HOST_HEADERS=['HTTP_X_FROM_THE_LOAD_BALANCER', 'REMOTE_ADDR', 'REMOTE_HOST'], PROXY_IP_ALLOWED_LIST=[])
def test_no_proxy_trust_all_headers(self, job_template, admin_user, post):
job_template.inventory.hosts.create(name='foobar')
headers = {
'HTTP_X_FROM_THE_LOAD_BALANCER': 'foobar',
'REMOTE_ADDR': 'bar',
'REMOTE_HOST': 'baz',
}
post(url=reverse('api:job_template_callback', kwargs={'pk': job_template.pk}), data={'host_config_key': 'abcd'}, user=admin_user, expect=201, **headers)
@override_settings(REMOTE_HOST_HEADERS=['HTTP_X_FROM_THE_LOAD_BALANCER', 'REMOTE_ADDR', 'REMOTE_HOST'], PROXY_IP_ALLOWED_LIST=['my.proxy.example.org'])
def test_trusted_proxy(self, job_template, admin_user, post, rsa_keypair): # noqa: F811
job_template.inventory.hosts.create(name='foobar')
headers = {
'HTTP_X_TRUSTED_PROXY': generate_x_trusted_proxy_header(rsa_keypair.private),
'HTTP_X_FROM_THE_LOAD_BALANCER': 'foobar, my.proxy.example.org',
}
with mock.patch('ansible_base.jwt_consumer.common.cache.JWTCache.get_key_from_cache', lambda self: None):
with override_settings(ANSIBLE_BASE_JWT_KEY=rsa_keypair.public):
post(
url=reverse('api:job_template_callback', kwargs={'pk': job_template.pk}),
data={'host_config_key': 'abcd'},
user=admin_user,
expect=201,
**headers
)
@override_settings(REMOTE_HOST_HEADERS=['HTTP_X_FROM_THE_LOAD_BALANCER', 'REMOTE_ADDR', 'REMOTE_HOST'], PROXY_IP_ALLOWED_LIST=['my.proxy.example.org'])
def test_trusted_proxy_host_not_found(self, job_template, admin_user, post, rsa_keypair): # noqa: F811
job_template.inventory.hosts.create(name='foobar')
headers = {
'HTTP_X_TRUSTED_PROXY': generate_x_trusted_proxy_header(rsa_keypair.private),
'HTTP_X_FROM_THE_LOAD_BALANCER': 'baz, my.proxy.example.org',
'REMOTE_ADDR': 'bar',
'REMOTE_HOST': 'baz',
}
with mock.patch('ansible_base.jwt_consumer.common.cache.JWTCache.get_key_from_cache', lambda self: None):
with override_settings(ANSIBLE_BASE_JWT_KEY=rsa_keypair.public):
post(
url=reverse('api:job_template_callback', kwargs={'pk': job_template.pk}),
data={'host_config_key': 'abcd'},
user=admin_user,
expect=400,
**headers
)

View File

@@ -165,8 +165,8 @@ class TestAccessListCapabilities:
def test_access_list_direct_access_capability(self, inventory, rando, get, mocker, mock_access_method):
inventory.admin_role.members.add(rando)
- with mocker.patch.object(access_registry[Role], 'can_unattach', mock_access_method):
- response = get(reverse('api:inventory_access_list', kwargs={'pk': inventory.id}), rando)
+ mocker.patch.object(access_registry[Role], 'can_unattach', mock_access_method)
+ response = get(reverse('api:inventory_access_list', kwargs={'pk': inventory.id}), rando)
mock_access_method.assert_called_once_with(inventory.admin_role, rando, 'members', **self.extra_kwargs)
self._assert_one_in_list(response.data)
@@ -174,8 +174,8 @@ class TestAccessListCapabilities:
assert direct_access_list[0]['role']['user_capabilities']['unattach'] == 'foobar'
def test_access_list_indirect_access_capability(self, inventory, organization, org_admin, get, mocker, mock_access_method):
- with mocker.patch.object(access_registry[Role], 'can_unattach', mock_access_method):
- response = get(reverse('api:inventory_access_list', kwargs={'pk': inventory.id}), org_admin)
+ mocker.patch.object(access_registry[Role], 'can_unattach', mock_access_method)
+ response = get(reverse('api:inventory_access_list', kwargs={'pk': inventory.id}), org_admin)
mock_access_method.assert_called_once_with(organization.admin_role, org_admin, 'members', **self.extra_kwargs)
self._assert_one_in_list(response.data, sublist='indirect_access')
@@ -185,8 +185,8 @@ class TestAccessListCapabilities:
def test_access_list_team_direct_access_capability(self, inventory, team, team_member, get, mocker, mock_access_method):
team.member_role.children.add(inventory.admin_role)
- with mocker.patch.object(access_registry[Role], 'can_unattach', mock_access_method):
- response = get(reverse('api:inventory_access_list', kwargs={'pk': inventory.id}), team_member)
+ mocker.patch.object(access_registry[Role], 'can_unattach', mock_access_method)
+ response = get(reverse('api:inventory_access_list', kwargs={'pk': inventory.id}), team_member)
mock_access_method.assert_called_once_with(inventory.admin_role, team.member_role, 'parents', **self.extra_kwargs)
self._assert_one_in_list(response.data)
@@ -198,8 +198,8 @@ class TestAccessListCapabilities:
def test_team_roles_unattach(mocker, team, team_member, inventory, mock_access_method, get):
team.member_role.children.add(inventory.admin_role)
- with mocker.patch.object(access_registry[Role], 'can_unattach', mock_access_method):
- response = get(reverse('api:team_roles_list', kwargs={'pk': team.id}), team_member)
+ mocker.patch.object(access_registry[Role], 'can_unattach', mock_access_method)
+ response = get(reverse('api:team_roles_list', kwargs={'pk': team.id}), team_member)
# Did we assess whether team_member can remove team's permission to the inventory?
mock_access_method.assert_called_once_with(inventory.admin_role, team.member_role, 'parents', skip_sub_obj_read_check=True, data={})
@@ -212,8 +212,8 @@ def test_user_roles_unattach(mocker, organization, alice, bob, mock_access_metho
organization.member_role.members.add(alice)
organization.member_role.members.add(bob)
- with mocker.patch.object(access_registry[Role], 'can_unattach', mock_access_method):
- response = get(reverse('api:user_roles_list', kwargs={'pk': alice.id}), bob)
+ mocker.patch.object(access_registry[Role], 'can_unattach', mock_access_method)
+ response = get(reverse('api:user_roles_list', kwargs={'pk': alice.id}), bob)
# Did we assess whether bob can remove alice's permission to the inventory?
mock_access_method.assert_called_once_with(organization.member_role, alice, 'members', skip_sub_obj_read_check=True, data={})

View File

@@ -43,9 +43,9 @@ def run_command(name, *args, **options):
],
)
def test_update_password_command(mocker, username, password, expected, changed):
- with mocker.patch.object(UpdatePassword, 'update_password', return_value=changed):
- result, stdout, stderr = run_command('update_password', username=username, password=password)
- if result is None:
- assert stdout == expected
- else:
- assert str(result) == expected
+ mocker.patch.object(UpdatePassword, 'update_password', return_value=changed)
+ result, stdout, stderr = run_command('update_password', username=username, password=password)
+ if result is None:
+ assert stdout == expected
+ else:
+ assert str(result) == expected

View File

@@ -16,9 +16,11 @@ from django.db.backends.sqlite3.base import SQLiteCursorWrapper
from django.db.models.signals import post_migrate
from awx.main.migrations._dab_rbac import setup_managed_role_definitions
# AWX
from awx.main.models.projects import Project
- from awx.main.models.ha import Instance
+ from awx.main.models.ha import Instance, InstanceGroup
from rest_framework.test import (
APIRequestFactory,
@@ -90,6 +92,17 @@ def deploy_jobtemplate(project, inventory, credential):
return jt
@pytest.fixture()
def execution_environment():
return ExecutionEnvironment.objects.create(name="test-ee", description="test-ee", managed=True)
@pytest.fixture
def setup_managed_roles():
"Run the migration script to pre-create managed role definitions"
setup_managed_role_definitions(apps, None)
@pytest.fixture
def team(organization):
return organization.teams.create(name='test-team')
@@ -722,6 +735,11 @@ def jt_linked(organization, project, inventory, machine_credential, credential,
return jt
@pytest.fixture
def instance_group():
return InstanceGroup.objects.create(name="east")
@pytest.fixture
def workflow_job_template(organization):
wjt = WorkflowJobTemplate.objects.create(name='test-workflow_job_template', organization=organization)

View File

@@ -1,10 +0,0 @@
import pytest
from django.apps import apps
from awx.main.migrations._dab_rbac import setup_managed_role_definitions
@pytest.fixture
def managed_roles():
"Run the migration script to pre-create managed role definitions"
setup_managed_role_definitions(apps, None)

View File

@@ -109,3 +109,17 @@ def test_team_indirect_access(get, team, admin_user, inventory):
assert len(by_username['u1']['summary_fields']['indirect_access']) == 0
access_entry = by_username['u1']['summary_fields']['direct_access'][0]
assert sorted(access_entry['descendant_roles']) == sorted(['adhoc_role', 'use_role', 'update_role', 'read_role', 'admin_role'])
@pytest.mark.django_db
def test_workflow_access_list(workflow_job_template, alice, bob, setup_managed_roles, get, admin_user):
"""Basic verification that WFJT access_list is functional"""
workflow_job_template.admin_role.members.add(alice)
workflow_job_template.organization.workflow_admin_role.members.add(bob)
url = reverse('api:workflow_job_template_access_list', kwargs={'pk': workflow_job_template.pk})
for u in (alice, bob, admin_user):
response = get(url, user=u, expect=200)
user_ids = [item['id'] for item in response.data['results']]
assert alice.pk in user_ids
assert bob.pk in user_ids

View File

@@ -0,0 +1,41 @@
import pytest
from awx.main.access import InstanceGroupAccess, NotificationTemplateAccess
from ansible_base.rbac.models import RoleDefinition
@pytest.mark.django_db
def test_instance_group_object_role_delete(rando, instance_group, setup_managed_roles):
"""Basic functionality of IG object-level admin role function AAP-25506"""
rd = RoleDefinition.objects.get(name='InstanceGroup Admin')
rd.give_permission(rando, instance_group)
access = InstanceGroupAccess(rando)
assert access.can_delete(instance_group)
@pytest.mark.django_db
def test_notification_template_object_role_change(rando, notification_template, setup_managed_roles):
"""Basic functionality of NT object-level admin role function AAP-25493"""
rd = RoleDefinition.objects.get(name='NotificationTemplate Admin')
rd.give_permission(rando, notification_template)
access = NotificationTemplateAccess(rando)
assert access.can_change(notification_template, {'name': 'new name'})
@pytest.mark.django_db
def test_organization_auditor_role(rando, setup_managed_roles, organization, inventory, project, jt_linked):
obj_list = (inventory, project, jt_linked)
for obj in obj_list:
assert obj.organization == organization, obj # sanity
assert [rando.has_obj_perm(obj, 'view') for obj in obj_list] == [False for i in range(3)], obj_list
rd = RoleDefinition.objects.get(name='Organization Audit')
rd.give_permission(rando, organization)
codename_set = set(rd.permissions.values_list('codename', flat=True))
assert not ({'view_inventory', 'view_jobtemplate', 'audit_organization'} - codename_set) # sanity
assert [obj in type(obj).access_qs(rando) for obj in obj_list] == [True for i in range(3)], obj_list
assert [rando.has_obj_perm(obj, 'view') for obj in obj_list] == [True for i in range(3)], obj_list

View File

@@ -1,45 +0,0 @@
import pytest
from django.apps import apps
from django.test.utils import override_settings
from awx.main.migrations._dab_rbac import setup_managed_role_definitions
from ansible_base.rbac.models import RoleDefinition
INVENTORY_OBJ_PERMISSIONS = ['view_inventory', 'adhoc_inventory', 'use_inventory', 'change_inventory', 'delete_inventory', 'update_inventory']
@pytest.mark.django_db
def test_managed_definitions_precreate():
with override_settings(
ANSIBLE_BASE_ROLE_PRECREATE={
'object_admin': '{cls._meta.model_name}-admin',
'org_admin': 'organization-admin',
'org_children': 'organization-{cls._meta.model_name}-admin',
'special': '{cls._meta.model_name}-{action}',
}
):
setup_managed_role_definitions(apps, None)
rd = RoleDefinition.objects.get(name='inventory-admin')
assert rd.managed is True
# add permissions do not go in the object-level admin
assert set(rd.permissions.values_list('codename', flat=True)) == set(INVENTORY_OBJ_PERMISSIONS)
# test org-level object admin permissions
rd = RoleDefinition.objects.get(name='organization-inventory-admin')
assert rd.managed is True
assert set(rd.permissions.values_list('codename', flat=True)) == set(['add_inventory', 'view_organization'] + INVENTORY_OBJ_PERMISSIONS)
@pytest.mark.django_db
def test_managed_definitions_custom_obj_admin_name():
with override_settings(
ANSIBLE_BASE_ROLE_PRECREATE={
'object_admin': 'foo-{cls._meta.model_name}-foo',
}
):
setup_managed_role_definitions(apps, None)
rd = RoleDefinition.objects.get(name='foo-inventory-foo')
assert rd.managed is True
# add permissions do not go in the object-level admin
assert set(rd.permissions.values_list('codename', flat=True)) == set(INVENTORY_OBJ_PERMISSIONS)

View File

@@ -2,15 +2,17 @@ import pytest
from django.contrib.contenttypes.models import ContentType
from django.urls import reverse as django_reverse
from django.test.utils import override_settings
from awx.api.versioning import reverse
from awx.main.models import JobTemplate, Inventory, Organization
from awx.main.access import JobTemplateAccess, WorkflowJobTemplateAccess
from ansible_base.rbac.models import RoleDefinition
@pytest.mark.django_db
- def test_managed_roles_created(managed_roles):
+ def test_managed_roles_created(setup_managed_roles):
"Managed RoleDefinitions are created in post_migration signal, we expect to see them here"
for cls in (JobTemplate, Inventory):
ct = ContentType.objects.get_for_model(cls)
@@ -22,7 +24,7 @@ def test_managed_roles_created(managed_roles):
@pytest.mark.django_db
- def test_custom_read_role(admin_user, post, managed_roles):
+ def test_custom_read_role(admin_user, post, setup_managed_roles):
rd_url = django_reverse('roledefinition-list')
resp = post(
url=rd_url, data={"name": "read role made for test", "content_type": "awx.inventory", "permissions": ['view_inventory']}, user=admin_user, expect=201
@@ -40,10 +42,25 @@ def test_custom_system_roles_prohibited(admin_user, post):
@pytest.mark.django_db
- def test_assign_managed_role(admin_user, alice, rando, inventory, post, managed_roles):
+ def test_assignment_to_invisible_user(admin_user, alice, rando, inventory, post, setup_managed_roles):
"Alice can not see rando, and so can not give them a role assignment"
rd = RoleDefinition.objects.get(name='Inventory Admin')
rd.give_permission(alice, inventory)
# Now that alice has full permissions to the inventory, she will give rando permission
url = django_reverse('roleuserassignment-list')
r = post(url=url, data={"user": rando.id, "role_definition": rd.id, "object_id": inventory.id}, user=alice, expect=400)
assert 'does not exist' in str(r.data)
assert not rando.has_obj_perm(inventory, 'change')
@pytest.mark.django_db
def test_assign_managed_role(admin_user, alice, rando, inventory, post, setup_managed_roles, organization):
rd = RoleDefinition.objects.get(name='Inventory Admin')
rd.give_permission(alice, inventory)
# When alice and rando are members of the same org, they can see each other
member_rd = RoleDefinition.objects.get(name='Organization Member')
for u in (alice, rando):
member_rd.give_permission(u, organization)
# Now that alice has full permissions to the inventory, and can see rando, she will give rando permission
url = django_reverse('roleuserassignment-list')
post(url=url, data={"user": rando.id, "role_definition": rd.id, "object_id": inventory.id}, user=alice, expect=201)
assert rando.has_obj_perm(inventory, 'change') is True
@@ -63,7 +80,7 @@ def test_assign_custom_delete_role(admin_user, rando, inventory, delete, patch):
@pytest.mark.django_db
- def test_assign_custom_add_role(admin_user, rando, organization, post, managed_roles):
+ def test_assign_custom_add_role(admin_user, rando, organization, post, setup_managed_roles):
rd, _ = RoleDefinition.objects.get_or_create(
name='inventory-add', permissions=['add_inventory', 'view_organization'], content_type=ContentType.objects.get_for_model(Organization)
)
@@ -73,3 +90,63 @@ def test_assign_custom_add_role(admin_user, rando, organization, post, managed_r
inv_id = r.data['id']
inventory = Inventory.objects.get(id=inv_id)
assert rando.has_obj_perm(inventory, 'change')
@pytest.mark.django_db
def test_jt_creation_permissions(setup_managed_roles, inventory, project, rando):
"""This tests that if you assign someone required permissions in the new API
using the managed roles, then that works to give permissions to create a job template"""
inv_rd = RoleDefinition.objects.get(name='Inventory Admin')
proj_rd = RoleDefinition.objects.get(name='Project Admin')
# establish prior state
access = JobTemplateAccess(rando)
assert not access.can_add({'inventory': inventory.pk, 'project': project.pk, 'name': 'foo-jt'})
inv_rd.give_permission(rando, inventory)
proj_rd.give_permission(rando, project)
assert access.can_add({'inventory': inventory.pk, 'project': project.pk, 'name': 'foo-jt'})
@pytest.mark.django_db
def test_workflow_creation_permissions(setup_managed_roles, organization, workflow_job_template, rando):
"""Similar to JT, assigning new roles gives creator permissions"""
org_wf_rd = RoleDefinition.objects.get(name='Organization WorkflowJobTemplate Admin')
assert workflow_job_template.organization == organization # sanity
# establish prior state
access = WorkflowJobTemplateAccess(rando)
assert not access.can_add({'name': 'foo-flow', 'organization': organization.pk})
org_wf_rd.give_permission(rando, organization)
assert access.can_add({'name': 'foo-flow', 'organization': organization.pk})
@pytest.mark.django_db
def test_assign_credential_to_user_of_another_org(setup_managed_roles, credential, admin_user, rando, org_admin, organization, post):
'''Test that a credential can only be assigned to a user in the same organization'''
# cannot assign credential to rando, as rando is not in the same org as the credential
rd = RoleDefinition.objects.get(name="Credential Admin")
credential.organization = organization
credential.save(update_fields=['organization'])
assert credential.organization not in Organization.access_qs(rando, 'change')
url = django_reverse('roleuserassignment-list')
resp = post(url=url, data={"user": rando.id, "role_definition": rd.id, "object_id": credential.id}, user=admin_user, expect=400)
assert "You cannot grant credential access to a User not in the credentials' organization" in str(resp.data)
# can assign credential to superuser
rando.is_superuser = True
rando.save()
post(url=url, data={"user": rando.id, "role_definition": rd.id, "object_id": credential.id}, user=admin_user, expect=201)
# can assign credential to org_admin
assert credential.organization in Organization.access_qs(org_admin, 'change')
post(url=url, data={"user": org_admin.id, "role_definition": rd.id, "object_id": credential.id}, user=admin_user, expect=201)
@pytest.mark.django_db
@override_settings(ALLOW_LOCAL_RESOURCE_MANAGEMENT=False)
def test_team_member_role_not_assignable(team, rando, post, admin_user, setup_managed_roles):
member_rd = RoleDefinition.objects.get(name='Organization Member')
url = django_reverse('roleuserassignment-list')
r = post(url, data={'object_id': team.id, 'role_definition': member_rd.id, 'user': rando.id}, user=admin_user, expect=400)
assert 'Not managed locally' in str(r.data)
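
All of these assignment tests drive the same endpoint; an illustrative request payload (field names as in the tests above; the URL name comes from django-ansible-base):

# POST to django_reverse('roleuserassignment-list') with, e.g.:
data = {
    "user": rando.id,           # assignee
    "role_definition": rd.id,   # e.g. the 'Inventory Admin' RoleDefinition
    "object_id": inventory.id,  # object the role is scoped to
}
# expect 201 on success, or 400 when server-side validation rejects the grant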

View File

@@ -0,0 +1,120 @@
import pytest
from django.apps import apps
from ansible_base.rbac.managed import SystemAuditor
from ansible_base.rbac import permission_registry
from awx.main.access import check_user_access, get_user_queryset
from awx.main.models import User, AdHocCommandEvent
from awx.api.versioning import reverse
@pytest.fixture
def ext_auditor_rd():
info = SystemAuditor(overrides={'name': 'Alien Auditor', 'shortname': 'ext_auditor'})
rd, _ = info.get_or_create(apps)
return rd
@pytest.fixture
def ext_auditor(ext_auditor_rd):
u = User.objects.create(username='external-auditor-user')
ext_auditor_rd.give_global_permission(u)
return u
@pytest.fixture
def obj_factory(request):
def _rf(fixture_name):
obj = request.getfixturevalue(fixture_name)
# special case to make obj organization-scoped
if obj._meta.model_name == 'executionenvironment':
obj.organization = request.getfixturevalue('organization')
obj.save(update_fields=['organization'])
return obj
return _rf
@pytest.mark.django_db
def test_access_qs_external_auditor(ext_auditor_rd, rando, job_template):
ext_auditor_rd.give_global_permission(rando)
jt_cls = apps.get_model('main', 'JobTemplate')
ujt_cls = apps.get_model('main', 'UnifiedJobTemplate')
assert job_template in jt_cls.access_qs(rando)
assert job_template.id in jt_cls.access_ids_qs(rando)
assert job_template.id in ujt_cls.accessible_pk_qs(rando, 'read_role')
@pytest.mark.django_db
@pytest.mark.parametrize('model', sorted(permission_registry.all_registered_models, key=lambda cls: cls._meta.model_name))
class TestExternalAuditorRoleAllModels:
def test_access_can_read_method(self, obj_factory, model, ext_auditor, rando):
fixture_name = model._meta.verbose_name.replace(' ', '_')
obj = obj_factory(fixture_name)
assert check_user_access(rando, model, 'read', obj) is False
assert check_user_access(ext_auditor, model, 'read', obj) is True
def test_access_get_queryset(self, obj_factory, model, ext_auditor, rando):
fixture_name = model._meta.verbose_name.replace(' ', '_')
obj = obj_factory(fixture_name)
assert obj not in get_user_queryset(rando, model)
assert obj in get_user_queryset(ext_auditor, model)
def test_global_list(self, obj_factory, model, ext_auditor, rando, get):
fixture_name = model._meta.verbose_name.replace(' ', '_')
obj_factory(fixture_name)
url = reverse(f'api:{fixture_name}_list')
r = get(url, user=rando, expect=200)
initial_ct = r.data['count']
r = get(url, user=ext_auditor, expect=200)
assert r.data['count'] == initial_ct + 1
if fixture_name in ('job_template', 'workflow_job_template'):
url = reverse('api:unified_job_template_list')
r = get(url, user=rando, expect=200)
initial_ct = r.data['count']
r = get(url, user=ext_auditor, expect=200)
assert r.data['count'] == initial_ct + 1
def test_detail_view(self, obj_factory, model, ext_auditor, rando, get):
fixture_name = model._meta.verbose_name.replace(' ', '_')
obj = obj_factory(fixture_name)
url = reverse(f'api:{fixture_name}_detail', kwargs={'pk': obj.pk})
get(url, user=rando, expect=403) # NOTE: should be 401
get(url, user=ext_auditor, expect=200)
@pytest.mark.django_db
class TestExternalAuditorNonRoleModels:
def test_ad_hoc_command_view(self, ad_hoc_command_factory, rando, ext_auditor, get):
"""The AdHocCommandAccess class references is_system_auditor
this is to prove it works with other system-level view roles"""
ad_hoc_command = ad_hoc_command_factory()
url = reverse('api:ad_hoc_command_list')
r = get(url, user=rando, expect=200)
assert r.data['count'] == 0
r = get(url, user=ext_auditor, expect=200)
assert r.data['count'] == 1
assert r.data['results'][0]['id'] == ad_hoc_command.id
event = AdHocCommandEvent.objects.create(ad_hoc_command=ad_hoc_command)
url = reverse('api:ad_hoc_command_ad_hoc_command_events_list', kwargs={'pk': ad_hoc_command.id})
r = get(url, user=rando, expect=403)
r = get(url, user=ext_auditor, expect=200)
assert r.data['count'] == 1
url = reverse('api:ad_hoc_command_event_detail', kwargs={'pk': event.id})
r = get(url, user=rando, expect=403)
r = get(url, user=ext_auditor, expect=200)
assert r.data['id'] == event.id

View File

@@ -0,0 +1,31 @@
import pytest
from ansible_base.rbac.models import RoleDefinition, DABPermission
@pytest.mark.django_db
def test_roles_to_not_create(setup_managed_roles):
assert RoleDefinition.objects.filter(name='Organization Admin').count() == 1
SHOULD_NOT_EXIST = ('Organization Organization Admin', 'Organization Team Admin', 'Organization InstanceGroup Admin')
bad_rds = RoleDefinition.objects.filter(name__in=SHOULD_NOT_EXIST)
if bad_rds.exists():
bad_names = list(bad_rds.values_list('name', flat=True))
raise Exception(f'Found RoleDefinitions that should not exist: {bad_names}')
@pytest.mark.django_db
def test_project_update_role(setup_managed_roles):
"""Role to allow updating a project on the object-level should exist"""
assert RoleDefinition.objects.filter(name='Project Update').count() == 1
@pytest.mark.django_db
def test_org_child_add_permission(setup_managed_roles):
for model_name in ('Project', 'NotificationTemplate', 'WorkflowJobTemplate', 'Inventory'):
rd = RoleDefinition.objects.get(name=f'Organization {model_name} Admin')
assert 'add_' in str(rd.permissions.values_list('codename', flat=True)), f'The {rd.name} role definition expected to contain add_ permissions'
# special case for JobTemplate, anyone can create one with use permission to project/inventory
assert not DABPermission.objects.filter(codename='add_jobtemplate').exists()

View File

@@ -2,19 +2,32 @@ from unittest import mock
import pytest
from django.contrib.contenttypes.models import ContentType
from crum import impersonate
from awx.main.models.rbac import get_role_from_object_role, give_creator_permissions
from awx.main.models import User, Organization, WorkflowJobTemplate, WorkflowJobTemplateNode, Team
from awx.api.versioning import reverse
- from ansible_base.rbac.models import RoleUserAssignment
+ from ansible_base.rbac.models import RoleUserAssignment, RoleDefinition
@pytest.mark.django_db
@pytest.mark.parametrize(
'role_name',
- ['execution_environment_admin_role', 'project_admin_role', 'admin_role', 'auditor_role', 'read_role', 'execute_role', 'notification_admin_role'],
+ [
+ 'execution_environment_admin_role',
+ 'workflow_admin_role',
+ 'project_admin_role',
+ 'admin_role',
+ 'auditor_role',
+ 'read_role',
+ 'execute_role',
+ 'notification_admin_role',
+ ],
)
- def test_round_trip_roles(organization, rando, role_name, managed_roles):
+ def test_round_trip_roles(organization, rando, role_name, setup_managed_roles):
"""
Make an assignment with the old-style role,
get the equivalent new role
@@ -22,13 +35,44 @@ def test_round_trip_roles(organization, rando, role_name, managed_roles):
"""
getattr(organization, role_name).members.add(rando)
assignment = RoleUserAssignment.objects.get(user=rando)
print(assignment.role_definition.name)
old_role = get_role_from_object_role(assignment.object_role)
assert old_role.id == getattr(organization, role_name).id
@pytest.mark.django_db
- def test_organization_level_permissions(organization, inventory, managed_roles):
+ def test_role_naming(setup_managed_roles):
+ qs = RoleDefinition.objects.filter(content_type=ContentType.objects.get(model='jobtemplate'), name__endswith='dmin')
+ assert qs.count() == 1 # sanity
+ rd = qs.first()
+ assert rd.name == 'JobTemplate Admin'
+ assert rd.description
+ assert rd.created_by is None
+ @pytest.mark.django_db
+ def test_action_role_naming(setup_managed_roles):
+ qs = RoleDefinition.objects.filter(content_type=ContentType.objects.get(model='jobtemplate'), name__endswith='ecute')
+ assert qs.count() == 1 # sanity
+ rd = qs.first()
+ assert rd.name == 'JobTemplate Execute'
+ assert rd.description
+ assert rd.created_by is None
+ @pytest.mark.django_db
+ def test_compat_role_naming(setup_managed_roles, job_template, rando, alice):
+ with impersonate(alice):
+ job_template.read_role.members.add(rando)
+ qs = RoleDefinition.objects.filter(content_type=ContentType.objects.get(model='jobtemplate'), name__endswith='ompat')
+ assert qs.count() == 1 # sanity
+ rd = qs.first()
+ assert rd.name == 'JobTemplate Read Compat'
+ assert rd.description
+ assert rd.created_by is None
+ @pytest.mark.django_db
+ def test_organization_level_permissions(organization, inventory, setup_managed_roles):
u1 = User.objects.create(username='alice')
u2 = User.objects.create(username='bob')
@@ -58,14 +102,14 @@ def test_organization_level_permissions(organization, inventory, managed_roles):
@pytest.mark.django_db
- def test_organization_execute_role(organization, rando, managed_roles):
+ def test_organization_execute_role(organization, rando, setup_managed_roles):
organization.execute_role.members.add(rando)
assert rando in organization.execute_role
assert set(Organization.accessible_objects(rando, 'execute_role')) == set([organization])
@pytest.mark.django_db
- def test_workflow_approval_list(get, post, admin_user, managed_roles):
+ def test_workflow_approval_list(get, post, admin_user, setup_managed_roles):
workflow_job_template = WorkflowJobTemplate.objects.create()
approval_node = WorkflowJobTemplateNode.objects.create(workflow_job_template=workflow_job_template)
url = reverse('api:workflow_job_template_node_create_approval', kwargs={'pk': approval_node.pk, 'version': 'v2'})
@@ -79,14 +123,14 @@ def test_workflow_approval_list(get, post, admin_user, managed_roles):
@pytest.mark.django_db
- def test_creator_permission(rando, admin_user, inventory, managed_roles):
+ def test_creator_permission(rando, admin_user, inventory, setup_managed_roles):
give_creator_permissions(rando, inventory)
assert rando in inventory.admin_role
assert rando in inventory.admin_role.members.all()
@pytest.mark.django_db
- def test_team_team_read_role(rando, team, admin_user, post, managed_roles):
+ def test_team_team_read_role(rando, team, admin_user, post, setup_managed_roles):
orgs = [Organization.objects.create(name=f'foo-{i}') for i in range(2)]
teams = [Team.objects.create(name=f'foo-{i}', organization=orgs[i]) for i in range(2)]
teams[1].member_role.members.add(rando)
@@ -105,3 +149,11 @@ def test_implicit_parents_no_assignments(organization):
with mock.patch('awx.main.models.rbac.give_or_remove_permission') as mck:
Team.objects.create(name='random team', organization=organization)
mck.assert_not_called()
@pytest.mark.django_db
def test_user_auditor_rel(organization, rando, setup_managed_roles):
assert rando not in organization.auditor_role
audit_rd = RoleDefinition.objects.get(name='Organization Audit')
audit_rd.give_permission(rando, organization)
assert list(rando.auditor_of_organizations) == [organization]

View File

@@ -21,13 +21,13 @@ class TestComputedFields:
def test_computed_fields_normal_use(self, mocker, inventory):
job = Job.objects.create(name='fake-job', inventory=inventory)
with immediate_on_commit():
- with mocker.patch.object(update_inventory_computed_fields, 'delay'):
- job.delete()
- update_inventory_computed_fields.delay.assert_called_once_with(inventory.id)
+ mocker.patch.object(update_inventory_computed_fields, 'delay')
+ job.delete()
+ update_inventory_computed_fields.delay.assert_called_once_with(inventory.id)
def test_disable_computed_fields(self, mocker, inventory):
job = Job.objects.create(name='fake-job', inventory=inventory)
with disable_computed_fields():
- with mocker.patch.object(update_inventory_computed_fields, 'delay'):
- job.delete()
- update_inventory_computed_fields.delay.assert_not_called()
+ mocker.patch.object(update_inventory_computed_fields, 'delay')
+ job.delete()
+ update_inventory_computed_fields.delay.assert_not_called()

View File

@@ -4,25 +4,19 @@ import pytest
# CRUM
from crum import impersonate
# Django
from django.contrib.contenttypes.models import ContentType
# AWX
- from awx.main.models import UnifiedJobTemplate, Job, JobTemplate, WorkflowJobTemplate, WorkflowApprovalTemplate, Project, WorkflowJob, Schedule, Credential
+ from awx.main.models import UnifiedJobTemplate, Job, JobTemplate, WorkflowJobTemplate, Project, WorkflowJob, Schedule, Credential
from awx.api.versioning import reverse
from awx.main.constants import JOB_VARIABLE_PREFIXES
@pytest.mark.django_db
def test_subclass_types():
- assert set(UnifiedJobTemplate._submodels_with_roles()) == set(
- [
- ContentType.objects.get_for_model(JobTemplate).id,
- ContentType.objects.get_for_model(Project).id,
- ContentType.objects.get_for_model(WorkflowJobTemplate).id,
- ContentType.objects.get_for_model(WorkflowApprovalTemplate).id,
- ]
- )
+ assert set(UnifiedJobTemplate._submodels_with_roles()) == {
+ JobTemplate,
+ Project,
+ WorkflowJobTemplate,
+ }
@pytest.mark.django_db

View File

@@ -21,13 +21,13 @@ def test_multi_group_basic_job_launch(instance_factory, controlplane_instance_gr
j2 = create_job(objects2.job_template)
with mock.patch('awx.main.models.Job.task_impact', new_callable=mock.PropertyMock) as mock_task_impact:
mock_task_impact.return_value = 500
with mocker.patch("awx.main.scheduler.TaskManager.start_task"):
TaskManager().schedule()
TaskManager.start_task.assert_has_calls([mock.call(j1, ig1, i1), mock.call(j2, ig2, i2)])
mocker.patch("awx.main.scheduler.TaskManager.start_task")
TaskManager().schedule()
TaskManager.start_task.assert_has_calls([mock.call(j1, ig1, i1), mock.call(j2, ig2, i2)])
@pytest.mark.django_db
- def test_multi_group_with_shared_dependency(instance_factory, controlplane_instance_group, mocker, instance_group_factory, job_template_factory):
+ def test_multi_group_with_shared_dependency(instance_factory, controlplane_instance_group, instance_group_factory, job_template_factory):
i1 = instance_factory("i1")
i2 = instance_factory("i2")
ig1 = instance_group_factory("ig1", instances=[i1])
@@ -50,7 +50,7 @@ def test_multi_group_with_shared_dependency(instance_factory, controlplane_insta
objects2 = job_template_factory('jt2', organization=objects1.organization, project=p, inventory='inv2', credential='cred2')
objects2.job_template.instance_groups.add(ig2)
j2 = create_job(objects2.job_template, dependencies_processed=False)
with mocker.patch("awx.main.scheduler.TaskManager.start_task"):
with mock.patch("awx.main.scheduler.TaskManager.start_task"):
DependencyManager().schedule()
TaskManager().schedule()
pu = p.project_updates.first()
@@ -73,10 +73,10 @@ def test_workflow_job_no_instancegroup(workflow_job_template_factory, controlpla
wfj = wfjt.create_unified_job()
wfj.status = "pending"
wfj.save()
with mocker.patch("awx.main.scheduler.TaskManager.start_task"):
TaskManager().schedule()
TaskManager.start_task.assert_called_once_with(wfj, None, None)
assert wfj.instance_group is None
mocker.patch("awx.main.scheduler.TaskManager.start_task")
TaskManager().schedule()
TaskManager.start_task.assert_called_once_with(wfj, None, None)
assert wfj.instance_group is None
@pytest.mark.django_db


@@ -16,9 +16,9 @@ def test_single_job_scheduler_launch(hybrid_instance, controlplane_instance_grou
instance = controlplane_instance_group.instances.all()[0]
objects = job_template_factory('jt', organization='org1', project='proj', inventory='inv', credential='cred')
j = create_job(objects.job_template)
with mocker.patch("awx.main.scheduler.TaskManager.start_task"):
TaskManager().schedule()
TaskManager.start_task.assert_called_once_with(j, controlplane_instance_group, instance)
mocker.patch("awx.main.scheduler.TaskManager.start_task")
TaskManager().schedule()
TaskManager.start_task.assert_called_once_with(j, controlplane_instance_group, instance)
@pytest.mark.django_db


@@ -46,6 +46,8 @@ def generate_fake_var(element):
def credential_kind(source):
"""Given the inventory source kind, return expected credential kind"""
if source == 'openshift_virtualization':
return 'kubernetes_bearer_token'
return source.replace('ec2', 'aws')
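
In other words, the helper special-cases the new source kind and otherwise rewrites only the `ec2` token; for instance:

```
assert credential_kind('openshift_virtualization') == 'kubernetes_bearer_token'
assert credential_kind('ec2') == 'aws'
assert credential_kind('gce') == 'gce'  # kinds without a mapping pass through
```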


@@ -1,6 +1,7 @@
import pytest
from django_test_migrations.plan import all_migrations, nodes_to_tuples
from django.utils.timezone import now
"""
Most tests that live in here can probably be deleted at some point. They are mainly
@@ -68,3 +69,33 @@ class TestMigrationSmoke:
bar_peers = bar.peers.all()
assert len(bar_peers) == 1
assert fooaddr in bar_peers
def test_migrate_DAB_RBAC(self, migrator):
old_state = migrator.apply_initial_migration(('main', '0190_alter_inventorysource_source_and_more'))
Organization = old_state.apps.get_model('main', 'Organization')
User = old_state.apps.get_model('auth', 'User')
org = Organization.objects.create(name='arbitrary-org', created=now(), modified=now())
user = User.objects.create(username='random-user')
org.read_role.members.add(user)
new_state = migrator.apply_tested_migration(
('main', '0192_custom_roles'),
)
RoleUserAssignment = new_state.apps.get_model('dab_rbac', 'RoleUserAssignment')
assert RoleUserAssignment.objects.filter(user=user.id, object_id=org.id).exists()
# Regression testing for bug that comes from current vs past models mismatch
RoleDefinition = new_state.apps.get_model('dab_rbac', 'RoleDefinition')
assert not RoleDefinition.objects.filter(name='Organization Organization Admin').exists()
# Test special cases in managed role creation
assert not RoleDefinition.objects.filter(name='Organization Team Admin').exists()
assert not RoleDefinition.objects.filter(name='Organization InstanceGroup Admin').exists()
# Test that a removed EE model permission has been deleted
new_state = migrator.apply_tested_migration(
('main', '0195_EE_permissions'),
)
DABPermission = new_state.apps.get_model('dab_rbac', 'DABPermission')
assert not DABPermission.objects.filter(codename='view_executionenvironment').exists()
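
The new migration test follows the standard django-test-migrations flow: apply an old state, create rows through historical models, migrate forward, then assert on the result. A condensed sketch of that pattern (app and migration names hypothetical):

```
def test_example_data_migration(migrator):
    # Rewind to the state just before the migration under test.
    old = migrator.apply_initial_migration(('main', '0001_initial'))  # hypothetical
    Thing = old.apps.get_model('main', 'Thing')  # historical model, not the live one
    Thing.objects.create(name='x')

    # Run the migration being tested and assert on the migrated data.
    new = migrator.apply_tested_migration(('main', '0002_backfill'))  # hypothetical
    Thing = new.apps.get_model('main', 'Thing')
    assert Thing.objects.filter(name='x').exists()
```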


@@ -1,8 +1,6 @@
# -*- coding: utf-8 -*-
import pytest
from django.conf import settings
from awx.api.versioning import reverse
from awx.main.middleware import URLModificationMiddleware
from awx.main.models import ( # noqa
@@ -121,7 +119,7 @@ def test_notification_template(get, admin_user):
@pytest.mark.django_db
def test_instance(get, admin_user):
def test_instance(get, admin_user, settings):
test_instance = Instance.objects.create(uuid=settings.SYSTEM_UUID, hostname="localhost", capacity=100)
url = reverse('api:instance_detail', kwargs={'pk': test_instance.pk})
response = get(url, user=admin_user, expect=200)
@@ -205,3 +203,65 @@ def test_403_vs_404(get):
get(f'/api/v2/users/{cindy.pk}/', expect=401)
get('/api/v2/users/cindy/', expect=404)
@pytest.mark.django_db
class TestConvertNamedUrl:
@pytest.mark.parametrize(
"url",
(
"/api/",
"/api/v2/",
"/api/v2/hosts/",
"/api/v2/hosts/1/",
"/api/v2/organizations/1/inventories/",
"/api/foo/",
"/api/foo/v2/",
"/api/foo/v2/organizations/",
"/api/foo/v2/organizations/1/",
"/api/foo/v2/organizations/1/inventories/",
"/api/foobar/",
"/api/foobar/v2/",
"/api/foobar/v2/organizations/",
"/api/foobar/v2/organizations/1/",
"/api/foobar/v2/organizations/1/inventories/",
"/api/foobar/v2/organizations/1/inventories/",
),
)
def test_noop(self, url, settings):
settings.OPTIONAL_API_URLPATTERN_PREFIX = ''
assert URLModificationMiddleware._convert_named_url(url) == url
settings.OPTIONAL_API_URLPATTERN_PREFIX = 'foo'
assert URLModificationMiddleware._convert_named_url(url) == url
def test_named_org(self):
test_org = Organization.objects.create(name='test_org')
assert URLModificationMiddleware._convert_named_url('/api/v2/organizations/test_org/') == f'/api/v2/organizations/{test_org.pk}/'
def test_named_org_optional_api_urlpattern_prefix_interaction(self, settings):
settings.OPTIONAL_API_URLPATTERN_PREFIX = 'bar'
test_org = Organization.objects.create(name='test_org')
assert URLModificationMiddleware._convert_named_url('/api/bar/v2/organizations/test_org/') == f'/api/bar/v2/organizations/{test_org.pk}/'
@pytest.mark.parametrize("prefix", ['', 'bar'])
def test_named_org_not_found(self, prefix, settings):
settings.OPTIONAL_API_URLPATTERN_PREFIX = prefix
if prefix:
prefix += '/'
assert URLModificationMiddleware._convert_named_url(f'/api/{prefix}v2/organizations/does-not-exist/') == f'/api/{prefix}v2/organizations/0/'
@pytest.mark.parametrize("prefix", ['', 'bar'])
def test_named_sub_resource(self, prefix, settings):
settings.OPTIONAL_API_URLPATTERN_PREFIX = prefix
test_org = Organization.objects.create(name='test_org')
if prefix:
prefix += '/'
assert (
URLModificationMiddleware._convert_named_url(f'/api/{prefix}v2/organizations/test_org/inventories/')
== f'/api/{prefix}v2/organizations/{test_org.pk}/inventories/'
)


@@ -187,7 +187,7 @@ def test_remove_role_from_user(role, post, admin):
@pytest.mark.django_db
@override_settings(ANSIBLE_BASE_ALLOW_TEAM_ORG_ADMIN=True)
@override_settings(ANSIBLE_BASE_ALLOW_TEAM_ORG_ADMIN=True, ANSIBLE_BASE_ALLOW_TEAM_ORG_MEMBER=True)
def test_get_teams_roles_list(get, team, organization, admin):
team.member_role.children.add(organization.admin_role)
url = reverse('api:team_roles_list', kwargs={'pk': team.id})


@@ -0,0 +1,107 @@
import pytest
from django.contrib.contenttypes.models import ContentType
from awx.main.access import ExecutionEnvironmentAccess
from awx.main.models import ExecutionEnvironment, Organization
from awx.main.models.rbac import get_role_codenames
from awx.api.versioning import reverse
from django.urls import reverse as django_reverse
from ansible_base.rbac.models import RoleDefinition
@pytest.fixture
def ee_rd():
return RoleDefinition.objects.create_from_permissions(
name='EE object admin',
permissions=['change_executionenvironment', 'delete_executionenvironment'],
content_type=ContentType.objects.get_for_model(ExecutionEnvironment),
)
@pytest.fixture
def org_ee_rd():
return RoleDefinition.objects.create_from_permissions(
name='EE org admin',
permissions=['add_executionenvironment', 'change_executionenvironment', 'delete_executionenvironment', 'view_organization'],
content_type=ContentType.objects.get_for_model(Organization),
)
@pytest.mark.django_db
def test_old_ee_role_maps_to_correct_permissions(organization):
assert set(get_role_codenames(organization.execution_environment_admin_role)) == {
'view_organization',
'add_executionenvironment',
'change_executionenvironment',
'delete_executionenvironment',
}
@pytest.fixture
def org_ee(organization):
return ExecutionEnvironment.objects.create(name='some user ee', organization=organization)
@pytest.fixture
def check_user_capabilities(get, setup_managed_roles):
def _rf(user, obj, expected):
url = reverse('api:execution_environment_list')
r = get(url, user=user, expect=200)
for item in r.data['results']:
if item['id'] == obj.pk:
assert expected == item['summary_fields']['user_capabilities']
break
else:
raise RuntimeError(f'Could not find expected object ({obj}) in EE list result: {r.data}')
return _rf
# ___ begin tests ___
@pytest.mark.django_db
def test_managed_ee_not_assignable(control_plane_execution_environment, ee_rd, rando, admin_user, post):
url = django_reverse('roleuserassignment-list')
r = post(url, {'role_definition': ee_rd.pk, 'user': rando.id, 'object_id': control_plane_execution_environment.pk}, user=admin_user, expect=400)
assert 'Can not assign object roles to managed Execution Environment' in str(r.data)
@pytest.mark.django_db
def test_org_member_required_for_assignment(org_ee, ee_rd, rando, admin_user, post):
url = django_reverse('roleuserassignment-list')
r = post(url, {'role_definition': ee_rd.pk, 'user': rando.id, 'object_id': org_ee.pk}, user=admin_user, expect=400)
assert 'User must have view permission to Execution Environment organization' in str(r.data)
@pytest.mark.django_db
def test_give_object_permission_to_ee(org_ee, ee_rd, org_member, check_user_capabilities):
access = ExecutionEnvironmentAccess(org_member)
assert access.can_read(org_ee) # by virtue of being an org member
assert not access.can_change(org_ee, {'name': 'new'})
check_user_capabilities(org_member, org_ee, {'edit': False, 'delete': False, 'copy': False})
ee_rd.give_permission(org_member, org_ee)
assert access.can_change(org_ee, {'name': 'new'})
check_user_capabilities(org_member, org_ee, {'edit': True, 'delete': True, 'copy': False})
@pytest.mark.django_db
@pytest.mark.parametrize('style', ['new', 'old'])
def test_give_org_permission_to_ee(org_ee, organization, org_member, check_user_capabilities, style, org_ee_rd):
access = ExecutionEnvironmentAccess(org_member)
assert not access.can_change(org_ee, {'name': 'new'})
check_user_capabilities(org_member, org_ee, {'edit': False, 'delete': False, 'copy': False})
if style == 'new':
org_ee_rd.give_permission(org_member, organization)
assert org_member.has_obj_perm(org_ee.organization, 'add_executionenvironment') # sanity
else:
organization.execution_environment_admin_role.members.add(org_member)
assert access.can_change(org_ee, {'name': 'new'})
check_user_capabilities(org_member, org_ee, {'edit': True, 'delete': True, 'copy': True})
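
Condensed, the flow these fixtures and tests exercise is: define a custom RoleDefinition scoped to a content type, grant it on an object, and check the resulting object permission (the `user` and `my_ee` objects below are assumed to exist):

```
from django.contrib.contenttypes.models import ContentType
from ansible_base.rbac.models import RoleDefinition
from awx.main.models import ExecutionEnvironment

rd = RoleDefinition.objects.create_from_permissions(
    name='EE object admin',
    permissions=['change_executionenvironment', 'delete_executionenvironment'],
    content_type=ContentType.objects.get_for_model(ExecutionEnvironment),
)
rd.give_permission(user, my_ee)            # user and my_ee assumed to exist
assert user.has_obj_perm(my_ee, 'change')  # the check the access classes rely on
```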


@@ -165,7 +165,7 @@ class TestOrphanJobTemplate:
@pytest.mark.django_db
@pytest.mark.job_permissions
def test_job_template_creator_access(project, organization, rando, post):
def test_job_template_creator_access(project, organization, rando, post, setup_managed_roles):
project.use_role.members.add(rando)
response = post(
url=reverse('api:job_template_list'),
@@ -182,8 +182,14 @@ def test_job_template_creator_access(project, organization, rando, post):
@pytest.mark.django_db
@pytest.mark.job_permissions
@pytest.mark.parametrize('lacking', ['project', 'inventory'])
def test_job_template_insufficient_creator_permissions(lacking, project, inventory, organization, rando, post):
@pytest.mark.parametrize(
'lacking,reason',
[
('project', 'You do not have use permission on Project'),
('inventory', 'You do not have use permission on Inventory'),
],
)
def test_job_template_insufficient_creator_permissions(lacking, reason, project, inventory, organization, rando, post):
if lacking != 'project':
project.use_role.members.add(rando)
else:
@@ -192,12 +198,13 @@ def test_job_template_insufficient_creator_permissions(lacking, project, invento
inventory.use_role.members.add(rando)
else:
inventory.read_role.members.add(rando)
post(
response = post(
url=reverse('api:job_template_list'),
data=dict(name='newly-created-jt', inventory=inventory.id, project=project.pk, playbook='helloworld.yml'),
user=rando,
expect=403,
)
assert reason in response.data[lacking]
@pytest.mark.django_db


@@ -99,7 +99,9 @@ def test_notification_template_access_org_user(notification_template, user):
@pytest.mark.django_db
def test_notification_template_orphan_access_org_admin(notification_template, organization, org_admin):
notification_template.organization = None
notification_template.save(update_fields=['organization'])
access = NotificationTemplateAccess(org_admin)
assert not org_admin.has_obj_perm(notification_template, 'change')
assert not access.can_change(notification_template, {'organization': organization.id})


@@ -35,6 +35,13 @@ class TestWorkflowJobTemplateAccess:
assert org_member in wfjt.execute_role
assert org_member in wfjt.read_role
def test_non_super_admin_no_add_without_org(self, wfjt, organization, rando):
organization.member_role.members.add(rando)
wfjt.admin_role.members.add(rando)
access = WorkflowJobTemplateAccess(rando, save_messages=True)
assert not access.can_add({'name': 'without org'})
assert 'An organization is required to create a workflow job template for normal user' in access.messages['organization']
@pytest.mark.django_db
class TestWorkflowJobTemplateNodeAccess:


@@ -76,15 +76,15 @@ class TestJobTemplateSerializerGetRelated:
class TestJobTemplateSerializerGetSummaryFields:
def test_survey_spec_exists(self, test_get_summary_fields, mocker, job_template):
job_template.survey_spec = {'name': 'blah', 'description': 'blah blah'}
with mocker.patch.object(JobTemplateSerializer, '_recent_jobs') as mock_rj:
mock_rj.return_value = []
test_get_summary_fields(JobTemplateSerializer, job_template, 'survey')
mock_rj = mocker.patch.object(JobTemplateSerializer, '_recent_jobs')
mock_rj.return_value = []
test_get_summary_fields(JobTemplateSerializer, job_template, 'survey')
def test_survey_spec_absent(self, get_summary_fields_mock_and_run, mocker, job_template):
job_template.survey_spec = None
with mocker.patch.object(JobTemplateSerializer, '_recent_jobs') as mock_rj:
mock_rj.return_value = []
summary = get_summary_fields_mock_and_run(JobTemplateSerializer, job_template)
mock_rj = mocker.patch.object(JobTemplateSerializer, '_recent_jobs')
mock_rj.return_value = []
summary = get_summary_fields_mock_and_run(JobTemplateSerializer, job_template)
assert 'survey' not in summary
def test_copy_edit_standard(self, mocker, job_template_factory):
@@ -107,10 +107,10 @@ class TestJobTemplateSerializerGetSummaryFields:
view.kwargs = {}
serializer.context['view'] = view
with mocker.patch("awx.api.serializers.role_summary_fields_generator", return_value='Can eat pie'):
with mocker.patch("awx.main.access.JobTemplateAccess.can_change", return_value='foobar'):
with mocker.patch("awx.main.access.JobTemplateAccess.can_copy", return_value='foo'):
response = serializer.get_summary_fields(jt_obj)
mocker.patch("awx.api.serializers.role_summary_fields_generator", return_value='Can eat pie')
mocker.patch("awx.main.access.JobTemplateAccess.can_change", return_value='foobar')
mocker.patch("awx.main.access.JobTemplateAccess.can_copy", return_value='foo')
response = serializer.get_summary_fields(jt_obj)
assert response['user_capabilities']['copy'] == 'foo'
assert response['user_capabilities']['edit'] == 'foobar'


@@ -189,8 +189,8 @@ class TestWorkflowJobTemplateNodeSerializerSurveyPasswords:
serializer = WorkflowJobTemplateNodeSerializer()
wfjt = WorkflowJobTemplate.objects.create(name='fake-wfjt')
serializer.instance = WorkflowJobTemplateNode(workflow_job_template=wfjt, unified_job_template=jt, extra_data={'var1': '$encrypted$foooooo'})
with mocker.patch('awx.main.models.mixins.decrypt_value', return_value='foo'):
attrs = serializer.validate({'unified_job_template': jt, 'workflow_job_template': wfjt, 'extra_data': {'var1': '$encrypted$'}})
mocker.patch('awx.main.models.mixins.decrypt_value', return_value='foo')
attrs = serializer.validate({'unified_job_template': jt, 'workflow_job_template': wfjt, 'extra_data': {'var1': '$encrypted$'}})
assert 'survey_passwords' in attrs
assert 'var1' in attrs['survey_passwords']
assert attrs['extra_data']['var1'] == '$encrypted$foooooo'


@@ -191,16 +191,16 @@ class TestResourceAccessList:
def test_parent_access_check_failed(self, mocker, mock_organization):
mock_access = mocker.MagicMock(__name__='for logger', return_value=False)
with mocker.patch('awx.main.access.BaseAccess.can_read', mock_access):
with pytest.raises(PermissionDenied):
self.mock_view(parent=mock_organization).check_permissions(self.mock_request())
mock_access.assert_called_once_with(mock_organization)
mocker.patch('awx.main.access.BaseAccess.can_read', mock_access)
with pytest.raises(PermissionDenied):
self.mock_view(parent=mock_organization).check_permissions(self.mock_request())
mock_access.assert_called_once_with(mock_organization)
def test_parent_access_check_worked(self, mocker, mock_organization):
mock_access = mocker.MagicMock(__name__='for logger', return_value=True)
with mocker.patch('awx.main.access.BaseAccess.can_read', mock_access):
self.mock_view(parent=mock_organization).check_permissions(self.mock_request())
mock_access.assert_called_once_with(mock_organization)
mocker.patch('awx.main.access.BaseAccess.can_read', mock_access)
self.mock_view(parent=mock_organization).check_permissions(self.mock_request())
mock_access.assert_called_once_with(mock_organization)
def test_related_search_reverse_FK_field():


@@ -66,7 +66,7 @@ class TestJobTemplateLabelList:
mock_request = mock.MagicMock()
super(JobTemplateLabelList, view).unattach(mock_request, None, None)
assert mixin_unattach.called_with(mock_request, None, None)
mixin_unattach.assert_called_with(mock_request, None, None)
class TestInventoryInventorySourcesUpdate:
@@ -108,15 +108,16 @@ class TestInventoryInventorySourcesUpdate:
mock_request = mocker.MagicMock()
mock_request.user.can_access.return_value = can_access
with mocker.patch.object(InventoryInventorySourcesUpdate, 'get_object', return_value=obj):
with mocker.patch.object(InventoryInventorySourcesUpdate, 'get_serializer_context', return_value=None):
with mocker.patch('awx.api.serializers.InventoryUpdateDetailSerializer') as serializer_class:
serializer = serializer_class.return_value
serializer.to_representation.return_value = {}
mocker.patch.object(InventoryInventorySourcesUpdate, 'get_object', return_value=obj)
mocker.patch.object(InventoryInventorySourcesUpdate, 'get_serializer_context', return_value=None)
serializer_class = mocker.patch('awx.api.serializers.InventoryUpdateDetailSerializer')
view = InventoryInventorySourcesUpdate()
response = view.post(mock_request)
assert response.data == expected
serializer = serializer_class.return_value
serializer.to_representation.return_value = {}
view = InventoryInventorySourcesUpdate()
response = view.post(mock_request)
assert response.data == expected
class TestSurveySpecValidation:


@@ -155,35 +155,35 @@ def test_node_getter_and_setters():
class TestWorkflowJobCreate:
def test_create_no_prompts(self, wfjt_node_no_prompts, workflow_job_unit, mocker):
mock_create = mocker.MagicMock()
with mocker.patch('awx.main.models.WorkflowJobNode.objects.create', mock_create):
wfjt_node_no_prompts.create_workflow_job_node(workflow_job=workflow_job_unit)
mock_create.assert_called_once_with(
all_parents_must_converge=False,
extra_data={},
survey_passwords={},
char_prompts=wfjt_node_no_prompts.char_prompts,
inventory=None,
unified_job_template=wfjt_node_no_prompts.unified_job_template,
workflow_job=workflow_job_unit,
identifier=mocker.ANY,
execution_environment=None,
)
mocker.patch('awx.main.models.WorkflowJobNode.objects.create', mock_create)
wfjt_node_no_prompts.create_workflow_job_node(workflow_job=workflow_job_unit)
mock_create.assert_called_once_with(
all_parents_must_converge=False,
extra_data={},
survey_passwords={},
char_prompts=wfjt_node_no_prompts.char_prompts,
inventory=None,
unified_job_template=wfjt_node_no_prompts.unified_job_template,
workflow_job=workflow_job_unit,
identifier=mocker.ANY,
execution_environment=None,
)
def test_create_with_prompts(self, wfjt_node_with_prompts, workflow_job_unit, credential, mocker):
mock_create = mocker.MagicMock()
with mocker.patch('awx.main.models.WorkflowJobNode.objects.create', mock_create):
wfjt_node_with_prompts.create_workflow_job_node(workflow_job=workflow_job_unit)
mock_create.assert_called_once_with(
all_parents_must_converge=False,
extra_data={},
survey_passwords={},
char_prompts=wfjt_node_with_prompts.char_prompts,
inventory=wfjt_node_with_prompts.inventory,
unified_job_template=wfjt_node_with_prompts.unified_job_template,
workflow_job=workflow_job_unit,
identifier=mocker.ANY,
execution_environment=None,
)
mocker.patch('awx.main.models.WorkflowJobNode.objects.create', mock_create)
wfjt_node_with_prompts.create_workflow_job_node(workflow_job=workflow_job_unit)
mock_create.assert_called_once_with(
all_parents_must_converge=False,
extra_data={},
survey_passwords={},
char_prompts=wfjt_node_with_prompts.char_prompts,
inventory=wfjt_node_with_prompts.inventory,
unified_job_template=wfjt_node_with_prompts.unified_job_template,
workflow_job=workflow_job_unit,
identifier=mocker.ANY,
execution_environment=None,
)
@pytest.mark.django_db


@@ -0,0 +1,26 @@
from unittest import mock
from django.core.mail.message import EmailMessage
import awx.main.notifications.awssns_backend as awssns_backend
def test_send_messages():
with mock.patch('awx.main.notifications.awssns_backend.AWSSNSBackend._sns_publish') as sns_publish_mock:
aws_region = 'us-east-1'
sns_topic = f"arn:aws:sns:{aws_region}:111111111111:topic-mock"
backend = awssns_backend.AWSSNSBackend(aws_region=aws_region, aws_access_key_id=None, aws_secret_access_key=None, aws_session_token=None)
message = EmailMessage(
'test subject',
{'body': 'test body'},
[],
[
sns_topic,
],
)
sent_messages = backend.send_messages(
[
message,
]
)
sns_publish_mock.assert_called_once_with(topic_arn=sns_topic, message=message.body)
assert sent_messages == 1
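
For context, `_sns_publish` is the backend's internal wrapper around the SNS publish call; it is presumably something along these lines, assuming boto3 (the body below is a guess, not the shipped implementation):

```
import boto3

def _sns_publish(topic_arn, message, aws_region='us-east-1'):
    # boto3's publish signature is real; this function body is illustrative only.
    client = boto3.client('sns', region_name=aws_region)
    client.publish(TopicArn=topic_arn, Message=message)
```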


@@ -137,10 +137,10 @@ def test_send_notifications_not_list():
def test_send_notifications_job_id(mocker):
with mocker.patch('awx.main.models.UnifiedJob.objects.get'):
system.send_notifications([], job_id=1)
assert UnifiedJob.objects.get.called
assert UnifiedJob.objects.get.called_with(id=1)
mocker.patch('awx.main.models.UnifiedJob.objects.get')
system.send_notifications([], job_id=1)
assert UnifiedJob.objects.get.called
assert UnifiedJob.objects.get.called_with(id=1)
@mock.patch('awx.main.models.UnifiedJob.objects.get')


@@ -7,15 +7,15 @@ def test_produce_supervisor_command(mocker):
mock_process = mocker.MagicMock()
mock_process.communicate = communicate_mock
Popen_mock = mocker.MagicMock(return_value=mock_process)
with mocker.patch.object(reload.subprocess, 'Popen', Popen_mock):
reload.supervisor_service_command("restart")
reload.subprocess.Popen.assert_called_once_with(
[
'supervisorctl',
'restart',
'tower-processes:*',
],
stderr=-1,
stdin=-1,
stdout=-1,
)
mocker.patch.object(reload.subprocess, 'Popen', Popen_mock)
reload.supervisor_service_command("restart")
reload.subprocess.Popen.assert_called_once_with(
[
'supervisorctl',
'restart',
'tower-processes:*',
],
stderr=-1,
stdin=-1,
stdout=-1,
)


@@ -2,9 +2,11 @@
# All Rights Reserved.
# Python
import base64
import logging
import sys
import traceback
import os
from datetime import datetime
# Django
@@ -15,6 +17,15 @@ from django.utils.encoding import force_str
# AWX
from awx.main.exceptions import PostRunError
# OTEL
from opentelemetry._logs import set_logger_provider
from opentelemetry.exporter.otlp.proto.grpc._log_exporter import OTLPLogExporter as OTLPGrpcLogExporter
from opentelemetry.exporter.otlp.proto.http._log_exporter import OTLPLogExporter as OTLPHttpLogExporter
from opentelemetry.sdk._logs import LoggerProvider, LoggingHandler
from opentelemetry.sdk._logs.export import BatchLogRecordProcessor
from opentelemetry.sdk.resources import Resource
class RSysLogHandler(logging.handlers.SysLogHandler):
append_nul = False
@@ -133,3 +144,39 @@ if settings.COLOR_LOGS is True:
pass
else:
ColorHandler = logging.StreamHandler
class OTLPHandler(LoggingHandler):
def __init__(self, endpoint=None, protocol='grpc', service_name=None, instance_id=None, auth=None, username=None, password=None):
if not endpoint:
raise ValueError("endpoint required")
if auth == 'basic' and (username is None or password is None):
raise ValueError("auth type basic requires username and passsword parameters")
self.endpoint = endpoint
self.service_name = service_name or (sys.argv[1] if len(sys.argv) > 1 else (sys.argv[0] or 'unknown_service'))
self.instance_id = instance_id or os.uname().nodename
logger_provider = LoggerProvider(
resource=Resource.create(
{
"service.name": self.service_name,
"service.instance.id": self.instance_id,
}
),
)
set_logger_provider(logger_provider)
headers = {}
if auth == 'basic':
secret = f'{username}:{password}'
headers['Authorization'] = "Basic " + base64.b64encode(secret.encode()).decode()
if protocol == 'grpc':
otlp_exporter = OTLPGrpcLogExporter(endpoint=self.endpoint, insecure=True, headers=headers)
elif protocol == 'http':
otlp_exporter = OTLPHttpLogExporter(endpoint=self.endpoint, headers=headers)
logger_provider.add_log_record_processor(BatchLogRecordProcessor(otlp_exporter))
super().__init__(level=logging.NOTSET, logger_provider=logger_provider)
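
The new handler builds an OpenTelemetry LoggerProvider, optionally attaches basic-auth headers, and exports over gRPC or HTTP. The settings change below ships it as a NullHandler by default; a deployment could wire it up through dictConfig roughly like this (the dotted path, endpoint, and credentials are assumptions for illustration):

```
LOGGING['handlers']['otel'] = {
    '()': 'awx.main.utils.handlers.OTLPHandler',  # dotted path assumed
    'endpoint': 'http://otel-collector:4317',
    'protocol': 'grpc',
    'service_name': 'awx',
    'auth': 'basic',
    'username': 'collector',
    'password': 'secret',
}
LOGGING['loggers']['awx'] = {'handlers': ['console', 'otel'], 'level': 'INFO'}
```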


@@ -8,9 +8,22 @@ from django.db import connection
@contextmanager
def advisory_lock(*args, **kwargs):
def advisory_lock(*args, lock_session_timeout_milliseconds=0, **kwargs):
if connection.vendor == 'postgresql':
cur = None
idle_in_transaction_session_timeout = None
idle_session_timeout = None
if lock_session_timeout_milliseconds > 0:
with connection.cursor() as cur:
idle_in_transaction_session_timeout = cur.execute('SHOW idle_in_transaction_session_timeout').fetchone()[0]
idle_session_timeout = cur.execute('SHOW idle_session_timeout').fetchone()[0]
cur.execute(f"SET idle_in_transaction_session_timeout = {lock_session_timeout_milliseconds}")
cur.execute(f"SET idle_session_timeout = {lock_session_timeout_milliseconds}")
with django_pglocks_advisory_lock(*args, **kwargs) as internal_lock:
yield internal_lock
if lock_session_timeout_milliseconds > 0:
with connection.cursor() as cur:
cur.execute(f"SET idle_in_transaction_session_timeout = {idle_in_transaction_session_timeout}")
cur.execute(f"SET idle_session_timeout = {idle_session_timeout}")
else:
yield True
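
The new `lock_session_timeout_milliseconds` argument snapshots the current `idle_in_transaction_session_timeout` and `idle_session_timeout`, tightens both while the advisory lock is held, and restores them on exit, so Postgres disconnects a lock holder that goes idle. A hedged usage sketch (import path and call site assumed):

```
from django.conf import settings
from awx.main.utils.pglock import advisory_lock  # import path assumed

timeout_ms = settings.TASK_MANAGER_LOCK_TIMEOUT * 1000  # setting is in seconds

with advisory_lock('task_manager_lock', wait=False,
                   lock_session_timeout_milliseconds=timeout_ms) as acquired:
    if acquired:
        run_task_manager_cycle()  # placeholder for the real work
```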

awx/main/utils/proxy.py

@@ -0,0 +1,48 @@
# Copyright (c) 2024 Ansible, Inc.
# All Rights Reserved.
# DRF
from rest_framework.request import Request
"""
Note that these methods operate on request.environ. This data is from uwsgi.
It is the source data from which request.headers (read-only) is constructed.
"""
def is_proxy_in_headers(request: Request, proxy_list: list[str], headers: list[str]) -> bool:
"""
Determine if the request went through at least one proxy in the list.
Example:
request.environ = {
"HTTP_X_FOO": "8.8.8.8, 192.168.2.1",
"REMOTE_ADDR": "192.168.2.1",
"REMOTE_HOST": "foobar"
}
proxy_list = ["192.168.2.1"]
headers = ["HTTP_X_FOO", "REMOTE_ADDR", "REMOTE_HOST"]
The above would return True since 192.168.2.1 is a value for the header HTTP_X_FOO
request: The DRF/Django request. request.environ dict will be used for searching for proxies
proxy_list: A list of known and trusted proxies; entries may be IPs or hostnames
headers: The header keys whose values will be searched for a proxy
"""
remote_hosts = set()
for header in headers:
for value in request.environ.get(header, '').split(','):
value = value.strip()
if value:
remote_hosts.add(value)
return bool(remote_hosts.intersection(set(proxy_list)))
def delete_headers_starting_with_http(request: Request, headers: list[str]):
for header in headers:
if header.startswith('HTTP_'):
request.environ.pop(header, None)
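
A quick illustration of the trusted-proxy check using a stand-in request object (the classes below are test doubles, not DRF Requests):

```
class FakeRequest:
    # Mirrors the docstring example above.
    environ = {
        "HTTP_X_FOO": "8.8.8.8, 192.168.2.1",
        "REMOTE_ADDR": "192.168.2.1",
    }

assert is_proxy_in_headers(FakeRequest(), ["192.168.2.1"], ["HTTP_X_FOO"]) is True

class DirectRequest:
    environ = {"REMOTE_ADDR": "203.0.113.9"}  # never traversed a trusted proxy

assert is_proxy_in_headers(DirectRequest(), ["192.168.2.1"], ["REMOTE_ADDR"]) is False
```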


@@ -285,8 +285,6 @@ class WebSocketRelayManager(object):
except asyncio.CancelledError:
# Handle the case where the task was already cancelled by the time we got here.
pass
except Exception as e:
logger.warning(f"Failed to cancel relay connection for {hostname}: {e}")
del self.relay_connections[hostname]
@@ -297,8 +295,6 @@ class WebSocketRelayManager(object):
self.stats_mgr.delete_remote_host_stats(hostname)
except KeyError:
pass
except Exception as e:
logger.warning(f"Failed to delete stats for {hostname}: {e}")
async def run(self):
event_loop = asyncio.get_running_loop()
@@ -306,91 +302,66 @@ class WebSocketRelayManager(object):
self.stats_mgr = RelayWebsocketStatsManager(event_loop, self.local_hostname)
self.stats_mgr.start()
# Set up a pg_notify consumer for allowing web nodes to "provision" and "deprovision" themselves gracefully.
database_conf = deepcopy(settings.DATABASES['default'])
database_conf['OPTIONS'] = deepcopy(database_conf.get('OPTIONS', {}))
for k, v in settings.LISTENER_DATABASES.get('default', {}).items():
database_conf[k] = v
if k != 'OPTIONS':
database_conf[k] = v
for k, v in settings.LISTENER_DATABASES.get('default', {}).get('OPTIONS', {}).items():
database_conf['OPTIONS'][k] = v
if 'PASSWORD' in database_conf:
database_conf['OPTIONS']['password'] = database_conf.pop('PASSWORD')
task = None
async_conn = await psycopg.AsyncConnection.connect(
dbname=database_conf['NAME'],
host=database_conf['HOST'],
user=database_conf['USER'],
port=database_conf['PORT'],
**database_conf.get("OPTIONS", {}),
)
# Managing the async_conn here so that we can close it if we need to restart the connection
async_conn = None
await async_conn.set_autocommit(True)
on_ws_heartbeat_task = event_loop.create_task(self.on_ws_heartbeat(async_conn))
# Establishes a websocket connection to /websocket/relay on all API servers
try:
while True:
if not task or task.done():
try:
# Try to close the connection if it's open
if async_conn:
try:
await async_conn.close()
except Exception as e:
logger.warning(f"Failed to close connection to database for pg_notify: {e}")
while True:
if on_ws_heartbeat_task.done():
raise Exception("on_ws_heartbeat_task has exited")
# and re-establish the connection
async_conn = await psycopg.AsyncConnection.connect(
dbname=database_conf['NAME'],
host=database_conf['HOST'],
user=database_conf['USER'],
port=database_conf['PORT'],
**database_conf.get("OPTIONS", {}),
)
await async_conn.set_autocommit(True)
future_remote_hosts = self.known_hosts.keys()
current_remote_hosts = self.relay_connections.keys()
deleted_remote_hosts = set(current_remote_hosts) - set(future_remote_hosts)
new_remote_hosts = set(future_remote_hosts) - set(current_remote_hosts)
# before creating the task that uses the connection
task = event_loop.create_task(self.on_ws_heartbeat(async_conn), name="on_ws_heartbeat")
logger.info("Creating `on_ws_heartbeat` task in event loop.")
# This loop handles if we get an advertisement from a host we already know about but
# the advertisement has a different IP than we are currently connected to.
for hostname, address in self.known_hosts.items():
if hostname not in self.relay_connections:
# We've picked up a new hostname that we don't know about yet.
continue
except Exception as e:
logger.warning(f"Failed to connect to database for pg_notify: {e}")
if address != self.relay_connections[hostname].remote_host:
deleted_remote_hosts.add(hostname)
new_remote_hosts.add(hostname)
future_remote_hosts = self.known_hosts.keys()
current_remote_hosts = self.relay_connections.keys()
deleted_remote_hosts = set(current_remote_hosts) - set(future_remote_hosts)
new_remote_hosts = set(future_remote_hosts) - set(current_remote_hosts)
# Delete any hosts with closed connections
for hostname, relay_conn in self.relay_connections.items():
if not relay_conn.connected:
deleted_remote_hosts.add(hostname)
# This loop handles if we get an advertisement from a host we already know about but
# the advertisement has a different IP than we are currently connected to.
for hostname, address in self.known_hosts.items():
if hostname not in self.relay_connections:
# We've picked up a new hostname that we don't know about yet.
continue
if deleted_remote_hosts:
logger.info(f"Removing {deleted_remote_hosts} from websocket broadcast list")
await asyncio.gather(*[self.cleanup_offline_host(h) for h in deleted_remote_hosts])
if address != self.relay_connections[hostname].remote_host:
deleted_remote_hosts.add(hostname)
new_remote_hosts.add(hostname)
if new_remote_hosts:
logger.info(f"Adding {new_remote_hosts} to websocket broadcast list")
# Delete any hosts with closed connections
for hostname, relay_conn in self.relay_connections.items():
if not relay_conn.connected:
deleted_remote_hosts.add(hostname)
for h in new_remote_hosts:
stats = self.stats_mgr.new_remote_host_stats(h)
relay_connection = WebsocketRelayConnection(name=self.local_hostname, stats=stats, remote_host=self.known_hosts[h])
relay_connection.start()
self.relay_connections[h] = relay_connection
if deleted_remote_hosts:
logger.info(f"Removing {deleted_remote_hosts} from websocket broadcast list")
await asyncio.gather(*[self.cleanup_offline_host(h) for h in deleted_remote_hosts])
if new_remote_hosts:
logger.info(f"Adding {new_remote_hosts} to websocket broadcast list")
for h in new_remote_hosts:
stats = self.stats_mgr.new_remote_host_stats(h)
relay_connection = WebsocketRelayConnection(name=self.local_hostname, stats=stats, remote_host=self.known_hosts[h])
relay_connection.start()
self.relay_connections[h] = relay_connection
await asyncio.sleep(settings.BROADCAST_WEBSOCKET_NEW_INSTANCE_POLL_RATE_SECONDS)
finally:
if async_conn:
logger.info("Shutting down db connection for wsrelay.")
try:
await async_conn.close()
except Exception as e:
logger.info(f"Failed to close connection to database for pg_notify: {e}")
await asyncio.sleep(settings.BROADCAST_WEBSOCKET_NEW_INSTANCE_POLL_RATE_SECONDS)
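
The relay manager now merges `LISTENER_DATABASES` into the default database settings without clobbering `OPTIONS`, and re-creates the pg_notify connection inside the main loop when it drops. A condensed sketch of just the merge semantics (the helper function name is hypothetical):

```
from copy import deepcopy

def merged_listener_conf(databases, listener_databases):
    """Hypothetical helper showing the fixed merge semantics."""
    conf = deepcopy(databases['default'])
    conf['OPTIONS'] = deepcopy(conf.get('OPTIONS', {}))
    overrides = listener_databases.get('default', {})
    for k, v in overrides.items():
        if k != 'OPTIONS':  # top-level keys replace the default outright
            conf[k] = v
    for k, v in overrides.get('OPTIONS', {}).items():
        conf['OPTIONS'][k] = v  # OPTIONS merges key by key instead of clobbering
    if 'PASSWORD' in conf:
        conf['OPTIONS']['password'] = conf.pop('PASSWORD')
    return conf
```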


@@ -114,6 +114,7 @@ MEDIA_ROOT = os.path.join(BASE_DIR, 'public', 'media')
MEDIA_URL = '/media/'
LOGIN_URL = '/api/login/'
LOGOUT_ALLOWED_HOSTS = None
# Absolute filesystem path to the directory to host projects (with playbooks).
# This directory should not be web-accessible.
@@ -261,6 +262,7 @@ START_TASK_LIMIT = 100
# We have the grace period so the task manager can bail out before the timeout.
TASK_MANAGER_TIMEOUT = 300
TASK_MANAGER_TIMEOUT_GRACE_PERIOD = 60
TASK_MANAGER_LOCK_TIMEOUT = TASK_MANAGER_TIMEOUT + TASK_MANAGER_TIMEOUT_GRACE_PERIOD
# Number of seconds _in addition to_ the task manager timeout a job can stay
# in waiting without being reaped
@@ -277,6 +279,9 @@ SESSION_COOKIE_SECURE = True
# Note: This setting may be overridden by database settings.
SESSION_COOKIE_AGE = 1800
# Option to change userLoggedIn cookie SameSite policy.
USER_COOKIE_SAMESITE = 'Lax'
# Name of the cookie that contains the session information.
# Note: Changing this value may require changes to any clients.
SESSION_COOKIE_NAME = 'awx_sessionid'
@@ -488,6 +493,7 @@ CELERYBEAT_SCHEDULE = {
'cleanup_images': {'task': 'awx.main.tasks.system.cleanup_images_and_files', 'schedule': timedelta(hours=3)},
'cleanup_host_metrics': {'task': 'awx.main.tasks.host_metrics.cleanup_host_metrics', 'schedule': timedelta(hours=3, minutes=30)},
'host_metric_summary_monthly': {'task': 'awx.main.tasks.host_metrics.host_metric_summary_monthly', 'schedule': timedelta(hours=4)},
'periodic_resource_sync': {'task': 'awx.main.tasks.system.periodic_resource_sync', 'schedule': timedelta(minutes=15)},
}
# Django Caching Configuration
@@ -652,6 +658,10 @@ AWX_ANSIBLE_CALLBACK_PLUGINS = ""
# Automatically remove nodes that have missed their heartbeats after some time
AWX_AUTO_DEPROVISION_INSTANCES = False
# If False, do not allow creation of resources that are shared with the platform ingress
# e.g. organizations, teams, and users
ALLOW_LOCAL_RESOURCE_MANAGEMENT = True
# Enable Pendo on the UI, possible values are 'off', 'anonymous', and 'detailed'
# Note: This setting may be overridden by database settings.
PENDO_TRACKING_STATE = "off"
@@ -774,6 +784,11 @@ INSIGHTS_EXCLUDE_EMPTY_GROUPS = False
TERRAFORM_INSTANCE_ID_VAR = 'id'
TERRAFORM_EXCLUDE_EMPTY_GROUPS = True
# ------------------------
# OpenShift Virtualization
# ------------------------
OPENSHIFT_VIRTUALIZATION_EXCLUDE_EMPTY_GROUPS = True
# ---------------------
# ----- Custom -----
# ---------------------
@@ -876,6 +891,7 @@ LOGGING = {
'address': '/var/run/awx-rsyslog/rsyslog.sock',
'filters': ['external_log_enabled', 'dynamic_level_filter', 'guid'],
},
'otel': {'class': 'logging.NullHandler'},
},
'loggers': {
'django': {'handlers': ['console']},
@@ -1145,13 +1161,8 @@ ANSIBLE_BASE_CUSTOM_VIEW_PARENT = 'awx.api.generics.APIView'
# Settings for the ansible_base RBAC system
# Only used internally, names of the managed RoleDefinitions to create
ANSIBLE_BASE_ROLE_PRECREATE = {
'object_admin': '{cls.__name__} Admin',
'org_admin': 'Organization Admin',
'org_children': 'Organization {cls.__name__} Admin',
'special': '{cls.__name__} {action}',
}
# This has been moved to data migration code
ANSIBLE_BASE_ROLE_PRECREATE = {}
# Name for auto-created roles that give users permissions to what they create
ANSIBLE_BASE_ROLE_CREATOR_NAME = '{cls.__name__} Creator'
@@ -1162,9 +1173,6 @@ ANSIBLE_BASE_ROLE_SYSTEM_ACTIVATED = True
# Permissions a user will get when creating a new item
ANSIBLE_BASE_CREATOR_DEFAULTS = ['change', 'delete', 'execute', 'use', 'adhoc', 'approve', 'update', 'view']
# This is a stopgap, will delete after resource registry integration
ANSIBLE_BASE_SERVICE_PREFIX = "awx"
# Temporary, for old roles API compatibility, save child permissions at organization level
ANSIBLE_BASE_CACHE_PARENT_PERMISSIONS = True
@@ -1178,6 +1186,3 @@ ANSIBLE_BASE_ALLOW_SINGLETON_ROLES_API = False # Do not allow creating user-def
# system username for django-ansible-base
SYSTEM_USERNAME = None
# Use AWX base view, to give 401 on unauthenticated requests
ANSIBLE_BASE_CUSTOM_VIEW_PARENT = 'awx.api.generics.APIView'

File diff suppressed because it is too large


@@ -7,18 +7,18 @@ from django.core.cache import cache
def test_ldap_default_settings(mocker):
from_db = mocker.Mock(**{'order_by.return_value': []})
with mocker.patch('awx.conf.models.Setting.objects.filter', return_value=from_db):
settings = LDAPSettings()
assert settings.ORGANIZATION_MAP == {}
assert settings.TEAM_MAP == {}
mocker.patch('awx.conf.models.Setting.objects.filter', return_value=from_db)
settings = LDAPSettings()
assert settings.ORGANIZATION_MAP == {}
assert settings.TEAM_MAP == {}
def test_ldap_default_network_timeout(mocker):
cache.clear() # clearing cache avoids picking up stray default for OPT_REFERRALS
from_db = mocker.Mock(**{'order_by.return_value': []})
with mocker.patch('awx.conf.models.Setting.objects.filter', return_value=from_db):
settings = LDAPSettings()
assert settings.CONNECTION_OPTIONS[ldap.OPT_NETWORK_TIMEOUT] == 30
mocker.patch('awx.conf.models.Setting.objects.filter', return_value=from_db)
settings = LDAPSettings()
assert settings.CONNECTION_OPTIONS[ldap.OPT_NETWORK_TIMEOUT] == 30
def test_ldap_filter_validator():


@@ -38,7 +38,9 @@ class CompleteView(BaseRedirectView):
response = super(CompleteView, self).dispatch(request, *args, **kwargs)
if self.request.user and self.request.user.is_authenticated:
logger.info(smart_str(u"User {} logged in".format(self.request.user.username)))
response.set_cookie('userLoggedIn', 'true', secure=getattr(settings, 'SESSION_COOKIE_SECURE', False))
response.set_cookie(
'userLoggedIn', 'true', secure=getattr(settings, 'SESSION_COOKIE_SECURE', False), samesite=getattr(settings, 'USER_COOKIE_SAMESITE', 'Lax')
)
response.setdefault('X-API-Session-Cookie-Name', getattr(settings, 'SESSION_COOKIE_NAME', 'awx_sessionid'))
return response


@@ -62,7 +62,7 @@ function CredentialLookup({
? { credential_type: credentialTypeId }
: {};
const typeKindParams = credentialTypeKind
? { credential_type__kind: credentialTypeKind }
? { credential_type__kind__in: credentialTypeKind }
: {};
const typeNamespaceParams = credentialTypeNamespace
? { credential_type__namespace: credentialTypeNamespace }
@@ -125,7 +125,7 @@ function CredentialLookup({
? { credential_type: credentialTypeId }
: {};
const typeKindParams = credentialTypeKind
? { credential_type__kind: credentialTypeKind }
? { credential_type__kind__in: credentialTypeKind }
: {};
const typeNamespaceParams = credentialTypeNamespace
? { credential_type__namespace: credentialTypeNamespace }


@@ -190,6 +190,7 @@ function NotificationList({
name: t`Notification type`,
key: 'or__notification_type',
options: [
['awssns', t`AWS SNS`],
['email', t`Email`],
['grafana', t`Grafana`],
['hipchat', t`Hipchat`],


@@ -12,7 +12,7 @@ const Inner = styled.div`
border-radius: 2px;
color: white;
left: 10px;
max-width: 300px;
max-width: 500px;
padding: 5px 10px;
position: absolute;
top: 10px;


@@ -12,6 +12,7 @@ const GridDL = styled.dl`
column-gap: 15px;
display: grid;
grid-template-columns: max-content;
overflow-wrap: anywhere;
row-gap: 0px;
dt {
grid-column-start: 1;


@@ -56,6 +56,10 @@ describe('<InventorySourceAdd />', () => {
['satellite6', 'Red Hat Satellite 6'],
['openstack', 'OpenStack'],
['rhv', 'Red Hat Virtualization'],
[
'openshift_virtualization',
'Red Hat OpenShift Virtualization',
],
['controller', 'Red Hat Ansible Automation Platform'],
],
},

View File

@@ -22,7 +22,9 @@ const ansibleDocUrls = {
constructed:
'https://docs.ansible.com/ansible/latest/collections/ansible/builtin/constructed_inventory.html',
terraform:
'https://github.com/ansible-collections/cloud.terraform/blob/stable-statefile-inventory/plugins/inventory/terraform_state.py',
'https://github.com/ansible-collections/cloud.terraform/blob/main/docs/cloud.terraform.terraform_state_inventory.rst',
openshift_virtualization:
'https://kubevirt.io/kubevirt.core/latest/plugins/kubevirt.html',
};
const getInventoryHelpTextStrings = () => ({
@@ -121,7 +123,7 @@ const getInventoryHelpTextStrings = () => ({
<br />
{value && (
<div>
{t`If you want the Inventory Source to update on launch, click on Update on Launch,
and also go to `}
<Link to={`/projects/${value.id}/details`}> {value.name} </Link>
{t`and click on Update Revision on Launch.`}
@@ -140,7 +142,7 @@ const getInventoryHelpTextStrings = () => ({
<br />
{value && (
<div>
{t`If you want the Inventory Source to update on launch, click on Update on Launch,
and also go to `}
<Link to={`/projects/${value.id}/details`}> {value.name} </Link>
{t`and click on Update Revision on Launch`}


@@ -26,6 +26,7 @@ import {
TerraformSubForm,
VMwareSubForm,
VirtualizationSubForm,
OpenShiftVirtualizationSubForm,
} from './InventorySourceSubForms';
const buildSourceChoiceOptions = (options) => {
@@ -231,6 +232,15 @@ const InventorySourceFormFields = ({
sourceOptions={sourceOptions}
/>
),
openshift_virtualization: (
<OpenShiftVirtualizationSubForm
autoPopulateCredential={
!source?.id ||
source?.source !== 'openshift_virtualization'
}
sourceOptions={sourceOptions}
/>
),
}[sourceField.value]
}
</FormColumnLayout>


@@ -0,0 +1,64 @@
import React, { useCallback } from 'react';
import { useField, useFormikContext } from 'formik';
import { t } from '@lingui/macro';
import { useConfig } from 'contexts/Config';
import getDocsBaseUrl from 'util/getDocsBaseUrl';
import CredentialLookup from 'components/Lookup/CredentialLookup';
import { required } from 'util/validators';
import {
OptionsField,
VerbosityField,
EnabledVarField,
EnabledValueField,
HostFilterField,
SourceVarsField,
} from './SharedFields';
import getHelpText from '../Inventory.helptext';
const OpenShiftVirtualizationSubForm = ({ autoPopulateCredential }) => {
const helpText = getHelpText();
const { setFieldValue, setFieldTouched } = useFormikContext();
const [credentialField, credentialMeta, credentialHelpers] =
useField('credential');
const config = useConfig();
const handleCredentialUpdate = useCallback(
(value) => {
setFieldValue('credential', value);
setFieldTouched('credential', true, false);
},
[setFieldValue, setFieldTouched]
);
const docsBaseUrl = getDocsBaseUrl(config);
return (
<>
<CredentialLookup
credentialTypeNamespace="kubernetes_bearer_token"
label={t`Credential`}
helperTextInvalid={credentialMeta.error}
isValid={!credentialMeta.touched || !credentialMeta.error}
onBlur={() => credentialHelpers.setTouched()}
onChange={handleCredentialUpdate}
value={credentialField.value}
required
autoPopulate={autoPopulateCredential}
validate={required(t`Select a value for this field`)}
/>
<VerbosityField />
<HostFilterField />
<EnabledVarField />
<EnabledValueField />
<OptionsField />
<SourceVarsField
popoverContent={helpText.sourceVars(
docsBaseUrl,
'openshift_virtualization'
)}
/>
</>
);
};
export default OpenShiftVirtualizationSubForm;

Some files were not shown because too many files have changed in this diff