Compare commits

...

741 Commits

Author SHA1 Message Date
Peter Braun
543d3f940b update licenses and embedded sources 2025-04-15 11:14:10 +02:00
Peter Braun
ee7edb9179 update sqlparse dependency 2025-04-14 23:21:16 +02:00
Alan Rominger
49240ca8e8 Fix environment-specific rough edges of logging setup (#15193) 2025-04-14 12:12:06 -04:00
Alan Rominger
5ff3d4b2fc Reduce log noise from next run being in past (#15670) 2025-04-14 09:04:16 -04:00
Fabio Alessandro Locati
3f96ea17d6 Links in README.md should use HTTPS (#15940) 2025-04-12 18:38:17 +00:00
TVo
f59ad4f39c Remove cgi deprecation exception from pytest.ini (#15939)
Remove deprecation exception from pytest.ini
2025-04-11 14:07:23 -07:00
Alan Rominger
c3ee0c2d8a Sensible log behavior when redis is unavailable (#15466)
* Sensible log behavior when redis is unavailable

* Consistent behavior with dispatcher and callback
2025-04-10 13:45:05 -07:00
Alan Rominger
7a3010f0e6 Bring WFJT job access to parity with UnifiedJobAccess (#15344)
* Bring WFJT job access to parity with UnifiedJobAccess

* Run linters
2025-04-10 15:29:47 -04:00
Lila Yasin
05dc9bad1c AAP-39365 facts are unintentionally deleted when the inventory is modified during a job execution (#15910)
* Added test_jobs.py to the model unit test folder in order to show the undesired behaviour with fact cache

Signed-off-by: onetti7 <davonebat@gmail.com>

* Added test_jobs.py to the model unit test folder in order to show the undesired behaviour with fact cache

Signed-off-by: onetti7 <davonebat@gmail.com>

* Solved undesired behaviour with fact_cache

Signed-off-by: onetti7 <davonebat@gmail.com>

* Solved bug with slices

Signed-off-by: onetti7 <davonebat@gmail.com>

* Remove unused imports

Remove now unused line of code which was commented out by the contributor

Revert "Remove now unused line of code which was commented out by the contributor"

This reverts commit f1a056a2356d56bc7256957a18503bd14dcfd8aa.

* Add back line that had been commented out as this line makes hosts specific to the particular slice when applicable

Revise private_data_dir fixture to see if it improves code coverage

Checked out awx/main/tests/unit/models/test_jobs.py in devel to see if it resolves git diff issue

* Fix formatting in awx/main/tests/unit/models/test_jobs.py

Rename for loop from host in hosts to hosts in hosts_cached and remove unneeded continue

Revise finish_fact_cache to utilize inventory rather than hosts

Remove local var hosts that was assigned but unused

Revert change in start_fact_cache hosts_cached back to hosts

Revise the way we are handling hosts_cached and joining the file

Revert "Revise the way we are handling hosts_cached and joining the file"

This reverts commit e6e3d2f09c1b79a9bce3647a72e3dad97fe0aed8.

Reapply "Revise the way we are handling hosts_cached and joining the file"

This reverts commit a42b7ae69133fee24d3a5f1b456d9c343d111df9.

Revert some of my changes to get back to a better working state

Rename for loop to host in hosts_cached and remove unneeded continue

Remove job.get_hosts_for_fact_cache() from post run hook, fix if statement after continue block, and revise how we are calling hosts in finish for loop

Add test_invalid_host_facts to test_jobs to increase code coverage

Update method signature to use hosts_cached and updated other references to hosts in finish_facts_cached

Rename hosts iterator to hosts_cached to agree with naming elsewhere

Revise test_invalid_host_facts to get more code coverage

Revise test_invalid_host_facts to increase codecov

Revise test_pre_post_run_hook_facts_deleted_sliced to ensure we are hitting the assertionerror for code cov

Revise mock_inventory.hosts to hit assert failure

Add revision of hosts and facts to force failure to satisfy code cov

Fix failure in test_pre_post_run_hook_facts_deleted_sliced

Add back for loop to create failures and add assert to hit them

Remove hosts.iterator() from both start_fact_cache and finish_fact_cache

Remove unused import of Queryset to satisfy api-lint

Fix typo in docstring hasnot to has not

Move hosts_cached.append(host) to outer loop in start_fact_cache

Add class to help support cached hosts resolving host.name issue with hosts_cached

* Add live tests for ansible facts

Remove fixture needed for local work only maybe

Revert "Add class to help support cached hosts resolving host.name issue with hosts_cached"

This reverts commit 99d998cfb9960baafe887de80bd9b01af50513ec.

* Move hosts_cached.append(host) outside of try except

* Move hosts_cached.append(host) to the beginning of start_fact_cache

---------

Signed-off-by: onetti7 <davonebat@gmail.com>
Co-authored-by: onetti7 <davonebat@gmail.com>
Co-authored-by: Alan Rominger <arominge@redhat.com>
2025-04-10 11:46:41 -04:00
Alan Rominger
38f0f8d45f Remove pbr dependency (#15806)
* Remove pbr dependency

* Review comment, remove comment
2025-04-09 17:20:12 -04:00
Alan Rominger
d3ee9a1bfd AAP-27502 Try removing coreapi for deprecation warning (#15804)
Try removing coreapi for deprecation warning
2025-04-09 10:50:07 -07:00
TVo
438aa463d5 Remove L10N deprecation exception (#15925)
* Remove L10N deprecation exception

* Remove L10N from default settings file.
2025-04-09 06:22:01 -07:00
jessicamack
51f9160654 Fix CVE 2025-26699 (#15924)
fix CVE 2025-26699
2025-04-08 12:07:22 -04:00
Dave
ac3123a2ac Error reporting and handling in GH14575/GH12682 (#14802)
Bug Error reporting and handling in GH14575/GH12682

This targets a bug where a blank string was parsed as None for panelid
and dashboardid.

It also prints more error reporting to the console to help diagnose
reporting issues

Co-authored-by: Lila Yasin <lyasin@redhat.com>
2025-04-08 15:27:19 +00:00
Hao Liu
c4ee5127c5 Prevent automountServiceAccountToken in container group pod spec (#15586)
* Prevent job pod from mounting serviceaccount token

* Add serializer validation for cg pod_spec_override

Prevent automountServiceAccountToken from being set to true and provide an error message when it is
2025-04-03 12:58:16 -04:00
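
The check described above can be pictured with a minimal sketch, assuming PyYAML and illustrative names; this is not the actual AWX serializer code:

```python
import yaml  # assumes PyYAML is installed

def validate_pod_spec_override(value: str) -> str:
    """Reject an override that re-enables the service account token mount."""
    spec = yaml.safe_load(value) or {}
    if spec.get("spec", {}).get("automountServiceAccountToken") is True:
        raise ValueError("automountServiceAccountToken may not be set to true in pod_spec_override")
    return value

rejected = "spec:\n  automountServiceAccountToken: true"  # would raise ValueError
accepted = "spec:\n  nodeSelector:\n    disktype: ssd"    # passes through unchanged
print(validate_pod_spec_override(accepted))
```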
Hao Liu
9ec7540c4b Common setup-python in github action (#15901) 2025-04-03 11:14:52 -04:00
Hao Liu
2389fc691e Common action for setup ssh agent in GHA (#15902) 2025-04-03 11:14:33 -04:00
Konstantin
567f5a2476 Credentials: accept empty description (#15857)
Accept empty description
2025-04-02 17:05:38 +00:00
Jan-Piet Mens
e837535396 Indicate tower_cli.cfg can also be in current directory (#15092)
Update readme to provide more details on how tower_cli.cfg is used.

Co-authored-by: Hao Liu <44379968+TheRealHaoLiu@users.noreply.github.com>
2025-04-02 11:32:11 -04:00
Konstantin Kuminsky
1d57f1c355 Ability to remove credentials owned by a user 2025-04-02 16:48:27 +02:00
Konstantin
7676f14114 Accept empty description 2025-04-02 16:01:38 +02:00
Dirk Jülich
182e5cfaa4 AAP-37381 Apply password validators from settings.AUTH_PASSWORD_VALIDATORS correctly. (#15897)
* Move call to django_validate_password to the correct method where the user object is available.

* Added tests for the Django password validation functionality.
2025-04-01 12:03:11 +02:00
Fabio Alessandro Locati
99be91e939 Add notice of paused releases (#15900)
* Add notice of suspended releases

* Improve following suggestions
2025-03-27 19:14:21 +00:00
Chris Meyers
9ff163b919 Remove AsgiHandler deprecation exception
* Time has passed. Channels (4.2.0) no longer raises a deprecation
  warning for this case. It used to (4.1.0).
* We do NOT serve http requests over daphne, this is the default
  behavior of ProtocolTypeRouter() when the http param is NOT included
2025-03-27 11:40:40 -04:00
Chris Meyers
5d0d0404c7 Remove ProtocolTypeRouter deprecation exception
* Time has passed. Channels (4.2.0) no longer raises a deprecation
  warning for this case. It used to (4.1.0).
* All is good. No code changes needed for this. We do NOT service http
  requests over daphne, just websockets. We, correctly, do NOT supply
  the http key so daphne does NOT service http requests.
2025-03-27 11:17:56 -04:00
Chris Meyers
5d53821ce5 Aap 41580 indirect host count wildcard query (#15893)
* Support <collection_namespace>.<collection_name>.* indirect host query
  to match ANY module in the <collection_namespace>.<collection_name>
* Add tests for new wildcard indirect host count
* error checking of ansible event name
* error checking of ansible event query
2025-03-24 08:15:44 -04:00
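
As an illustration of the wildcard semantics described above (a sketch only; the commit does not show AWX's actual matcher), shell-style matching captures the intent:

```python
import fnmatch

def query_matches(query_key: str, resolved_action: str) -> bool:
    """True when an indirect host query key matches a fully-qualified action name."""
    return fnmatch.fnmatchcase(resolved_action, query_key)

assert query_matches("demo.collection.*", "demo.collection.some_module")
assert not query_matches("demo.collection.*", "other.collection.some_module")
```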
Seth Foster
39cd09ce19 Remove django settings module env var (#15896)
Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2025-03-18 20:20:26 -04:00
Jake Jackson
cd0e27446a Update bug scrub docs (#15894)
* Update docs with a few more things

* update about use of PAT
* update around managing output from the script

* Fix spacing and empty line

* finish run on sentence
* update requirements with extra dep needed
2025-03-18 11:06:37 -04:00
Hao Liu
628a0e6a36 Add opa_query_path to Organization/Inventory/JobTemplate (#15863) 2025-03-18 09:06:14 -04:00
Eric C Chong
8fb5862223 Allow lookup_organization to find teams and resources from different orgs 2025-03-12 13:32:51 -04:00
Sasa Jovicic
6f7d5ca8a3 Implement an option to choose a job type on relaunch (issue #14177) (#15249)
Allows changing the job type (run, check) when relaunching
a job by adding a "job_type" to the relaunch POST payload
2025-03-12 13:27:05 -04:00
Seth Foster
0f0f5aa289 Pass in private_data_dir when project update is on K8S
In OCP/K8S, projects run in the task pod's ee container. The private_data_dir is not extracted to /runner. Instead, the project update runs directly from the mounted-in private_data_dir, e.g. /tmp/awx_1_abcd.

When injecting a credential that uses extra vars, we pass the private_data_dir as the container_root, so that the correct command line argument is generated, e.g. "-e /tmp/awx_1_abcd/env/extra_var_file".

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2025-03-11 23:12:10 -04:00
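
The path handling the message describes amounts to rooting the extra vars file at the mounted directory instead of /runner; a stdlib-only sketch using the example value from the message:

```python
import os

private_data_dir = "/tmp/awx_1_abcd"    # example dir from the message
container_root = private_data_dir       # what the fix passes through on OCP/K8S
extra_vars_file = os.path.join(container_root, "env", "extra_var_file")
args = ["-e", extra_vars_file]          # -> -e /tmp/awx_1_abcd/env/extra_var_file
print(args)
```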
Alan Rominger
bc12fa2283 Fix indirect host counting task test race condition (#15871) 2025-03-11 14:46:39 -04:00
Pablo H.
03b37037d6 feat: awx community bug scrub script (#15869)
* feat: awx community bug scrub script

---------

Co-authored-by: thedoubl3j <jljacks93@gmail.com>
2025-03-11 14:07:54 -04:00
Alan Rominger
5668973d70 Allow schema generation on-demand (#15885) 2025-03-11 13:49:02 -04:00
Hao Liu
e6434454ce Fix CI schema gen (#15886)
Fix schema gen failure

```
ERROR: invalid empty ssh agent socket: make sure SSH_AUTH_SOCK is set
```
2025-03-11 16:57:36 +00:00
Alan Rominger
3ba9c026ea Pin drf-yasg to make api-test pass (#15887)
Pin drf-yasg to make api-test pass
2025-03-11 16:39:06 +00:00
Konstantin
a206ca22ec Change collection name back to awx 2025-03-11 10:14:30 +01:00
Konstantin Kuminsky
e961cbe46f Few minor changes in the lookup description 2025-03-11 10:14:30 +01:00
Bruno Rocha
0ffe04ed9c feat: Manage Django Settings with Dynaconf
Dynaconf is being added from DAB factory to load Django Settings
2025-03-07 10:18:27 -05:00
Alan Rominger
ee739b5fd9 Fix test flake due to host metric id enumeration (#15875) 2025-03-06 14:35:39 -05:00
Peter Braun
abc04e5c88 feat: do not count dark hosts as updated (#15872)
* feat: do not count dark hosts as updated

* update functional tests
2025-03-06 09:41:12 +01:00
Peter Braun
5b17e5c9c3 update: use singular form ANSIBLE_COLLECTIONS_PATH (#15841)
* update: use singular form ANSIBLE_COLLECTIONS_PATH

* update functional tests
2025-03-05 16:39:34 +01:00
Peter Braun
7b8b37d9a8 fix: audit record name should not be the hostname (#15864)
* fix: audit record name should not be the hostname

* fix: update tests
2025-02-27 13:43:59 +01:00
Hao Liu
43b72161ce Remove requirements_git.credentials.txt (#15862)
Switched to ssh based auth for requirements_git in https://github.com/ansible/awx/pull/15838
2025-02-25 18:44:36 +00:00
Marc Hassan
de4a971cb3 cli: set non-zero return code for canceled status (#15678) 2025-02-25 11:11:47 -05:00
Alan Rominger
fb4879b2c9 Add missing slash breaking image builds (#15861) 2025-02-24 20:40:09 -05:00
Alan Rominger
7d30dff075 Feature indirect host counting (#15802)
* AAP-37282 Add parse JQ data and test it for a `job` object in isolation (#15774)

* Add jq dependency

* Add file in progress

* Add license for jq

* Write test and get it passing

* Successfully test collection of `event_query.yml` data (#15761)

* Callback plugin method from cmeyers adapted to global collection list

Get tests passing

Mild rebranding

Put behind feature flag, flip true in dev

Add noqa flag

* Add missing wait_for_events

* feat: try grabbing query files from artifacts directory (#15776)

* Contract changes for the event_query collection callback plugin (#15785)

* Minor import changes to collection processing in callback plugin

* Move agreed location of event_query file

* feat: remaining schema changes for indirect host audits (#15787)

* Re-organize test file and move artifacts processing logic to callback (#15784)

* Rename the indirect host counting test file

* Combine artifacts saving logic

* Connect host audit model to jq logic via new task

* Add unit tests for indirect host counting (#15792)

* Do not get django flags from database (#15794)

* Document, implement, and test remaining indirect host audit fields (#15796)

* Document, implement, and test remaining indirect host audit fields

* Fix hashing

* AAP-39559 Wait for all event processing to finish, add fallback task (#15798)

* Wait for all event processing to finish, add fallback task

* Add flag check to periodic task

* feat: cleanup of old indirect host audit records (#15800)

* By default, do not count indirect hosts (#15801)

* By default, do not count indirect hosts

* Fix copy paste goof

* Fix linter issue from base branch

* prevent multiple tasks from processing the same job events, prevent p… (#15805)

prevent multiple tasks from processing the same job events, prevent periodic task from spawning another task per job

* Fix typos and other bugs found by Pablo review

* fix: rely on resolved_action instead of task, adapt to proposed query… (#15815)

* fix: rely on resolved_action instead of task, adapt to proposed query structure

* tests: update indirect host tests

* update remaining queries to new format

* update live test

* Remove polling loop for job finishing event processing (#15811)

* Remove polling loop for job finishing event processing

* Make awx/main/tests/live dramatically faster (#15780)

* AAP-37282 Add parse JQ data and test it for a `job` object in isolation (#15774)

* Add jq dependency

* Add file in progress

* Add license for jq

* Write test and get it passing

* Successfully test collection of `event_query.yml` data (#15761)

* Callback plugin method from cmeyers adapted to global collection list

Get tests passing

Mild rebranding

Put behind feature flag, flip true in dev

Add noqa flag

* Add missing wait_for_events

* feat: try grabbing query files from artifacts directory (#15776)

* Contract changes for the event_query collection callback plugin (#15785)

* Minor import changes to collection processing in callback plugin

* Move agreed location of event_query file

* feat: remaining schema changes for indirect host audits (#15787)

* Re-organize test file and move artifacts processing logic to callback (#15784)

* Rename the indirect host counting test file

* Combine artifacts saving logic

* Connect host audit model to jq logic via new task

* Document, implement, and test remaining indirect host audit fields (#15796)

* Document, implement, and test remaining indirect host audit fields

* Fix hashing

* AAP-39559 Wait for all event processing to finish, add fallback task (#15798)

* Wait for all event processing to finish, add fallback task

* Add flag check to periodic task

* feat: cleanup of old indirect host audit records (#15800)

* prevent multiple tasks from processing the same job events, prevent p… (#15805)

prevent multiple tasks from processing the same job events, prevent periodic task from spawning another task per job

* Remove polling loop for job finishing event processing (#15811)

* Remove polling loop for job finishing event processing

* Make awx/main/tests/live dramatically faster (#15780)

* temp

* remove test

* reorder migrations to allow indirect instances backport

* cleanup for rebase and merge into devel

---------

Co-authored-by: Peter Braun <pbraun@redhat.com>
Co-authored-by: jessicamack <jmack@redhat.com>
Co-authored-by: Peter Braun <pbranu@redhat.com>
2025-02-24 16:39:51 +00:00
Alan Rominger
0ba9fc6980 More PyGithub dep and license housekeeping (#15853) 2025-02-24 08:41:54 -05:00
Andrea Restle-Lay
70ea0a785b revert change made to allow UI to accept x-access-token, just use htt… (#15851)
revert change made to allow UI to accept x-access-token, just use https:// instead
2025-02-21 21:10:22 +00:00
Jake Jackson
fa099fe737 Add Github dep for new cred support if used (#15850)
* Add pygithub for new app token support

* fixed git requirements file with new
* added new github dep and relevant deps it needs

* add required licenses

* Add artifacts to satisfy license check

* Remove duplicated license

---------

Co-authored-by: Andrea Restle-Lay <arestlel@redhat.com>
Co-authored-by: Alan Rominger <arominge@redhat.com>
2025-02-20 21:16:02 +00:00
Andrea Restle-Lay
bf4d45452c feat: 38589 GitHub App Authentication (#15807)
* feat: 38589 GitHub App Authentication

Allows both git@<personal-token> and x-access-token@<github-access-token> when authenticating using git.
This allows GitHub App tokens to work without interfering with existing authentication types.

---------

Co-authored-by: Jake Jackson <thedoubl3j@Jakes-MacBook-Pro.local>
2025-02-19 23:13:45 +00:00
jessicamack
e56752d55b Ship analytics data using service account token (#15812)
Use oidc client to ship analytics data
2025-02-19 16:38:47 -05:00
Alan Rominger
3495c421c1 Fix: do not use current apps in migrations (#15839) 2025-02-19 07:58:24 -05:00
Hao Liu
8145de3917 Switch to ssh key for private requirements_git (#15838) 2025-02-17 23:58:12 -05:00
Hao Liu
4487f2afa7 Use correct devel image for docker-compose (#15836) 2025-02-13 21:16:40 -05:00
Hao Liu
c886f57119 Continue if pre-warm cache fails in container build (#15835)
Continue if pre-warm cache fails
2025-02-13 21:16:25 -05:00
Hao Liu
2e9fd7bd67 Fix git credential for devel_image build (#15834) 2025-02-13 16:08:57 -05:00
Hao Liu
f8ff48fe5c Add ability to provide token for private repo for requirements_git in container build (#15831)
Add ability to provide auth to private repo for requirements_git
2025-02-12 19:20:13 +00:00
Hao Liu
69a60493a3 Update feature flag list test (#15830) 2025-02-12 18:29:52 +00:00
Hao Liu
e7440c6074 Publish image base on git repo name instead of hard coded to AWX (#15828) 2025-02-10 20:12:06 +00:00
Alan Rominger
7d2b2d672c Make awx/main/tests/live dramatically faster (#15780)
* Make awx/main/tests/live dramatically faster

* Add new setting to exclude list
2025-02-08 21:07:56 -05:00
Alan Rominger
26346d237d Fix rsyslog permission error in github ubuntu tests from apparmor (#15717)
* Add test to detect rsyslog config problems

* Get dmesg output

* Disable apparmor for rsyslogd
2025-02-05 08:38:55 -05:00
Seth Foster
c2e5425d93 Fix rrule fast forwarding across DST boundaries (#15809)
Fixes an issue where schedules were not running at
the correct time.

Details:

DST is Daylights Saving Time

If the rrule dtstart is "in" a DST period (i.e., March to November)
and the current date is outside of the DST, then the fast forwarding
is not correct.

This is because datetime timedeltas do not honor DST boundaries

The Fix:

Convert the rrule dtstart to UTC before doing operations. Then,
convert back to the original timezone at the end.

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2025-02-04 15:19:49 -05:00
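
A minimal sketch of the mechanism the fix describes, assuming stdlib `zoneinfo` and an illustrative fixed step (not AWX's actual fast-forward code): convert to UTC before operating, then convert back at the end.

```python
from datetime import datetime, timedelta, timezone
from zoneinfo import ZoneInfo

def fast_forward(dtstart: datetime, step: timedelta, now: datetime) -> datetime:
    """Advance dtstart toward `now` in UTC so steps are not skewed by DST."""
    original_tz = dtstart.tzinfo
    current = dtstart.astimezone(timezone.utc)   # convert before doing operations
    now_utc = now.astimezone(timezone.utc)
    while current + step <= now_utc:
        current += step
    return current.astimezone(original_tz)       # convert back to original timezone

# dtstart inside DST, "now" after the fall transition:
dtstart = datetime(2024, 7, 30, 9, 0, tzinfo=ZoneInfo("America/New_York"))
now = datetime(2024, 12, 10, 12, 0, tzinfo=ZoneInfo("America/New_York"))
print(fast_forward(dtstart, timedelta(days=7), now))
```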
Hao Liu
15932e3f7c Set feature flag base on setting (#15808) 2025-02-04 11:00:05 -05:00
Zack Kayyali
a74e7301cd [AAP-39138] - Add DAB Feature Flag common API (#15786)
* Add DAB Feature Flag common API

* Use updated API /feature_flags_state/

* fix git reference

* organization updates
2025-02-03 11:40:16 +01:00
Alan Rominger
30b0c19e72 AAP-38528 Make default state passing for coverage targets (#15772)
* Make default state passing for coverage targets

* Implement Pablo suggestion for patch vs project

* bump up based on current values
2025-01-31 08:40:40 -05:00
Chris Meyers
b53c576944 Add helper to proxy analytics requests
* Handles OIDC token creation and usage transparently
2025-01-30 19:15:41 -05:00
Alan Rominger
c418bc034f Put duplicate plugin location in error message (#15781) 2025-01-29 17:15:02 -05:00
Alan Rominger
d639953a4c Removing some (but not all) dead pytest fixtures (#15782)
* Try removing dead fixtures

* Add back in EE fixture
2025-01-29 17:13:31 -05:00
Jake Jackson
c6930bdf32 Address Lookup Plugins AttributeError (#15770)
* fix backend attribute error

* managed credentials may now contain 2 different classes
* ManagedCredentialType and one that represents a lookup plugin

* conditionalize creation params

* added a conditional statement to filter out external types

* all external credentials are managed by awx/aap
2025-01-29 10:27:51 -05:00
Peter Braun
d36cd6c6ab fix: compatibility with black v25+ (#15789) 2025-01-29 15:06:14 +00:00
Jake Jackson
aa0f2e362b Remove old jwcrypto tar from licenses since we included the upgraded version (#15783)
remove old jwcrypto tar file since we updated

* remove old version since we updated to a later version, old version is
  no longer needed
2025-01-29 09:48:27 -05:00
Lila Yasin
00238850f4 Remove Docker Desktop if statement (#15778)
Remove docker desktop if statement
2025-01-28 13:43:00 -05:00
Alan Rominger
1f503645fd Move some more tests out of root functional folder (#15753) 2025-01-24 15:04:59 -05:00
Chris Meyers
ad706d67c2 Add ee cleanup tests
* Adds cleanup tests to the live test.
2025-01-24 10:55:30 -05:00
Alan Rominger
534c312328 Use upload artifact v4
unique-ify name

psh, who needs loops

Folder management

Extracts into current path
2025-01-23 16:56:19 -05:00
thedoubl3j
a270b9b474 remove old psycopg tar file that we do not use 2025-01-23 16:56:19 -05:00
Adrià Sala
ada42d7d7c fix: azure credential awxkit client_id collision 2025-01-22 09:47:35 +01:00
jessicamack
4eed454ed7 Establish a feature flag for indirect host counting feature (#15759)
* add feature flag for indirect node counting

* fix name of flag
2025-01-21 20:00:57 +01:00
Alan Rominger
c43dfde45a Create test for using manual & file projects (#15754)
* Create test for using a manual project

* Change default project factory to git, remove project files monkeypatch

* skip update of factory project

* Initial file scaffolding for feature

* Fill in galaxy and names

* Add README, describe project folders and dependencies
2025-01-20 17:06:15 -05:00
Adrià Sala
46403e4312 fix: invalid f-string and oidc url for insights plugin 2025-01-20 17:48:19 +01:00
Adrià Sala
492c7a1af6 feat: support insights service account credentials for project update 2025-01-17 16:30:07 +01:00
Adrià Sala
a19e1ba28f feat: update insights action plugin to handle oauth (#15742) 2025-01-16 10:44:40 +01:00
Jake Jackson
f05173cb65 Add new credential entry point discovery (#15685)
* - add new entry points
- add logic to check what version of the project is running

* remove former discovery method

* update custom_injectors and remove unused import

* fix how we load external creds

* remove stale code to match devel

* fix cloudforms test and move credential loading

* add load credentials method to get tests passing

* Conditionalize integration tests if the cred is present

* remove inventory source test

* inventory source is covered in the workflow job template target
2025-01-15 16:10:28 -05:00
Chris Meyers
e106e10b49 Add changelog to awx collection 2025-01-15 15:09:28 -05:00
Alan Rominger
f57a9863d6 Use advisory_lock from DAB (#15676)
* Use advisory_lock from DAB

* Remove the django-pglocks dep

* Re-run updater script

* Move the import in new location
2025-01-15 14:06:59 -05:00
Chris Meyers
bb8d878a36 Bump awx collection ansible required version 2025-01-15 13:49:09 -05:00
Chris Meyers
885cb8846f Remove coarse grain unused import
* It would seem that fine-grain noqa pylint ignores do the job and are
  already in place. Prefer that over the coarse entire file ignore.
2025-01-15 13:49:09 -05:00
Chris Meyers
d51d4eb392 Fix ansible-lint empty lines in module docstrings 2025-01-15 13:49:09 -05:00
Chris Meyers
ae0d6b70a0 Fix ansible-lint truthy in module docstrings 2025-01-15 13:49:09 -05:00
Chris Meyers
cc6337b344 Fix ansible-lint indentation in module docstrings 2025-01-15 13:49:09 -05:00
Chris Meyers
c185ff51a7 Fix editable dependencies volume name
Spelling of docker volume fix.
2025-01-15 13:48:57 -05:00
Lila Yasin
211339ce73 Add client_secret and client_id to credential_input_fields (#15734) 2025-01-15 13:32:33 -05:00
Alan Rominger
c45eb43d63 Update logstash container image and remove ELK stack (#15744)
* Migrate to new image for logstash container

* Remove ELK stack tooling I will not maintain
2025-01-15 07:43:38 -05:00
Hao Liu
f89be5ec8b Switch from dockerhub to gcr mirror (#15743) 2025-01-13 17:03:24 -05:00
Chris Meyers
8ab89d29ca bust the cache 2025-01-13 15:01:18 -05:00
Chris Meyers
ec2966225b Add insights service account support to collection 2025-01-13 15:01:18 -05:00
Alan Rominger
fb12c834eb Add test to ensure bootstrap reqs are good (#15733)
* Add test to ensure bootstrap reqs are good

* Give full diff list in assert
2025-01-13 14:31:19 -05:00
rev3r4nt
6228fe9b66 Add input_inventories to ordered_associations (#15710) 2025-01-13 12:30:16 +01:00
Alan Rominger
c1572af1d4 Fix dependency upgrades (#15740)
* Update dependencies to fix offline build

* Downgrade cryptography due to compatibility issue with openssl

* Downgrade setuptools

* Run update script to assure constraints work

* Maintain pin on cryptography

* Small adjustment to comment

---------

Co-authored-by: Satoe Imaishi <simaishi@redhat.com>
2025-01-10 21:18:48 +00:00
Alan Rominger
3e50b019e0 Delete test file that should have been removed and fix checks (#15739)
* Delete test file that should have been removed

* Add more insights env variables
2025-01-10 13:10:24 -05:00
Alan Rominger
2d7bbc4ec8 AAP-37080 Delete the cleanup_tokens system job template (#15711)
Delete the cleanup_tokens system job template
2025-01-10 10:33:35 -05:00
Roman Petrakov
56079612c8 Fix API documentation rendering (#15116) (#15726)
* Fix API documentation rendering, related to #15116

* Fix tests and formatting issues #15116
2025-01-07 15:09:18 -05:00
Alan Rominger
2186c24c8f General upgrade of dependencies (#15705)
* General upgrade of dependencies

* adjust licenses to match requirements

* add missing licenses

* another pass to fix licenses

* Try easy fix for psycopg encoding pattern change

---------

Co-authored-by: jessicamack <jmack@redhat.com>
2025-01-07 15:03:43 -05:00
Alan Rominger
9a5ed20ed5 AAP-37989 Tests for exclude list with multiple jobs (#15722)
* Tests for exclude list with multiple jobs
2025-01-02 16:38:09 -05:00
Alan Rominger
2657ea840b Disable color logs in CI (#15719)
* Disable color logs in CI

* Disable management command color
2025-01-02 16:19:59 -05:00
Sasa Jovicic
7835e39bac Bugfix: adjust incorrectly passed keywords with exclude-strings argument (#15721)
* Fix incorrectly passed keywords with exclude-strings arg to ansible-runner worker cleanup command

Signed-off-by: Sasa Jovicic <jovicic.sasa@hotmail.com>

* Keep the quotes for each arg and adjust test_receptor

---------

Signed-off-by: Sasa Jovicic <jovicic.sasa@hotmail.com>
2025-01-02 16:16:14 -05:00
Alan Rominger
14808cb99b Move RBAC functional tests into folder (#15723) 2024-12-20 14:54:52 -05:00
Chris Meyers
cf9e6796ea Move cred type unit tests to awx-plugins 2024-12-19 14:03:27 -05:00
Chris Meyers
bd96000494 Remove inject_credential from awx
* Consume inject_credential from its new home, awx_plugins.interfaces
2024-12-19 09:48:47 -05:00
Chris Meyers
ac34e14228 Point at inject credentials 2024-12-19 09:48:47 -05:00
Andrea Restle-Lay
1b418f75e6 AAP-36604 (analytics) Thousands of zombie/orphaned Slow/Stuck DB queries in controller querying active host count (#15715)
* lint

* change timeout to 5 minutes

* change timeout to 5 minutes
2024-12-18 22:12:52 +00:00
Alan Rominger
288e8d78d3 Cleanup in-memory data from test that randomly causes other failures (#15716) 2024-12-18 16:59:42 -05:00
Alan Rominger
c0158181c3 Fix test warnings that escaped somehow (#15714) 2024-12-18 15:21:53 -05:00
Alan Rominger
65b104e1f9 Upload container logs for live tests (#15713)
* Upload container logs for live tests

* Get rid of dash that does nothing
2024-12-18 20:17:01 +00:00
Elijah DeLee
29f36793de Min value should be Decimal (#15413)
This hopefully resolves the error message sometimes seen in logs about "should be Decimal type"
2024-12-17 12:45:39 -05:00
Alan Rominger
36c75a2c62 AAP-36536 Send job_lifecycle logs to external loggers (#15701)
* Send job_lifecycle logs to external loggers

* Include structured data in message

* Attach the organization_id of the job
2024-12-16 15:49:16 -05:00
Andrea Restle-Lay
86d202456a host_metrics date fix to make summary dates (datetime.datetime) comparable to month: datetime.date (#15704)
* host_metrics date fix

* AAP-36839 Remove excess comments

* fix extra date() conversion

* actual fix

* datetime is a library, use datetime.datetime

---------

Co-authored-by: Andrea Restle-Lay <arestlel@arestlel-thinkpadx1carbongen9.rht.csb>
2024-12-16 12:10:59 -05:00
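
The type mismatch behind this fix is easy to reproduce: a `datetime.datetime` and a `datetime.date` never compare equal, and ordering them raises `TypeError`, so the summary timestamp has to be normalized first:

```python
import datetime

summary = datetime.datetime(2024, 12, 1, 8, 30)  # summary date (datetime.datetime)
month = datetime.date(2024, 12, 1)               # month bucket (datetime.date)

assert summary != month           # cross-type equality is always False
assert summary.date() == month    # comparable once normalized with .date()
# "summary < month" would raise TypeError: can't compare datetime to date
```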
jessicamack
c1f0a831ff Pull the correct collection plugin for the product (#15658)
* pull the correct collection plugin for the product

* remove unused import and logging line

* refactor code to load entry points

* reformat method

* lint fix

* renames for clarity and a lint fix

* move function to utils

* move the rest of the code into load_inventory_plugins

* temp - confirm that tests will pass

* revert change caught in merge

* change back requirement

the related PR has been merged
2024-12-16 11:05:21 -05:00
Seth Foster
e605883592 Do not fast forward rrule if count is set (#15696)
Fixes a bug where a schedule that was created
to run only once will continue to run repeatedly.

e.g. an rrule with
dtstart 20240730; count 1; freq MINUTELY

This job will run on 20240730, and should never
run again.

However, the next time the schedule
update_computed_fields runs, the dtstart
will fast forward to today's date, and
next_run will be computed from that. This will trigger
the job to run again, which is not intended.

If count is set, we just should not fast forward the
rrule and always calculate next_run based on original
dtstart.

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2024-12-11 11:14:58 -05:00
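
The guard described here reduces to a simple predicate on the rule; a sketch assuming the rule is available in RFC 5545 text form:

```python
def should_fast_forward(rrule_text: str) -> bool:
    """A COUNT-limited rule is anchored to its original dtstart; never fast forward it."""
    return "COUNT=" not in rrule_text.upper()

assert not should_fast_forward("DTSTART:20240730T000000Z RRULE:FREQ=MINUTELY;COUNT=1")
assert should_fast_forward("DTSTART:20240730T000000Z RRULE:FREQ=MINUTELY;INTERVAL=5")
```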
Alan Rominger
f377b5fdde Use runtime log utility moved to DAB (#15675)
* Use runtime log utility moved to DAB
2024-12-11 10:38:24 -05:00
Seth Foster
efbe729c42 bump sqlparse to meet DAB requirement (#15697)
Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2024-12-10 18:20:14 -05:00
Alan Rominger
32122e6822 Fix misused project cache identifier (#15690)
Fix project cache identifiers for new updates

Finish test and discover viable solution

Add comment on related task code
2024-12-10 15:26:17 -05:00
Chris Meyers
a129bc860b Flake8 fix 2024-12-10 13:02:09 -05:00
Chris Meyers
c82a8f4b9c Add custom_injectors to test code path
* Unit tests do not create CredentialType records for Credential
  plugins. Instead, they explicitly instantiate CredentialType(s) for
  Credential plugins. They rely on CredentialType.defaults[key] to do
  so. This change makes sure custom_injectors get bolted onto the
  created CredentialType.
2024-12-10 13:02:09 -05:00
Chris Meyers
99c18b681d Load all plugins in order to test them 2024-12-10 13:02:09 -05:00
Chris Meyers
aeca9db470 Rename post_injectors to custom_injectors 2024-12-10 13:02:09 -05:00
Chris Meyers
4b85e7e25a Adopt post_injectors change from awx-plugins 2024-12-10 13:02:09 -05:00
Alan Rominger
325b6d3997 Fix missing exception catch in create_partition (#15691)
Fix error creating partition due to uncaught exception

the primary fix is to simply add an exception class
  to those caught in the except block

This also adds live tests for the general scenario
  although this does not hit the new exception type
2024-12-09 20:57:14 -05:00
Alan Rominger
91d92a6636 Make dev script work in combined environment (#15684) 2024-12-09 09:07:17 -05:00
Alan Rominger
1a35775c25 Create a new pytest folder for live system testing with normal services (#15688)
* PoC for running dev env tests

* Replace in github actions

* Try non interactive

* Move folder to better location

* Further streamlining of new test folders

* Consolidate fixture, add writeup docs

* Use star import

* Push the wait-for-job to the conftest
2024-12-06 09:20:26 -05:00
linuxonfire
698a8aeb62 Update defaults.py receptor typo (#15682)
Update defaults.py

fixing typo for RECEPTOR_KEEP_WORK_ON_ERROR
2024-12-04 17:11:58 +00:00
Peter Braun
055d853c54 fix: reset state before evaluating named urls (#15683) 2024-12-04 15:53:24 +01:00
Hao Liu
cb04ad8ef5 Fix receptor work unit release after completion (#15679)
Fix bug introduced by https://github.com/ansible/awx/pull/15392 that caused work units to NOT be auto-released after the job completes
2024-12-03 11:42:09 -05:00
Peter Braun
f62dfdad2d feat: enable django flags support (#15660)
* feat: enable django flags support

* add django flags license

* re-run updater script
2024-12-03 14:33:10 +01:00
Don Naro
3ceca1b4c7 use subproject url prefix (#15681)
* use subproject url prefix

* add version details
2024-12-03 12:01:28 +00:00
Don Naro
cdb294c5c7 Add the Sphinx notfound page extension (#15669)
* add sphinx notfound extension
* add notfound conf
* upgrade requirements
* use double backticks
* add urls prefix
2024-12-03 10:26:06 +00:00
Alan Rominger
c64b5eb462 Fix missing dependencies due to extras - vs _ (#15677)
Fix missing dependencies
2024-12-02 13:32:27 -05:00
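
For context on why `-` versus `_` can break resolution: pip compares distribution and extra names in PEP 503 normalized form, where runs of `-`, `_`, and `.` are equivalent. A quick illustration:

```python
import re

def canonicalize_name(name: str) -> str:
    """PEP 503 normalization: collapse runs of '-', '_', '.' to a single '-'."""
    return re.sub(r"[-_.]+", "-", name).lower()

assert canonicalize_name("some_package") == canonicalize_name("some-package")
```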
Alan Rominger
adc2162bac Ignore warnings so people can run tests on python 3.12 (#15663) 2024-12-02 11:53:04 -05:00
Chris Meyers
e411f3534f Decouple inject_credentials from dynamic inputs
* Preparation for moving inject_credentials out of this repo
2024-12-02 11:32:46 -05:00
Don Naro
699c0c769d add custom 404 page (#15668)
* add custom 404 page

* cowsay 404
2024-11-26 12:04:55 -07:00
Pablo H.
268ca7c78a Remove oauth provider (#15666)
* Remove oauth provider

This removes the oauth provider functionality from awx. The
oauth2_provider app and all references to it have been removed.
Migrations to delete the two tables that locally overwrote
oauth2_provider tables are included. This change does not include
migrations to delete the tables provided by the oauth2_provider app.

Also not included here are changes to awxkit, awx_collection or the ui.

* Fix linters

* Update migrations after rebase

* Update collection tests for auth changes

The changes in https://github.com/ansible/awx/pull/15554 will cause a
few collection tests to fail, depending on what the test configuration
is. This changes the tests to look for a specific warning rather than
counting the number of warnings emitted.

* Update migration

* Removed unused oauth_scopes references

---------

Co-authored-by: Mike Graves <mgraves@redhat.com>
Co-authored-by: Alan Rominger <arominge@redhat.com>
2024-11-26 18:59:37 +01:00
Alan Rominger
789a43077f Address unclosed fd warnings 2024-11-25 14:01:21 -05:00
Sviatoslav Sydorenko
d8e87da898 🧪 Make pytest notify us about future warnings
In essence, this configures Python to turn any warnings emitted at
runtime into errors[[1]]. This is the best practice that allows
reacting to future deprecation announcements that are coming from the
dependencies (direct, or transitive, or even CPython itself)[[2]].

The typical workflow looks like this:

  1. If a dependency is updated and a warning is hit in tests, the
     deprecated thing should be replaced with newer APIs.

  2. If a dependency is transitive or we have no control over it
     otherwise, the specific warning and a regex matching its message,
     plus the module reference (where possible) can be added to the
     list of temporary ignores in `pytest.ini` (see the sketch after
     this entry).

  3. The list of temporary ignores should be reevaluated periodically,
     including when dependency re-pinning in lockfile is happening.

[1]: https://docs.python.org/3/using/cmdline.html#cmdoption-W
[2]: https://pytest-with-eric.com/configuration/pytest-ignore-warnings/
2024-11-25 14:01:21 -05:00
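
The `pytest.ini` ignore list in step 2 maps onto Python's own warning filters; a rough stdlib equivalent of "error by default, with one targeted temporary ignore" looks like this (module and message are hypothetical):

```python
import warnings

warnings.simplefilter("error")  # escalate every warning to an exception
warnings.filterwarnings(
    "ignore",
    message=r".*old API is deprecated.*",  # regex matching the specific message
    category=DeprecationWarning,
    module=r"some_third_party_module",     # hypothetical module name
)
```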
Lila Yasin
4bbcb34ae3 Add descriptions for plugin names (#15643)
* Add descriptions for plugin names

* Update serializers to display plugin and plugin description

* Add function to extract plugin name descriptions

* Add description for scm

* Conditionalize scm and file descriptions
2024-11-25 09:20:22 -05:00
TVo
790875ceef Removed UI-focused user docs from AWX. (#15641)
* Replaced with larger graphic.

* Revert "Replaced with larger graphic."

This reverts commit 1214b00052.

* Removed UI-focused user docs from AWX.

* Fixed indentation for release notes

* Removed/updated image files no longer needed.
2024-11-22 07:43:43 -07:00
Alan Rominger
d2cd4e08c5 Do not check error state if null (#15655) 2024-11-22 07:47:48 -05:00
Alan Rominger
ce7911e578 Revive the logstash container for testing (#15654)
* Revive the logstash container for testing

* yamllint
2024-11-21 20:11:16 +00:00
Seth Foster
51896f0e1b Make rrule fast forwarding stable (#15601)
By stable, we mean future occurrences of the rrule
should be the same before and after the fast forward
operation.

The problem before was that we were fast forwarding to
7 days ago. For some rrules, this does not retain the old
occurrences. Thus, jobs would launch at unexpected times.

This change makes sure we fast forward in increments of
the rrule INTERVAL, thus the new dtstart should be in the
occurrence list of the old rrule.

Additionally, code is updated to fast forward
EXRULE (exclusion rules) in addition to RRULE

---------

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
Co-authored-by: 🇺🇦 Sviatoslav Sydorenko (Святослав Сидоренко) <wk.cvs.github@sydorenko.org.ua>
2024-11-21 14:05:49 -05:00
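
One way to realize the invariant described above (a sketch assuming `python-dateutil`, not necessarily the shape of AWX's code) is to pick the newest past occurrence of the original rule as the new dtstart:

```python
from datetime import datetime
from dateutil.rrule import rrulestr

def stable_fast_forward_dtstart(rrule_text: str, now: datetime) -> datetime:
    """Return a new dtstart that is itself an occurrence of the old rule."""
    rule = rrulestr(rrule_text)
    # .before() walks the rule's own occurrence list, so the result is by
    # construction in the occurrence list of the old rrule.
    return rule.before(now)

print(stable_fast_forward_dtstart(
    "DTSTART:20230101T090000\nRRULE:FREQ=WEEKLY;INTERVAL=3",
    datetime(2024, 11, 21),
))
```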
Pablo H.
3ba6e2e394 feat: remove collection support for oauth (#15623)
Co-authored-by: Alan Rominger <arominge@redhat.com>
2024-11-20 11:18:52 -05:00
Alan Rominger
6599f3f827 Removal of OAuth2 stuff from CLI
also from awxkit generally

Remove login command
2024-11-20 11:18:52 -05:00
Alan Rominger
670b7e7754 Fix server error from system job detail view (#15640) 2024-11-19 12:53:32 -05:00
Chris Meyers
108cf843d4 Add option to skip credential type discovery
* Option to avoid database operations in django init path. Useful for
  running collectstatic, or other management commands, without a database.
2024-11-18 14:18:24 -05:00
Alan Rominger
d26396ce74 Fix error with CLI monitor of ad hoc output (#15642) 2024-11-18 13:52:20 -05:00
Alan Rominger
3dbcfb138c Add test that resource list does not server error (#15635) 2024-11-15 11:12:18 -05:00
Peter Braun
54487573f3 fix: invalid response type on post request (#15609) 2024-11-14 12:58:55 +01:00
Alan Rominger
989a4387df Set coverage limits so we do not have current failures (#15629)
* Set coverage limits so we do not have current failures

* Fix to reflect changed number of jobs
2024-11-12 13:50:12 -05:00
Alan Rominger
c9f880414c Make lookup plugins return lists to fix failures (#15625)
* Make lookup plugins return lists to fix failures

* Update unit tests

* Use lookup for test failures, update docs

* Grammar fix from review

Co-authored-by: Sviatoslav Sydorenko (Святослав Сидоренко) <wk.cvs.github@sydorenko.org.ua>

---------

Co-authored-by: Sviatoslav Sydorenko (Святослав Сидоренко) <wk.cvs.github@sydorenko.org.ua>
2024-11-12 12:37:38 -05:00
Lila Yasin
6f184e3f76 Fix for 'relation "social_auth_usersocialauth" does not exist' error (#15626)
* Ran updater.sh

* Remove unneeded licenses
2024-11-11 14:40:33 -05:00
Peter Braun
69baa739fa feat: install awx collection from source (#15617) 2024-11-11 12:30:39 +01:00
Chris Meyers
d388f91bcd Metrics dispatcher callback receiver swaparoo 2024-11-08 00:06:17 -05:00
Chris Meyers
51b1fa412d Install awx collection from branch for operator ci 2024-11-07 15:17:06 -05:00
TVo
dfee5a1821 Updated Authentication section to reflect AWX only method. (#15602)
* Updated Authentication section to reflect AWX only method.

* Update awxkit/awxkit/cli/docs/source/authentication.rst

---------

Co-authored-by: Helen Bailey <hakbailey@gmail.com>
2024-11-07 16:02:48 +00:00
TVo
aa162c6128 Removed oAuth methods from collection docs. (#15606)
* Removed oAuth methods from collection docs.
2024-11-07 15:58:31 +00:00
Alan Rominger
f4cbb9f9a8 Fix bug where unrelated jobs were linked as dependencies (#15610) 2024-11-06 14:43:36 -05:00
Peter Braun
6195e8e879 fix: increase max verbosity level for constructed inventory (#15604) 2024-11-05 16:44:21 +01:00
Alan Rominger
68055bb89f Add back git requirements as comments & re-run script (#15317)
* Add back git requirements as comments

* Add comment to commented out git lines for clarity

* Re run the updater script

* Add new licenses

* Fix library name
2024-10-28 19:44:06 -04:00
Lila Yasin
e21dd0a093 Make cloud providers dynamic (#15537)
* Add dynamic pull for cloud inventory plugins and update corresponding tests

Co-authored-by: Sviatoslav Sydorenko (Святослав Сидоренко) <wk.cvs.github@sydorenko.org.ua>

* Create third dictionary to preserve current functionality and add 'file' there

* Migrations for corresponding change

---------

Co-authored-by: Sviatoslav Sydorenko (Святослав Сидоренко) <wk.cvs.github@sydorenko.org.ua>
2024-10-23 11:30:00 -04:00
Seth Foster
c85fa70745 bump django 4.2.16 to be in line with DAB (#15596)
Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2024-10-22 15:40:18 -04:00
Mike Graves
764dcbf94b Add gateway support to awxkit (#15576)
* Add gateway support to awxkit

This updates awxkit to add support for gateway when fetching oauth
tokens, which is used during the `login` subcommand. awxkit will first
try fetching a token from gateway and if that fails, fallback to
existing behavior. This change is backwards compatible.

Signed-off-by: Mike Graves <mgraves@redhat.com>

* Address review feedback

This:
  * adds coverage for the get_oauth2_token() method
  * changes AuthUrls to a TypedDict
  * changes the url used for personal token access in gateway

* Address review feedback

This is just minor stylistic changes.

---------

Signed-off-by: Mike Graves <mgraves@redhat.com>
2024-10-16 12:01:30 -04:00
jessicamack
42420ebde6 remove oauth use 2024-10-16 10:50:34 -04:00
Hao Liu
31e47706b9 3rd party auth removal cleanup
- Sequentialize auth config removal migrations
- Remove references to third party auth
- update license files
- lint fix
- Remove unneeded docs
- Remove unreferenced file
- Remove social auth references from docs
- Remove rest of sso dir
- Remove references to third party auth in docs
- Removed screenshots of UI listing removed settings
- Remove AuthView references
- Remove unused imports
...

Co-Authored-By: jessicamack <21223244+jessicamack@users.noreply.github.com>
2024-10-15 17:43:32 -04:00
Djebran Lezzoum
4c7697465b Remove sso app (#15550)
Remove sso app.
2024-10-15 17:43:32 -04:00
jessicamack
1ca034b0a7 Remove SAML authentication (#15568)
* remove saml

* remove license file and management command

* update requirements, add migrations

* remove unused imports
2024-10-15 17:43:32 -04:00
jessicamack
bf09b95b61 Remove OIDC (#15569)
* remove oidc

* remove test fields, linting fix

* merge commit
2024-10-15 17:43:32 -04:00
TVo
65817d4fa4 Removed more mentions about SAML. (#15565)
* Removed docs associated with SAML auth.

* Removed more mentions about SAML.

---------

Co-authored-by: jessicamack <jmack@redhat.com>
2024-10-15 17:43:32 -04:00
jessicamack
0f0919937d Remove Keycloak (#15567)
remove keycloak
2024-10-15 17:43:32 -04:00
Djebran Lezzoum
bcd006f1a5 Remove social oauth (Azure, Github, Google) (#15549)
Remove social oauth (Azure, Github, Google)

Co-authored-by: jessicamack <jmack@redhat.com>
2024-10-15 17:43:32 -04:00
Djebran Lezzoum
2c2694ce89 Remove RADIUS authentication (#15548)
Remove RADIUS authentication from AWX

Do not remove model fields and tables; leave that for a later stage, once all the work of removing external auth is finished (AAP-27707)

Co-authored-by: Hao Liu <44379968+TheRealHaoLiu@users.noreply.github.com>
2024-10-15 17:43:32 -04:00
Djebran Lezzoum
e4c11561cc Remove TACACS+ authentication (#15547)
Remove TACACS+ authentication from AWX.

Co-authored-by: Hao Liu <44379968+TheRealHaoLiu@users.noreply.github.com>
2024-10-15 17:43:32 -04:00
Djebran Lezzoum
f22b192fb4 Remove LDAP authentication (#15546)
Remove LDAP authentication from AWX
2024-10-15 17:43:32 -04:00
Rick Elrod
6dea7bfe17 Prettier DRF pages when using trusted proxy (#15579)
This is rather hacky, but it fixes the DRF pages when going through a
trusted proxy.

Notably: This is meant to primarily fix the DRF pages on downstream
builds while leaving the upstream to function as-is.

When using a trusted proxy, the DRF login and logout endpoints now
redirect to the Platform login page (which respects ?next) and logout
endpoint respectively.

The CSS and JS are inlined because the trusted proxy might only proxy
to /api/ and not /static/ which is a harder problem to solve.

Signed-off-by: Rick Elrod <rick@elrod.me>
2024-10-15 15:50:11 -05:00
Hao Liu
1acf8cfde6 Add split-up inventory source plugins (#15584)
* Add split-up inventory source plugins

Fix CI failure introduced by
7d83b7dfdb
2024-10-15 15:55:33 -04:00
Rick Elrod
dbe6fcc4e7 Fix CI for newer debian image (#15583)
* Fix CI for newer debian image

Signed-off-by: Rick Elrod <rick@elrod.me>

* Missed one

Signed-off-by: Rick Elrod <rick@elrod.me>

---------

Signed-off-by: Rick Elrod <rick@elrod.me>
2024-10-14 17:49:48 -04:00
Justin Downie
825a02c86a Adding podAntiAffinity (#15578) 2024-10-08 16:29:57 -04:00
Djebran Lezzoum
579c2b7229 Update AWX collection to use basic authentication (#15554)
Update AWX collection to use basic authentication when an oauth token is not provided,
and when a username and password are provided.
2024-10-08 10:42:22 -04:00
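
The selection rule reads roughly as follows; a self-contained sketch, not the collection's actual module code:

```python
import base64

def auth_header(oauth_token=None, username=None, password=None):
    """Prefer an OAuth token; otherwise fall back to HTTP basic auth."""
    if oauth_token:
        return {"Authorization": f"Bearer {oauth_token}"}
    if username and password:
        creds = base64.b64encode(f"{username}:{password}".encode()).decode()
        return {"Authorization": f"Basic {creds}"}
    raise ValueError("either an oauth token or username/password is required")

print(auth_header(username="admin", password="secret"))
```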
Sviatoslav Sydorenko (Святослав Сидоренко)
ece21b15d0 Use awx-plugins-shared code from awx_plugins.interfaces (#15566)
* Add `awx_plugins.interfaces` runtime dependency

* Use `awx_plugins.interfaces` for runtime detection

The original function name was `server_product_name()` but it didn't
really represent what it did. So it was renamed into
`detect_server_product_name()` in an attempt of disambiguation.

* Use `awx_plugins.interfaces` to map container path

The original function `to_container_path` has been renamed into
`get_incontainer_path()` to represent what it does better and make
the imports more obvious.

* Add license file for awx_plugins.interfaces

---------

Co-authored-by: Hao Liu <44379968+TheRealHaoLiu@users.noreply.github.com>
2024-10-02 18:40:16 +00:00
Alan Rominger
97ececa8b4 Fix 500 error due to None data in DAB response (#15562)
* Fix 500 error due to None data in DAB response

* NOQA for flake8 failures
2024-10-02 11:27:08 -04:00
TVo
a1ad320622 Removed docs associated with SAML auth. (#15563) 2024-09-30 17:30:46 -04:00
Hao Liu
48e3afbb00 Filter out ANSIBLE_BASE_ from job env var (#15558) 2024-09-30 13:50:04 +00:00
TVo
486a1264d5 Removed docs associated with OIDC auth (#15557)
Removed docs associated with OIDC auth.
2024-09-30 09:23:27 -04:00
Alan Rominger
5b7a0504f4 Enable service redirect auth and reverse-sync from DAB (#15489)
* Update settings from DAB features

* Move to the end of the list more correctly
2024-09-23 08:52:06 -04:00
jessicamack
7db7abcd65 Upload the test results for awx-collection to dashboard (#15543)
upload the results for awx-collection separately

the rest of the tests can stay under awx
2024-09-20 15:44:39 -04:00
Sviatoslav Sydorenko (Святослав Сидоренко)
14698b177b 🧪 Publish awxkit's coverage to Codecov (#15525)
It's already being generated, just not uploaded. This patch
addresses that.
2024-09-18 11:01:24 -04:00
Chris Meyers
1881c26ac4 Make analytics job ts settings hidden
* There isn't a great reason to allow the UI to edit these meta-data
  fields that denote the last time an analytics job ran.
* The only reason I hesitate to mark them uneditable in the API is that
  they are useful to change in order to influence when the jobs run.
  Mostly for debug purposes or one-offs.
2024-09-18 07:23:15 -04:00
Sviatoslav Sydorenko (Святослав Сидоренко)
2fdb776ce7 🧪 Run sanity tests w/ ansible-test-gh-action (#15539)
* 🧪 Run sanity tests w/ `ansible-test-gh-action`

* 🧪 Upload sanity results to unified dashboard
2024-09-17 19:06:36 -04:00
Sviatoslav Sydorenko (Святослав Сидоренко)
ce2b8e9a9e 🧪🚑 Fix escaping EOLs in curl invocation (#15538)
This is a follow up for #15532.
2024-09-17 20:58:58 +00:00
Sviatoslav Sydorenko (Святослав Сидоренко)
cf25a09323 🧪 Upload ansible-test coverage to Codecov (#15527) 2024-09-17 16:45:06 -04:00
jessicamack
eccc32cbad Upload API unit test results to dashboard (#15532)
* update ci to upload test report

* Update .github/workflows/ci.yml

Co-authored-by: Sviatoslav Sydorenko (Святослав Сидоренко) <wk.cvs.github@sydorenko.org.ua>

* Update .github/workflows/ci.yml

Co-authored-by: Sviatoslav Sydorenko (Святослав Сидоренко) <wk.cvs.github@sydorenko.org.ua>

* Update .github/workflows/ci.yml

Co-authored-by: Sviatoslav Sydorenko (Святослав Сидоренко) <wk.cvs.github@sydorenko.org.ua>

* Update .github/workflows/ci.yml

Co-authored-by: Sviatoslav Sydorenko (Святослав Сидоренко) <wk.cvs.github@sydorenko.org.ua>

* Update .github/workflows/ci.yml

Co-authored-by: Sviatoslav Sydorenko (Святослав Сидоренко) <wk.cvs.github@sydorenko.org.ua>

---------

Co-authored-by: Sviatoslav Sydorenko (Святослав Сидоренко) <wk.cvs.github@sydorenko.org.ua>
2024-09-17 15:23:11 -04:00
Sviatoslav Sydorenko (Святослав Сидоренко)
66e7210ba4 🧪 Upload coverage from the rest of CI jobs (#15526) 2024-09-17 09:53:56 -04:00
Sviatoslav Sydorenko (Святослав Сидоренко)
0a4370acf0 🧪 Delegate source filtering to coverage.py (#15528)
This drops the coverage source spec from the `pytest` args as it's
already configured in `coveragerc`, which is a better place for
keeping it.
2024-09-17 09:53:44 -04:00
Sviatoslav Sydorenko (Святослав Сидоренко)
9fbbe3cba0 🧪 Use xunit1 in pytest by default (#15524)
This format contains file paths, unlike the newer implementation.
2024-09-17 09:52:43 -04:00
Sviatoslav Sydorenko (Святослав Сидоренко)
2ca5fcae2f 🧪💅 Unignore errors in coveragerc (#15523)
This setting does not seem necessary so there is no reason for it to
be listed.
2024-09-17 09:52:25 -04:00
Sviatoslav Sydorenko (Святослав Сидоренко)
69c1d2f64d 🧪 Include coverage measurement @ site-packages (#15521) 2024-09-17 09:52:02 -04:00
Andrew Klychkov
c4500cfc3d Docs: change Getting started EE guide reference to point to the relevant location (#15502) 2024-09-17 09:17:43 -04:00
Peter Braun
af900c8370 fix: maintain order of insertions into m2m relationship tables (#15536) 2024-09-17 13:37:35 +02:00
TVo
ef8cb892cb Plugin removals for docs (#15505)
* Removed files from AWX that were moved to awx-plugins.

* Removed credential plugins file from AWX.

* Resolved broken build: added back missing graphics and removed obsolete xrefs.
2024-09-16 15:27:58 -06:00
Andrew Klychkov
c9ae36804a Remove ML remnants from docs (#15500) 2024-09-16 09:31:16 +01:00
Peter Braun
1140981c64 update remaining urls for new UI (#15529) 2024-09-15 09:31:12 -04:00
Peter Braun
6fd483698a fix workflow job url (#15522) 2024-09-14 15:06:05 +02:00
Sviatoslav Sydorenko (Святослав Сидоренко)
5315a2b194 🧪 Use modern source_pkgs @ coveragerc (#15519)
This helps disambiguate main project code from the tests.
2024-09-13 21:11:28 -04:00
Sviatoslav Sydorenko (Святослав Сидоренко)
abdc669e50 🧪 Pass specific report files to codecov-cli (#15520)
The automatic discovery is currently unreliable.

Ref: https://github.com/codecov/codecov-cli/issues/500
2024-09-13 21:11:11 -04:00
Seth Foster
3baea0f206 Validate org-user membership from gateway (#15508)
Adding credential and execution environment roles
validates that the user belongs to the same org
as the credential or EE.

In some situations, the user-org membership has not
yet been synced from gateway to controller.

In this case, controller will make a request to
gateway to check if the user is part of the org.

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2024-09-13 17:56:43 -04:00
Hao Liu
acd6b2eb22 Fix instance UI URL generated by API (#15517) 2024-09-13 16:57:23 -04:00
Peter Braun
cc6a0612da fix: change to url in platform ui (#15518) 2024-09-13 22:40:45 +02:00
Sviatoslav Sydorenko (Святослав Сидоренко)
ea7ca3d32d 🧪💅 Categorize the Codecov status checks (#15516)
Make codecov identify metrics for tests and awx modules separately.
2024-09-13 20:26:25 +00:00
Sviatoslav Sydorenko (Святослав Сидоренко)
31ae3f25e7 🧪💅 Migrate to exclude_also @ coveragerc (#15513)
This is an option that appeared in Coverage.py v7.2.0.
2024-09-13 16:20:51 -04:00
Sviatoslav Sydorenko (Святослав Сидоренко)
d0cc2a1658 🧪🚑 Fix checking schema in CI on merge (#15514)
This is a variation of #15510, this time fixing the
`detect-schema-change` make target.
2024-09-13 16:19:35 -04:00
Sviatoslav Sydorenko (Святослав Сидоренко)
1b1975a93b 🧪💄 Order settings in coveragerc (#15515) 2024-09-13 20:13:50 +00:00
Sviatoslav Sydorenko (Святослав Сидоренко)
b722f7003d 🧪 Unmeasure coverage in tests expected to fail (#15512)
These tests are known to only be executed partially or not at all. So
we always get incomplete, missing, and sometimes flaky, coverage in
the test functions that are expected to fail.

This change updates the ``coverage.py`` config to prevent said tests
from influencing the coverage level measurement.

Ref https://github.com/pytest-dev/pytest/pull/12531
2024-09-13 15:57:06 -04:00
Sviatoslav Sydorenko (Святослав Сидоренко)
6bfe76d6d1 🧪🚑 Fix running awx image in CI on merge (#15510)
This is a variation of #15509, fixing the `run_awx_devel` in-tree
action this time.
2024-09-13 18:42:48 +00:00
Sviatoslav Sydorenko (Святослав Сидоренко)
a9b0d9f2e5 🧪🚑 Fix fetching the CI image on merges (#15509) 2024-09-13 18:18:19 +00:00
Sviatoslav Sydorenko (Святослав Сидоренко)
e68370f2aa Replace pkg_resources with importlib.metadata (#15441) 2024-09-13 17:39:14 +00:00
Sviatoslav Sydorenko (Святослав Сидоренко)
a5de4652b9 🧪 Upload the devel branch coverage to Codecov (#15507) 2024-09-13 17:25:30 +00:00
Hao Liu
38719405c3 Add OPTIONAL_UI_URL_PREFIX (#15506)
# Add a prefix to the UI URL patterns for UI URLs generated by the API
# example if set to '' UI URL generated by the API for jobs would be $TOWER_URL/jobs
# example if set to 'execution' UI URL generated by the API for jobs would be $TOWER_URL/execution/jobs
2024-09-13 19:20:00 +02:00
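
The documented behavior amounts to optional path-segment insertion; a hypothetical helper reproducing the two examples from the commit message:

```python
def ui_jobs_url(tower_url: str, prefix: str) -> str:
    """Build the UI jobs URL, inserting the optional prefix when non-empty."""
    parts = [tower_url.rstrip("/")]
    if prefix:
        parts.append(prefix.strip("/"))
    parts.append("jobs")
    return "/".join(parts)

assert ui_jobs_url("https://tower.example.com", "") == "https://tower.example.com/jobs"
assert ui_jobs_url("https://tower.example.com", "execution") == "https://tower.example.com/execution/jobs"
```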
Sviatoslav Sydorenko (Святослав Сидоренко)
090511e65b 🧪 Gather coverage @ CI and upload to Codecov (#15499) 2024-09-13 10:46:48 -04:00
Peter Braun
1c170c3a12 fix: avoid race conditions when removing multiple instance (#15495)
* fix: avoid race conditions when removing multiple instance groups at once

* remove unused imports
2024-09-13 09:35:45 +02:00
Chris Meyers
490db08224 Register CredentialType(s) every time Django loads
* Register all discovered CredentialType(s) after Django finishes
  loading
* Protect parallel registrations using shared postgres advisory lock
* The down-side of this is that this will run when it does not need to,
  adding overhead to the init process.
* Only register discovered credential types in the database IF
  migrations have run and are up-to-date.
2024-09-12 14:11:19 -04:00
Ladislav Smola
71856d61c9 Removes collection of unpartitioned_events table (#15501)
Fixes: https://issues.redhat.com/browse/AAP-30995
2024-09-11 14:16:11 -04:00
Hao Liu
011733ad06 Hide AUTOMATION_ANALYTICS_LAST_GATHER (#15497)
Not user configurable
2024-09-10 10:17:40 +02:00
Hao Liu
82b8f7d4c0 Unpin OpenSSL (#15498)
Remove OpenSSL pin
2024-09-09 20:55:46 +00:00
Hao Liu
5a0080658c Fix analytic ship (#15496)
REDHAT_USERNAME and REDHAT_PASSWORD default to an empty string instead of None
2024-09-09 14:14:32 -04:00
Seth Foster
c4d8fdb197 Translate new RBAC to old RBAC (#15490)
User and Team assignments using the DAB
RBAC system will be translated back to the old
Role system.

This ensures better backward compatibility and
addresses some inconsistences in the UI that were
relying on older RBAC endpoints.

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
Co-authored-by: Alan Rominger <arominge@redhat.com>
2024-09-06 12:13:48 -04:00
Hao Liu
3da9e322b7 Fix subscription username password setting name (#15493)
used in analytics
2024-09-05 19:59:45 +00:00
Andrew Klychkov
79684ab603 CONTRIBUTING.md: remove IRC remnants (#15492) 2024-09-05 14:11:37 +01:00
Chris Meyers
1d89e1a019 Move credential code up a dir
* There is only __init__.py in awx/main/models/credential/ now. So let's
  simplify things and move init up a dir.
2024-09-04 14:46:22 -04:00
Chris Meyers
a4346a667c Fix awx-plugins to use #egg=<package_name>
* #egg _could_ be awx-plugins.some.other.provided.package
* Also point at ansible devel instead of a forked branch since the
  entrypoints PR has now merged to devel
2024-09-04 14:46:22 -04:00
Chris Meyers
4328093c05 Use awx-plugins instead
* Instead of sourcing cred and inv plugins from the awx repo awx_plugins
  local directory, source them from the python package awx-plugins-core.
2024-09-04 14:46:22 -04:00
Chris Meyers
16d1f34179 Delete cred and inv plugins 2024-09-04 14:46:22 -04:00
Chris Meyers
376cc35a92 move inv and cred plugins into awx_plugins 2024-09-04 14:46:22 -04:00
John Barker
8a1d1e9c12 Remove references to IRC & Google Groups (#15480)
Signed-off-by: John Barker <john@johnrbarker.com>
2024-08-30 09:21:45 -04:00
David Newswanger
c59c64c915 Fix SAMLAuth backend to correctly return social auth pipeline results (#15457) 2024-08-30 09:13:31 -04:00
Hao Liu
ac6c5630f1 Fallback to use subscription cred for analytic upload (#15479)
* Fallback to use subscription cred for analytic

Fall back to using SUBSCRIPTION_USERNAME/PASSWORD to upload analytics if REDHAT_USERNAME/PASSWORD are not set

* Improve error message

* Guard against request with no query or data

* Add test for _send_to_analytics

Focus on credentials

* Suppress sonarcloud warning about password

* Add test for analytic ship
2024-08-30 10:39:53 +02:00
Elijah DeLee
444af2b500 catch harakiri graceful signal in middleware and log debug info
Middleware is from django_ansible_base
2024-08-29 09:24:35 -04:00
Alan Rominger
50db80182b Remove archaic monkey patches (#15338) 2024-08-28 21:50:00 -04:00
Andrew Klychkov
79c1921ea4 Docs: add Communication guide (#15469)
* Docs: add Communication guide

* Update docs/docsite/rst/contributor/communication.rst

Co-authored-by: Don Naro <dnaro@redhat.com>

* Update docs/docsite/rst/contributor/communication.rst

---------

Co-authored-by: Don Naro <dnaro@redhat.com>
2024-08-28 11:43:16 +01:00
Seth Foster
d6493fd4df Rename System Auditor to Controller System Auditor (#15470)
This is to emphasize that this role is specific
to the controller component; that is, it is not an auditor
for the entire AAP platform.

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2024-08-27 15:35:46 -04:00
Alan Rominger
9cf66de454 Pin DAB to devel again (#15467) 2024-08-27 11:18:09 -04:00
Alan Rominger
f5760b149d Fix 500 error when ordinary user viewed system JTs (#15465) 2024-08-26 11:51:16 -04:00
Seth Foster
7ed0eee60c Make controller specific team and org roles (#15445)
Adds the following managed Role Definitions

Controller Team Admin
Controller Team Member
Controller Organization Admin
Controller Organization Member

These have the same permission set as the
platform roles (without the Controller prefix)

Adding members to teams and orgs via the legacy RBAC system
will use these role definitions.

Other changes:
- Bump DAB to 2024.08.22
- Set ALLOW_LOCAL_ASSIGNING_JWT_ROLES to False in defaults.py.
This setting prevents assignments to the platform roles (e.g. Team Member).

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2024-08-22 15:41:54 -04:00
Hao Liu
78f345c486 Remove old UI (#15414)
* Remove source code for old UI
* Rename ui-next to ui
* Remove license scan for javascript dependencies
2024-08-22 13:48:56 -04:00
Peter Braun
3f8274d371 fix: avoid calling undefined method for anonymous users (#15440) 2024-08-22 18:01:31 +02:00
Peter Braun
c6223c076f fix: catch correct exception when parsing filter (#15458) 2024-08-22 16:12:54 +02:00
jessicamack
1b5cdf6bef Replace ansiconv with ansi2html (#15328)
* replace ansiconv with ansi2html

The ansiconv package is archived so I'm replacing it with a similar package that's still actively being worked on.

* remove minimum version

The version minimum was used to get the latest version while running the upgrader

* set minimum version for ansi2html

* provide usage info
2024-08-22 09:38:57 -04:00
Alan Rominger
5a8429deed Update django-ansible-base version to 2024.8.19 (#15454)
Update django-ansible-base version to 2024.8.19

Co-authored-by: chrismeyersfsu <722880+chrismeyersfsu@users.noreply.github.com>
2024-08-21 14:16:49 -04:00
Alan Rominger
af537b5261 Rewrite more access logic in terms of permissions instead of roles (#15453)
* Rewrite more access logic in terms of permissions instead of roles

* Cut down supported logic because that would not work anyway

* Remove methods not needed anymore

* Create managed roles in test before delegating permissions
2024-08-21 13:14:40 -04:00
Seth Foster
500b1c47ba SSO login should redirect to new UI index (#15456)
Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2024-08-21 16:31:45 +00:00
Elijah DeLee
c5c617b178 Guard around race condition (#15452)
I had the luck of running into this race condition, and it broke my deployment. No instance was ever able to register, because running "awx-manage" during a settings check would fail here with

```
  File "/var/lib/awx/venv/awx/lib64/python3.11/site-packages/awx/conf/license.py", line 10, in _get_validated_license_data
    return get_licenser().validate()
           ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/var/lib/awx/venv/awx/lib64/python3.11/site-packages/awx/main/utils/licensing.py", line 453, in validate
    automated_since = int(Instance.objects.order_by('id').first().created.timestamp())
                          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AttributeError: 'NoneType' object has no attribute 'created'
```
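A guard in the spirit of the fix might look like the following (a sketch only; the fallback value is illustrative, not the actual patch):

```
import time
# assuming awx.main.models.Instance is imported

first_instance = Instance.objects.order_by('id').first()
if first_instance is None:
    # No instance has registered yet; don't blow up on .created.
    automated_since = int(time.time())
else:
    automated_since = int(first_instance.created.timestamp())
```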
2024-08-20 16:24:56 -04:00
Jake Jackson
39d1922b80 Update editable deps docs (#15451)
update editable deps docs
2024-08-20 14:21:27 -04:00
jessicamack
6b462cdfdb Unpin django-guid and update license (#15381)
unpin django-guid and update license

There's no reason listed for the pin, and the changelog doesn't describe any changes that should block a full upgrade. They changed the license to MIT.
2024-08-16 18:54:59 -04:00
jessicamack
ca3e899c2c Unpin django-split-settings (#15379)
* unpin django-split-settings

The blocker is two years old. Upgrading to see whether the previous issue is still present, and to get a version with Python 3.11 support.

* remove UPGRADE BLOCKER in README
2024-08-16 15:07:54 -04:00
Chris Meyers
43a3d4a394 Fixes pytest CI error
```
  /var/lib/awx/venv/awx/lib64/python3.11/site-packages/_pytest/python.py:163:
  PytestReturnNotNoneWarning: Expected None, but
  awx/main/tests/unit/test_tasks.py::TestJobCredentials::test_custom_environment_injectors_with_boolean_extra_vars
  returned ['successful', 0], which will be an error in a future version
  of pytest.  Did you mean to use `assert` instead of `return`?
```

* Dug into the git blame for this one:
  060585434a is the commit, for any
  historians. The stray return was wrongly carried over from a mocked pexpect
  implementation. Our new tests are nice: they don't go as far as trying
  to run the task, so they do not need to mock pexpect. That is why it is
  safe to remove this code without finding it a new home.
2024-08-10 10:29:34 -04:00
Alan Rominger
af02ab46e3 Bump DAB version manually because bot is on vacation (#15434) 2024-08-09 15:42:18 -04:00
jessicamack
1ef77abdc3 Remove 'AWX' from setting endpoint (#15432)
* Remove AWX from display text

Co-authored-by: Hao Liu <44379968+TheRealHaoLiu@users.noreply.github.com>
2024-08-09 11:44:40 -04:00
Rick Elrod
cb2ad41d8d Fix a test in preparation for syncing description
Refs ansible/django-ansible-base#447

Signed-off-by: Rick Elrod <rick@elrod.me>
2024-08-08 16:35:17 +02:00
Seth Foster
73b1536356 Only refresh session if updating own password (#15426)
Fixes bug where creating a new user will
request a new awx_sessionid cookie, invalidating
the previous session.

Do not refresh session if updating or
creating a password for a different user.

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2024-08-07 17:04:19 -04:00
jessicamack
37b7a69303 Unpin channels-redis (#15329)
* unpin channels-redis

The bug that initially caused the upgrade block has been resolved https://github.com/django/channels_redis/issues/332

* replace aioredis Exception with a redis Exception

Version 4.0.0 of channels-redis migrated the underlying Redis library from aioredis to redis-py. The Exception has been changed to an equivalent one.

* remove unused license

* remove UPGRADE BLOCKER in README

* remove hiredis

it was an indirect dependency from aioredis which was removed

* remove unused license

* add back hiredis

It potentially provides a performance boost. Install it explicitly as part of redis, and upgrade to a more recent version.

* remove UPGRADE BLOCKER for hiredis

it was also addressed as a part of this PR
2024-08-07 13:44:24 +00:00
TVo
6d0c47fdd0 Re-do PR #14685 for alt-text inventories. (#15394) 2024-07-31 09:40:28 +01:00
TVo
54b4acbdfc Added docs for OTel - awx integration (#15408) 2024-07-29 14:36:22 -06:00
Seth Foster
a41766090e Make ui_next the default UI (#15405)
Change django url dispatcher to serve up ui_next files instead of old ui files

Old UI will not be served with this change

Github CI still runs old ui tests (to be removed in another PR)

Remove the Github workflows that build old UI

---------

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2024-07-29 15:13:09 -04:00
github-actions[bot]
34fa897dda Bump django-ansible-base to 2024.7.17 (#15373)
Update django-ansible-base version to devel

Co-authored-by: chrismeyersfsu <722880+chrismeyersfsu@users.noreply.github.com>
2024-07-26 09:11:12 -04:00
Hao Liu
32df114e41 Improve asyncio debugging (#15398)
- use asyncio.get_running_loop() instead of passing around event_loops
- add name to all of the asyncio tasks for easier debugging

we are trying to figure out which task
```
Task was destroyed but it is pending!
task: <Task pending name='Task-<id>' coro=<RedisConnection._read_data() done, defined at /var/lib/awx/venv/awx/lib64/python3.9/site-packages/aioredis/connection.py:180> wait_for=<Future pending cb=[<TaskWakeupMethWrapper object at 0x7fba77bf1700>()]> cb=[RedisConnection.__init__.<locals>.<lambda>() at /var/lib/awx/venv/awx/lib64/python3.9/site-packages/aioredis/connection.py:168]>
```
is referring to
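For illustration, both changes look like this in plain asyncio (`name=` has been supported by `create_task()` since Python 3.8; the coroutine here is a stand-in):

```
import asyncio

async def read_data():
    await asyncio.sleep(1)  # stand-in for the real connection work

async def main():
    loop = asyncio.get_running_loop()  # no need to pass event loops around
    task = asyncio.create_task(read_data(), name="redis-read-data")
    print(f"started {task.get_name()} at loop time {loop.time()}")
    await task

asyncio.run(main())
```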
2024-07-24 12:05:02 -04:00
TVo
018f235a64 Replaced all references of downstream docs to upstream docs (#15388)
* Replaced all references of downstream docs to upstream docs.

* Update README.md

Co-authored-by: Don Naro <dnaro@redhat.com>

* Update README.md.j2

Co-authored-by: Don Naro <dnaro@redhat.com>

* Update README.md.j2

Co-authored-by: Don Naro <dnaro@redhat.com>

* Incorpor'd review feedback from @oraNod and @samccann

* Updated with agreed link (for now) until further change is needed.

---------

Co-authored-by: Don Naro <dnaro@redhat.com>
2024-07-24 07:54:43 -06:00
Hao Liu
7e77235d5e Add UI for RECEPTOR_KEEP_WORK_ON_ERROR
In Troubleshooting settings
2024-07-22 17:02:37 -04:00
Hao Liu
139d8f0ae2 Add RECEPTOR_KEEP_WORK_ON_ERROR setting
If RECEPTOR_KEEP_WORK_ON_ERROR is set to true, receptor work units will not be automatically released

Co-Authored-By: Chris Meyers <chrismeyersfsu@users.noreply.github.com>
2024-07-22 17:02:37 -04:00
Hao Liu
7691365aea Fix depends_on for awx devel when editable dependencies are enabled (#15393)
Fix depends_on for awx devel when editable dependencies are enabled.

Bug introduced by https://github.com/ansible/awx/pull/15386
2024-07-22 16:48:53 -04:00
Alan Rominger
59f61517d4 Loosen up team EE restrictions (#15384)
* Try to loosen up team EE restrictions

* Fix missed permission case of nulling EE org
2024-07-22 14:51:32 -04:00
Alan Rominger
fa670e2d7f Upgrade to v4 checkout, hide output (#15322) 2024-07-19 15:33:11 -04:00
Jake Jackson
a87a044d64 Update test to conform with new DAB change (#15385)
* update tests to not fail with new version of DAB

* comment out conditional for now and add TODOs to fix it
2024-07-19 13:58:09 -04:00
Alan Rominger
381ade1148 Fix create_preload_data to allow running without an admin user created (#15356)
* Allow create_preload_data without having superuser created first

* Temporarily change the DAB requirement

* Put DAB branch back to devel
2024-07-19 10:35:33 -04:00
Sandra McCann
864a30e3d4 Remove remnants of controller terms from quickstart docs (#15350)
Remove remnants of controller terms from quickstart

Signed-off-by: Sandra McCann <samccann@redhat.com>
2024-07-18 22:42:35 -06:00
Sandra McCann
5f42db67e6 Remove references to translated versions of the docs (#15354)
remove references to translated versions of the docs

Signed-off-by: Sandra McCann <samccann@redhat.com>
Co-authored-by: TVo <thavo@redhat.com>
2024-07-19 01:53:06 +00:00
Hao Liu
ddf4f288d4 Remove links from docker-compose template (#15386)
Links are used to indicate network connectivity and optionally provide an alias.

They are not needed for communication, since all the containers are on the awx network.

In the prometheus container's case, since awx_ containers now have valid hostnames, the link is no longer required (also, I think the link is missing a `-` anyway...).

Links also implicitly imply a dependency between services; here the awx container depends on redis and postgres, so I switched to depends_on to retain that.

Making this change to be podman compatible, because I get
```
Error response from daemon: bad parameter: link is not supported
```
2024-07-18 21:19:50 -04:00
Lila Yasin
e75bc8bc1e Fix test_url_base_defaults_to_request to reference local host instead… (#15367)
* Update all references to towerhost to platformhost

* Run prettier on failing ui files
2024-07-18 15:28:54 -04:00
Roberto Chaud
bb533287b8 Create receptor group if missing (#15276) 2024-07-18 17:59:23 +00:00
Sandra McCann
9979fc659e Update docs replacements to AWX (#15349)
Update replacements to AWX

Signed-off-by: Sandra McCann <samccann@redhat.com>
2024-07-18 13:36:00 -04:00
Hao Liu
9e5babc093 Fix ui-next build for release staging GHA (#15383) 2024-07-18 13:26:07 -04:00
Hao Liu
c71e2524ed Disable dab-release GHA on fork unless explicitly triggered (#15382)
Disable dab-release GHA on fork unless explicitly triggered
2024-07-18 14:55:52 +00:00
Hao Liu
48b4c62186 Update DAB update automation PR template (#15376)
To pass PR Check github action
2024-07-18 10:24:47 -04:00
Seth Foster
853730acb9 Allow deleting org of a running workflow job (#15374)
Old RBAC system hits DoesNotExist query errors
if a user deletes an org while a workflow job is active.

The error is triggered by
1. starting a workflow job
2. deleting the org that the workflow job is a part of
3. the workflow changing status (e.g. pending to waiting)

This error message would surface:
awx.main.models.rbac.Role.DoesNotExist: Role matching
query does not exist.

The fix is to wrap the query in a try/except, and skip
over some logic if the roles don't exist.
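In outline, the fix is a pattern like this (a sketch; the exact query and role_field differ in the real code):

```
from awx.main.models.rbac import Role

try:
    role = Role.objects.get(object_id=org_id, role_field='admin_role')
except Role.DoesNotExist:
    # The org (and its roles) were deleted mid-run; skip the role bookkeeping.
    role = None
```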

---------

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2024-07-18 09:40:58 -04:00
Don Naro
f1448fced1 update terminology (#15357)
* update terminology

Replace some instances of Tower with AWX and remove some references to
enterprise left over from the migration of RST content from the
Automation Controller docs.

* Update docs/docsite/rst/userguide/overview.rst

Co-authored-by: TVo <thavo@redhat.com>

---------

Co-authored-by: TVo <thavo@redhat.com>
2024-07-18 10:29:21 +01:00
Hao Liu
7697b6a69b Pin 3rd party action at SHA
For safety
2024-07-17 22:58:15 +02:00
Chris Meyers
22a491c32c Put DAB version in the PR title 2024-07-17 15:10:25 -04:00
Chris Meyers
cbd9dce940 Run at 6 am every day 2024-07-17 15:10:25 -04:00
Chris Meyers
a4fdcc1cca Update .github/workflows/dab-release.yml
Co-authored-by: Hao Liu <44379968+TheRealHaoLiu@users.noreply.github.com>
2024-07-17 15:10:25 -04:00
Chris Meyers
df95439008 Update .github/workflows/dab-release.yml
Co-authored-by: Hao Liu <44379968+TheRealHaoLiu@users.noreply.github.com>
2024-07-17 15:10:25 -04:00
Chris Meyers
acd834df8b Check and update django-ansible-base
* Check upstream django-ansible-base releases. If the version upstream
  does not match the version we are pinned to then submit a PR with the
  upstream version.
2024-07-17 15:10:25 -04:00
TVo
587f0ecf98 Updated the api file to reflect 2024 date (#15369) 2024-07-16 19:58:55 +00:00
Hao Liu
5a2091f7bf Build new/old UI with different nodejs version (#15368) 2024-07-16 13:18:47 -04:00
Hao Liu
fa7423819a Fix minor docker build warning (#15362)
Fix docker build warning

Fix
```
WARN: FromAsCasing: 'as' and 'FROM' keywords' casing do not match (line 8)
```
2024-07-15 13:34:46 +00:00
Alan Rominger
fde8af9f11 Fix task ending in error due to bad iterator (#15355) 2024-07-12 13:20:39 -04:00
Seth Foster
209e7e27b1 Check member of org when granting cred (#15353)
A user needs to be a member of the org
in order to use a credential in that org.

We were incorrectly checking for "change"
permission of the org, instead of "member".

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2024-07-10 21:46:26 -04:00
Hao Liu
6c7d29a982 Fix command to set db session timeout for locks (#15352)
Fix command to set db session timeout

Add quotes around the value of the setting

Example failures
```
2024-07-10 13:33:29,237 ERROR    [a7e55a64e6744a0e920bb1fd78615e5f] awx.main.dispatch Worker failed to run task awx.main.tasks.system.awx_periodic_scheduler(*[], **{}
Traceback (most recent call last):
  File "/var/lib/awx/venv/awx/lib64/python3.11/site-packages/django/db/backends/utils.py", line 87, in _execute
    return self.cursor.execute(sql)
           ^^^^^^^^^^^^^^^^^^^^^^^^
  File "/var/lib/awx/venv/awx/lib64/python3.11/site-packages/psycopg/cursor.py", line 732, in execute
    raise ex.with_traceback(None)
psycopg.errors.SyntaxError: trailing junk after numeric literal at or near "1d"
LINE 1: SET idle_in_transaction_session_timeout = 1d
                                                  ^
```
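Quoting the interval literal is what fixes the parse error: Postgres accepts `SET idle_in_transaction_session_timeout = '1d'` but rejects the bare `1d`. A sketch (assuming the timeout arrives as a trusted, config-controlled string):

```
from django.db import connection

def set_idle_timeout(timeout='1d'):
    # SET does not take bind parameters for its value, so the trusted
    # literal is interpolated directly and quoted explicitly.
    with connection.cursor() as cursor:
        cursor.execute("SET idle_in_transaction_session_timeout = '%s'" % timeout)
```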
2024-07-10 11:11:12 -04:00
Alan Rominger
282ba36839 Fix EE admin not being able to PATCH/PUT object while providing organization (#15348)
* Fix bug where EE object-level admin could not set organization

* Finish polishing up test
2024-07-09 16:55:09 -04:00
Alan Rominger
b727d2c3b3 Log conflicts and created items by the periodic resource sync (#15337)
* Initial lazy logging of periodic sync results

* Add desired polish to log

* Add debug log
2024-07-09 15:13:08 -04:00
Hao Liu
7fc3d5c7c7 Update ActivityStream UI query to order by id (#15346)
The timestamp for the activity stream is not indexed, resulting in slow queries. Switching to ID (which is effectively ordered by created time) improves performance.
2024-07-09 14:38:07 -04:00
TVo
4e055f46c4 Added note to API guide for filtering exact matches (#15332) 2024-07-09 10:55:19 -06:00
Seth Foster
f595985b7c Callback for role assignment (#15339)
Validate role assignment if org defined

Check that organization is defined on credential
before running queries.

Fixes a "None type does not have attribute id" error.

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2024-07-09 09:44:27 -04:00
Alan Rominger
ea232315bf Do not reference self.messages when it does not exist (#15331) 2024-07-03 20:07:46 +00:00
Alan Rominger
ee251812b5 Add complete test that we have analogs to old versions of roles, fix some mismatches (#15321)
* Add test that we got all permissions right for every role

* Fix missing Org execute role and missing adhoc role permission

* Add in missing Organization Approval Role as well

* Remove Role from role names
2024-07-03 15:40:55 -04:00
Alan Rominger
00ba1ea569 Suppress docker pull output in checks (#15323)
Suppress docker pull output in checks
2024-07-03 15:04:59 -04:00
Alan Rominger
d91af132c1 Fix server error assigning teams EE object roles (#15320) 2024-07-03 14:07:03 -04:00
Seth Foster
94e5795dfc Prevent assigning credential to user of other org (#15296)
Utilizes the `validate_role_assignment` callback
from dab (see dab PR #490) to prevent granting credential
access to a user of another organization.

This logic will work for role_user_assignments
and role_team_assignments endpoints.

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2024-07-02 21:05:22 +00:00
Alan Rominger
c4688d6298 Add in missing read permissions for organization audit role (#15318)
* Add in missing read permissions for organization audit role

* Add missing audit permission, special case name handling
2024-07-02 15:20:40 -04:00
TVo
6763badea3 Added new OpenShift Virtualization inventory source to docs. (#15299)
* Added new OpenShift Virtualization inventory source to docs.

* Incorporated review feedback from @fosterseth and @TheRealHaoLiu.

* Fixed link to correct kubevirt.core.kubevirt documentation.
2024-07-01 11:47:39 -06:00
Hao Liu
2c4ad6ef0f Add better 403 error message for Job template create (#15307)
* Add better 403 error message for Job template create

To create a Job template you need access to projects and inventory

---------

Co-authored-by: Chris Meyers <chris.meyers.fsu@gmail.com>
2024-07-01 15:02:07 +00:00
Hao Liu
37f44d7214 Add better error message for wfjt create 403 (#15309) 2024-07-01 10:50:49 -04:00
Alan Rominger
98bbc836a6 Fix server error from DAB ValidationError with strings (#15312) 2024-07-01 10:11:22 -04:00
Alan Rominger
b59aff50dc Update ExecutionEnvironment model so object-level roles work with DAB RBAC system (#15289)
* Add initial test for deletion of stale permission

* Delete existing EE view permission

* Hypothetically complete update of EE model permissions setup

* Tests passing locally

* Issue with user_capabilities was a test bug, fixed
2024-06-28 16:09:42 -04:00
Alan Rominger
a70b0c1ddc Do not use cache in github image build action (#15308)
* Do not use cache in actual image build action

* Add cache args to kube prod builds
2024-06-28 09:52:59 -04:00
Alan Rominger
db72c9d5b8 Fix permissions that come from an external auditor role (#15291)
* Add tests for external auditor

* Add assertion for unified JTs which fails

* Fix UJT listing bug

* Add test for ad hoc commands just to be sure
2024-06-27 15:57:39 -04:00
jamesmarshall24
4e0d19914f LISTENER_DATABASES clobbers DATABASES OPTIONS (#15306)
Do not overwrite DATABASES OPTIONS with LISTENER_DATABASES
2024-06-27 13:26:30 -04:00
Hao Liu
6f2307f50e Add TASK_MANAGER_LOCK_TIMEOUT (#15300)
* Add TASK_MANAGER_LOCK_TIMEOUT

`TASK_MANAGER_LOCK_TIMEOUT` controls the `idle_in_transaction_session_timeout` and `idle_session_timeout` configuration for task manager connections and locks in the database

The hope is to prevent the situation where the task instance that holds the lock becomes unresponsive and prevents other instances from running the task manager

* Add session timeout to periodic scheduler and all sub task manager locks
2024-06-27 09:42:41 -04:00
Alan Rominger
dbc2215bb6 Make attached user models adhere to new API assignments (#15298) 2024-06-26 23:00:25 -04:00
Hao Liu
7c08b29827 Temporary workaround for CI failure (#15305)
Workaround
```
ERROR awx/main/tests/functional/test_licenses.py - pip._vendor.distlib.DistlibException: Unable to locate finder for 'pip._vendor.distlib'
```
2024-06-26 15:29:22 -04:00
TVo
407194d320 Added troubleshooting and tips tricks content (#15212)
* Added troubleshooting and tips tricks content

* Added troubleshooting and tips tricks content

* Moved DNS host entry override info to customize pod spec section of CG chapter.

* Added troubleshooting and tips tricks content

* Moved DNS host entry override info to customize pod spec section of CG chapter.

* Update docs/docsite/rst/administration/containers_instance_groups.rst

Co-authored-by: Seth Foster <fosterseth@users.noreply.github.com>

* Update docs/docsite/rst/administration/containers_instance_groups.rst

Co-authored-by: Seth Foster <fosterseth@users.noreply.github.com>

* Update docs/docsite/rst/administration/containers_instance_groups.rst

Co-authored-by: Sandra McCann <samccann@redhat.com>

* Incorp'd review feedback from @fosterseth and @samccann

* Update docs/docsite/rst/administration/containers_instance_groups.rst

Co-authored-by: Sandra McCann <samccann@redhat.com>

* Final revisions based on @fosterseth's inputs.

---------

Co-authored-by: Seth Foster <fosterseth@users.noreply.github.com>
Co-authored-by: Sandra McCann <samccann@redhat.com>
2024-06-24 12:17:31 -06:00
Alan Rominger
853af295d9 Various RBAC fixes related to managed RoleDefinitions (#15287)
* Add migration testing for certain managed roles

* Fix managed role bugs

* Add more tests

* Fix another bug with org workflow admin role reference

* Add test because another issue is fixed

* Mark reason for test

* Remove internal markers

* Reword failure message

Co-authored-by: Seth Foster <fosterseth@users.noreply.github.com>

---------

Co-authored-by: Seth Foster <fosterseth@users.noreply.github.com>
2024-06-21 09:29:34 -04:00
Alan Rominger
4738c8333a Fix object-level permission bugs with DAB RBAC system (#15284)
* Fix object-level permission bugs with DAB RBAC system

* Fix NT organization change regression

* Mark tests to AAP number
2024-06-20 16:34:34 -04:00
Seth Foster
13dcea0afd Check for admin_role in role_check.py (#15283)
Script was falsely identifying cross-linked
parents. It needs to check parent roles if the
content type is Team and role_field is
member_role OR admin_role.

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2024-06-20 14:04:04 -04:00
Chris Meyers
bc2d339981 Clarify the search for a proxy 2024-06-18 16:41:45 -04:00
Chris Meyers
bef9ef10bb Rename delete
* Include a bit of context in the name of the delete function. The
  prepended HTTP_ string may be unexpected if Django's header
  transformation isn't top of mind.
2024-06-18 16:41:45 -04:00
Chris Meyers
8645fe5c57 Add support for x-trusted-proxy
* Increase the surface area of the set of headers that the proxy list
  feature looks at for the remote proxy IF x-trusted-proxy is valid.
2024-06-18 16:41:45 -04:00
Chris Meyers
b93aa20362 Revert "Trust proxy headers for host provision callback"
This reverts commit 49e3971cd577127705fc0fd1d3b4ab7e3a3c3c2b.
2024-06-18 16:41:45 -04:00
Chris Meyers
4bbfc8a946 Tests for trust proxy and existing explicit proxy
* Integration tests to ensure the integration of the two features.
2024-06-18 16:41:45 -04:00
Chris Meyers
2c8eef413b Trust proxy headers for host provision callback
* Do not remove special header list if request is from a trusted proxy.
* Continue to remove headers if the request is from a non-trusted proxy.
2024-06-18 16:41:45 -04:00
Alan Rominger
d5bad1a533 Pass the Makefile python exe to ansible-playbook (#15282) 2024-06-18 13:03:01 -04:00
Alan Rominger
f6c0effcb2 Use public methods to reference registered models (#15277) 2024-06-17 11:45:44 -04:00
Chad Ferman
31a086b11a Add OpenShift Virtualization Inventory source option (#15047)
Co-authored-by: Hao Liu <44379968+TheRealHaoLiu@users.noreply.github.com>
2024-06-14 13:38:37 -04:00
a_nackov
d94f766fcb Fix notification name search (#15231)
Signed-off-by: Adrian Nackov <adrian.nackov@mail.schwarz>
2024-06-13 14:49:54 +00:00
Viktor Varga
a7113549eb Add 'Terraform State' inventory source support for collection (#15258) 2024-06-12 19:22:21 +00:00
Jake Jackson
bfd811f408 Upgrade aiohttp for cve 2024-23829 (#15257) 2024-06-12 19:20:40 +00:00
Jeff Bradberry
030704a9e1 Change all uses of ImplicitRoleField to do on_delete=SET_NULL
This mitigates the problem where, if any Role gets deleted for some
weird reason, it could previously cascade-delete important objects.
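The difference, in Django terms (a generic sketch, not the actual ImplicitRoleField internals):

```
from django.db import models

class Example(models.Model):
    # With on_delete=models.CASCADE, deleting the Role would delete this
    # row too. SET_NULL just clears the reference, so a Role deleted for
    # some weird reason cannot cascade into deleting important objects.
    role = models.ForeignKey(
        'main.Role', null=True, on_delete=models.SET_NULL, related_name='+'
    )
```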
2024-06-12 13:08:03 -04:00
Seth Foster
c312d9bce3 Rename setting to allow local resource management (#15269)
rename AWX_DIRECT_SHARED_RESOURCE_MANAGEMENT_ENABLED
to
ALLOW_LOCAL_RESOURCE_MANAGEMENT

- clearer meaning
- drop prefix so the same setting is used across the platform

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2024-06-11 12:50:18 -04:00
Jeff Bradberry
aadcc217eb This should deal correctly with the ancestor list mismatches 2024-06-10 16:36:22 -04:00
Jeff Bradberry
345c1c11e9 Guard against the role field not being populated
when doing the final reset of Role.implicit_parents.
2024-06-10 16:36:22 -04:00
Jeff Bradberry
2c3a7fafc5 Add a new test scenario
to trigger the implicit parent not being in the parents and ancestors lists.
2024-06-10 16:36:22 -04:00
Jeff Bradberry
dbcd32a1d9 Mark and rebuild the implicit_parents field for all affected roles 2024-06-10 16:36:22 -04:00
Jeff Bradberry
d45e258a78 Wait until the end of the fix script to clean up orphaned roles 2024-06-10 16:36:22 -04:00
Jeff Bradberry
d16b69a102 Add output of the update and deletion counts to fix.py 2024-06-10 16:36:22 -04:00
Jeff Bradberry
8b4efbc973 Do not throw away the container of cross-linked parents
Since we use it twice, the second time to get the id field of each.
2024-06-10 16:36:22 -04:00
Jeff Bradberry
4cb061e7db Add a readme file with instructions 2024-06-10 16:36:22 -04:00
Jeff Bradberry
31db6a1447 Fix another instance where a bad resource->Role fk could throw a traceback 2024-06-10 16:36:22 -04:00
Jeff Bradberry
ad9d5904d8 Adjusted foreignkeys.sql for correctness
Some relationships known to be handled by the special mapping sql file
were being caught as false positives.
2024-06-10 16:36:22 -04:00
Jeff Bradberry
b837d549ff Split the foreign key sql script into an 'into' and 'from' portion
Also, make use of up-front defined arrays of the tables involved, for
ease of editing in the future.
2024-06-10 16:36:22 -04:00
Jeff Bradberry
9e22865d2e Filter out the relations within the known topology tables 2024-06-10 16:36:22 -04:00
Jeff Bradberry
ee3e3e1516 First cut at detecting which foreign keys enter and exit the topology tables 2024-06-10 16:36:22 -04:00
Jeff Bradberry
4a8f6e45f8 Move the "test" files into their own directory 2024-06-10 16:36:22 -04:00
Jeff Bradberry
6a317cca1b Remove the role_chain.py module
it wound up being unworkable, and I think ultimately we only need to
check the immediate parentage of each role.
2024-06-10 16:36:22 -04:00
Jeff Bradberry
d67af79451 Attempt to correct any crosslinked parents
I think that rebuild_role_ancestor_list() will then correctly update
all of the affected Role.ancestors.
2024-06-10 16:36:22 -04:00
Jeff Bradberry
fe77fda7b2 Exclude more files in the .gitignore 2024-06-10 16:36:22 -04:00
Jeff Bradberry
f613b76baa Modify the role parent check logic to stay in the roles as much as possible
since the foreign keys to the roles from the resources can make us go
wrong almost immediately.
2024-06-10 16:36:22 -04:00
Jeff Bradberry
054cbe69d7 Exclude the team grant false positives
The results in my test now look correct.
2024-06-10 16:36:22 -04:00
Jeff Bradberry
87e9dcb6d7 Attempt to more thoroughly check the parents of each Role
This version, however, has false positives because Roles become
children of Team.member_role when a Role is granted to a Team.
2024-06-10 16:36:22 -04:00
Jeff Bradberry
c8829b057e First cut at checking the role hierarchy
Checking if parents and implicit_parents are consistent with ancestors.
2024-06-10 16:36:22 -04:00
Jeff Bradberry
a0b376a6ca Set up a scenario where IG.use_role_id points to something no longer there
This is actually happening for one customer, though it seems like it
shouldn't be if the foreign key constraint is set back up properly.
In order to recreate it, I had to add the constraint back with 'NOT
VALID' added on to prevent the check.
2024-06-10 16:36:22 -04:00
Jeff Bradberry
d675207f99 Handle the case where a resource points to a Role which isn't in the db 2024-06-10 16:36:22 -04:00
Jeff Bradberry
20504042c9 Graph out only the parent/child chains from a given Role
Doing the entire graph is too much on any system with real amounts of Roles.
2024-06-10 16:36:22 -04:00
Jeff Bradberry
0e87e97820 Check for a broken ContentType -> model and log and skip
Apparently this has happened to a customer, per Nate Becker.
2024-06-10 16:36:22 -04:00
Jeff Bradberry
1f154742df Make the role_chain.py script emit a Graphviz file
of the Role relationships.
2024-06-10 16:36:22 -04:00
Jeff Bradberry
85fc81aab1 Start a new script that can be used to examine a Role's ancestry 2024-06-10 16:36:22 -04:00
Jeff Bradberry
5cfeeb3e87 Treat resources with null role fks differently
The underlying role should be re-linked, instead of treated as orphaned.
2024-06-10 16:36:22 -04:00
Jeff Bradberry
a8c07b06d8 Set up an enhanced version of Seth's bad role scenario 2024-06-10 16:36:22 -04:00
Jeff Bradberry
53c5feaf6b Set up Seth's bad role scenario 2024-06-10 16:36:22 -04:00
Jeff Bradberry
6f57aaa8f5 When checking reverse links, treat duplicate Roles different from bad ones
Also, null out the generic foreign key on orphaned roles before deleting.
2024-06-10 16:36:22 -04:00
Jeff Bradberry
bea74a401d Attempt to be more efficient about grouping the content types
Also, attempt to rebuild the role ancestors in the fixup script.
2024-06-10 16:36:22 -04:00
Jeff Bradberry
54e85813c8 First full check script
This version emits the first fix-up script as its output.
2024-06-10 16:36:22 -04:00
Jeff Bradberry
b69ed08fe5 Specifically examine the InstanceGroup roles 2024-06-10 16:36:22 -04:00
Jeff Bradberry
de25408a23 Print out details of all of the crosslinked roles 2024-06-10 16:36:22 -04:00
Jeff Bradberry
b17f0a188b Initial check 2024-06-10 16:36:22 -04:00
Hao Liu
fb860d76ce Add receptor work list command to sosreport (#15207) 2024-06-10 19:39:24 +00:00
Artsiom Musin
451f20ce0f Use patch to update users in awx cli 2024-06-10 14:54:10 -04:00
Hao Liu
c1dc0c7b86 Periodically sync from shared resource provider (#15264)
* Periodically sync from shared resource provider

- add periodic task `periodic_resource_sync`, run once every 15 min
- if `RESOURCE_SERVER` is not configured, the sync will not run
- runs on only 1 node

example RESOURCE_SERVER configuration
```
RESOURCE_SERVER = {
    "URL": "<resource server url>",
    "SECRET_KEY": "<resource server auth token>",
    "VALIDATE_HTTPS": <True/False>,
}
RESOURCE_SERVICE_PATH = <resource_service_path>
```
2024-06-10 18:10:57 +00:00
Seth Foster
d65ea2a3d5 Fix race condition when deleting schedules (#15259)
If more than one schedule for a unified job template
is removed at once, a race condition can arise.

example scenario:  delete schedules with ids 7 and 8
- unified job template next_schedule is currently 7
- on delete of schedule 7, update_computed_fields will try to set
next_schedule to 8
- but while this logic is occurring, another transaction
is deleting 8

This leads to a db IntegrityError

The solution here is to call select_for_update() on the
next schedule, so that 8 cannot be deleted until
the transaction for deleting 7 is completed.
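A sketch of that locking pattern (model and field names approximate):

```
from django.db import transaction

with transaction.atomic():
    # Locking the candidate next schedule means a concurrent delete of it
    # must wait until this transaction commits, avoiding the IntegrityError.
    next_schedule = (
        Schedule.objects.select_for_update()
        .filter(unified_job_template=ujt, next_run__isnull=False)
        .order_by('next_run')
        .first()
    )
    ujt.next_schedule = next_schedule
    ujt.save(update_fields=['next_schedule'])
```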

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2024-06-09 20:39:18 -04:00
Dave
8827ae7554 Replace REMOTE_ADDR with ansible_base.lib.utils.requests.get_remote_host (#15175) 2024-06-06 14:47:04 +01:00
Jeff Bradberry
4915262af1 Do each batch of the HostMetric updates in a transaction
It looks like we can't do upserts currently without dropping to raw
SQL, but if we wrap each batch in a transaction, that should ensure
that each is updated with the correct count.
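A sketch of the batching pattern (hypothetical field names; the real update is bulkier):

```
from django.db import transaction

def update_host_metrics(batches):
    for batch in batches:
        # Each batch commits atomically: either every row in the batch
        # gets its new count, or none of them do.
        with transaction.atomic():
            for hostname, count in batch:
                HostMetric.objects.filter(hostname=hostname).update(
                    automated_counter=count
                )
```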
2024-06-05 14:15:21 -04:00
Seth Foster
d43c91e1a5 Option for dev env to enable ssl for postgres (#15151)
PG_TLS=true make docker-compose

This will add some extra startup commands
for the postgres container to generate a key and
cert to use for postgres connections.
It will also mount in pgssl.conf which has ssl configuration.

This can be useful for debugging issues that only surface
when using ssl postgres connections.
2024-06-05 12:48:08 -04:00
Seth Foster
b470ca32af Prevent modifying shared resources when using platform ingress (#15234)
* Prevent modifying shared resources

Adds a class decorator to prevent modifying shared resources
when gateway is being used.

AWX_DIRECT_SHARED_RESOURCE_MANAGEMENT_ENABLED is the setting
to enable/disable this feature.

Works by overriding these view methods:
- create
- delete
- perform_update

create and delete are overridden to raise a
PermissionDenied exception.

perform_update is overridden to check if any shared
fields are being modified, and raise a PermissionDenied
exception if so.

Additional changes:

Prevent sso conf from registering external authentication related settings if
AWX_DIRECT_SHARED_RESOURCE_MANAGEMENT_ENABLED is False

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
Co-authored-by: Hao Liu <44379968+TheRealHaoLiu@users.noreply.github.com>
2024-06-05 12:44:01 -04:00
Satoe Imaishi
793777bec7 Add cython to VENV_BOOTSTRAP for grpcio (#15256) 2024-06-05 11:04:15 -04:00
Jake Jackson
6dc4a4508d fix cve 2024-24680 (#15250) 2024-06-04 15:44:09 -04:00
Hao Liu
cf09a4220d Repin cython due to https://github.com/yaml/pyyaml/pull/702 (#15248)
* Revert "Unpin cypthon (#15246)"

This reverts commit 659c3b64de.

* Pin grpcio

Avoid cython 3 due to https://github.com/yaml/pyyaml/pull/702

* Delete asyncpg.txt
2024-06-03 19:42:20 +00:00
Hao Liu
659c3b64de Unpin cypthon (#15246)
* Unpin cython

* Remove unused asyncpg

* Remove asyncpg license file
2024-06-03 11:41:56 -04:00
Ethem Cem Özkan
37ad690d09 Add AWS SNS notification support for webhook (#15184)
Support for AWS SNS notifications. SNS is a widespread service that is used to integrate with other AWS services (e.g. Lambdas). This support would unlock use cases like triggering Lambda functions, especially when AWX is deployed on EKS.

Decisions:

Data Structure
- I preferred using the same structure as Webhook for message body data because it contains all job details. For now, I directly linked to Webhook to avoid duplication, but I am open to suggestions.

AWS authentication
- To support non-AWS native environments, I added configuration options for AWS secret key, ID, and session tokens. When entered, these values are supplied to the underlying boto3 SNS client. If not entered, it falls back to the default authentication chain to support the native AWS environment. Properly configured EKS pods are created with temporary credentials that the default authentication chain can pick up automatically.

---------

Signed-off-by: Ethem Cem Ozkan <ethemcem.ozkan@gmail.com>
2024-06-02 02:48:56 +00:00
Akira Yokochi
7845ec7e01 Modify the link to terraform_state inventory plugin (#15241)
fix link to terraform_state inventory plugin
2024-06-01 22:36:30 -04:00
Chris Meyers
a15bcf1d55 Add requirements comment 2024-05-31 13:55:17 -04:00
Chris Meyers
7b3fb2c2a8 Add example grafana dashboard
* Per-service log view
2024-05-31 13:55:17 -04:00
Chris Meyers
6df47c8449 Rework which loggers we sent to OTEL
* Send all propagate=False loggers to OTEL AND the awx logger
2024-05-31 13:55:17 -04:00
Chris Meyers
cae42653bf Add recording
* Always output awx logs to a file via otel
* That log file can always be replayed later into a product that
  supports otlp.
* Useful when you find a problem that you need a time series DB to help
  find and solve.
* Useful if a community member or customer has a problem where a time
  series db would be helpful. You can take a "remote" user's log and
  replay it locally for analysis.
2024-05-31 13:55:17 -04:00
Chris Meyers
da46a29f40 Move requirements out of dev and into mainline
* Add new package license files
2024-05-31 13:55:17 -04:00
Chris Meyers
0eb465531c Centralized logging via otel 2024-05-31 13:55:17 -04:00
Hao Liu
d0fe0ed796 Add check_instance_ready management command (#15238)
- throw exception and return 1 if instance not ready
- return 0 if ready
2024-05-31 09:29:40 -04:00
Chris Meyers
ceafa14c9d Use settings fixture in tests
* Otherwise, settings value changes bleed over into other tests.
* Remove django.conf settings import so that we do not accidentally
  forget to use the settings fixture.
2024-05-30 14:10:35 -05:00
Chris Meyers
08e1454098 Make named url work with optional url prefix
* Handle named url sub-resources
* i.e. /api/v2/inventories/my_inventory++Default/hosts/
2024-05-29 12:39:25 -05:00
Harshith u
776b661fb3 use optional api prefix in collection if set as environ variable (#15205)
* use optional api prefix if set as environ variable

* Different default depending on collection type
2024-05-29 11:54:05 -04:00
Hao Liu
af6ccdbde5 Fix galaxy publishing (#15233)
- switch to galaxy search API for determining if the version we want to publish already exists
- switch from github action variable to env var for easier copy and paste testing
2024-05-28 15:27:34 -04:00
Matthew Jones
559ab3564b Include Kube credentials in the inventory source picker (#15223) 2024-05-28 14:05:24 -04:00
Alan Rominger
208ef0ce25 Update test so that DAB change can merge (#15222) 2024-05-28 11:53:01 -04:00
Alexander Pykavy
c3d9aa54d8 Mention in the docs that you can skip make docker-compose-build (#15149)
Signed-off-by: Alexander Pykavy <aleksandrpykavyj@gmail.com>
2024-05-22 19:33:13 +00:00
irozet12
66efe7198a Wrap long line to fit help window (#14597) (#15169)
Wrap long line to fit description window (#14597)

Co-authored-by: Ирина Розет <irozet@astralinux.ru>
2024-05-22 19:31:03 +00:00
Beni ~HB9HNT
adf930ee42 awxkit: replace deprecated locale.format() with locale.format_string() to fix human output on Python 3.12 (#15170)
Replace deprecated locale.format with locale.format_string

This will be removed in Python 3.12 and will break human output unless fixed.
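The replacement is one-for-one (both are real stdlib APIs; `locale.format()` is the one removed in Python 3.12):

```
import locale

locale.setlocale(locale.LC_ALL, '')  # use the environment's locale for grouping

# Removed in Python 3.12:
#   locale.format('%d', 1234567, grouping=True)
# Replacement, same output:
print(locale.format_string('%d', 1234567, grouping=True))  # e.g. 1,234,567
```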
2024-05-22 19:27:31 +00:00
Hao Liu
892410477a Fix promote from release event (#15215) 2024-05-22 18:58:11 +00:00
Seth Foster
0d4f653794 Fix up ansible-test sanity checks due to ansible 2.17 release (#15208)
* Fix up ansible sanity checks

* Fix awx-collection test failure

* Add ignore for ansible-test 2.17 

---------

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
Co-authored-by: Hao Liu <44379968+TheRealHaoLiu@users.noreply.github.com>
2024-05-21 15:05:59 -04:00
Alan Rominger
8de8f6dce2 Update a few dev requirements (#15203)
* Update a few dev requirements

* Fix test failures due to upgrade

* Update patterns for mocker usage
2024-05-20 23:37:02 +00:00
Hao Liu
fc9064e27f Allow wsrelay to fail without FATAL (#15191)
We have not identified the root cause of the wsrelay failure, but attempting to make wsrelay restart itself resulted in postgres and redis connection leaks. We were not able to fully identify where the redis connection leak comes from, so we are reverting back to failing; removing `startsecs 30` will prevent wsrelay from going FATAL.
2024-05-20 23:34:12 +00:00
TVo
7de350dc3e Added docs for new RBAC changes (#15150)
* Added docs for new RBAC changes

* Added UI changes with screens and API endpoints with sample commands.

* Update docs/docsite/rst/userguide/rbac.rst

Co-authored-by: Vidya Nambiar <43621546+vidyanambiar@users.noreply.github.com>

* Incorporated review feedback from @vidyanambiar.

---------

Co-authored-by: Vidya Nambiar <43621546+vidyanambiar@users.noreply.github.com>
2024-05-17 20:10:16 -04:00
Michael Anstis
d4bdaad4d8 Fix success_url_allowed_hosts set instantiation (#15196)
Co-authored-by: Michael Anstis <manstis@redhat.com>
2024-05-16 12:08:50 -04:00
Bikouo Aubin
a9b2ffa3e9 Fix terraform backend credential issue (#15141)
fix issue introduced by PR15055
2024-05-15 15:19:18 -04:00
Sean Sullivan
1b8d409043 Add skip authorization option to collection application module (#15190) 2024-05-15 09:29:00 -04:00
dependabot[bot]
da2bccf5a8 Bump jinja2 from 3.1.3 to 3.1.4 in /docs/docsite (#15168)
Bumps [jinja2](https://github.com/pallets/jinja) from 3.1.3 to 3.1.4.
- [Release notes](https://github.com/pallets/jinja/releases)
- [Changelog](https://github.com/pallets/jinja/blob/main/CHANGES.rst)
- [Commits](https://github.com/pallets/jinja/compare/3.1.3...3.1.4)

---
updated-dependencies:
- dependency-name: jinja2
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-05-14 14:26:26 -04:00
Hao Liu
a2f083bd8e Fix podman failure in development environment (#15188)
```
ERRO[0000] path "/var/lib/awx/.config" exists and it is not owned by the current user
```
This started to happen with podman 5.

It seems the config files are no longer needed; removing them fixes the problem.
2024-05-14 14:18:48 -04:00
Michael Anstis
4d641b6cf5 Support Django logout redirects (#15148)
* Allowed hosts for logout redirects can now be set via the LOGOUT_ALLOWED_HOSTS setting

Authored-by: Michael Anstis <manstis@redhat.com>
Co-authored-by: Hao Liu <44379968+TheRealHaoLiu@users.noreply.github.com>
2024-05-13 13:03:27 -04:00
Elijah DeLee
439c3f0c23 Skip 3 expensive calls for jobs saving in 'waiting' status on UnifiedJob (#15174)
skip update parent logic for 'waiting' on UnifiedJob

by not looking up "status_before" from previous instance
we save 2 to 3 expensive calls (the self lookup of old state, the lookup
of parent, and the update to parent if allow_simultaneous == False or status == 'waiting')
2024-05-13 10:26:03 -04:00
jessicamack
946bbe3560 Clean up settings file (#15135)
remove unneeded settings
2024-05-10 11:25:15 -04:00
James
20f054d600 Expose websockets on api prefix v2 2024-05-01 10:44:51 -04:00
Alan Rominger
918d5b3565 Do some aesthetic adjustments to role presentation fields (#15153)
* Do some aesthetic adjustments to role presentation fields

* Correctly test managed setup

* Minor migration adjustments
2024-04-29 17:11:10 -04:00
Hao Liu
158314af50 Delete deprecated Cypress UI e2e_test.yml (#15155)
Delete e2e_test.yml

Remove because it's no longer being maintained
2024-04-29 12:58:10 -04:00
Seth Foster
4754819a09 awx modules wait on event processing finished (#15152)
This change makes "wait: true" for jobs and syncs
look at the event_processing_finished instead of
finished field.

Right now there is a race condition where
a module might try to delete an inventory, but the events
for an inventory sync have not yet finished. We have a
RelatedJobsPreventDeleteMixin that checks for this condition.

bulk jobs don't have event_processing_finished so we just
use finished field in that case.
2024-04-26 17:33:34 -04:00
Seth Foster
78fc23138a Pin openssl 3.0.7 (#15147)
followup to PR #15142

This commit pins openssl in the awx image,
not just the builder image.
2024-04-26 12:29:22 -04:00
Alan Rominger
014534bfa5 Upgrade DRF (#15144)
* Upgrade DRF

* Fix failures caused by DRF upgrade
2024-04-25 15:37:08 -04:00
Seth Foster
2502e7c7d8 Temporarily downgrade openssl (#15142)
openssl 3.2.0 has incompatibility issues with
the libpq version we are using, and causes
some C runtime errors:
"double free or corruption (out)"

see awx issue #15136

also this issue

github.com/conan-io/conan-center-index/pull/22615

once the libpq libraries on centos stream9 are
updated with the patch, we can unpin openssl

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2024-04-25 14:01:03 -04:00
Jeff Bradberry
fb237e3834 Stop pre-caching every resource in the system upon import
If we don't have something in the cache when we call
get_by_natural_key, do an actual filtered query for it and cache the
results.  We'll get more overall API calls this way, but they'll be
smaller and will happen while we are importing, not upfront.
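The pattern is a lazy read-through cache rather than an upfront bulk load (a sketch; names are illustrative):

```
_cache = {}

def get_by_natural_key(key):
    # Query on demand instead of pre-caching every resource at import time:
    # more API calls overall, but each is smaller and happens mid-import.
    if key not in _cache:
        _cache[key] = fetch_filtered(key)  # hypothetical filtered API query
    return _cache[key]
```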
2024-04-24 17:04:40 -04:00
irozet12
e4646ae611 Add help message for expiration tokens (#15076) (#15077)
Co-authored-by: Ирина Розет <irozet@astralinux.ru>
2024-04-24 19:58:09 +00:00
Bruno Sanchez
7dc77546f4 Adding CSRF Validation for schemas (#15027)
* Adding CSRF Validation for schemas

* Changing retrieve of scheme to avoid importing new library

* check if CSRF_TRUSTED_ORIGINS exists before accessing it

---------

Signed-off-by: Bruno Sanchez <brsanche@redhat.com>
2024-04-24 15:47:03 -04:00
Michael Tipton
f5f85666c8 Add ability to set SameSite policy for userLoggedIn cookie (#15100)
* Add ability to set SameSite policy for userLoggedIn cookie

* reformat line for linter
2024-04-24 15:44:31 -04:00
Alan Rominger
47a061eb39 Fix and test data migration error from DAB RBAC (#15138)
* Fix and test data migration error from DAB RBAC

* Fix up migration test

* Fix custom method bug

* Fix another fat fingered bug
2024-04-24 15:14:03 -04:00
Alan Rominger
c760577855 Adjust test for stricter DAB user view permission enforcement (#15130) 2024-04-23 15:21:06 -04:00
TVo
814ceb0d06 Backports previously approved corrections. (#15121)
* Backports previously approved corrections.

* Deleted a blank line in inventories line 100
2024-04-22 09:55:19 -06:00
Seth Foster
f178c84728 Fix instance peering pagination (#15108)
Currently the association box displays a
list of available instances/addresses that can
be peered to.

The pagination issue arises as follows:

- fetch 5 instances (based on page_size)
- filter these instances down based on some
criteria (like is_internal: false)
- show results

Filtering down the results inside of the fetch
method results in pagination errors (it may return fewer than
5 results, for example).

Instead, do the filtering via API queries. That way the
pagination count will be correct.

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2024-04-18 15:17:10 -04:00
Jeff Bradberry
c0f71801f6 Use $(shell ...) to filter the redis docker volumes
Makefiles use $() for variable templating, so trying to use it
directly as a shell subcommand doesn't work.
2024-04-17 16:45:23 -04:00
Jan-Piet Mens
4e8e1398d7 Omit using -X when not needed, and don't default to demonstrating -k
It's the year 2024: using -k by default in https URL schemes should be deprecated. (I have left one mention of it possibly being required if no CA is available.) Furthermore, neither -XGET nor -XPOST is required, as curl(1) well knows when to use which method.
2024-04-17 15:18:57 -04:00
STEVEN ADAMS
3d6a8fd4ef chore: remove repetitive words (#15101)
Signed-off-by: hugehope <cmm7@sina.cn>
2024-04-17 19:18:25 +00:00
Hao Liu
e873bb1304 Fix wsrelay connection leak (#15113)
- when re-establishing connection to db close old connection
- re-initialize WebSocketRelayManager when restarting asyncio.run
- log and ignore error in cleanup_offline_host (this might come back to bite us)
- cleanup connection when WebSocketRelayManager crash
2024-04-16 14:54:36 -04:00
lucas-benedito
672f1eb745 Fixed missing fstring from wsrelay logging (#15094)
Fixed missing fstring from wsrelay logging
2024-04-16 13:32:34 -04:00
abikouo
199507c6f1 Support Google credentials on Terraform credentials type 2024-04-16 10:34:38 -04:00
jessicamack
a176c04c14 Update LDAP/SAML config dump command (#15106)
* update LDAP config dump

* return missing fields if any

* update test, remove unused import

* return bool and fields. check for missing_fields
2024-04-15 12:26:57 -04:00
Alan Rominger
e3af658f82 Use released version of django-radius (#15103) 2024-04-12 16:34:23 -04:00
Hao Liu
e8a3b96482 Use latest awx-ee in devel CI (#15098) 2024-04-12 14:30:39 -04:00
Hao Liu
c015e8413e Store molecule debug output to github artifacts (#15107)
Related to https://github.com/ansible/awx-operator/pull/1823
2024-04-12 13:56:25 -04:00
Alan Rominger
390c2d8907 [RBAC] Update related name to reflect upstream DAB change (#15093)
Update related name to reflect upstream DAB change
2024-04-11 14:59:09 -04:00
Alan Rominger
97605c5f19 Make custom urls work with RBAC 2024-04-11 14:59:09 -04:00
Alan Rominger
818c326160 [RBAC] Rename managed role definitions, and move migration logic here (#15087)
* Rename managed role definitions, and move migration logic here

* Fix naming capitalization
2024-04-11 14:59:09 -04:00
Alan Rominger
c98727d83e [RBAC] Fix bug where team could not be given read_role to other team (#15067)
* Fix bug where team could not be given read_role to other team

* Avoid unwanted triggers of parentage granting

* Restructure signal structure

* Fix another bug unmasked by team member permission fix

* Changes to live with test writing

* Use equality as opposed to string "in"

from Seth in review comment

Co-authored-by: Seth Foster <fosterseth@users.noreply.github.com>

---------

Co-authored-by: Seth Foster <fosterseth@users.noreply.github.com>
2024-04-11 14:59:09 -04:00
Alan Rominger
a138a92e67 [RBAC] Tweaks to reflect what endpoints are deprecated (#15068)
Tweaks to reflect what endpoints are deprecated
2024-04-11 14:59:09 -04:00
Alan Rominger
7aed19ffda Fix missing role membership when giving creator permissions (#15058) 2024-04-11 14:59:09 -04:00
Seth Foster
3bb559dd09 AWX Collections for DAB RBAC
Adds new modules for CRUD operations on the
following endpoints:

- api/v2/role_definitions
- api/v2/role_user_assignments
- api/v2/role_team_assignments

Note: assignment is Create or Delete only

Additional changes:
- Currently DAB endpoints do not have "type"
field on the resource list items. So this modifies
the create_or_update_if_needed to allow manually
specifying item type.

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2024-04-11 14:59:09 -04:00
Alan Rominger
389a729b75 [RBAC] Fix known issues with backward compatible access_list (#15052)
* Remove duplicate access_list entries for direct team access

* Revert test changes for superuser in access_list
2024-04-11 14:59:09 -04:00
Alan Rominger
2f3c9122fd Generalize can_delete solution, use devel DAB (#15009)
* Generalize can_delete solution, use devel DAB

* Fix bug where model was used instead of model_name

* Linter fixes
2024-04-11 14:59:09 -04:00
Alan Rominger
733478ee19 [RBAC] Fix server error from delete capability of approvals (#15002)
Fix server error from delete capability of approvals
2024-04-11 14:59:09 -04:00
Alan Rominger
41c6337fc1 [RBAC] Fix migration for created and modified field changes (#14999)
Fix migration for created and modified field changes
2024-04-11 14:59:09 -04:00
Alan Rominger
7446da1c2f Bump migration number for RBAC branch 2024-04-11 14:59:09 -04:00
Alan Rominger
c79fca5ceb Adopt internal DAB RBAC Permission model (#14994) 2024-04-11 14:59:09 -04:00
Alan Rominger
dc5f43927a Minor RBAC test fix (#14982) 2024-04-11 14:59:09 -04:00
Alan Rominger
35a5a81e19 Use AWX base view to make unauth requests 401 (#14981) 2024-04-11 14:59:09 -04:00
Alan Rominger
9dcc11d54c [DAB RBAC] Re-implement system auditor as a singleton role in new system (#14963)
* Add new enablement settings from DAB RBAC

* Initial implementation of system auditor as role without testing

* Fix system auditor role, remove duplicate assignments

* Make the system auditor role managed

* Flake8 fix

* Remove another thing from old solution

* Fix a few test failures

* Add extra setting to disable custom system roles via API

* Add test for custom role prohibition
2024-04-11 14:59:09 -04:00
Alan Rominger
74ce21fa54 Bump number of allowed endpoints (#14956) 2024-04-11 14:59:09 -04:00
Alan Rominger
eb93660b36 Cache organization child evaluations and remove hacks 2024-04-11 14:59:09 -04:00
Alan Rominger
f50e597548 Cast ObjectRole object_id to int, very wrong, tmp fix 2024-04-11 14:59:09 -04:00
Alan Rominger
817c3b36b9 Replace role system with permissions-based DB roles
Develop ability to list permissions for existing roles

Create a model registry for RBAC-tracked models

Write the data migration logic for creating
  the preloaded role definitions

Write migration to migrate old Role into ObjectRole model

This loops over the old Role model, knowing it is unique
  on object and role_field

Most of the logic is concerned with identifying the
  needed permissions, and then corresponding role definition

As needed, object roles are created and users then teams
  are assigned

Write re-computation of cache logic for teams
  and then for object role permissions

Migrate new RBAC internals to ansible_base

Migrate tests to ansible_base

Implement solution for visible_roles

Expose URLs for DAB RBAC
2024-04-11 14:59:09 -04:00
Alan Rominger
1859a6ae69 Fix failure from DAB (#15102)
@AlanCoding said to do this 🚌
2024-04-11 17:10:11 +00:00
Chris Meyers
0645d342dd Implement optional url prefix the Django way
* Before, the optional url prefix feature required calling our
  versioning version of reverse(). This worked _ok_ until we added more
  and more urls from 3rd party apps. Those 3rd party apps do not call
  our reverse(), rightfully so.
* This implementation looks at the incoming request path. If it includes
  the special optional url prefix, then we register ALL the urls WITH
  the optional url prefix.
  If the incoming request path does NOT contain the optional url prefix,
  then we register ALL the urls WITHOUT the optional url prefix.
* Before this, we were registering BOTH sets of urls and then using
  reverse() plus the request as context to decide which url.
2024-04-10 16:03:09 -04:00
Chris Meyers
61ec03e540 Move named url init out of Middleware init
* Middleware classes can be instantiated multiple times in testing. To
  make this a non-issue, move the init code for named urls out of the
  middleware init and into the app init.
* This makes it easier to use other testing facilities, like
  LiveServerTestCase, without having to mock the named url middleware
  init.
2024-04-10 15:46:30 -04:00
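A hedged sketch of the pattern: one-time setup belongs in the app's `ready()` hook, which runs once per process, rather than in a middleware `__init__` that tests may re-instantiate. The helper import is hypothetical:

```
from django.apps import AppConfig

class MainConfig(AppConfig):
    name = "awx.main"

    def ready(self):
        # Runs once when the app registry is populated,
        # not once per middleware instance.
        from awx.main.named_urls import generate_named_urls  # hypothetical helper
        generate_named_urls()
```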
Shane McDonald
09f0a366bf Revert accidental line deletion
I made a mistake in https://github.com/ansible/awx/pull/15096. I realized afterwards that the deleted line was being consumed by the make target.
2024-04-10 12:19:07 -04:00
Shane McDonald
778961d31e Fix awxkit uploads when re-running promote workflow 2024-04-10 12:12:35 -04:00
Shane McDonald
f962c88df3 Allow for manually restarting promote workflow
The promote workflow recently failed. Since this was just a problem with our automation, it would be nice if we didn't have to do another release just to fix our tooling.
2024-04-10 12:00:30 -04:00
Hao Liu
8db3ffe719 Check galaxy collection with and without redirect 2024-04-10 11:26:42 -04:00
Alan Rominger
cc5d4dd119 Clean the postgres 15 volume (#15083) 2024-04-09 16:06:42 -04:00
Hao Liu
86204cf23b Publish amd64 and arm64 awx image on release (#15053)
* Stage multi-arch awx image

- change CI to use `make awx-kube-build` instead of build playbook
- update staging CI to build and push multiarch awx image
- update doc to use `make awx-kube-build` to build awx image
- remove build playbook (no longer used)
2024-04-09 09:50:09 -04:00
Chris Meyers
468949b899 Remove unneeded drf_reverse override
* `drf_reverse()` was introduced here 1a75b1836e
* There is a comment about monkey patching. I can't find the monkey patch it is referencing.
* AWX `drf_reverse()` is a copy-paste of https://github.com/encode/django-rest-framework/blob/master/rest_framework/reverse.py#L32
  * The only difference is DRF's version calls `preserve_builtin_query_params()`
    * `preserve_builtin_query_params()` only does something if `api_settings.URL_FORMAT_OVERRIDE` is defined.
      * We don't use `REST_FRAMEWORK.URL_FORMAT_OVERRIDE`
2024-04-08 16:14:11 -04:00
TVo
f1d9966224 Added docs for terraform credential/inventory source (#15004)
* Added docs for terraform credential/inventory source

* Updated screen captures for inventories and source to match wfjt example

* Added docs for terraform credential/inventory source

* Updated screen captures for inventories and source to match wfjt example

* Update docs/docsite/rst/userguide/inventories.rst

Co-authored-by: Aoki <lucasaoki@users.noreply.github.com>

* Revised per review feedback.

* Update docs/docsite/rst/userguide/inventories.rst

Co-authored-by: Helen Bailey <hakbailey@gmail.com>

---------

Co-authored-by: Aoki <lucasaoki@users.noreply.github.com>
Co-authored-by: Helen Bailey <hakbailey@gmail.com>
2024-04-05 09:57:27 -06:00
César Francisco San Nicolás Martínez
b022b50966 fix service-index url calling reverse method 2024-04-04 07:48:04 -04:00
Elijah DeLee
e2f4213839 Round out options url prefix edge cases 2024-04-04 07:48:04 -04:00
Chris Meyers
ae1235b223 Rename container hostname from awx_1 to awx-1
* Django and other webservers that care about proper hostnames don't
  like underscores in them.
2024-04-03 15:58:17 -04:00
Tom Page
c061f59f1c Add tags and skip_tags option to awx.awx.workflow_launch (#15011)
Signed-off-by: Tom Page <tpage@redhat.com>
2024-04-03 15:29:43 -04:00
Jeff Bradberry
3edaaebba2 Adjust the awx-manage script to make use of importlib (#15015)
* Adjust the awx-manage script to make use of importlib

removing the deprecation warning.

* Symlink awx-manage in docker-compose

No longer need to rebuild the docker-compose devel image to load changes to `tools/docker-compose/awx-manage` in the development environment

---------

Co-authored-by: Hao Liu <44379968+TheRealHaoLiu@users.noreply.github.com>
2024-04-02 17:20:05 -04:00
Hao Liu
7cdf1c7f96 Update DOCKER_COMPOSE command to docker compose (#15056)
* Update DOCKER_COMPOSE command

docker-compose will stop being supported soon and this is causing CI flakes; set the DOCKER_COMPOSE default to `docker compose`

* Give AWX network a static name
2024-04-02 15:13:14 -04:00
Hao Liu
d558204192 Make db password optional for wsrelay (#15046)
* Make db password optional for wsrelay

* Change DB setting copy to deepcopy

safer than copy()

Co-Authored-By: Jeff Bradberry <685957+jbradberry@users.noreply.github.com>

---------

Co-authored-by: Jeff Bradberry <685957+jbradberry@users.noreply.github.com>
2024-04-02 11:47:24 -04:00
Chris Meyers
d06ce8f911 Remove json formatter for job lifecycle
* We didn't really make use of json formatting across the app. Remove
  the special case json formatter. Instead, output all of the meta-data
  associated with a job lifecycle event every time. Before, we tried to
  only output this extra meta data when in DEBUG mode. It turns out this
  information is smaller and more useful than we thought,
  so always output it.
2024-04-02 11:39:34 -04:00
Alan Rominger
4b6f7e0ebe Add link to service-index URL 2024-03-29 10:07:15 +00:00
John Barker
370c567be1 Remove /17 magic number from Forum URL 2024-03-29 10:03:45 +00:00
John Barker
9be64f3de5 Improve social documentation release_process.md 2024-03-29 10:03:45 +00:00
Alan Rominger
30500e5a95 Re-parent DAB views from AWX base 2024-03-29 10:03:12 +00:00
David O Neill
bb323c5710 Loosen up body check on template
https://github.com/ansible/awx/issues/14985
https://github.com/ansible/awx/issues/13983
2024-03-29 10:02:18 +00:00
Chris Meyers
7571df49d5 Pass --exclude="list of exclude dirs like this"
* Previously, the params were passed without quotes and each directory
  was being interpreted as a separate command line flag.
* Added some structure around the error messages returned from
  receptorctl so we can more easily decide how to handle each case. For
  example, releasing the cleanup job from receptor doesn't absolutely
  need to succeed because we have a periodic job that does that. In
  fact, that is the thing that is making it fail .. but I digress.
2024-03-28 14:42:08 -04:00
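A small illustration of the quoting bug described above; the command name is a placeholder, and only `--exclude` comes from the commit:

```
import shlex

exclude_dirs = "/tmp/awx_1 /tmp/awx_2"

# Buggy: without quotes, each directory becomes its own argv token
print(shlex.split(f"cleanup --exclude={exclude_dirs}"))
# ['cleanup', '--exclude=/tmp/awx_1', '/tmp/awx_2']

# Fixed: quoting keeps the whole list as one flag value
print(shlex.split(f'cleanup --exclude="{exclude_dirs}"'))
# ['cleanup', '--exclude=/tmp/awx_1 /tmp/awx_2']
```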
Jeff Bradberry
1559c21033 Change awx.awx.application to output the OAuth2 client secret
if one was generated.
2024-03-28 10:17:46 -04:00
PabloHiro
d9b81731e9 Fix: broken reference to API url 2024-03-27 20:37:53 +01:00
Adam Miller
2034cca3a9 update playbooks to use fqcn
Signed-off-by: Adam Miller <admiller@redhat.com>
2024-03-27 15:13:43 -04:00
Chris Meyers
0b5e59d9cb Fix websocket relay. Set autocommit so conn.notifies() does not block forever (#15043)
Without autocommit conn.notifies() blocks forever
2024-03-27 15:11:17 -04:00
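A minimal sketch of why autocommit matters here, in psycopg 3 style (the DSN and channel name are placeholders): without autocommit the LISTEN sits in an open, uncommitted transaction and never takes effect, so the notification loop blocks forever.

```
import psycopg

conn = psycopg.connect("dbname=awx user=awx", autocommit=True)  # placeholder DSN
conn.execute("LISTEN wsrelay_events")  # illustrative channel name

for notify in conn.notifies():  # blocks forever without autocommit=True
    print(notify.channel, notify.payload)
```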
Alan Rominger
f48b2d1ae5 Add resource and ansible_id to serializers (#15020) 2024-03-26 22:37:15 -04:00
Dimitri Savineau
b44bb98c7e Dockerfile: Fix collectstatic command (#15035)
Recent changes in awx and/or django ansible base cause the django
collectstatic command to fail when using an empty settings file.
Instead, use the defaults settings file from controller via
DJANGO_SETTINGS_MODULE=awx.settings.defaults

[linux/amd64 builder 13/13] RUN AWX_SETTINGS_FILE=/dev/null
SKIP_SECRET_KEY_CHECK=yes SKIP_PG_VERSION_CHECK=yes
/var/lib/awx/venv/awx/bin/awx-manage collectstatic --noinput --clear
Traceback (most recent call last):
(...)
django.core.exceptions.ImproperlyConfigured: settings.DATABASES is improperly
configured. Please supply the ENGINE value. Check settings documentation for
more details.

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
2024-03-26 14:19:51 -04:00
Hao Liu
8cafdf0400 Fix wsrelay KeyboardInterrupt not respected (#15036)
- stop wsrelay on keyboard interruption
- restart wsrelay for any other failure reason
2024-03-26 17:29:15 +00:00
Hao Liu
3f566c8737 Fix wsrelay not retrying to establish db connection (#15031)
- run_wsrelay retries running wsrelay forever with a 10 second sleep
- wsrelay restarts via the `on_ws_heartbeat` task if the db connection goes away
2024-03-26 11:56:16 -04:00
Hao Liu
c8021a25bf Fix keycloak doc (#15024) 2024-03-25 15:01:49 -04:00
Matt Martz
934646a0f6 Address first_found skip bug (#15017)
* Address first_found skip bug

* Don't attempt installing project root requirements.yml as v2 collection format
2024-03-22 12:06:43 +01:00
Michael Abashian
9bb97dd658 Fix bug where extra variables were reset on schedule edit
Fix survey prompt presentation inconsistencies

Remove unnecessary conditional

This conditional always returned true.  See the following warning: This condition will always return 'true' since JavaScript compares objects by reference, not value.

Fix schedule edit tests
2024-03-20 10:30:10 -04:00
Hao Liu
7150f5edc6 Editable dependencies in docker compose development environment (#14979)
* Editable dependencies in docker compose development environment
2024-03-19 15:09:15 -04:00
Hao Liu
93da15c0ee Setting modification to address requests from UI_NEXT devs (#14996)
Modification to settings

- Add hidden to indicate to UI_NEXT to hide the field
- Add warning_text to indicate to UI_NEXT to display the warning when a specific setting is modified
- Address some non-required fields being marked as required
2024-03-19 15:08:41 -04:00
Hao Liu
ab593bda45 Add setting for configuring optional URL prefix for /api (#14939)
* Add setting for configuring optional URL prefix for /api

Add OPTIONAL_API_URLPATTERN_PREFIX setting

examples:
- if set to `''` (empty string) API pattern will be `/api`
- if set to 'controller' API pattern will be `/api` AND `/api/controller`
2024-03-19 15:56:33 +00:00
TVo
065bd3ae2a Backported from product-docs PR #2001 (misc doc cleanup) (#14980)
* Backported from product-docs PR #2001 (misc doc cleanup)

* Update docs/docsite/rst/administration/awx-manage.rst
2024-03-15 12:20:02 -06:00
Hao Liu
8ff7260bc6 Add dump_auth_config management cmd (for SAML and LDAP) (#14947)
* Add dump_auth_config management cmd

- Dump SAML config from AWX to DAB authenticator config in json format

* Add dumping of LDAP settings

* add test for command

* Fix is_enabled

* fix command name typo

Co-authored-by: Hao Liu <44379968+TheRealHaoLiu@users.noreply.github.com>

* add fields to config, add name to data

* break out IDP values

* change test fields and value comparison

* edit help text, reformat settings

---------

Co-authored-by: jessicamack <jmack@redhat.com>
2024-03-15 13:47:30 -04:00
Hao Liu
a635445082 Fix failing bulk launch job due to create partition race
https://github.com/ansible/awx/pull/14910/files

introduced a bug where we no longer accept the right exceptions

when 2 jobs launch at the same time and try to create the jobevent table partition, 1 of the jobs will fail
2024-03-15 10:10:38 -04:00
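A hedged sketch of tolerating that race with Django: when two jobs race to CREATE the same partition, the loser should treat "already exists" as benign. Table and partition names are illustrative:

```
from django.db import connection, ProgrammingError

def create_partition(parent, name, start, end):
    try:
        with connection.cursor() as cursor:
            cursor.execute(
                f"CREATE TABLE {name} PARTITION OF {parent} "
                "FOR VALUES FROM (%s) TO (%s)",
                [start, end],
            )
    except ProgrammingError as exc:
        # the other job won the race and created it first: fine
        if "already exists" not in str(exc):
            raise
```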
Hao Liu
949e7efab1 Fix wsrelay hanging after db outage
TCP keepalive settings were moved out of settings.DATABASES into settings.LISTENER_DATABASES and are no longer being respected by wsrelay
2024-03-14 15:51:30 -04:00
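For reference, a sketch of what listener-specific keepalive settings might look like; the keys are standard libpq keepalive parameters, but the structure and values here are illustrative:

```
LISTENER_DATABASES = {
    "default": {
        "OPTIONS": {
            "keepalives": 1,           # enable TCP keepalives
            "keepalives_idle": 5,      # seconds of idle before first probe
            "keepalives_interval": 5,  # seconds between probes
            "keepalives_count": 5,     # failed probes before dropping the connection
        }
    }
}
```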
Hao Liu
615f09226f Fix awx-manage run_wsrelay --status (#14997)
by not starting the metrics server if --status is passed in
2024-03-14 18:55:05 +00:00
Dave
d903c524f5 Fix for 14924 - Unformatted help text toast message (#14990)
Fix for 14924 - Unformatted help text is popped out when peers for instances are changed

Co-authored-by: David O Neill <daoneill@redhat.com>
2024-03-14 13:24:53 -04:00
Cesar Francisco San Nicolas Martinez
393d9c39c6 Mismatch dependencies version (#14986)
* Fixed mismatch between setuptools version in the makefile and requirements file

* Fix mismatch of versions in makefile and requirements

* Added maturin license
2024-03-14 13:32:56 +01:00
Hao Liu
dfab342bb4 Skip replicas test for awx-operator (#14987)
speed up CI; also, AWX code changes won't affect that test
2024-03-13 18:01:02 +00:00
Dave
12843eccf7 AAP-13369 Python 3.9 -> 3.11 upgrade (#14771)
* Python 3.9 -> 3.11 upgrade

* Test: updating azure-keyvault to 4.2.0

* Revert "Test: updating azure-keyvault to 4.2.0"

This reverts commit cf0b83699442e0c0de4a1152d4af8543a5e05b88.

* Test: updating azure-keyvault to latest and adding azure-identity

* Fix licenses

* Adding new licenses

* Revert "Fix licenses"

This reverts commit da3876911ef5ebbe7a8adbddd336ced3039b6228.

* Fixing dependencies

* Test: updating azure-keyvault to 4.2.0

* Fix licenses

* Revert "Fix licenses"

This reverts commit da3876911ef5ebbe7a8adbddd336ced3039b6228.

* Fixing dependencies

---------

Co-authored-by: César Francisco San Nicolás Martínez <csannico@redhat.com>
2024-03-13 14:41:40 +01:00
Hao Liu
dd9160135d Prune dangling images periodically (#14957)
Prune dangling images periodically

pairs with https://github.com/ansible/ansible-runner/pull/1342

this fixes the problem of us forcefully removing images when a settings change swaps the EE image that's being used in a job, causing the job to fail
2024-03-12 10:57:57 -04:00
Chris Meyers
ad96a92fa7 Align Origin and Host header (#14970)
* Align Origin and Host header

* Before this change the Host: header was runserver. It seems to be set by
  the nginx upstream flow.
* After this change we explicitly set the Host: header
* More about CSRF checks ...
  CSRF checks that Origin == Host. Think about how the browser works.

  <browser goes to awx.com>
  "I'm executing javascript that I downloaded from awx.com (ORIGIN) and
  I'm making an XHR POST request to awx.com (HOST)"
  Server verifies; Host: header == Origin: header; OK!

  vs. the malicious case.

  <hacker injects javascript code into google.com>
  <browser goes to google.com>
  "I'm executing javascript that I downloaded from google.com (ORIGIN)
  and I'm making an XHR POST request to awx.com (HOST)"
  Server verifies; Host: header != Origin: header; NOT OK!

* Update awx/settings/development.py

---------

Co-authored-by: Hao Liu <44379968+TheRealHaoLiu@users.noreply.github.com>
2024-03-11 17:06:09 -04:00
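A toy version of the check narrated above (real CSRF verification is stricter; this only shows the Origin-vs-Host comparison):

```
def csrf_origin_ok(headers: dict) -> bool:
    origin = headers.get("Origin", "")
    host = headers.get("Host", "")
    # e.g. Origin "https://awx.com" must match Host "awx.com"
    return origin.split("://", 1)[-1] == host

assert csrf_origin_ok({"Origin": "https://awx.com", "Host": "awx.com"})
assert not csrf_origin_ok({"Origin": "https://google.com", "Host": "awx.com"})
```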
David O Neill
ca8085fe7e English string validation to error code validation 2024-03-11 20:07:16 +00:00
Hao Liu
b076cb00a9 Revert "Implement project pulling from Azure DevOps using Service Pri… (#14977)
Revert "Implement project pulling from Azure DevOps using Service Principals (#14628)"

This reverts commit 2e2cd7f2de.
2024-03-11 14:05:24 +00:00
John Westcott IV
ee9eac15dc Upgrade to postgres:15 (#14230)
* Upgrade to postgres:15
* Changed postgres:15 to quay.io/sclorg/postgresql-15-c9s
2024-03-07 16:27:03 -05:00
Hao Liu
3f2f7b75a6 [developer productivity improvement] Running awx components in vscode debugger (#14942)
Enable VSCode debugger integration when attaching VSCode to the AWX docker-compose development environment container

- add debugpy launch target in `.vscode/launch.json` to enable launching awx processes with debugpy
- add vscode tasks in `.vscode/tasks.json` to facilitate shutting down corresponding supervisord managed processes while launching process with debugpy
- modify nginx conf to add django runserver as fallback to uwsgi (enable launching API server via debugpy)
2024-03-07 19:31:50 +00:00
Dave
b71645f3b1 AAP-12273 remove incorrect sentence conjugation (#14946)
AAP-12273 remove incorrect sentence conjugation

Co-authored-by: David O Neill <daoneill@redhat.com>
2024-03-07 14:04:11 -05:00
Hao Liu
eb300252b8 Fix awx-autoreload in dev environment (#14968)
Fix awx-autoreload; a recent change made autoreload no longer take the command parameter
2024-03-07 16:33:23 +00:00
Patrick Uiterwijk
2e2cd7f2de Implement project pulling from Azure DevOps using Service Principals (#14628)
* Credential Lookup with multiple types
Allow looking up a credential with one of multiple type IDs.

* Allow Azure cred for SCM
Allow selecting an Azure Resource Manager credential for Git-based SCMs.
This is in order to enable using Azure Service Principals for project updates.

* Implement Azure Service Principal Git
This adds support for using an Azure Service Principal for project updates.

---------

Signed-off-by: Patrick Uiterwijk <patrick@puiterwijk.org>
2024-03-07 10:07:03 -05:00
Hao Liu
727278aaa3 Add pip>=21.3 to dev requirement to install django-ansible-base in editable mode (#14961)
Add pip>=21.3 to dev requirements, required for installing django-ansible-base in editable mode

https://peps.python.org/pep-0660/

PEP 660 – Editable installs for pyproject.toml based builds (wheel based)
2024-03-06 21:28:41 -05:00
Michael Abashian
81825ab755 Bump axios UI dep to 1.6.z (#14954)
* Bump axios UI dep to 1.6.z

* Add proxy-from-env license
2024-03-06 16:03:49 -05:00
Helen Bailey
7f2a1b6b03 Add terraform state inventory source (#14840)
* Add terraform state inventory source
* Update inventory source plugin test
Signed-off-by: Helen Bailey <hebailey@redhat.com>
2024-03-06 20:27:52 +00:00
Hao Liu
1b56d94d30 In development environment, do not auto-reload explicitly STOPPED processes (#14958)
Do not auto-reload explicitly STOPPED processes

In the development/debug workflow we sometimes explicitly STOP processes; this makes sure auto-reload does not start them back up
2024-03-06 20:22:44 +00:00
Shane McDonald
e1e32c971c Allow for manually starting workflow to build devel images (#14955) 2024-03-06 17:53:05 +00:00
Alan Rominger
a4a2fabc01 Add test for utils method is_testing 2024-03-05 11:38:14 +00:00
Hao Liu
b7b7bfa520 Fix tests that fail on rerun due to expecting exact IDs (#14943)
Fix tests that fail on rerun

due to expecting exact IDs
2024-03-01 12:37:17 -05:00
jessicamack
887604317e Integrate resources API in Controller (#14896)
* add resources api to controller

* update setting

models are not the source of truth in AWX

* Force creation of ServiceID object in tests

* fix typo

* settings fix for CI

---------

Co-authored-by: Alan Rominger <arominge@redhat.com>
2024-03-01 11:18:35 -05:00
Chris Meyers
d35d8b6ed7 fails when run with vscode debugger
Tried to dig into why we ever needed this and could not find the answer. We removed it, ran all the tests, and they passed, so we assume it's no longer needed.
2024-02-29 15:22:41 -05:00
Hao Liu
ec28eff7f7 Convert swagger release fixture to env var (#14940)
`pytest awx/main/tests/docs --release=$(VERSION_TARGET)`
where the required --release flag breaks test discovery and running in vscode (from within the container)
2024-02-29 14:55:02 -05:00
Cesar Francisco San Nicolas Martinez
a5d17539c6 Removing Podman to use Docker again in the collection ci (#14938)
Removing Podman var to use Docker again in the collection ci
2024-02-28 14:56:49 +01:00
Thanhnguyet Vo
a49d894cf1 Added missing AWS secret management lookup creds. 2024-02-28 04:16:59 +00:00
Chris Meyers
b3466d4449 Make JWT the first auth class and default
* No harm in adding it to the list. If a JWT auth header is provided,
  then process it (valid or not). If a JWT is not provided, move on to
  the next auth.
2024-02-27 15:09:16 -05:00
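A sketch of what "first auth class" means in DRF settings terms: classes are tried in order, and one that returns None hands off to the next. The JWT class path below is an assumption, not necessarily AWX's:

```
REST_FRAMEWORK = {
    "DEFAULT_AUTHENTICATION_CLASSES": [
        "ansible_base.jwt_consumer.common.auth.JWTAuthentication",  # assumed path
        "rest_framework.authentication.SessionAuthentication",
        "rest_framework.authentication.BasicAuthentication",
    ],
}
```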
Hao Liu
237adc6150 Publish multi-arch versioned awx-ee
dependent on https://github.com/ansible/awx-ee/pull/235
2024-02-27 08:21:51 +00:00
Hao Liu
09b028ee3c Publish multi-arch manifest of awx (#14929)
Promote multi-arch awx manifest
2024-02-26 17:05:57 -05:00
Hao Liu
fb83bfbc31 Fix ui_next banner (#14928) 2024-02-26 14:20:42 -05:00
Hao Liu
88e406e121 Fix CVEs and bump receptorctl (#14925)
CVE-2023-47627
CVE-2023-49083
CVE-2023-41040
CVE-2024-22195
CVE-2023-46137
2024-02-26 15:48:38 +00:00
Thanhnguyet Vo
59d0bcc63f Fixed some misc errors in illustrations and header formatting 2024-02-22 13:04:37 +00:00
Hao Liu
3fb3125bc3 Send QUIT to worker before dying (#14913)
Fix deadlock scenario where the dispatcher child process is stuck in a read-from-queue loop after the dispatcher parent process has decided to quit

Co-authored-by: Alan Rominger <arominge@redhat.com>
2024-02-21 16:08:43 -05:00
Alan Rominger
d70c6b9474 Reset another to test-playbooks 2024-02-21 15:11:12 +00:00
Alan Rominger
5549516a37 Reset these tests back to test-playbooks 2024-02-21 15:11:12 +00:00
Alan Rominger
14ac91a8a2 Swap repos test fix 2024-02-21 15:11:12 +00:00
Alan Rominger
d5753818a0 Stop using the bulky test-playbooks in tests where possible 2024-02-21 15:11:12 +00:00
Chad Ferman
33010a2e02 added # -*-coding:utf-8-*- to test if this fixes issues with users being unable to have Japanese, Chinese, and Korean characters in email messages 2024-02-21 14:50:46 +00:00
Alan Rominger
14454cc670 Fix playbook errors found by debugging 2024-02-21 14:36:41 +00:00
Alan Rominger
7ab2bca16e Fix missing var name change 2024-02-21 14:36:41 +00:00
Alan Rominger
f0f655f2c3 Style consistency for task 'when' 2024-02-21 14:36:41 +00:00
Brian Coca
4286d411a7 fix project_update role/collection install
- avoid looping
  - avoid using multiple files, only one should be provided and processed per type
  - use first_found and variables to locate existing file
  - skip if no file exists
2024-02-21 14:36:41 +00:00
James Talton
06ad32ed8e Enhance the dashboard job summary endpoint to contain canceled and error job counts
Signed-off-by: James Talton <jtalton@redhat.com>
2024-02-21 14:12:59 +00:00
Alan Rominger
1ebff23232 Do not rely on unreliable dir output and use exists query 2024-02-21 13:43:54 +00:00
Alan Rominger
700de14c76 Make the migration middleware faster, second attempt 2024-02-21 13:43:54 +00:00
Thanhnguyet Vo
8605e339df Deleted duplicate graphics that were converted to drawio. 2024-02-21 13:10:41 +00:00
Thanhnguyet Vo
e50954ce40 Fixed graphics, illustrations, tables, examples, sizing 2024-02-21 13:10:41 +00:00
Hao Liu
7caca60308 Multi-arch build for AWX images in ghcr.io (#14899)
build amd64 and ARM image for
- awx
- awx_devel
- awx_kube_devel
2024-02-20 17:17:31 -05:00
Cesar Francisco San Nicolas Martinez
f4e13af056 Add support for terraform credentials in awxkit (#14902) 2024-02-20 13:42:25 +00:00
Jérémy Lal
decdb56288 Fix mistake in french message 2024-02-20 12:33:59 +00:00
Fdubois97
bcd4c2e8ef Fix: Ajout de traduction sur la langue FR 2024-02-20 12:32:18 +00:00
Thaïs DOUCET
d663066ac5 Fix typo in french message
Typo in "Launch Template" french message

Signed-off-by: Thaïs DOUCET <tdoucet@amiltone.com>
2024-02-20 12:21:31 +00:00
Sasa Jovicic
1ceebb275c Fix: login rerouting only works on the user's current tab (PR:#14399)
Signed-off-by: Sasa Jovicic <sjovicic@anexia-it.com>
2024-02-20 12:16:13 +00:00
César Francisco San Nicolás Martínez
f78ba282a6 Ability to use awxkit with custom websocket urls
Mikhail Yohman
81d88df757 Add YAML tab for Job Output event modal. 2024-02-19 16:45:46 +00:00
Hao Liu
0bdb01a9e9 Allow dev image to build on fork (#14898)
* Allow dev image to build on fork

Fix uppercase repo owner error in CI
example
```
Run docker pull ghcr.io/TheRealHaoLiu/awx_devel:devel
  docker pull ghcr.io/TheRealHaoLiu/awx_devel:devel
  shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0}
  env:
    LC_ALL: C.UTF-8
    CI_GITHUB_TOKEN: ***
    DEV_DOCKER_OWNER: TheRealHaoLiu
    COMPOSE_TAG: devel
    py_version: 3
invalid reference format: repository name must be lowercase
```

---------

Co-authored-by: Rick Elrod <rick@elrod.me>
2024-02-19 16:19:59 +00:00
Alan Rominger
cd91fbf59f Label any changes to requirements folder with dependencies label 2024-02-19 15:56:53 +00:00
Michael Abashian
f240e640e5 Run prettier 2024-02-19 14:48:36 +00:00
ivarmu
46f489185e fixes problems with workflow nodes information section 2024-02-19 14:48:36 +00:00
Thanhnguyet Vo
dbb80fb7e3 Updated release notes so they don't need to be revised so often. 2024-02-19 12:18:57 +00:00
Seth Foster
cb3d357ce1 Disable install_bundle endpoint for ingress node
As we do for control nodes, disable the
install_bundle endpoint for ingress nodes.

This can be done by checking if the instance's
managed field is True.

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2024-02-19 11:00:06 +00:00
Chris Meyers
dfa4db9266 Add tests for websocket endpoints
* authorized/not authorized tests for wsrelay endpoint
* not authorized test for web browser websockets
* skeleton of a test for authorized web browser websockets
2024-02-17 18:37:53 -05:00
Chris Meyers
6906a88dc9 Add pytest-asyncio to test channels websockets 2024-02-17 18:37:53 -05:00
Alan Rominger
1f7be9258c Remove tower_legacy module_utils that appears unused (#14421)
* Remove tower_legacy module that appears unused

* Update license details
2024-02-16 16:02:09 -05:00
Jake Jackson
dcce024424 Update release doc to check for awxkit tar files. (#14892)
add a step to release process to confirm tar file is published
2024-02-16 18:51:58 +00:00
Jeff Bradberry
79d7179c72 Fix the persistent breakage when cleaning up branches
The github workflow that we have set up for branch deletion doesn't work:

- the `on: delete` event does not support the `branches:` filter
- the `mode` flag for the aws_s3 module does not have `delete` as one
  of the options.  The proper option appears to be `delobj`.
2024-02-15 15:16:18 -05:00
Alan Rominger
4d80f886e0 Revert "Drop cython dep" (#14884)
* Revert "Remove cython lib"

This reverts commit 46f816e7a4.

* Revert "WIP consider droping cython dep"

This reverts commit 54b32c10f0.

* Update Cython comment
2024-02-15 11:58:17 -05:00
TVo
5179333185 Added mesh ingress content to instances chapter. (#14854)
* Added mesh ingress content to instances chapter.

* Changes to incorp feedback from @TheRealHaoLiu.

* Added mesh ingress content to instances chapter.

* Line 75

Co-authored-by: Seth Foster <fosterseth@users.noreply.github.com>

* Line 117

Co-authored-by: Seth Foster <fosterseth@users.noreply.github.com>

* Line 126

Co-authored-by: Seth Foster <fosterseth@users.noreply.github.com>

* Wording changes and update graphics

---------

Co-authored-by: Seth Foster <fosterseth@users.noreply.github.com>
2024-02-15 07:19:52 -07:00
Alan Rominger
362e11aaf2 Respect old downtime setting name if user has already set it 2024-02-15 12:34:24 +00:00
Hao Liu
decff01fa4 Update command for sos-report
wsbroadcast has been renamed to wsrelay
2024-02-15 12:32:05 +00:00
Daniel Gonçalves
a14cc8199d Add python 3.12 dependency (#14869)
Signed-off-by: Daniel Gonçalves <daniel.gonc@lves.fr>
2024-02-14 15:12:51 -05:00
Chris Meyers
b6436826f6 Support websocket per-endpoint auth
* Channels doesn't really give you an interface to support per-endpoint
  auth ... so this adds one.
* The web browser and node <--> node communication have different auth
  needs.
2024-02-14 15:11:52 -05:00
Chris Meyers
2109b5039e Fixes "Task was destroyed but it is pending"
* asyncio.gather expects *tasks .. gotta unpack the list
2024-02-14 15:11:52 -05:00
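The bug class in miniature: `asyncio.gather` takes `*aws`, so the list must be unpacked; passing the bare list raises and leaves the created tasks pending, hence the warning in the commit title.

```
import asyncio

async def work(n):
    await asyncio.sleep(0.01)
    return n

async def main():
    tasks = [asyncio.create_task(work(i)) for i in range(3)]
    # buggy: await asyncio.gather(tasks)   # passes one list object, not the tasks
    results = await asyncio.gather(*tasks)  # correct: unpack the list
    print(results)

asyncio.run(main())
```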
root
b6f9b73418 adding iputils 2024-02-14 17:01:15 +00:00
Christian M. Adams
40a8a3cb2f Add dockerx make target for building awx for arm64
Signed-off-by: Christian M. Adams <chadams@redhat.com>
2024-02-14 16:01:36 +00:00
David O Neill
19f80c0a26 GH13983 - Add additional check for bad templates 2024-02-14 15:49:26 +00:00
Christian Adams
5d1bb2125e Update .github/workflows/stage.yml
Co-authored-by: Hao Liu <44379968+TheRealHaoLiu@users.noreply.github.com>
2024-02-14 15:44:21 +00:00
Christian Adams
99c512bcef Update .github/workflows/stage.yml
Co-authored-by: Hao Liu <44379968+TheRealHaoLiu@users.noreply.github.com>
2024-02-14 15:44:21 +00:00
Christian M. Adams
ed0329f5db Build multi-arch awx-operator images when releasing 2024-02-14 15:44:21 +00:00
Chris Meyers
dd53345397 shhhhhhhhhhhhhhh
Hopefully silence some setuptools warnings
2024-02-14 15:38:31 +00:00
Chris Meyers
f66cde51d7 More locked down websocket path
* Previously, the nginx location would match on /foo/websocket... or
  /foo/api/websocket... Now, we require these two paths to start at the
  root i.e. <host>/websocket/... /api/websocket/...
* Note: We now also require an ending / and do NOT support
  <host>/websocket_foobar but DO support <host>/websocket/foobar. This
  was always the intended behavior. We want to keep
  <host>/api/websocket/... "open" and routing to daphne in case we want
  to add more websocket urls in the future.
2024-02-14 13:50:51 +00:00
thedoubl3j
d1c31687fc remove the ldap volume when cleaning all volumes 2024-02-14 12:33:42 +00:00
Jeff Bradberry
38424487f1 Unbreak the pip-compile command when multiple files are passed in (#14875) 2024-02-13 15:59:04 -05:00
Hao Liu
b0565e9937 Switch to docker_compose_v2 in tools playbook (#14872)
Switch to docker_compose_v2

Fix
```
"Configuration error - kwargs_from_env() got an unexpected keyword argument 'ssl_version'"}
```
2024-02-13 13:05:33 -05:00
Hao Liu
44d85b589c Retries on vault on seal (#14873)
Sometimes we tried to unseal when vault was not ready yet
2024-02-13 13:05:23 -05:00
Alan Rominger
46f816e7a4 Remove cython lib 2024-02-13 14:45:28 +00:00
Alan Rominger
54b32c10f0 WIP consider droping cython dep 2024-02-13 14:45:28 +00:00
Alan Rominger
20202054cc Use lowercase password 2024-02-13 14:36:39 +00:00
Alan Rominger
e84e2962d0 Support DB configs where PASSWORD is not used 2024-02-13 14:36:39 +00:00
Chris Meyers
2259047527 Websockets now use rest_framework configured auth
* Always support cookies, session, and also allow rest_framework
  configured auth methods over the browser websocket.
* The node -> node websocket auth remains locked down and unchanged
2024-02-13 12:05:25 +00:00
Chris Meyers
f429ef6ca7 Allow connecting to websockets via api/websocket/
* Before, we just allowed websockets on <host>/websocket/. With this
  change, they can now come from <host>/api/websocket/
2024-02-13 12:02:44 +00:00
Hao Liu
4b637c1319 gitignore pyenv python-version file
pyenv local can be used to set a directory-specific python version via the `.python-version` file in the directory

Signed-off-by: Hao Liu <44379968+TheRealHaoLiu@users.noreply.github.com>
2024-02-13 12:00:57 +00:00
Alan Rominger
4c41f6b018 Ability to use updater script to pin dev requirements (#14644)
* Add a dev option for updater script to pin CI reqs

* Avoid removing git links for dev requirements

* Add dev to primary options

* Fix up sanitize git switch
2024-02-12 11:57:59 -05:00
Jesse Wattenbarger
3ae72219b4 Change parsing of docker info in dev build
This is a non-functional change. The way os_info is populated with docker info
and grep 'Operating System' breaks on podman and likely in other places. This
makes it work on both podman and docker, and it will continue to return the
exact same strings everywhere else.
2024-02-12 16:40:48 +00:00
jessicamack
402c29dc52 Switch mailing list to forum (#14600)
* Switch mailing list to forum

* add link to community

Co-authored-by: Hao Liu <44379968+TheRealHaoLiu@users.noreply.github.com>

* use correct channel name

Co-authored-by: Hao Liu <44379968+TheRealHaoLiu@users.noreply.github.com>

---------

Co-authored-by: Hao Liu <44379968+TheRealHaoLiu@users.noreply.github.com>
2024-02-12 15:51:31 +00:00
Alan Rominger
8eb4a9a2a0 Update location of logstash build context (#14676) 2024-02-12 15:49:29 +00:00
Jeff Bradberry
36f3b46726 Avoid using SmartFilter during migrations (#14786)
Our migrations that touch roles tend to bring in our real models via
migration_utils.set_current_apps_for_migrations, and that can have
some undesirable side-effects.
2024-02-12 15:48:58 +00:00
Bikouo Aubin
55c6a319dc Add new credential type to support Terraform backend configuration (#14828)
* Add new credential type to support configuration of Terraform Backend

* Fix unit tests
2024-02-12 15:47:24 +00:00
Dave
56b6a07f6e Remove json serialization for notify validation (#14847)
* Remove json serialization for notify validation

* Update serializers.py
2024-02-12 15:43:43 +00:00
Jake Jackson
519fd22bec Add ldap support to vault container in docker dev environment (#14777)
* add ldap_auth mount and configure it

* added in key engines, userpass auth method, still needs testing

* add policies and fix ldap_user

* start awx automation for vault demo and move ldap

* update docs with new flags/new credentials
2024-02-09 15:19:17 -05:00
TVo
2e5306ae8e Added LDAP support for HashiCorp Vault lookup credential (#14833)
* Added LDAP support for HashiCorp Vault lookup credential

* Added LDAP support for HashiCorp Vault lookup credential

* Replaced graphics and updated missing fields.

* Added LDAP support for HashiCorp Vault lookup credential

* Replaced graphics and updated missing fields.

* Incorporated review feedback from @thedoubl3j and @djyasin.
2024-02-09 10:17:14 -07:00
Cesar Francisco San Nicolas Martinez
068e6acbd5 Fix the way we are passing the awxkit base path to resources (#14862) 2024-02-09 15:31:52 +01:00
Seth Foster
f9a23a5645 Remove enable button for hop nodes (#14861)
Enabled/disabled does not apply to hop nodes,
since hop nodes don't run jobs.

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2024-02-08 22:10:10 -05:00
Seth Foster
40150a2be8 Fix UI peers_from_control_nodes (#14858)
* Fix UI peers_from_control_nodes

Fixes bug where peers_from_control_node was
greyed out in UI.

Additional changes:
- Make edit instance button only show for instances
with managed = False
- Make remove instance button only show for instances
with managed = False
- InstanceList selectable only for instances with
managed = False

---------

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2024-02-08 14:13:11 -05:00
TVo
b79aa5b1ed Removed erroneous line for basic auth 2024-02-08 12:45:47 -05:00
Jeff Bradberry
b3aeb962ce Fix the test_export_system_auditor collection test 2024-02-07 15:55:19 -05:00
Jeff Bradberry
2300b8fddf Fix linting problem 2024-02-07 15:55:19 -05:00
Lorenzo Tanganelli
3a3284b5df rework with check if POST exists 2024-02-07 15:55:19 -05:00
Lorenzo Tanganelli
2359004cc1 fix description extraction on export
Lorenzo Tanganelli
694d7e98e7 leave $encrypted$ on export
add removal of $encrypted$ from import when the object does not exist
2024-02-07 14:29:30 -05:00
Julen Landa Alustiza
8c9c02c975 awxkit: allow to modify api base url (#14835)
Signed-off-by: Julen Landa Alustiza <jlanda@redhat.com>
2024-02-07 12:26:42 +01:00
Chris Meyers
8a902debd5 Per-service metrics http server
* Organize metrics into their respective service
* Serve per-service metrics on a per-service http server
* Increase prometheus client usage over our custom metrics fields
2024-02-05 15:17:24 -05:00
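A minimal sketch of the per-service pattern with prometheus_client (the port and metric names are illustrative): each service keeps its own registry and serves it on its own port.

```
from prometheus_client import CollectorRegistry, Counter, start_http_server

registry = CollectorRegistry()  # service-specific registry, not the global default
events = Counter("wsrelay_events_total", "Events relayed", registry=registry)

start_http_server(8013, registry=registry)  # one metrics port per service
events.inc()
# (a real service would keep running and incrementing here)
```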
Seth Foster
6dcaa09dfb UI rename Endpoints to Listener Addresses
Listener Addresses is a better name to
emphasize these are routable addresses to
reach a listener service on the node.

Also removed expand toggle on the listener
addresses list items, as the expanded mode
had no additional information.

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2024-02-02 10:37:41 -05:00
Seth Foster
21fd6af0f9 InstanceLink unique constraint source and target
Prevent creating InstanceLinks with duplicate
source and target pairings.

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2024-02-02 10:37:41 -05:00
Seth Foster
eeae1d59d4 Disable health check button if managed
Also, update ui screen tests to expect
injecting "listener_port: null" if
listener_port is empty

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2024-02-02 10:37:41 -05:00
Seth Foster
a252d0ae33 Fix UI lint by running npm prettier
Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2024-02-02 10:37:41 -05:00
Seth Foster
48971411cc Protocol blank if no canonical address
Make protocol be blank on instance if there
is no canonical address for this instance.

It was defaulting to "tcp" before.

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2024-02-02 10:37:41 -05:00
Seth Foster
083c05f12a Prevent duplicating instance links
In receptor address post-save method:
- Fixed detecting if address was missing
a link from control nodes
- Use InstanceLink create_or_update to prevent
adding duplicate InstanceLink source and target
peers

In instance serializer create_or_update,
delete receptor addresses first before doing
instance create or update. This ensures that we don't
trigger unnecessary post-save methods that might
attempt to manipulate receptor addresses that
will just be removed later.

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2024-02-02 10:37:41 -05:00
Seth Foster
b558397b67 Remove redundant tests
test_listener_port
test_peers_from_control_nodes
test_peers_from_control_nodes_without_listener_port

are covered in the following tests:

test_no_op
test_creates_canonical_address
test_deletes_canonical_address
test_updates_canonical_address
test_canonical_address_validation_error

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2024-02-02 10:37:41 -05:00
Seth Foster
904c6001e9 If managed, cannot modify peers_from_control_nodes
Adds validation to prevent changing
peers_from_control_nodes if instance managed=True

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2024-02-02 10:37:41 -05:00
Seth Foster
818e11dfdc Test inspect_established_receptor_connections
Add functional test case for inspecting
established receptor connections.

InstanceLink starts in ADDING state, and should
move to ESTABLISHED state if the connection
is detected in the receptor status output.

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2024-02-02 10:37:41 -05:00
Jeff Bradberry
7fc13a0569 Write tests around the two special instance serializer fields
and all of the cases that they might be in.
2024-02-02 10:37:41 -05:00
Jeff Bradberry
92c693f14e Break out peer validation into its own method 2024-02-02 10:37:41 -05:00
Jeff Bradberry
f2417f0ed2 Make the peer validation more compact
and reuse information.
2024-02-02 10:37:41 -05:00
Jeff Bradberry
8f22188116 Use a select_related to build the peers queryset in the install bundle
Since the relationship is ReceptorAddress -> Instance,
prefetch_related isn't necessary.
2024-02-02 10:37:41 -05:00
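A hedged sketch of the queryset in question; the import path and field names are assumptions based on the surrounding commits. `select_related` joins the forward foreign key in one query, where `prefetch_related` would issue a second:

```
from awx.main.models import ReceptorAddress  # assumed import path

peers = ReceptorAddress.objects.select_related("instance")
for addr in peers:
    # the joined instance costs no extra query per row
    print(addr.instance.hostname, addr.port)
```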
Jeff Bradberry
05502c0af8 Placeholder FIXMEs for things of concern 2024-02-02 10:37:41 -05:00
Jeff Bradberry
957ce59bf7 Use Counter to find duplicate peer relationships
Gives a bit more readability.
2024-02-02 10:37:41 -05:00
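The Counter idiom in miniature (the data shape is illustrative):

```
from collections import Counter

links = [("exec1", "hop1"), ("exec2", "hop1"), ("exec1", "hop1")]
dupes = [pair for pair, n in Counter(links).items() if n > 1]
print(dupes)  # [('exec1', 'hop1')]
```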
Jeff Bradberry
cc4cc37d46 'managed' is a read-only field on InstanceSerializer
so we don't need complex logic to compare an incoming value to the existing one.
2024-02-02 10:37:41 -05:00
Jeff Bradberry
1e254c804c The listener port cannot be disabled when setting peers_from_control_nodes
If the port is explicitly set to null (causing any ReceptorAddress to
be deleted), then that's a validation error.

If the port is left off but a ReceptorAddress doesn't already exist,
we should not infer a port number and that is also a validation error.
2024-02-02 10:37:41 -05:00
Seth Foster
1b44bebed3 Peers_from_control_nodes requires listener port
Adds validation and a unit test to ensure:

- peers_from_control_nodes=True should fail if
listener_port is not set
- peers_from_control_nodes=False should be NOOP if
listener_port is not set

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2024-02-02 10:37:41 -05:00
Seth Foster
a4cf55bdba Require receptor collection 2.0.3
This release supports ingress hop nodes, and
is backwards compatible with previous AWX
versions.

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2024-02-02 10:37:41 -05:00
Seth Foster
c333d0e82f Prevent modifying peers on managed node
Add validation to prevent any managed node
from modifying "peers" through the API

Peering from these nodes should be handled
by setting peers_from_control_nodes only.

Managed nodes are control nodes and
ingress hop nodes.

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2024-02-02 10:37:41 -05:00
David O Neill
b093c89a84 Form hardening and node type exclusion
Disabled add/edit/remove for managed nodes
Tightened validation on Peers from control nodes
2024-02-02 10:37:41 -05:00
Jeff Bradberry
f98493aa61 Support wss as ws-listener in the Receptor config 2024-02-02 10:37:41 -05:00
Hao Liu
c36d2b0485 Update requirements.yml 2024-02-02 10:37:41 -05:00
Jeff Bradberry
8ddb604bf1 Template the listener protocol into the receptor install bundle (#14792)
Previously we were hard-coding tcp, now we need to also support ws/wss.
2024-02-02 10:37:41 -05:00
Seth Foster
cd9dd43be7 Make InstanceLink target non-nullable
InstanceLink target should not be null.

Should be safe to set to null=False, because we have
a custom RunPython method to explicitly set
target to a proper key.

Also, add new test to test_migrations
which ensures data integrity after migrating
the receptor address model changes.

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2024-02-02 10:37:41 -05:00
Jeff Bradberry
82323390a7 Reconstitute migration file
Do the things as a proper 3-step process -- add new stuff, copy the data, drop the old.
2024-02-02 10:37:41 -05:00
Seth Foster
4c5ac1d3da Add management command to remove address
Adds remove_receptor_address to delete a
receptor address from the database

Also, enforce that only 1 canonical address
can be added to an instance via
the add_receptor_address command.

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2024-02-02 10:37:41 -05:00
David O Neill
9c06370e33 InstanceAdd sends null for port_listener 2024-02-02 10:37:41 -05:00
David O Neill
449b95d1eb Fix remaining tests, remove unused code
David O Neill
1712540c8e Updates for receptor release to ui for protocol and is_managed
Seth Foster
7cf639d8eb Add migration to support InstanceLink changes
- Add forwards method to create a receptor address
for any existing Instance that has listener_port defined

- Add forwards method to modify each InstanceLink object
that changes target to the newly created receptor addresses

This migration was implemented as follows:

1. Add a target_new to InstanceLink which is a foreign key
to ReceptorAddress
2. create receptor addresses
3. link to these receptor addresses using the target_new field
4. rename target_new to target
5. drop listener_port and peers_from_control_nodes from Instance

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2024-02-02 10:37:41 -05:00
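A sketch of the forwards step described above, in standard RunPython style; the model and field names follow the commit message, but this is not the actual migration:

```
def forwards(apps, schema_editor):
    Instance = apps.get_model("main", "Instance")
    ReceptorAddress = apps.get_model("main", "ReceptorAddress")
    InstanceLink = apps.get_model("main", "InstanceLink")

    for inst in Instance.objects.exclude(listener_port=None):
        addr = ReceptorAddress.objects.create(
            instance=inst,
            address=inst.hostname,
            port=inst.listener_port,
            canonical=True,
        )
        # repoint links at the new address via the temporary target_new field
        InstanceLink.objects.filter(target=inst).update(target_new=addr)
```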
Seth Foster
dbfcc40d7c Only create receptor address if port is defined
If an Instance endpoint is patched with
{"peers_from_control_nodes": true}

but a listener_port is not defined on the instance
or in the patch payload, do not create a
receptor address.

Only update or create a receptor address if listener_port
is set, either in the payload or already on the instance.

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2024-02-02 10:37:41 -05:00
Hao Liu
73d2c92ae3 Fix condition for creating receptor_address
If listener_port is not explicitly defined don't create a receptor_address
2024-02-02 10:37:41 -05:00
Seth Foster
24a4242147 Add protocol to receptor address serializer
Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2024-02-02 10:37:41 -05:00
Seth Foster
92ce85b688 Remove unused warnings import
Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2024-02-02 10:37:41 -05:00
Seth Foster
9531f8377a Ensure register_peers target is ReceptorAddress
command has 3 "targets", all of which should be a
ReceptorAddress object

- peers, disconnect, exact

Add logic to make sure each entry in those lists
are receptor addresses.

When creating InstanceLink objects, make sure
target is ReceptorAddress, not an Instance.

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2024-02-02 10:37:41 -05:00
Hao Liu
15a16b3dd1 Update bootstrap_development.sh 2024-02-02 10:37:41 -05:00
Seth Foster
a37e7bf147 Add canonical=True when creating ReceptorAddress in tests
Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2024-02-02 10:37:41 -05:00
Hao Liu
a2fcd2f97a Fix ui-lint error 2024-02-02 10:37:41 -05:00
Hao Liu
c394ffdd19 Fix lint error, remove unused import 2024-02-02 10:37:41 -05:00
Seth Foster
69102cf265 Remove receptor_address module from collection
After removing CRUD from receptor addresses, we need
to remove the module.

- remove receptor_address module
- Add listener_port to instance module
- Add peers_from_control_nodes to instance module

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2024-02-02 10:37:41 -05:00
Seth Foster
a188798543 Make canonical field default to False
Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2024-02-02 10:37:41 -05:00
Seth Foster
60108ebd10 Rename migration dependency
Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2024-02-02 10:37:41 -05:00
Jeff Bradberry
8c7c00451a Join across the InstanceLink.target to the underlying Instance
to avoid having to do a lookup for every instance in the list for the
mesh visualizer endpoint.
2024-02-02 10:37:41 -05:00
Seth Foster
7a1ed406da Remove CRUD for Receptor Addresses
Removes ability to directly create and delete
receptor addresses for a given node.

Instead, receptor addresses are created automatically
if listener_port is set on the Instance.

For example, patching a "hop" instance

with {"listener_port": 6667}

will create a canonical receptor address with port
6667.

Likewise, peers_from_control_nodes on the instance
sets the peers_from_control_nodes on the canonical
address (if listener port is also set).

protocol is a read-only field that simply reflects
the canonical address protocol.

Other Changes:

- rename k8s_routable to is_internal
- add protocol to ReceptorAddress
- remove peers_from_control_nodes and listener_port
from Instance model

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2024-02-02 10:37:41 -05:00
César Francisco San Nicolás Martínez
f916ffe1e9 Adjust migration names and dependencies 2024-02-02 10:37:41 -05:00
César Francisco San Nicolás Martínez
901dbd697e Comment unused dependency 2024-02-02 10:37:41 -05:00
David O Neill
d8b4a9825e Cleanup
- Remove peer selection on add and edit instance
- Added canonical name and ordered column names the same on endpoints and
  peers
- Other cleanup items
- rename name to instance name
2024-02-02 10:37:41 -05:00
Hao Liu
6db66c5f81 Fix provision instance not respecting protocol 2024-02-02 10:37:41 -05:00
David O Neill
82ad7dcf40 Mesh UI support
- add endpoint
- delete endpoint (wip)
- associate
- disassociate
2024-02-02 10:37:41 -05:00
David O Neill
93500f9fea Fix lint trailing whitespace 2024-02-02 10:37:41 -05:00
Seth Foster
9ba70c151d Add canonical receptor address
Creates a non-deletable address that acts as
the "main" address for this instance.

All other addresses for that instance must
be non-canonical.

When listener_port on an instance is set, automatically
create a canonical receptor address where:
  - address is hostname of instance
  - port is listener_port
  - canonical is True

Additionally, protocol field is added to instance to
denote the receptor listener protocol to use (ws, tcp).

The receptor config listener information is derived from
the listener_port and protocol information. Having a
canonical address that mirrors the listener_port ensures that
an address exists that matches the receptor config information.

Other changes:
- Add managed field to receptor address.
If managed is True, no fields on this address can be edited
via the API.
If canonical is True, only the address cannot be edited.

- Add managed field to instance. If managed is True, users
cannot set node_state to deprovisioning (i.e. cannot delete node)

This changes our mechanism for preventing users from deleting
the mesh ingress hop node.

- Field is_internal is now renamed to k8s_routable

- Add reverse_peers on instance which is a list of instance IDs
that peer to this instance (via an address)

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2024-02-02 10:37:41 -05:00
David O Neill
46dc61253f UI Updates for receptor peering 2024-02-02 10:37:41 -05:00
Seth Foster
6cb2cd18b0 Fix indentation in instance module
Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2024-02-02 10:37:41 -05:00
Seth Foster
5d1dd8ec41 Fix inconsistent tab width
Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2024-02-02 10:37:41 -05:00
Seth Foster
9f69daf787 Add choices to module protocol field
Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2024-02-02 10:37:41 -05:00
Seth Foster
16ece5de7e Remove unused variables and imports
Addresses flake8 and linting failures

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2024-02-02 10:37:41 -05:00
Seth Foster
ab0e9265c5 Add search fields to views
Make receptoraddress list views
searchable by "address"

Other changes:
- Add help text to source and target of the
InstanceLink model

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2024-02-02 10:37:41 -05:00
Seth Foster
04cbbbccfa Update awx_collection to support ReceptorAddress
- Add receptor_address module which allows
users to create addresses for instances

- Update awx_collection functional and integration
tests to support new peering design

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2024-02-02 10:37:41 -05:00
Seth Foster
d1cacf64de Add functional and unit tests
Updated existing tests to support the
ReceptorAddress model

- cannot peer to self
- cannot peer to node that is already peered to me
- cannot peer to node more than once (via 2+ addresses)
- cannot set is_internal True

Other changes:
Change post save signal to only call
schedule_write_receptor_config() when an actual change is detected.

Make functional tests more robust by
checking for specific validation error in the
response.

I.e. instead of just checking for 400, check for 400
and that the error message corresponds to the
validation we are testing for.

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2024-02-02 10:37:41 -05:00
Seth Foster
5385eb0fb3 Add validation when setting peers
- cannot peer to self
- cannot peer to instance that is already peered to self

Other changes:
- ReceptorAddress protocol field restricted to choices: tcp, ws, wss
- fix awx-manage list_instances when instance.last_seen is None
- InstanceLink make source and target unique together
- Add help text to the ReceptorAddress fields

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2024-02-02 10:37:41 -05:00
Seth Foster
7d7503279d Add API validation when creating ReceptorAddress
- websocket_path can only be set if protocol is ws
- is_internal must be False
- only 1 address per instance can have
peers_from_control_nodes set to True

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2024-02-02 10:37:41 -05:00
Hao Liu
d860d1d91b Temp change to aid dev
Temporarily switch to feature branch for receptor-collection in instance_install_bundle to make dev easier
2024-02-02 10:37:41 -05:00
Seth Foster
3a17c45b64 Register_peers support for receptor_addresses
register_peers has inputs:

source: source instance
peers: list of instances the source should peer to

InstanceLink "target" is now expected to be a ReceptorAddress

For each peer, we can just use the first receptor address. If
multiple receptor addresses exist, throw a command error.

Currently this command is only used on VM-deployments, where
there is only a single receptor address per instance, so this
should work fine.

Other changes:
drop listener_port field from Instance. Listener port is now just
"port" on ReceptorAddress

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2024-02-02 10:37:41 -05:00
Seth Foster
bca68bcdf1 Add install bundle support
group_vars all.yaml changes:
- peer entry has two fields, address and port
- receptor_port is inferred from the first
receptor_address entry that uses protocol tcp

other changes:
ActivityStream now records when receptor_addresses
are peered to

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2024-02-02 10:37:41 -05:00
Seth Foster
c32f234ebb Add peers_from_control_nodes to ReceptorAddress
- write_receptor_config peers to ReceptorAddress entries
that have peers_from_control_nodes enabled

- peers_from_control_nodes and listener_port removed from Instance model

- peers_from_control_nodes added to ReceptorAddress model

- ReceptorAddress is now unique by address and protocol combination

- Write receptor config task is dispatched upon ReceptorAddress creation
or deletion, and when control node is first created

- InstanceLinkSerializer adds a target_address field and has logic
to grab the instance hostname associated with the peered ReceptorAddress

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2024-02-02 10:37:41 -05:00
Seth Foster
5cb3d3b078 Update receptor conf when address changes
Add post save and post delete hooks to
call write_receptor_config when
a receptor address is added / removed.

Add peers_from_control_nodes to
provision_instance

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2024-02-02 10:37:41 -05:00
Seth Foster
5199cc5246 Add ReceptorAddress to root urls
- Add database constraints to make sure addresses
are unique
If port is defined:
address, port, protocol, websocket_path are unique together

if port is not defined:
address, protocol, websocket_path are unique together

- Allow deleting address via API
- Add ReceptorAddressAccess to determine permissions
- awx-manage add_receptor_address returns changed: True
if successful
2024-02-02 10:37:41 -05:00
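A hedged sketch of conditional uniqueness like the rules above, using Django's `UniqueConstraint` with a condition; the field definitions here are illustrative:

```
from django.db import models
from django.db.models import Q

class ReceptorAddress(models.Model):
    address = models.CharField(max_length=255)
    port = models.IntegerField(null=True)
    protocol = models.CharField(max_length=10)
    websocket_path = models.CharField(max_length=255, blank=True)

    class Meta:
        constraints = [
            models.UniqueConstraint(
                fields=["address", "port", "protocol", "websocket_path"],
                condition=Q(port__isnull=False),
                name="receptor_address_unique_with_port",
            ),
            models.UniqueConstraint(
                fields=["address", "protocol", "websocket_path"],
                condition=Q(port__isnull=True),
                name="receptor_address_unique_without_port",
            ),
        ]
```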
Hao Liu
387e877485 Connect from controlplane node to mesh ingress 2024-02-02 10:37:41 -05:00
Seth Foster
d54c5934ff Add support for inbound hop nodes 2024-02-02 10:37:41 -05:00
Chris Meyers
2fa5116197 Project updates do not run against hosts
* The project update event has no host_id or host_name because they
  only run on localhost.
2024-02-01 14:45:16 -05:00
Chris Meyers
527755d986 Always get host from event data
* Regardless of whether there is a host map or not, get the host that Ansible
  reports and assign it to the Event's host_name
2024-01-31 09:42:01 -05:00
Chris Meyers
f9c0b97c53 Avoid EDA dev env port conflict
* Not many, if any, folks use the notebook feature. It kind of goes in
  and out of popularity. We've used it in the past when we work on
  features that require visualization (i.e. network graphs, workflows).
  Might as well keep it around in case we use it again.
2024-01-30 11:17:30 -05:00
TVo
65655f84de Added section for using private image for default EE. (#14815)
* Added hop node information and beefed up exec nodes section.

* Made updates to the Topology viewer & dis/associate peers

* Added hop node information and beefed up exec nodes section.

* Made updates to the Topology viewer & dis/associate peers

* Fixed build and image rendering issues.

* Replaced graphic with many many nodes!

* Updated images with latest

* Incorporated review inputs from @fosterseth.

* Incorporated more review feedback from @fosterseth

* Resized graphic

* Resized graphic again

* Reduced image sizes to scale better on different browsers.

* Added section for using private image for default EE.

* Deleted .md file for execution nodes due to migration to RTD.
2024-01-29 17:48:53 -07:00
Elijah DeLee
9aa3d5584a fix nginx append slash to respect proxy
This is already fixed in awx-operator.
See a534c856db/roles/installer/templates/configmaps/config.yaml.j2 (L215)
This just makes it so a development environment can also work correctly
behind a proxy

Fixes the problem of a
GET to https://$PROXY/something/awx/v2/me
being rewritten to https://$AWX/something/awx/v2/me/ (which doesn't exist)

instead the path is correctly rewritten as https://$PROXY/something/awx/v2/me/
2024-01-29 15:30:16 -05:00
TVo
266e31d71a Added hop node information and beefed up exec nodes section. (#14787)
* Added hop node information and beefed up exec nodes section.

* Made updates to the Topology viewer & dis/associate peers

* Added hop node information and beefed up exec nodes section.

* Made updates to the Topology viewer & dis/associate peers

* Fixed build and image rendering issues.

* Replaced graphic with many many nodes!

* Updated images with latest

* Incorporated review inputs from @fosterseth.

* Incorporated more review feedback from @fosterseth

* Resized graphic

* Resized graphic again

* Reduced image sizes to scale better on different browsers.
2024-01-27 14:23:32 -07:00
Alan Rominger
a1bbe75aed Adopt new rules from black upgrade (#14809) 2024-01-26 12:54:44 -05:00
Sandra McCann
695f1cf892 Replaced old tower docs link with new AWX docs link (#14801) 2024-01-25 17:23:48 -07:00
Chris Meyers
0ab103d8c4 Get that new AWX DAB hotness 2024-01-25 15:45:18 -05:00
Chris Meyers
9ac1c0f6c2 Specify docker network when multiple networks
* We introduced multiple networks to our docker env. The code this replaces
  would return both networks' ip addresses concatenated, i.e.
  '192.168.2.1192.168.2.3'.
2024-01-25 14:47:02 -05:00
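The concatenation pitfall in miniature (the data shape is illustrative):

```
networks = {
    "awx": {"IPAddress": "192.168.2.1"},
    "service-mesh": {"IPAddress": "192.168.2.3"},
}

# buggy: joining every network's IP with no separator
print("".join(n["IPAddress"] for n in networks.values()))  # '192.168.2.1192.168.2.3'

# fixed: specify the docker network by name
print(networks["awx"]["IPAddress"])  # '192.168.2.1'
```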
Lila Yasin
2e168d8177 Add userpass and LDAP support for HashiCorp vault credential_plugin (#14654)
* Add username and password to handle_auth and update exception message

Revise naming of ldap username and password

* Add url for LDAP and userpass to method_auth

* Add information regarding LDAP and username and password to credential plugins documentation

Revise ldap_auth to userpass_auth and revised exception to better reflect functionality

* Revise method_auth to ensure certs can be used with username and ensure namespace functionality is not hindered
2024-01-25 09:50:13 -05:00
Kristof Wevers
d4f7bfef18 feat: Add retries to requests sessions
Every so often we get connection timed out errors towards our HCP Vault
endpoint. This is usually when a larger number of jobs is running
simultaneously. Considering requests for other jobs do still succeed this
is probably load related and adding a retry should help in making this a
bit more robust.
2024-01-24 15:45:54 -05:00
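A common way to add such retries to a `requests.Session`; the parameters here are illustrative, not necessarily the values the commit uses:

```
import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

session = requests.Session()
retry = Retry(total=5, backoff_factor=0.5,
              status_forcelist=(500, 502, 503, 504))
adapter = HTTPAdapter(max_retries=retry)
session.mount("https://", adapter)
session.mount("http://", adapter)

# resp = session.get("https://vault.example.com/v1/secret")  # placeholder URL
```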
dependabot[bot]
985a8d499d Bump jinja2 from 3.1.2 to 3.1.3 in /docs/docsite
Bumps [jinja2](https://github.com/pallets/jinja) from 3.1.2 to 3.1.3.
- [Release notes](https://github.com/pallets/jinja/releases)
- [Changelog](https://github.com/pallets/jinja/blob/main/CHANGES.rst)
- [Commits](https://github.com/pallets/jinja/compare/3.1.2...3.1.3)

---
updated-dependencies:
- dependency-name: jinja2
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-01-24 15:36:19 -05:00
Chris Meyers
e3b52f0169 Join the service-mesh docker network
* Put the awx node(s) on a service-mesh docker network so they can be
  proxied to. Also put all the other containers on an explicit awx
  network, otherwise they cannot talk to each other. We could be more
  surgical about which containers we put on the awx network, but I just
  added all of them.
2024-01-24 10:34:44 -05:00
jessicamack
f69f600cff Refer to the ansible repo for django-ansible-base requirement (#14793)
* update to the proper repo

* refer to devel
2024-01-22 10:29:47 -05:00
TVo
74cd23be5c Fixed/updated URL for “Passing Variables on the Command Line" link. (#14763) 2024-01-19 11:41:17 -07:00
jessicamack
209747d88e Update for django-ansible-base split (#14783)
* update paths and names

* temp to get tests passing

* fix typo
2024-01-19 12:30:32 -05:00
Alan Rominger
d91da39f81 New setting for pg_notify listener DB settings, add keepalive (#14755) 2024-01-17 13:44:04 -05:00
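The keepalive part of such a setting maps onto standard libpq connection options; a sketch under the assumption that the setting carries ordinary Django-style DB options (the setting name and values here are illustrative, not AWX's actual defaults):

```python
# Sketch: libpq TCP keepalive options for a pg_notify listener connection.
# The setting name and the numbers are illustrative assumptions.
LISTENER_DATABASES = {
    'default': {
        'OPTIONS': {
            'keepalives': 1,            # enable TCP keepalives
            'keepalives_idle': 5,       # seconds idle before the first probe
            'keepalives_interval': 5,   # seconds between probes
            'keepalives_count': 5,      # failed probes before dropping the connection
        },
    },
}
```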
Michael Tipton
5cd029df96 Add secure flag option for userLoggedIn cookie if SESSION_COOKIE_SECU… (#14762)
Add secure flag option for userLoggedIn cookie if SESSION_COOKIE_SECURE set to True
2024-01-17 09:36:06 -05:00
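The gist of the change, sketched in plain Django terms (cookie name from the commit title; the actual AWX view code may differ):

```python
# Sketch: mirror SESSION_COOKIE_SECURE onto the userLoggedIn cookie.
from django.conf import settings

def set_logged_in_cookie(response):
    response.set_cookie(
        'userLoggedIn',
        'true',
        secure=getattr(settings, 'SESSION_COOKIE_SECURE', False),
    )
    return response
```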
Michael Abashian
5a93a519f6 Fix linting error in SubscriptionUsageChart 2024-01-16 14:12:55 -05:00
jessicamack
5f5cd960d5 Add django-ansible-base settings (#14768)
add ansible base settings
2024-01-16 15:55:59 +00:00
Jeff Bradberry
42701f32fe Build the source distribution bundle to also upload to PyPI
This has been broken since 20.0.1.
2024-01-11 15:49:07 -05:00
Hao Liu
30d4df788f Update dependency django-ansible-base (#14752) 2024-01-10 11:05:57 -05:00
Cameron McLaughlin
1bcd71a8ac Update execution environment documentation link (fixes #14690) (#14741)
Update execution environment documentation link (fixes #14690)

Signed-off-by: Cameron McLaughlin <auatr7@protonmail.com>
2024-01-05 15:47:55 -07:00
Patrick Uiterwijk
43be90f051 Add support for Bitbucket Data Center webhooks (#14674)
Add support for receiving webhooks from Bitbucket Data Center, and add support for posting build statuses back

Note that this is very explicitly only for Bitbucket Data Center.
The entire webhook format and API is entirely different for Bitbucket Cloud.
2024-01-05 09:34:29 -05:00
TVo
bb1922cdbb Update conf.py to 2024 (#14743)
* Update conf.py to 2023

* Update awxkit/awxkit/cli/docs/source/conf.py

Co-authored-by: Hao Liu <44379968+TheRealHaoLiu@users.noreply.github.com>

---------

Co-authored-by: Hao Liu <44379968+TheRealHaoLiu@users.noreply.github.com>
2024-01-04 12:38:21 -07:00
Martin Slemr
403f545071 Fix port conflicts when running other Ansible dev environments (#14701)
AAP: Docker port conflicts
2024-01-04 09:10:55 -05:00
Nenodema
a06a2a883c Adding "address" property 2024-01-03 15:08:18 -05:00
Keith Grant
2529fdcfd7 Persist schedule prompt on launch fields when editing (#14736)
* persist schedule prompt on launch fields when editing

* Merge job template default credentials with schedule overrides in schedule prompt

* rename vars for clarity

* handle undefined defaultCredentials

---------

Co-authored-by: Michael Abashian <mabashia@redhat.com>
2023-12-21 14:46:49 -05:00
loh
19dff9c2d1 Fix twilio_backend.py to send SMS to multiple destinations. (#14656)
With the current version of the code, AWX only sends Twilio notifications to one destination; this is a bug. Fixed sending SMS to multiple destinations.
2023-12-20 15:31:47 -05:00
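The fix amounts to one API call per recipient instead of a single call; a hedged sketch with the Twilio client (credentials and phone numbers are placeholders):

```python
# Sketch: send an SMS to every destination, not just the first.
# Credentials and phone numbers are placeholders.
from twilio.rest import Client

client = Client('ACCOUNT_SID', 'AUTH_TOKEN')
destinations = ['+15551230001', '+15551230002']

for destination in destinations:   # one message per recipient
    client.messages.create(
        body='AWX notification',
        from_='+15550009999',
        to=destination,
    )
```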
Chris Meyers
2a6cf032f8 refactor awxkit import code
* Move awxkit import code into a pytest fixture to better control when
  the import happens
* Ensure /awx_devel/awxkit is added to sys path before awxkit import
  runs
2023-12-18 12:00:50 -05:00
Chris Meyers
6119b33a50 add awx collection export tests
* Basic export tests
* Added test that highlights a problem with running Schedule exports as a
  non-root user. We rely on the POST key in the OPTIONS response to
  determine the fields to export for a resource. The POST key is not
  present if a user does NOT have create privileges.
* Fixed up forwarding all headers from the API server back to the test
  code. This was causing a problem in awxkit code that checks for
  allowed HTTP Verbs in the headers.
2023-12-18 12:00:50 -05:00
John Westcott IV
aacf9653c5 Use filtering/sorting from django-ansible-base (#14726)
* Move filtering to DAB

* add comment to trigger building a new image

Signed-off-by: jessicamack <jmack@redhat.com>

* remove unneeded comment

Signed-off-by: jessicamack <jmack@redhat.com>

* remove unused imports

Signed-off-by: jessicamack <jmack@redhat.com>

* change mock import

Signed-off-by: jessicamack <jmack@redhat.com>

---------

Signed-off-by: jessicamack <jmack@redhat.com>
Co-authored-by: jessicamack <jmack@redhat.com>
2023-12-18 10:05:02 -05:00
Alan Rominger
325f5250db Narrow the actor types accepted for RBAC evaluations (#14709)
* Narrow the scope of RBAC evaluations

* Update tests for RBAC method changes

* Simplify querset for credentials in org

* Fix call pattern to pass in team role obj
2023-12-14 21:30:47 -05:00
Alan Rominger
b14518c1e5 Simplify RBAC get_roles_on_resource method (#14710)
* Simplify RBAC get_roles_on_resource method

* Fix bug

* Fix query type bug
2023-12-14 10:42:26 -05:00
Hao Liu
6440e3cb55 Send SIGKILL to rsyslog if hard cancellation is needed 2023-12-14 10:41:48 -05:00
Hao Liu
b5f6aac3aa Correct misuse of stdxxx_event_enabled
Not every log message needs to be emitted as an event!
2023-12-14 10:41:48 -05:00
Hao Liu
6e5e1c8fff Recover rsyslog from 4xx error
Due to https://github.com/ansible/awx/issues/7560, the 'omhttp' module for rsyslog will completely stop forwarding messages to the external log aggregator after receiving a 4xx error from it.

This PR is a "workaround" for this problem: it restarts rsyslogd after detecting that rsyslog received a 4xx error.
2023-12-14 10:41:48 -05:00
Hao Liu
bf42c63c12 Remove superwatcher from docker-compose dev (#14708)
When making changes to the application, you can sometimes accidentally cause a FATAL state that crashes the dev container, which removes any ephemeral changes you have made and is ANNOYING!
2023-12-13 14:26:53 -05:00
Avi Layani
df24cb692b Adding hosts bulk deletion feature (#14462)
* Adding hosts bulk deletion feature

Signed-off-by: Avi Layani <alayani@redhat.com>

* fix the type of the argument

Signed-off-by: Avi Layani <alayani@redhat.com>

* fixing activity_entry tracking

Signed-off-by: Avi Layani <alayani@redhat.com>

* Revert "fixing activity_entry tracking"

This reverts commit c8eab52c2ccc5abe215d56d1704ba1157e5fbbd0.
Since bulk_delete is not related to a single inventory, only to hosts,
which can come from different inventories.

* get only needed vars to reduce memory consumption

Signed-off-by: Avi Layani <alayani@redhat.com>

* filtering the data to reduce memory increases the number of queries

Signed-off-by: Avi Layani <alayani@redhat.com>

* update the activity stream for inventories

Signed-off-by: Avi Layani <alayani@redhat.com>

* fix the changes dict initialization

Signed-off-by: Avi Layani <alayani@redhat.com>

---------

Signed-off-by: Avi Layani <alayani@redhat.com>
2023-12-13 10:28:31 -06:00
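Usage would look roughly like this against the bulk API (the endpoint path and payload shape are assumptions inferred from the feature name; consult the API docs for the real contract):

```python
# Sketch: bulk-delete hosts by primary key via the AWX API.
# Endpoint path and payload shape are assumptions.
import requests

resp = requests.post(
    'https://awx.example.com/api/v2/bulk/host_delete/',
    auth=('admin', 'password'),        # placeholder credentials
    json={'hosts': [101, 102, 103]},   # host primary keys
)
resp.raise_for_status()
```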
jessicamack
0d825a744b Update setuptools-scm (#14716)
* properly format requirement

* upgrade setuptools_scm

* Revert "properly format requirement"

This reverts commit 4c8792950f.

* test ansible-runner package upgrade

* Revert "test ansible-runner package upgrade"

This reverts commit ba4b74f2bb.
2023-12-11 17:11:04 -05:00
Marliana Lara
5e48bf091b Fix undefined error in settings/logging/edit form (#14715)
Fix undefined error in logging settings edit form
2023-12-11 10:58:30 -05:00
Alan Rominger
1294cec92c Fix updater bug due to missing newline at EOF (#14713) 2023-12-08 16:51:17 +00:00
jessicamack
dae12ee1b8 Remove incorrectly formatted line from requirements.txt (#14714)
remove git+ line
2023-12-08 11:06:17 -05:00
jessicamack
b091f6cf79 Add django-ansible-base (#14705)
* add django-ansible-base

Signed-off-by: jessicamack <jmack@redhat.com>

* add licenses

* add django-ansible-base

Signed-off-by: jessicamack <jmack@redhat.com>

* add licenses

* apply patch to fix permissions issue

---------

Signed-off-by: jessicamack <jmack@redhat.com>
2023-12-07 11:45:44 -05:00
Don Naro
fe564c5fad Dependabot for docsite requirements (#14670) 2023-12-06 15:11:44 -05:00
Tyler Muir
eb3bc84461 remove unnecessary required flags for saml backend (#14666)
Signed-off-by: Tyler Muir <tylergmuir@gmail.com>
2023-12-06 15:08:54 -05:00
Andrew Austin
6aa2997dce Add TLS certificate auth for HashiCorp Vault (#14534)
* Add TLS certificate auth for HashiCorp Vault

Add support for AWX to authenticate with HashiCorp Vault using
TLS client certificates.

Also updates the documentation for the HashiCorp Vault secret management
plugins to include both the new TLS options and the missing Kubernetes
auth method options.

Signed-off-by: Andrew Austin <aaustin@redhat.com>

* Refactor docker-compose vault for TLS cert auth

Add TLS configuration to the docker-compose Vault configuration and
use that method by default in vault plumbing.

This ensures that the result of bringing up the docker-compose stack
with vault enabled and running the plumb-vault playbook is a fully
working credential retrieval setup using TLS client cert authentication.

Signed-off-by: Andrew Austin <aaustin@redhat.com>

* Remove incorrect trailing space

Co-authored-by: Hao Liu <44379968+TheRealHaoLiu@users.noreply.github.com>

* Make vault init idempotent

- improve error handling for vault_initialization
- ignore error if vault cert auth is already configured
- removed unused register

* Add VAULT_TLS option

Make TLS for HashiCorp Vault optional and configurable via VAULT_TLS env var

* Add retries for vault init

Sometimes it takes longer for vault to fully come up, and init will fail

---------

Signed-off-by: Andrew Austin <aaustin@redhat.com>
Co-authored-by: Hao Liu <44379968+TheRealHaoLiu@users.noreply.github.com>
Co-authored-by: Hao Liu <haoli@redhat.com>
2023-12-06 19:12:15 +00:00
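For comparison with the userpass and LDAP methods above, TLS client-certificate login against Vault looks roughly like this (a sketch; the URL and file paths are placeholders):

```python
# Sketch: authenticate to Vault with a TLS client certificate.
# URL and certificate/key/CA paths are placeholders.
import requests

resp = requests.post(
    'https://vault.example.com:8200/v1/auth/cert/login',
    cert=('/etc/awx/vault-client.crt', '/etc/awx/vault-client.key'),  # client cert + key
    verify='/etc/awx/vault-ca.crt',                                   # CA bundle for the server
)
token = resp.json()['auth']['client_token']
```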
3343 changed files with 23231 additions and 380105 deletions

.codecov.yml Normal file

@@ -0,0 +1,57 @@
---
codecov:
notify:
after_n_builds: 6 # Number of test matrix+lint jobs uploading coverage
wait_for_ci: false
require_ci_to_pass: false
token: >- # repo-scoped, upload-only, needed for stability in PRs from forks
2b8c7a7a-7293-4a00-bf02-19bd55a1389b
comment:
require_changes: true
coverage:
range: 100..100
status:
patch:
default:
target: 100%
pytest:
target: 100%
flags:
- pytest
typing:
flags:
- MyPy
project:
default:
target: 75%
lib:
flags:
- pytest
paths:
- awx/
target: 75%
tests:
flags:
- pytest
paths:
- tests/
- >-
**/test/
- >-
**/tests/
- >-
**/test/**
- >-
**/tests/**
target: 95%
typing:
flags:
- MyPy
target: 100%
...


@@ -1,16 +1,6 @@
[run]
source = awx
branch = True
omit =
awx/main/migrations/*
awx/lib/site-packages/*
[report]
# Regexes for lines to exclude from consideration
exclude_lines =
# Have to re-enable the standard pragma
pragma: no cover
exclude_also =
# Don't complain about missing debug-only code:
def __repr__
if self\.debug
@@ -23,7 +13,18 @@ exclude_lines =
if 0:
if __name__ == .__main__.:
ignore_errors = True
^\s*@pytest\.mark\.xfail
[run]
branch = True
omit =
awx/main/migrations/*
awx/settings/defaults.py
awx/settings/*_defaults.py
source =
.
source_pkgs =
awx
[xml]
output = ./reports/coverage.xml


@@ -4,25 +4,41 @@ inputs:
github-token:
description: GitHub Token for registry access
required: true
private-github-key:
description: GitHub private key for private repositories
required: false
default: ''
runs:
using: composite
steps:
- name: Get python version from Makefile
- uses: ./.github/actions/setup-python
- name: Set lower case owner name
shell: bash
run: echo py_version=`make PYTHON_VERSION` >> $GITHUB_ENV
run: echo "OWNER_LC=${OWNER,,}" >> $GITHUB_ENV
env:
OWNER: '${{ github.repository_owner }}'
- name: Log in to registry
shell: bash
run: |
echo "${{ inputs.github-token }}" | docker login ghcr.io -u ${{ github.actor }} --password-stdin
- uses: ./.github/actions/setup-ssh-agent
with:
ssh-private-key: ${{ inputs.private-github-key }}
- name: Pre-pull latest devel image to warm cache
shell: bash
run: docker pull ghcr.io/${{ github.repository_owner }}/awx_devel:${{ github.base_ref }}
run: |
DEV_DOCKER_TAG_BASE=ghcr.io/${OWNER_LC} \
COMPOSE_TAG=${{ github.base_ref || github.ref_name }} \
docker pull -q `make print-DEVEL_IMAGE_NAME`
continue-on-error: true
- name: Build image for current source checkout
shell: bash
run: |
DEV_DOCKER_TAG_BASE=ghcr.io/${{ github.repository_owner }} \
COMPOSE_TAG=${{ github.base_ref }} \
DEV_DOCKER_TAG_BASE=ghcr.io/${OWNER_LC} \
COMPOSE_TAG=${{ github.base_ref || github.ref_name }} \
make docker-compose-build


@@ -9,20 +9,30 @@ inputs:
required: false
default: false
type: boolean
private-github-key:
description: GitHub private key for private repositories
required: false
default: ''
outputs:
ip:
description: The IP of the tools_awx_1 container
value: ${{ steps.data.outputs.ip }}
admin-token:
description: OAuth token for admin user
value: ${{ steps.data.outputs.admin_token }}
runs:
using: composite
steps:
- name: Disable apparmor for rsyslogd, first step
shell: bash
run: sudo ln -s /etc/apparmor.d/usr.sbin.rsyslogd /etc/apparmor.d/disable/
- name: Disable apparmor for rsyslogd, second step
shell: bash
run: sudo apparmor_parser -R /etc/apparmor.d/usr.sbin.rsyslogd
- name: Build awx_devel image for running checks
uses: ./.github/actions/awx_devel_image
with:
github-token: ${{ inputs.github-token }}
private-github-key: ${{ inputs.private-github-key }}
- name: Upgrade ansible-core
shell: bash
@@ -35,9 +45,11 @@ runs:
- name: Start AWX
shell: bash
run: |
DEV_DOCKER_TAG_BASE=ghcr.io/${{ github.repository_owner }} \
COMPOSE_TAG=${{ github.base_ref }} \
COMPOSE_UP_OPTS="-d" \
DEV_DOCKER_OWNER=${{ github.repository_owner }} \
COMPOSE_TAG=${{ github.base_ref || github.ref_name }} \
DJANGO_COLORS=nocolor \
SUPERVISOR_ARGS="-n -t" \
COMPOSE_UP_OPTS="-d --no-color" \
make docker-compose
- name: Update default AWX password
@@ -57,21 +69,9 @@ runs:
awx-manage update_password --username=admin --password=password
EOSH
- name: Build UI
# This must be a string comparison in composite actions:
# https://github.com/actions/runner/issues/2238
if: ${{ inputs.build-ui == 'true' }}
shell: bash
run: |
docker exec -i tools_awx_1 sh <<-EOSH
make ui-devel
EOSH
- name: Get instance data
id: data
shell: bash
run: |
AWX_IP=$(docker inspect -f '{{range.NetworkSettings.Networks}}{{.IPAddress}}{{end}}' tools_awx_1)
ADMIN_TOKEN=$(docker exec -i tools_awx_1 awx-manage create_oauth2_token --user admin)
AWX_IP=$(docker inspect -f '{{.NetworkSettings.Networks.awx.IPAddress}}' tools_awx_1)
echo "ip=$AWX_IP" >> $GITHUB_OUTPUT
echo "admin_token=$ADMIN_TOKEN" >> $GITHUB_OUTPUT

.github/actions/setup-python/action.yml vendored Normal file

@@ -0,0 +1,27 @@
name: 'Setup Python from Makefile'
description: 'Extract and set up Python version from Makefile'
inputs:
python-version:
description: 'Override Python version (optional)'
required: false
default: ''
working-directory:
description: 'Directory containing the Makefile'
required: false
default: '.'
runs:
using: composite
steps:
- name: Get python version from Makefile
shell: bash
run: |
if [ -n "${{ inputs.python-version }}" ]; then
echo "py_version=${{ inputs.python-version }}" >> $GITHUB_ENV
else
cd ${{ inputs.working-directory }}
echo "py_version=`make PYTHON_VERSION`" >> $GITHUB_ENV
fi
- name: Install python
uses: actions/setup-python@v5
with:
python-version: ${{ env.py_version }}


@@ -0,0 +1,29 @@
name: 'Setup SSH for GitHub'
description: 'Configure SSH for private repository access'
inputs:
ssh-private-key:
description: 'SSH private key for repository access'
required: false
default: ''
runs:
using: composite
steps:
- name: Generate placeholder SSH private key if SSH auth for private repos is not needed
id: generate_key
shell: bash
run: |
if [[ -z "${{ inputs.ssh-private-key }}" ]]; then
ssh-keygen -t ed25519 -C "github-actions" -N "" -f ~/.ssh/id_ed25519
echo "SSH_PRIVATE_KEY<<EOF" >> $GITHUB_OUTPUT
cat ~/.ssh/id_ed25519 >> $GITHUB_OUTPUT
echo "EOF" >> $GITHUB_OUTPUT
else
echo "SSH_PRIVATE_KEY<<EOF" >> $GITHUB_OUTPUT
echo "${{ inputs.ssh-private-key }}" >> $GITHUB_OUTPUT
echo "EOF" >> $GITHUB_OUTPUT
fi
- name: Add private GitHub key to SSH agent
uses: webfactory/ssh-agent@v0.9.0
with:
ssh-private-key: ${{ steps.generate_key.outputs.SSH_PRIVATE_KEY }}


@@ -13,7 +13,7 @@ runs:
docker logs tools_awx_1 > ${{ inputs.log-filename }}
- name: Upload AWX logs as artifact
uses: actions/upload-artifact@v3
uses: actions/upload-artifact@v4
with:
name: docker-compose-logs
name: docker-compose-logs-${{ inputs.log-filename }}
path: ${{ inputs.log-filename }}

.github/dependabot.yml vendored Normal file

@@ -0,0 +1,10 @@
version: 2
updates:
- package-ecosystem: "pip"
directory: "docs/docsite/"
schedule:
interval: "weekly"
open-pull-requests-limit: 2
labels:
- "docs"
- "dependencies"


@@ -6,8 +6,6 @@ needs_triage:
- "Feature Summary"
"component:ui":
- "\\[X\\] UI"
"component:ui_next":
- "\\[X\\] UI \\(tech preview\\)"
"component:api":
- "\\[X\\] API"
"component:docs":


@@ -1,8 +1,5 @@
"component:api":
- any: ["awx/**/*", "!awx/ui/**"]
"component:ui":
- any: ["awx/ui/**/*"]
- any: ["awx/**/*"]
"component:docs":
- any: ["docs/**/*"]
@@ -14,6 +11,4 @@
- any: ["awx_collection/**/*"]
"dependencies":
- any: ["awx/ui/package.json"]
- any: ["requirements/*.txt"]
- any: ["requirements/requirements.in"]
- any: ["requirements/*"]


@@ -1,7 +1,6 @@
## General
- For the roundup of all the different mailing lists available from AWX, Ansible, and beyond visit: https://docs.ansible.com/ansible/latest/community/communication.html
- Hello, we think your question is answered in our FAQ. Does this: https://www.ansible.com/products/awx-project/faq cover your question?
- You can find the latest documentation here: https://docs.ansible.com/automation-controller/latest/html/userguide/index.html
- You can find the latest documentation here: https://ansible.readthedocs.io/projects/awx/en/latest/userguide/index.html
@@ -83,7 +82,7 @@ The Ansible Community is looking at building an EE that corresponds to all of th
## Mailing List Triage
### Create an issue
- Hello, thanks for reaching out on list. We think this merits an issue on our Github, https://github.com/ansible/awx/issues. If you could open an issue up on Github it will get tagged and integrated into our planning and workflow. All future work will be tracked there. Issues should include as much information as possible, including screenshots, log outputs, or any reproducers.
- Hello, thanks for reaching out on list. We think this merits an issue on our GitHub, https://github.com/ansible/awx/issues. If you could open an issue up on GitHub it will get tagged and integrated into our planning and workflow. All future work will be tracked there. Issues should include as much information as possible, including screenshots, log outputs, or any reproducers.
### Create a Pull Request
- Hello, we think your idea is good! Please consider contributing a PR for this following our contributing guidelines: https://github.com/ansible/awx/blob/devel/CONTRIBUTING.md
@@ -93,8 +92,8 @@ The Ansible Community is looking at building an EE that corresponds to all of th
- Hello, your issue seems related to receptor. Could you please open an issue in the receptor repository? https://github.com/ansible/receptor. Thanks!
### Ansible Engine not AWX
- Hello, your question seems to be about Ansible development, not about AWX. Try asking on the Ansible-devel specific mailing list: https://groups.google.com/g/ansible-devel
- Hello, your question seems to be about using Ansible, not about AWX. https://groups.google.com/g/ansible-project is the best place to visit for user questions about Ansible. Thanks!
- Hello, your question seems to be about Ansible development, not about AWX. Try asking on in the Forum https://forum.ansible.com/tag/development
- Hello, your question seems to be about using Ansible Core, not about AWX. https://forum.ansible.com/tag/ansible-core is the best place to visit for user questions about Ansible. Thanks!
### Ansible Galaxy not AWX
- Hey there. That sounds like an FAQ question. Did this: https://www.ansible.com/products/awx-project/faq cover your question?
@@ -104,7 +103,7 @@ The Ansible Community is looking at building an EE that corresponds to all of th
- AWX-Operator: https://github.com/ansible/awx-operator/blob/devel/CONTRIBUTING.md
### Oracle AWX
We'd be happy to help if you can reproduce this with AWX since we do not have Oracle's Linux Automation Manager. If you need help with this specific version of Oracles Linux Automation Manager you will need to contact your Oracle for support.
We'd be happy to help if you can reproduce this with AWX since we do not have Oracle's Linux Automation Manager. If you need help with this specific version of Oracles Linux Automation Manager you will need to contact your Oracle for support.
### Community Resolved
Hi,


@@ -5,8 +5,12 @@ env:
CI_GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
DEV_DOCKER_OWNER: ${{ github.repository_owner }}
COMPOSE_TAG: ${{ github.base_ref || 'devel' }}
UPSTREAM_REPOSITORY_ID: 91594105
on:
pull_request:
push:
branches:
- devel # needed to publish code coverage post-merge
jobs:
common-tests:
name: ${{ matrix.tests.name }}
@@ -20,72 +24,157 @@ jobs:
matrix:
tests:
- name: api-test
command: /start_tests.sh
command: /start_tests.sh test_coverage
coverage-upload-name: ""
- name: api-migrations
command: /start_tests.sh test_migrations
coverage-upload-name: ""
- name: api-lint
command: /var/lib/awx/venv/awx/bin/tox -e linters
coverage-upload-name: ""
- name: api-swagger
command: /start_tests.sh swagger
coverage-upload-name: ""
- name: awx-collection
command: /start_tests.sh test_collection_all
coverage-upload-name: "awx-collection"
- name: api-schema
command: /start_tests.sh detect-schema-change SCHEMA_DIFF_BASE_BRANCH=${{ github.event.pull_request.base.ref }}
- name: ui-lint
command: make ui-lint
- name: ui-test-screens
command: make ui-test-screens
- name: ui-test-general
command: make ui-test-general
command: >-
/start_tests.sh detect-schema-change SCHEMA_DIFF_BASE_BRANCH=${{
github.event.pull_request.base.ref || github.ref_name
}}
coverage-upload-name: ""
steps:
- uses: actions/checkout@v3
- uses: actions/checkout@v4
with:
show-progress: false
- name: Build awx_devel image for running checks
uses: ./.github/actions/awx_devel_image
with:
github-token: ${{ secrets.GITHUB_TOKEN }}
private-github-key: ${{ secrets.PRIVATE_GITHUB_KEY }}
- name: Run check ${{ matrix.tests.name }}
run: AWX_DOCKER_CMD='${{ matrix.tests.command }}' make docker-runner
id: make-run
run: >-
AWX_DOCKER_ARGS='-e GITHUB_ACTIONS -e GITHUB_OUTPUT -v "${GITHUB_OUTPUT}:${GITHUB_OUTPUT}:rw,Z"'
AWX_DOCKER_CMD='${{ matrix.tests.command }}'
make docker-runner
- name: Upload test coverage to Codecov
if: >-
!cancelled()
&& steps.make-run.outputs.cov-report-files != ''
uses: codecov/codecov-action@v4
with:
fail_ci_if_error: >-
${{
toJSON(env.UPSTREAM_REPOSITORY_ID == github.repository_id)
}}
files: >-
${{ steps.make-run.outputs.cov-report-files }}
flags: >-
CI-GHA,
pytest,
OS-${{
runner.os
}}
token: ${{ secrets.CODECOV_TOKEN }}
- name: Upload test results to Codecov
if: >-
!cancelled()
&& steps.make-run.outputs.test-result-files != ''
uses: codecov/test-results-action@v1
with:
fail_ci_if_error: >-
${{
toJSON(env.UPSTREAM_REPOSITORY_ID == github.repository_id)
}}
files: >-
${{ steps.make-run.outputs.test-result-files }}
flags: >-
CI-GHA,
pytest,
OS-${{
runner.os
}}
token: ${{ secrets.CODECOV_TOKEN }}
- name: Upload awx jUnit test reports
if: >-
!cancelled()
&& steps.make-run.outputs.test-result-files != ''
&& github.event_name == 'push'
&& env.UPSTREAM_REPOSITORY_ID == github.repository_id
&& github.ref_name == github.event.repository.default_branch
run: |
for junit_file in $(echo '${{ steps.make-run.outputs.test-result-files }}' | sed 's/,/ /')
do
curl \
-v \
--user "${{ vars.PDE_ORG_RESULTS_AGGREGATOR_UPLOAD_USER }}:${{ secrets.PDE_ORG_RESULTS_UPLOAD_PASSWORD }}" \
--form "xunit_xml=@${junit_file}" \
--form "component_name=${{ matrix.tests.coverage-upload-name || 'awx' }}" \
--form "git_commit_sha=${{ github.sha }}" \
--form "git_repository_url=https://github.com/${{ github.repository }}" \
"${{ vars.PDE_ORG_RESULTS_AGGREGATOR_UPLOAD_URL }}/api/results/upload/"
done
dev-env:
runs-on: ubuntu-latest
timeout-minutes: 60
steps:
- uses: actions/checkout@v3
- uses: actions/checkout@v4
with:
show-progress: false
- uses: ./.github/actions/setup-python
with:
python-version: '3.x'
- uses: ./.github/actions/run_awx_devel
id: awx
with:
build-ui: false
github-token: ${{ secrets.GITHUB_TOKEN }}
private-github-key: ${{ secrets.PRIVATE_GITHUB_KEY }}
- name: Run smoke test
run: ansible-playbook tools/docker-compose/ansible/smoke-test.yml -v
- name: Run live dev env tests
run: docker exec tools_awx_1 /bin/bash -c "make live_test"
- uses: ./.github/actions/upload_awx_devel_logs
if: always()
with:
log-filename: live-tests.log
awx-operator:
runs-on: ubuntu-latest
timeout-minutes: 60
env:
DEBUG_OUTPUT_DIR: /tmp/awx_operator_molecule_test
steps:
- name: Checkout awx
uses: actions/checkout@v3
uses: actions/checkout@v4
with:
show-progress: false
path: awx
- name: Checkout awx-operator
uses: actions/checkout@v3
- uses: ./awx/.github/actions/setup-ssh-agent
with:
ssh-private-key: ${{ secrets.PRIVATE_GITHUB_KEY }}
- name: Checkout awx-operator
uses: actions/checkout@v4
with:
show-progress: false\
repository: ansible/awx-operator
path: awx-operator
- name: Get python version from Makefile
working-directory: awx
run: echo py_version=`make PYTHON_VERSION` >> $GITHUB_ENV
- name: Install python ${{ env.py_version }}
uses: actions/setup-python@v4
- uses: ./awx/.github/actions/setup-python
with:
python-version: ${{ env.py_version }}
working-directory: awx
- name: Install playbook dependencies
run: |
@@ -94,23 +183,34 @@ jobs:
- name: Build AWX image
working-directory: awx
run: |
ansible-playbook -v tools/ansible/build.yml \
-e headless=yes \
-e awx_image=awx \
-e awx_image_tag=ci \
-e ansible_python_interpreter=$(which python3)
VERSION=`make version-for-buildyml` make awx-kube-build
env:
COMPOSE_TAG: ci
DEV_DOCKER_TAG_BASE: local
HEADLESS: yes
- name: Run test deployment with awx-operator
working-directory: awx-operator
run: |
python3 -m pip install -r molecule/requirements.txt
python3 -m pip install PyYAML # for awx/tools/scripts/rewrite-awx-operator-requirements.py
$(realpath ../awx/tools/scripts/rewrite-awx-operator-requirements.py) molecule/requirements.yml $(realpath ../awx)
ansible-galaxy collection install -r molecule/requirements.yml
sudo rm -f $(which kustomize)
make kustomize
KUSTOMIZE_PATH=$(readlink -f bin/kustomize) molecule -v test -s kind
KUSTOMIZE_PATH=$(readlink -f bin/kustomize) molecule -v test -s kind -- --skip-tags=replicas
env:
AWX_TEST_IMAGE: awx
AWX_TEST_IMAGE: local/awx
AWX_TEST_VERSION: ci
AWX_EE_TEST_IMAGE: quay.io/ansible/awx-ee:latest
STORE_DEBUG_OUTPUT: true
- name: Upload debug output
if: failure()
uses: actions/upload-artifact@v4
with:
name: awx-operator-debug-output
path: ${{ env.DEBUG_OUTPUT_DIR }}
collection-sanity:
name: awx_collection sanity
@@ -118,19 +218,46 @@ jobs:
timeout-minutes: 30
strategy:
fail-fast: false
matrix:
ansible:
- stable-2.17
# - devel
steps:
- uses: actions/checkout@v3
- name: Perform sanity testing
uses: ansible-community/ansible-test-gh-action@release/v1
with:
ansible-core-version: ${{ matrix.ansible }}
codecov-token: ${{ secrets.CODECOV_TOKEN }}
collection-root: awx_collection
pre-test-cmd: >-
ansible-playbook
-i localhost,
tools/template_galaxy.yml
-e collection_package=awx
-e collection_namespace=awx
-e collection_version=1.0.0
-e '{"awx_template_version": false}'
testing-type: sanity
# The containers that GitHub Actions use have Ansible installed, so upgrade to make sure we have the latest version.
- name: Upgrade ansible-core
run: python3 -m pip install --upgrade ansible-core
- name: Run sanity tests
run: make test_collection_sanity
env:
# needed due to cgroupsv2. This is fixed, but a stable release
# with the fix has not been made yet.
ANSIBLE_TEST_PREFER_PODMAN: 1
- name: Upload awx jUnit test reports to the unified dashboard
if: >-
!cancelled()
&& steps.make-run.outputs.test-result-files != ''
&& github.event_name == 'push'
&& env.UPSTREAM_REPOSITORY_ID == github.repository_id
&& github.ref_name == github.event.repository.default_branch
run: |
for junit_file in $(echo '${{ steps.make-run.outputs.test-result-files }}' | sed 's/,/ /')
do
curl \
-v \
--user "${{ vars.PDE_ORG_RESULTS_AGGREGATOR_UPLOAD_USER }}:${{ secrets.PDE_ORG_RESULTS_UPLOAD_PASSWORD }}" \
--form "xunit_xml=@${junit_file}" \
--form "component_name=awx" \
--form "git_commit_sha=${{ github.sha }}" \
--form "git_repository_url=https://github.com/${{ github.repository }}" \
"${{ vars.PDE_ORG_RESULTS_AGGREGATOR_UPLOAD_URL }}/api/results/upload/"
done
collection-integration:
name: awx_collection integration
@@ -147,13 +274,20 @@ jobs:
- name: r-z0-9
regex: ^[r-z0-9]
steps:
- uses: actions/checkout@v3
- uses: actions/checkout@v4
with:
show-progress: false
- uses: ./.github/actions/setup-python
with:
python-version: '3.x'
- uses: ./.github/actions/run_awx_devel
id: awx
with:
build-ui: false
github-token: ${{ secrets.GITHUB_TOKEN }}
private-github-key: ${{ secrets.PRIVATE_GITHUB_KEY }}
- name: Install dependencies for running tests
run: |
@@ -161,19 +295,42 @@ jobs:
python3 -m pip install -r awx_collection/requirements.txt
- name: Run integration tests
id: make-run
run: |
echo "::remove-matcher owner=python::" # Disable annoying annotations from setup-python
echo '[general]' > ~/.tower_cli.cfg
echo 'host = https://${{ steps.awx.outputs.ip }}:8043' >> ~/.tower_cli.cfg
echo 'oauth_token = ${{ steps.awx.outputs.admin-token }}' >> ~/.tower_cli.cfg
echo 'username = admin' >> ~/.tower_cli.cfg
echo 'password = password' >> ~/.tower_cli.cfg
echo 'verify_ssl = false' >> ~/.tower_cli.cfg
TARGETS="$(ls awx_collection/tests/integration/targets | grep '${{ matrix.target-regex.regex }}' | tr '\n' ' ')"
make COLLECTION_VERSION=100.100.100-git COLLECTION_TEST_TARGET="--coverage --requirements $TARGETS" test_collection_integration
make COLLECTION_VERSION=100.100.100-git COLLECTION_TEST_TARGET="--requirements $TARGETS" test_collection_integration
env:
ANSIBLE_TEST_PREFER_PODMAN: 1
- name: Upload test coverage to Codecov
if: >-
!cancelled()
&& steps.make-run.outputs.cov-report-files != ''
uses: codecov/codecov-action@v4
with:
fail_ci_if_error: >-
${{
toJSON(env.UPSTREAM_REPOSITORY_ID == github.repository_id)
}}
files: >-
${{ steps.make-run.outputs.cov-report-files }}
flags: >-
CI-GHA,
ansible-test,
integration,
OS-${{
runner.os
}}
token: ${{ secrets.CODECOV_TOKEN }}
# Upload coverage report as artifact
- uses: actions/upload-artifact@v3
- uses: actions/upload-artifact@v4
if: always()
with:
name: coverage-${{ matrix.target-regex.name }}
@@ -193,24 +350,40 @@ jobs:
strategy:
fail-fast: false
steps:
- uses: actions/checkout@v3
- uses: actions/checkout@v4
with:
show-progress: false
- uses: ./.github/actions/setup-python
with:
python-version: '3.x'
- name: Upgrade ansible-core
run: python3 -m pip install --upgrade ansible-core
- name: Download coverage artifacts
uses: actions/download-artifact@v3
- name: Download coverage artifacts A to H
uses: actions/download-artifact@v4
with:
name: coverage-a-h
path: coverage
- name: Download coverage artifacts I to P
uses: actions/download-artifact@v4
with:
name: coverage-i-p
path: coverage
- name: Download coverage artifacts Z to Z
uses: actions/download-artifact@v4
with:
name: coverage-r-z0-9
path: coverage
- name: Combine coverage
run: |
make COLLECTION_VERSION=100.100.100-git install_collection
mkdir -p ~/.ansible/collections/ansible_collections/awx/awx/tests/output/coverage
cd coverage
for i in coverage-*; do
cp -rv $i/* ~/.ansible/collections/ansible_collections/awx/awx/tests/output/coverage/
done
cp -rv coverage/* ~/.ansible/collections/ansible_collections/awx/awx/tests/output/coverage/
cd ~/.ansible/collections/ansible_collections/awx/awx
ansible-test coverage combine --requirements
ansible-test coverage html
@@ -263,7 +436,7 @@ jobs:
done
- name: Upload coverage report as artifact
uses: actions/upload-artifact@v3
uses: actions/upload-artifact@v4
with:
name: awx-collection-integration-coverage-html
path: ~/.ansible/collections/ansible_collections/awx/awx/tests/output/reports/coverage

.github/workflows/dab-release.yml vendored Normal file

@@ -0,0 +1,57 @@
---
name: django-ansible-base requirements update
on:
workflow_dispatch:
schedule:
- cron: '0 6 * * *' # once an day @ 6 AM
permissions:
pull-requests: write
contents: write
jobs:
dab-pin-newest:
if: (github.repository_owner == 'ansible' && endsWith(github.repository, 'awx')) || github.event_name != 'schedule'
runs-on: ubuntu-latest
steps:
- id: dab-release
name: Get current django-ansible-base release version
uses: pozetroninc/github-action-get-latest-release@2a61c339ea7ef0a336d1daa35ef0cb1418e7676c # v0.8.0
with:
owner: ansible
repo: django-ansible-base
excludes: prerelease, draft
- name: Check out respository code
uses: actions/checkout@v4
- id: dab-pinned
name: Get current django-ansible-base pinned version
run:
echo "version=$(requirements/django-ansible-base-pinned-version.sh)" >> "$GITHUB_OUTPUT"
- name: Update django-ansible-base pinned version to upstream release
run:
requirements/django-ansible-base-pinned-version.sh -s ${{ steps.dab-release.outputs.release }}
- name: Create Pull Request
uses: peter-evans/create-pull-request@c5a7806660adbe173f04e3e038b0ccdcd758773c # v6
with:
base: devel
branch: bump-django-ansible-base
title: Bump django-ansible-base to ${{ steps.dab-release.outputs.release }}
body: |
##### SUMMARY
Automated .github/workflows/dab-release.yml
django-ansible-base upstream released version == ${{ steps.dab-release.outputs.release }}
requirements_git.txt django-ansible-base pinned version == ${{ steps.dab-pinned.outputs.version }}
##### ISSUE TYPE
- Bug, Docs Fix or other nominal change
##### COMPONENT NAME
- API
commit-message: |
Update django-ansible-base version to ${{ steps.dab-pinned.outputs.version }}
add-paths:
requirements/requirements_git.txt


@@ -2,58 +2,77 @@
name: Build/Push Development Images
env:
LC_ALL: "C.UTF-8" # prevent ERROR: Ansible could not initialize the preferred locale: unsupported locale setting
DOCKER_CACHE: "--no-cache" # using the cache will not rebuild git requirements and other things
on:
workflow_dispatch:
push:
branches:
- devel
- release_*
- feature_*
jobs:
push:
if: endsWith(github.repository, '/awx') || startsWith(github.ref, 'refs/heads/release_')
push-development-images:
runs-on: ubuntu-latest
timeout-minutes: 60
timeout-minutes: 120
permissions:
packages: write
contents: read
strategy:
fail-fast: false
matrix:
build-targets:
- image-name: awx_devel
make-target: docker-compose-buildx
- image-name: awx_kube_devel
make-target: awx-kube-dev-buildx
- image-name: awx
make-target: awx-kube-buildx
steps:
- uses: actions/checkout@v3
- name: Get python version from Makefile
run: echo py_version=`make PYTHON_VERSION` >> $GITHUB_ENV
- name: Set lower case owner name
- name: Skipping build of awx image for non-awx repository
run: |
echo "OWNER_LC=${OWNER,,}" >>${GITHUB_ENV}
echo "Skipping build of awx image for non-awx repository"
exit 0
if: matrix.build-targets.image-name == 'awx' && !endsWith(github.repository, '/awx')
- uses: actions/checkout@v4
with:
show-progress: false
- name: Set up QEMU
uses: docker/setup-qemu-action@v3
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
- name: Set GITHUB_ENV variables
run: |
echo "DEV_DOCKER_TAG_BASE=ghcr.io/${OWNER,,}" >> $GITHUB_ENV
echo "COMPOSE_TAG=${GITHUB_REF##*/}" >> $GITHUB_ENV
env:
OWNER: '${{ github.repository_owner }}'
- name: Install python ${{ env.py_version }}
uses: actions/setup-python@v4
with:
python-version: ${{ env.py_version }}
- uses: ./.github/actions/setup-python
- name: Log in to registry
run: |
echo "${{ secrets.GITHUB_TOKEN }}" | docker login ghcr.io -u ${{ github.actor }} --password-stdin
- name: Pre-pull image to warm build cache
run: |
docker pull ghcr.io/${OWNER_LC}/awx_devel:${GITHUB_REF##*/} || :
docker pull ghcr.io/${OWNER_LC}/awx_kube_devel:${GITHUB_REF##*/} || :
docker pull ghcr.io/${OWNER_LC}/awx:${GITHUB_REF##*/} || :
- name: Setup node and npm for the new UI build
uses: actions/setup-node@v2
with:
node-version: '18'
if: matrix.build-targets.image-name == 'awx'
- name: Build images
- name: Prebuild new UI for awx image (to speed up build process)
run: |
DEV_DOCKER_TAG_BASE=ghcr.io/${OWNER_LC} COMPOSE_TAG=${GITHUB_REF##*/} make docker-compose-build
DEV_DOCKER_TAG_BASE=ghcr.io/${OWNER_LC} COMPOSE_TAG=${GITHUB_REF##*/} make awx-kube-dev-build
DEV_DOCKER_TAG_BASE=ghcr.io/${OWNER_LC} COMPOSE_TAG=${GITHUB_REF##*/} make awx-kube-build
make ui
if: matrix.build-targets.image-name == 'awx'
- name: Push development images
- uses: ./.github/actions/setup-ssh-agent
with:
ssh-private-key: ${{ secrets.PRIVATE_GITHUB_KEY }}
- name: Build and push AWX devel images
run: |
docker push ghcr.io/${OWNER_LC}/awx_devel:${GITHUB_REF##*/}
docker push ghcr.io/${OWNER_LC}/awx_kube_devel:${GITHUB_REF##*/}
- name: Push AWX k8s image, only for upstream and feature branches
run: docker push ghcr.io/${OWNER_LC}/awx:${GITHUB_REF##*/}
if: endsWith(github.repository, '/awx')
make ${{ matrix.build-targets.make-target }}


@@ -8,7 +8,13 @@ jobs:
runs-on: ubuntu-latest
timeout-minutes: 30
steps:
- uses: actions/checkout@v3
- uses: actions/checkout@v4
with:
show-progress: false
- uses: ./.github/actions/setup-python
with:
python-version: '3.x'
- name: install tox
run: pip install tox


@@ -1,75 +0,0 @@
---
name: E2E Tests
env:
LC_ALL: "C.UTF-8" # prevent ERROR: Ansible could not initialize the preferred locale: unsupported locale setting
on:
pull_request_target:
types: [labeled]
jobs:
e2e-test:
if: contains(github.event.pull_request.labels.*.name, 'qe:e2e')
runs-on: ubuntu-latest
timeout-minutes: 40
permissions:
packages: write
contents: read
strategy:
matrix:
job: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24]
steps:
- uses: actions/checkout@v3
- uses: ./.github/actions/run_awx_devel
id: awx
with:
build-ui: true
github-token: ${{ secrets.GITHUB_TOKEN }}
- name: Pull awx_cypress_base image
run: |
docker pull quay.io/awx/awx_cypress_base:latest
- name: Checkout test project
uses: actions/checkout@v3
with:
repository: ${{ github.repository_owner }}/tower-qa
ssh-key: ${{ secrets.QA_REPO_KEY }}
path: tower-qa
ref: devel
- name: Build cypress
run: |
cd ${{ secrets.E2E_PROJECT }}/ui-tests/awx-pf-tests
docker build -t awx-pf-tests .
- name: Run E2E tests
env:
CYPRESS_RECORD_KEY: ${{ secrets.CYPRESS_RECORD_KEY }}
run: |
export COMMIT_INFO_BRANCH=$GITHUB_HEAD_REF
export COMMIT_INFO_AUTHOR=$GITHUB_ACTOR
export COMMIT_INFO_SHA=$GITHUB_SHA
export COMMIT_INFO_REMOTE=$GITHUB_REPOSITORY_OWNER
cd ${{ secrets.E2E_PROJECT }}/ui-tests/awx-pf-tests
AWX_IP=${{ steps.awx.outputs.ip }}
printenv > .env
echo "Executing tests:"
docker run \
--network '_sources_default' \
--ipc=host \
--env-file=.env \
-e CYPRESS_baseUrl="https://$AWX_IP:8043" \
-e CYPRESS_AWX_E2E_USERNAME=admin \
-e CYPRESS_AWX_E2E_PASSWORD='password' \
-e COMMAND="npm run cypress-concurrently-gha" \
-v /dev/shm:/dev/shm \
-v $PWD:/e2e \
-w /e2e \
awx-pf-tests run --project .
- uses: ./.github/actions/upload_awx_devel_logs
if: always()
with:
log-filename: e2e-${{ matrix.job }}.log


@@ -2,12 +2,10 @@
name: Feature branch deletion cleanup
env:
LC_ALL: "C.UTF-8" # prevent ERROR: Ansible could not initialize the preferred locale: unsupported locale setting
on:
delete:
branches:
- feature_**
on: delete
jobs:
push:
branch_delete:
if: ${{ github.event.ref_type == 'branch' && startsWith(github.event.ref, 'feature_') }}
runs-on: ubuntu-latest
timeout-minutes: 20
permissions:
@@ -22,6 +20,4 @@ jobs:
run: |
ansible localhost -c local, -m command -a "{{ ansible_python_interpreter + ' -m pip install boto3'}}"
ansible localhost -c local -m aws_s3 \
-a "bucket=awx-public-ci-files object=${GITHUB_REF##*/}/schema.json mode=delete permission=public-read"
-a "bucket=awx-public-ci-files object=${GITHUB_REF##*/}/schema.json mode=delobj permission=public-read"


@@ -30,10 +30,15 @@ jobs:
timeout-minutes: 20
name: Label Issue - Community
steps:
- uses: actions/checkout@v3
- uses: actions/setup-python@v4
- uses: actions/checkout@v4
with:
show-progress: false
- uses: ./.github/actions/setup-python
- name: Install python requests
run: pip install requests
- name: Check if user is a member of Ansible org
uses: jannekem/run-python-script-action@v1
id: check_user


@@ -29,8 +29,14 @@ jobs:
timeout-minutes: 20
name: Label PR - Community
steps:
- uses: actions/checkout@v3
- uses: actions/setup-python@v4
- uses: actions/checkout@v4
with:
show-progress: false
- uses: ./.github/actions/setup-python
with:
python-version: '3.x'
- name: Install python requests
run: pip install requests
- name: Check if user is a member of Ansible org


@@ -7,7 +7,11 @@ env:
on:
release:
types: [published]
workflow_dispatch:
inputs:
tag_name:
description: 'Name for the tag of the release.'
required: true
permissions:
contents: read # to fetch code (actions/checkout)
@@ -17,16 +21,22 @@ jobs:
runs-on: ubuntu-latest
timeout-minutes: 90
steps:
- name: Set GitHub Env vars for workflow_dispatch event
if: ${{ github.event_name == 'workflow_dispatch' }}
run: |
echo "TAG_NAME=${{ github.event.inputs.tag_name }}" >> $GITHUB_ENV
- name: Set GitHub Env vars if release event
if: ${{ github.event_name == 'release' }}
run: |
echo "TAG_NAME=${{ github.event.release.tag_name }}" >> $GITHUB_ENV
- name: Checkout awx
uses: actions/checkout@v3
- name: Get python version from Makefile
run: echo py_version=`make PYTHON_VERSION` >> $GITHUB_ENV
- name: Install python ${{ env.py_version }}
uses: actions/setup-python@v4
uses: actions/checkout@v4
with:
python-version: ${{ env.py_version }}
show-progress: false
- uses: ./.github/actions/setup-python
- name: Install dependencies
run: |
@@ -43,16 +53,21 @@ jobs:
- name: Build collection and publish to galaxy
env:
COLLECTION_NAMESPACE: ${{ env.collection_namespace }}
COLLECTION_VERSION: ${{ github.event.release.tag_name }}
COLLECTION_VERSION: ${{ env.TAG_NAME }}
COLLECTION_TEMPLATE_VERSION: true
run: |
sudo apt-get install jq
make build_collection
if [ "$(curl -L --head -sw '%{http_code}' https://galaxy.ansible.com/download/${{ env.collection_namespace }}-awx-${{ github.event.release.tag_name }}.tar.gz | tail -1)" == "302" ] ; then \
echo "Galaxy release already done"; \
else \
count=$(curl -s https://galaxy.ansible.com/api/v3/plugin/ansible/search/collection-versions/\?namespace\=${COLLECTION_NAMESPACE}\&name\=awx\&version\=${COLLECTION_VERSION} | jq .meta.count)
if [[ "$count" == "1" ]]; then
echo "Galaxy release already done";
elif [[ "$count" == "0" ]]; then
ansible-galaxy collection publish \
--token=${{ secrets.GALAXY_TOKEN }} \
awx_collection_build/${{ env.collection_namespace }}-awx-${{ github.event.release.tag_name }}.tar.gz; \
awx_collection_build/${COLLECTION_NAMESPACE}-awx-${COLLECTION_VERSION}.tar.gz;
else
echo "Unexpected count from galaxy search: $count";
exit 1;
fi
- name: Set official pypi info
@@ -64,9 +79,11 @@ jobs:
if: ${{ github.repository_owner != 'ansible' }}
- name: Build awxkit and upload to pypi
env:
SETUPTOOLS_SCM_PRETEND_VERSION: ${{ env.TAG_NAME }}
run: |
git reset --hard
cd awxkit && python3 setup.py bdist_wheel
cd awxkit && python3 setup.py sdist bdist_wheel
twine upload \
-r ${{ env.pypi_repo }} \
-u ${{ secrets.PYPI_USERNAME }} \
@@ -83,11 +100,15 @@ jobs:
- name: Re-tag and promote awx image
run: |
docker pull ghcr.io/${{ github.repository }}:${{ github.event.release.tag_name }}
docker tag ghcr.io/${{ github.repository }}:${{ github.event.release.tag_name }} quay.io/${{ github.repository }}:${{ github.event.release.tag_name }}
docker tag ghcr.io/${{ github.repository }}:${{ github.event.release.tag_name }} quay.io/${{ github.repository }}:latest
docker push quay.io/${{ github.repository }}:${{ github.event.release.tag_name }}
docker push quay.io/${{ github.repository }}:latest
docker pull ghcr.io/${{ github.repository_owner }}/awx-ee:${{ github.event.release.tag_name }}
docker tag ghcr.io/${{ github.repository_owner }}/awx-ee:${{ github.event.release.tag_name }} quay.io/${{ github.repository_owner }}/awx-ee:${{ github.event.release.tag_name }}
docker push quay.io/${{ github.repository_owner }}/awx-ee:${{ github.event.release.tag_name }}
docker buildx imagetools create \
ghcr.io/${{ github.repository }}:${{ env.TAG_NAME }} \
--tag quay.io/${{ github.repository }}:${{ env.TAG_NAME }}
docker buildx imagetools create \
ghcr.io/${{ github.repository }}:${{ env.TAG_NAME }} \
--tag quay.io/${{ github.repository }}:latest
- name: Re-tag and promote awx-ee image
run: |
docker buildx imagetools create \
ghcr.io/${{ github.repository_owner }}/awx-ee:${{ env.TAG_NAME }} \
--tag quay.io/${{ github.repository_owner }}/awx-ee:${{ env.TAG_NAME }}


@@ -45,68 +45,95 @@ jobs:
exit 0
- name: Checkout awx
uses: actions/checkout@v3
uses: actions/checkout@v4
with:
show-progress: false
path: awx
- name: Get python version from Makefile
run: echo py_version=`make PYTHON_VERSION` >> $GITHUB_ENV
- name: Install python ${{ env.py_version }}
uses: actions/setup-python@v4
- name: Checkout awx-operator
uses: actions/checkout@v4
with:
python-version: ${{ env.py_version }}
show-progress: false
repository: ${{ github.repository_owner }}/awx-operator
path: awx-operator
- name: Checkout awx-logos
uses: actions/checkout@v3
uses: actions/checkout@v4
with:
show-progress: false
repository: ansible/awx-logos
path: awx-logos
- name: Checkout awx-operator
uses: actions/checkout@v3
- uses: ./awx/.github/actions/setup-python
with:
repository: ${{ github.repository_owner }}/awx-operator
path: awx-operator
working-directory: awx
- name: Install playbook dependencies
run: |
python3 -m pip install docker
- name: Build and stage AWX
- name: Log into registry ghcr.io
uses: docker/login-action@v3
with:
registry: ghcr.io
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Copy logos for inclusion in sdist for official build
working-directory: awx
run: |
ansible-playbook -v tools/ansible/build.yml \
-e registry=ghcr.io \
-e registry_username=${{ github.actor }} \
-e registry_password=${{ secrets.GITHUB_TOKEN }} \
-e awx_image=${{ github.repository }} \
-e awx_version=${{ github.event.inputs.version }} \
-e ansible_python_interpreter=$(which python3) \
-e push=yes \
-e awx_official=yes
cp ../awx-logos/awx/ui/client/assets/* awx/ui/public/static/media/
- name: Log in to GHCR
run: |
echo ${{ secrets.GITHUB_TOKEN }} | docker login ghcr.io -u ${{ github.actor }} --password-stdin
- name: Setup node and npm for new UI build
uses: actions/setup-node@v2
with:
node-version: '18'
- name: Log in to Quay
- name: Prebuild new UI for awx image (to speed up build process)
working-directory: awx
run: make ui
- name: Set build env variables
run: |
echo ${{ secrets.QUAY_TOKEN }} | docker login quay.io -u ${{ secrets.QUAY_USER }} --password-stdin
echo "DEV_DOCKER_TAG_BASE=ghcr.io/${OWNER,,}" >> $GITHUB_ENV
echo "COMPOSE_TAG=${{ github.event.inputs.version }}" >> $GITHUB_ENV
echo "VERSION=${{ github.event.inputs.version }}" >> $GITHUB_ENV
echo "AWX_TEST_VERSION=${{ github.event.inputs.version }}" >> $GITHUB_ENV
echo "AWX_TEST_IMAGE=ghcr.io/${OWNER,,}/awx" >> $GITHUB_ENV
echo "AWX_EE_TEST_IMAGE=ghcr.io/${OWNER,,}/awx-ee:${{ github.event.inputs.version }}" >> $GITHUB_ENV
echo "AWX_OPERATOR_TEST_IMAGE=ghcr.io/${OWNER,,}/awx-operator:${{ github.event.inputs.operator_version }}" >> $GITHUB_ENV
env:
OWNER: ${{ github.repository_owner }}
- name: Build and stage AWX
working-directory: awx
env:
DOCKER_BUILDX_PUSH: true
HEADLESS: false
PLATFORMS: linux/amd64,linux/arm64
run: |
make awx-kube-buildx
- name: tag awx-ee:latest with version input
run: |
docker pull quay.io/ansible/awx-ee:latest
docker tag quay.io/ansible/awx-ee:latest ghcr.io/${{ github.repository_owner }}/awx-ee:${{ github.event.inputs.version }}
docker push ghcr.io/${{ github.repository_owner }}/awx-ee:${{ github.event.inputs.version }}
docker buildx imagetools create \
quay.io/ansible/awx-ee:latest \
--tag ${AWX_EE_TEST_IMAGE}
- name: Build and stage awx-operator
- name: Stage awx-operator image
working-directory: awx-operator
run: |
BUILD_ARGS="--build-arg DEFAULT_AWX_VERSION=${{ github.event.inputs.version }} \
--build-arg OPERATOR_VERSION=${{ github.event.inputs.operator_version }}" \
IMAGE_TAG_BASE=ghcr.io/${{ github.repository_owner }}/awx-operator \
VERSION=${{ github.event.inputs.operator_version }} make docker-build docker-push
BUILD_ARGS="--build-arg DEFAULT_AWX_VERSION=${{ github.event.inputs.version}} \
--build-arg OPERATOR_VERSION=${{ github.event.inputs.operator_version }}" \
IMG=${AWX_OPERATOR_TEST_IMAGE} \
make docker-buildx
- name: Pulling images for test deployment with awx-operator
# awx operator molecue test expect to kind load image and buildx exports image to registry and not local
run: |
docker pull -q ${AWX_OPERATOR_TEST_IMAGE}
docker pull -q ${AWX_EE_TEST_IMAGE}
docker pull -q ${AWX_TEST_IMAGE}:${AWX_TEST_VERSION}
- name: Run test deployment with awx-operator
working-directory: awx-operator
@@ -116,10 +143,6 @@ jobs:
sudo rm -f $(which kustomize)
make kustomize
KUSTOMIZE_PATH=$(readlink -f bin/kustomize) molecule test -s kind
env:
AWX_TEST_IMAGE: ${{ github.repository }}
AWX_TEST_VERSION: ${{ github.event.inputs.version }}
AWX_EE_TEST_IMAGE: ghcr.io/${{ github.repository_owner }}/awx-ee:${{ github.event.inputs.version }}
- name: Create draft release for AWX
working-directory: awx


@@ -13,7 +13,9 @@ jobs:
steps:
- name: Checkout branch
uses: actions/checkout@v3
uses: actions/checkout@v4
with:
show-progress: false
- name: Update PR Body
env:


@@ -5,6 +5,7 @@ env:
LC_ALL: "C.UTF-8" # prevent ERROR: Ansible could not initialize the preferred locale: unsupported locale setting
on:
workflow_dispatch:
push:
branches:
- devel
@@ -18,23 +19,23 @@ jobs:
packages: write
contents: read
steps:
- uses: actions/checkout@v3
- name: Get python version from Makefile
run: echo py_version=`make PYTHON_VERSION` >> $GITHUB_ENV
- name: Install python ${{ env.py_version }}
uses: actions/setup-python@v4
- uses: actions/checkout@v4
with:
python-version: ${{ env.py_version }}
show-progress: false
- uses: ./.github/actions/setup-python
- name: Log in to registry
run: |
echo "${{ secrets.GITHUB_TOKEN }}" | docker login ghcr.io -u ${{ github.actor }} --password-stdin
- uses: ./.github/actions/setup-ssh-agent
with:
ssh-private-key: ${{ secrets.PRIVATE_GITHUB_KEY }}
- name: Pre-pull image to warm build cache
run: |
docker pull ghcr.io/${{ github.repository_owner }}/awx_devel:${GITHUB_REF##*/} || :
docker pull -q ghcr.io/${{ github.repository_owner }}/awx_devel:${GITHUB_REF##*/} || :
- name: Build image
run: |
@@ -54,5 +55,3 @@ jobs:
ansible localhost -c local, -m command -a "{{ ansible_python_interpreter + ' -m pip install boto3'}}"
ansible localhost -c local -m aws_s3 \
-a "src=${{ github.workspace }}/schema.json bucket=awx-public-ci-files object=${GITHUB_REF##*/}/schema.json mode=put permission=public-read"

.gitignore vendored

@@ -20,23 +20,10 @@ awx/projects
awx/job_output
awx/public/media
awx/public/static
awx/ui/tests/test-results.xml
awx/ui/client/src/local_settings.json
awx/main/fixtures
awx/*.log
tower/tower_warnings.log
celerybeat-schedule
awx/ui/static
awx/ui/build_test
awx/ui/client/languages
awx/ui/templates/ui/index.html
awx/ui/templates/ui/installing.html
awx/ui/node_modules/
awx/ui/src/locales/*/messages.js
awx/ui/coverage/
awx/ui/build
awx/ui/.env.local
awx/ui/instrumented
rsyslog.pid
tools/docker-compose/ansible/awx_dump.sql
tools/docker-compose/Dockerfile
@@ -44,7 +31,11 @@ tools/docker-compose/_build
tools/docker-compose/_sources
tools/docker-compose/overrides/
tools/docker-compose-minikube/_sources
tools/docker-compose/keycloak.awx.realm.json
!tools/docker-compose/editable_dependencies
tools/docker-compose/editable_dependencies/*
!tools/docker-compose/editable_dependencies/README.md
!tools/docker-compose/editable_dependencies/install.sh
# Tower setup playbook testing
setup/test/roles/postgresql
@@ -74,11 +65,6 @@ __pycache__
/tmp
**/npm-debug.log*
# UI build flag files
awx/ui/.deps_built
awx/ui/.release_built
awx/ui/.release_deps_built
# Testing
.cache
.coverage
@@ -156,16 +142,18 @@ use_dev_supervisor.txt
.idea/*
*.unison.tmp
*.#
/awx/ui/.ui-built
/_build/
/_build_kube_dev/
/Dockerfile
/Dockerfile.dev
/Dockerfile.kube-dev
awx/ui_next/src
awx/ui_next/build
awx/ui/src
awx/ui/build
# Docs build stuff
docs/docsite/build/
_readthedocs/
# Pyenv
.python-version

.vscode/launch.json vendored Normal file

@@ -0,0 +1,113 @@
{
"version": "0.2.0",
"configurations": [
{
"name": "run_ws_heartbeat",
"type": "debugpy",
"request": "launch",
"program": "manage.py",
"args": ["run_ws_heartbeat"],
"django": true,
"preLaunchTask": "stop awx-ws-heartbeat",
"postDebugTask": "start awx-ws-heartbeat"
},
{
"name": "run_cache_clear",
"type": "debugpy",
"request": "launch",
"program": "manage.py",
"args": ["run_cache_clear"],
"django": true,
"preLaunchTask": "stop awx-cache-clear",
"postDebugTask": "start awx-cache-clear"
},
{
"name": "run_callback_receiver",
"type": "debugpy",
"request": "launch",
"program": "manage.py",
"args": ["run_callback_receiver"],
"django": true,
"preLaunchTask": "stop awx-receiver",
"postDebugTask": "start awx-receiver"
},
{
"name": "run_dispatcher",
"type": "debugpy",
"request": "launch",
"program": "manage.py",
"args": ["run_dispatcher"],
"django": true,
"preLaunchTask": "stop awx-dispatcher",
"postDebugTask": "start awx-dispatcher"
},
{
"name": "run_rsyslog_configurer",
"type": "debugpy",
"request": "launch",
"program": "manage.py",
"args": ["run_rsyslog_configurer"],
"django": true,
"preLaunchTask": "stop awx-rsyslog-configurer",
"postDebugTask": "start awx-rsyslog-configurer"
},
{
"name": "run_cache_clear",
"type": "debugpy",
"request": "launch",
"program": "manage.py",
"args": ["run_cache_clear"],
"django": true,
"preLaunchTask": "stop awx-cache-clear",
"postDebugTask": "start awx-cache-clear"
},
{
"name": "run_wsrelay",
"type": "debugpy",
"request": "launch",
"program": "manage.py",
"args": ["run_wsrelay"],
"django": true,
"preLaunchTask": "stop awx-wsrelay",
"postDebugTask": "start awx-wsrelay"
},
{
"name": "daphne",
"type": "debugpy",
"request": "launch",
"program": "/var/lib/awx/venv/awx/bin/daphne",
"args": ["-b", "127.0.0.1", "-p", "8051", "awx.asgi:channel_layer"],
"django": true,
"preLaunchTask": "stop awx-daphne",
"postDebugTask": "start awx-daphne"
},
{
"name": "runserver(uwsgi alternative)",
"type": "debugpy",
"request": "launch",
"program": "manage.py",
"args": ["runserver", "127.0.0.1:8052"],
"django": true,
"preLaunchTask": "stop awx-uwsgi",
"postDebugTask": "start awx-uwsgi"
},
{
"name": "runserver_plus(uwsgi alternative)",
"type": "debugpy",
"request": "launch",
"program": "manage.py",
"args": ["runserver_plus", "127.0.0.1:8052"],
"django": true,
"preLaunchTask": "stop awx-uwsgi and install Werkzeug",
"postDebugTask": "start awx-uwsgi"
},
{
"name": "shell_plus",
"type": "debugpy",
"request": "launch",
"program": "manage.py",
"args": ["shell_plus"],
"django": true,
},
]
}

.vscode/tasks.json vendored Normal file

@@ -0,0 +1,100 @@
{
"version": "2.0.0",
"tasks": [
{
"label": "start awx-cache-clear",
"type": "shell",
"command": "supervisorctl start tower-processes:awx-cache-clear"
},
{
"label": "stop awx-cache-clear",
"type": "shell",
"command": "supervisorctl stop tower-processes:awx-cache-clear"
},
{
"label": "start awx-daphne",
"type": "shell",
"command": "supervisorctl start tower-processes:awx-daphne"
},
{
"label": "stop awx-daphne",
"type": "shell",
"command": "supervisorctl stop tower-processes:awx-daphne"
},
{
"label": "start awx-dispatcher",
"type": "shell",
"command": "supervisorctl start tower-processes:awx-dispatcher"
},
{
"label": "stop awx-dispatcher",
"type": "shell",
"command": "supervisorctl stop tower-processes:awx-dispatcher"
},
{
"label": "start awx-receiver",
"type": "shell",
"command": "supervisorctl start tower-processes:awx-receiver"
},
{
"label": "stop awx-receiver",
"type": "shell",
"command": "supervisorctl stop tower-processes:awx-receiver"
},
{
"label": "start awx-rsyslog-configurer",
"type": "shell",
"command": "supervisorctl start tower-processes:awx-rsyslog-configurer"
},
{
"label": "stop awx-rsyslog-configurer",
"type": "shell",
"command": "supervisorctl stop tower-processes:awx-rsyslog-configurer"
},
{
"label": "start awx-rsyslogd",
"type": "shell",
"command": "supervisorctl start tower-processes:awx-rsyslogd"
},
{
"label": "stop awx-rsyslogd",
"type": "shell",
"command": "supervisorctl stop tower-processes:awx-rsyslogd"
},
{
"label": "start awx-uwsgi",
"type": "shell",
"command": "supervisorctl start tower-processes:awx-uwsgi"
},
{
"label": "stop awx-uwsgi",
"type": "shell",
"command": "supervisorctl stop tower-processes:awx-uwsgi"
},
{
"label": "stop awx-uwsgi and install Werkzeug",
"type": "shell",
"command": "pip install Werkzeug; supervisorctl stop tower-processes:awx-uwsgi"
},
{
"label": "start awx-ws-heartbeat",
"type": "shell",
"command": "supervisorctl start tower-processes:awx-ws-heartbeat"
},
{
"label": "stop awx-ws-heartbeat",
"type": "shell",
"command": "supervisorctl stop tower-processes:awx-ws-heartbeat"
},
{
"label": "start awx-wsrelay",
"type": "shell",
"command": "supervisorctl start tower-processes:awx-wsrelay"
},
{
"label": "stop awx-wsrelay",
"type": "shell",
"command": "supervisorctl stop tower-processes:awx-wsrelay"
}
]
}

.yamllint

@@ -5,12 +5,12 @@ ignore: |
awx/main/tests/data/inventory/plugins/**
# vault files
awx/main/tests/data/ansible_utils/playbooks/valid/vault.yml
awx/ui/test/e2e/tests/smoke-vars.yml
awx/ui/node_modules
tools/docker-compose/_sources
# django template files
awx/api/templates/instance_install_bundle/**
.readthedocs.yaml
tools/loki
tools/otel
extends: default

CONTRIBUTING.md

@@ -2,7 +2,7 @@
Hi there! We're excited to have you as a contributor.
Have questions about this document or anything not covered here? Come chat with us at `#ansible-awx` on irc.libera.chat, or submit your question to the [mailing list](https://groups.google.com/forum/#!forum/awx-project).
Have questions about this document or anything not covered here? Create a topic using the [AWX tag on the Ansible Forum](https://forum.ansible.com/tag/awx).
## Table of contents
@@ -30,7 +30,7 @@ Have questions about this document or anything not covered here? Come chat with
- You must use `git commit --signoff` for any commit to be merged, and agree that usage of --signoff constitutes agreement with the terms of [DCO 1.1](./DCO_1_1.md).
- Take care to make sure no merge commits are in the submission, and use `git rebase` vs `git merge` for this reason.
- If collaborating with someone else on the same branch, consider using `--force-with-lease` instead of `--force`. This will prevent you from accidentally overwriting commits pushed by someone else. For more information, see [git push docs](https://git-scm.com/docs/git-push#git-push---force-with-leaseltrefnamegt).
- If submitting a large code change, it's a good idea to join the `#ansible-awx` channel on irc.libera.chat, and talk about what you would like to do or add first. This not only helps everyone know what's going on, it also helps save time and effort, if the community decides some changes are needed.
- If submitting a large code change, it's a good idea to create a [forum topic tagged with 'awx'](https://forum.ansible.com/tag/awx), and talk about what you would like to do or add first. This not only helps everyone know what's going on, it also helps save time and effort, if the community decides some changes are needed.
- We ask all of our community members and contributors to adhere to the [Ansible code of conduct](http://docs.ansible.com/ansible/latest/community/code_of_conduct.html). If you have questions, or need assistance, please reach out to our community team at [codeofconduct@ansible.com](mailto:codeofconduct@ansible.com)
## Setting up your development environment
@@ -67,7 +67,7 @@ If you're not using Docker for Mac, or Docker for Windows, you may need, or choo
#### Frontend Development
See [the ui development documentation](awx/ui/CONTRIBUTING.md).
See [the ansible-ui development documentation](https://github.com/ansible/ansible-ui/blob/main/CONTRIBUTING.md).
#### Fork and clone the AWX repo
@@ -121,18 +121,18 @@ If it has someone assigned to it then that person is the person responsible for
**NOTES**
> Issue assignment will only be done for maintainers of the project. If you decide to work on an issue, please feel free to add a comment in the issue to let others know that you are working on it; but know that we will accept the first pull request from whomever is able to fix an issue. Once your PR is accepted we can add you as an assignee to an issue upon request.
> If you work in a part of the codebase that is going through active development, your changes may be rejected, or you may be asked to `rebase`. A good idea before starting work is to have a discussion with us in the `#ansible-awx` channel on irc.libera.chat, or on the [mailing list](https://groups.google.com/forum/#!forum/awx-project).
> If you work in a part of the codebase that is going through active development, your changes may be rejected, or you may be asked to `rebase`. A good idea before starting work is to have a discussion with us in the [Ansible Forum](https://forum.ansible.com/tag/awx).
> If you're planning to develop features or fixes for the UI, please review the [UI Developer doc](./awx/ui/README.md).
> If you're planning to develop features or fixes for the UI, please review the [UI Developer doc](https://github.com/ansible/ansible-ui/blob/main/CONTRIBUTING.md).
### Translations
At this time we do not accept PRs for adding additional language translations, as we have an automated process for generating our translations. This is because translations require constant care as new strings are added and changed in the code base; the .po files are overwritten during every translation release cycle. We also can't support a lot of translations on AWX, as it's an open source project and each language adds time and cost to maintain. If you would like to see AWX translated into a new language, please create an issue and ask others you know to upvote the issue. Our translation team will review the needs of the community and see what they can do around supporting additional languages.
If you find an issue with an existing translation, please see the [Reporting Issues](#reporting-issues) section to open an issue and our translation team will work with you on a resolution.
## Submitting Pull Requests
@@ -143,15 +143,13 @@ Here are a few things you can do to help the visibility of your change, and incr
- No issues when running linters/code checkers
- Python: black: `(container)/awx_devel$ make black`
- Javascript: `(container)/awx_devel$ make ui-lint`
- No issues from unit tests
- Python: py.test: `(container)/awx_devel$ make test`
- JavaScript: `(container)/awx_devel$ make ui-test`
- Write tests for new functionality, update/add tests for bug fixes
- Make the smallest change possible
- Write good commit messages. See [How to write a Git commit message](https://chris.beams.io/posts/git-commit/).
It's generally a good idea to discuss features with us first by engaging us in the `#ansible-awx` channel on irc.libera.chat, or on the [mailing list](https://groups.google.com/forum/#!forum/awx-project).
It's generally a good idea to discuss features with us first by engaging on the [Ansible Forum](https://forum.ansible.com/tag/awx).
We like to keep our commit history clean, and will require resubmission of pull requests that contain merge commits. Use `git pull --rebase`, rather than
`git pull`, and `git rebase`, rather than `git merge`.
@@ -161,11 +159,11 @@ Sometimes it might take us a while to fully review your PR. We try to keep the `
When your PR is initially submitted the checks will not be run until a maintainer allows them to be. Once a maintainer has done a quick review of your work the PR will have the linter and unit tests run against them via GitHub Actions, and the status reported in the PR.
## Reporting Issues
We welcome your feedback, and encourage you to file an issue when you run into a problem. But before opening a new issue, we ask that you please view our [Issues guide](./ISSUES.md).
## Getting Help
If you require additional assistance, please reach out to us at `#ansible-awx` on irc.libera.chat, or submit your question to the [mailing list](https://groups.google.com/forum/#!forum/awx-project).
If you require additional assistance, please submit your question to the [Ansible Forum](https://forum.ansible.com/tag/awx).
For extra information on debugging tools, see [Debugging](./docs/debugging/).

ISSUES.md

@@ -1,11 +1,11 @@
# Issues
## Reporting
Use the GitHub [issue tracker](https://github.com/ansible/awx/issues) for filing bugs. In order to save time, and help us respond to issues quickly, make sure to fill out as much of the issue template
as possible. Version information, and an accurate reproducing scenario are critical to helping us identify the problem.
Please don't use the issue tracker as a way to ask how to do something. Instead, use the [mailing list](https://groups.google.com/forum/#!forum/awx-project) , and the `#ansible-awx` channel on irc.libera.chat to get help.
Please don't use the issue tracker as a way to ask how to do something. Instead, use the [Ansible Forum](https://forum.ansible.com/tag/awx).
Before opening a new issue, please use the issue search feature to see if what you're experiencing has already been reported. If you have any extra detail to provide, please comment. Otherwise, rather than posting a "me too" comment, please consider giving it a ["thumbs up"](https://github.com/blog/2119-add-reactions-to-pull-requests-issues-and-comment) to give us an indication of the severity of the problem.
@@ -14,7 +14,7 @@ Before opening a new issue, please use the issue search feature to see if what y
When reporting issues for the UI, we also appreciate having screen shots and any error messages from the web browser's console. It's not unusual for browser extensions
and plugins to cause problems. Reporting those will also help speed up analyzing and resolving UI bugs.
### API and backend issues
For the API and backend services, please capture all of the logs that you can from the time the problem occurred.
@@ -80,7 +80,7 @@ If any of those items are missing your pull request will still get the `needs_tr
Currently you can expect awxbot to add common labels such as `state:needs_triage`, `type:bug`, `component:docs`, etc...
These labels are determined by the template data. Please use the template and fill it out as accurately as possible.
The `state:needs_triage` label will will remain on your pull request until a person has looked at it.
The `state:needs_triage` label will remain on your pull request until a person has looked at it.
You can also expect the bot to CC maintainers of specific areas of the code, this will notify them that there is a pull request by placing a comment on the pull request.
The comment will look something like `CC @matburt @wwitzel3 ...`.

MANIFEST.in

@@ -4,9 +4,7 @@ recursive-include awx *.mo
recursive-include awx/static *
recursive-include awx/templates *.html
recursive-include awx/api/templates *.md *.html *.yml
recursive-include awx/ui/build *.html
recursive-include awx/ui/build *
recursive-include awx/ui_next/build *
recursive-include awx/playbooks *.yml
recursive-include awx/lib/site-packages *
recursive-include awx/plugins *.ps1
@@ -17,12 +15,11 @@ recursive-include licenses *
recursive-exclude awx devonly.py*
recursive-exclude awx/api/tests *
recursive-exclude awx/main/tests *
recursive-exclude awx/ui/client *
recursive-exclude awx/settings local_settings.py*
include tools/scripts/request_tower_configuration.sh
include tools/scripts/request_tower_configuration.ps1
include tools/scripts/automation-controller-service
include tools/scripts/failure-event-handler
include tools/scripts/rsyslog-4xx-recovery
include tools/scripts/awx-python
include awx/playbooks/library/mkfifo.py
include tools/sosreport/*

Makefile

@@ -1,16 +1,17 @@
-include awx/ui_next/Makefile
-include awx/ui/Makefile
PYTHON := $(notdir $(shell for i in python3.9 python3; do command -v $$i; done|sed 1q))
PYTHON := $(notdir $(shell for i in python3.11 python3; do command -v $$i; done|sed 1q))
SHELL := bash
DOCKER_COMPOSE ?= docker-compose
DOCKER_COMPOSE ?= docker compose
OFFICIAL ?= no
NODE ?= node
NPM_BIN ?= npm
KIND_BIN ?= $(shell which kind)
CHROMIUM_BIN=/tmp/chrome-linux/chrome
GIT_REPO_NAME ?= $(shell basename `git rev-parse --show-toplevel`)
GIT_BRANCH ?= $(shell git rev-parse --abbrev-ref HEAD)
MANAGEMENT_COMMAND ?= awx-manage
VERSION ?= $(shell $(PYTHON) tools/scripts/scm_version.py)
VERSION ?= $(shell $(PYTHON) tools/scripts/scm_version.py 2> /dev/null)
# ansible-test requires a semver compatible version, so we allow overrides to hack it
COLLECTION_VERSION ?= $(shell $(PYTHON) tools/scripts/scm_version.py | cut -d . -f 1-3)
@@ -23,7 +24,7 @@ COLLECTION_TEST_TARGET ?=
# args for collection install
COLLECTION_PACKAGE ?= awx
COLLECTION_NAMESPACE ?= awx
COLLECTION_INSTALL = ~/.ansible/collections/ansible_collections/$(COLLECTION_NAMESPACE)/$(COLLECTION_PACKAGE)
COLLECTION_INSTALL = $(HOME)/.ansible/collections/ansible_collections/$(COLLECTION_NAMESPACE)/$(COLLECTION_PACKAGE)
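# Note: make expands $(HOME) itself, while a literal '~' is left for the shell
# to expand (and only in some positions), so $(HOME) is the safer spelling here.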
COLLECTION_TEMPLATE_VERSION ?= false
# NOTE: This defaults the container image version to the branch that's active
@@ -31,10 +32,6 @@ COMPOSE_TAG ?= $(GIT_BRANCH)
MAIN_NODE_TYPE ?= hybrid
# If set to true docker-compose will also start a pgbouncer instance and use it
PGBOUNCER ?= false
# If set to true docker-compose will also start a keycloak instance
KEYCLOAK ?= false
# If set to true docker-compose will also start an ldap instance
LDAP ?= false
# If set to true docker-compose will also start a splunk instance
SPLUNK ?= false
# If set to true docker-compose will also start a prometheus instance
@@ -43,8 +40,16 @@ PROMETHEUS ?= false
GRAFANA ?= false
# If set to true docker-compose will also start a hashicorp vault instance
VAULT ?= false
# If set to true docker-compose will also start a tacacs+ instance
TACACS ?= false
# If set to true docker-compose will also start a hashicorp vault instance with TLS enabled
VAULT_TLS ?= false
# If set to true docker-compose will also start an OpenTelemetry Collector instance
OTEL ?= false
# If set to true docker-compose will also start a Loki instance
LOKI ?= false
# If set to true docker-compose will install editable dependencies
EDITABLE_DEPENDENCIES ?= false
# If set to true, use tls for postgres connection
PG_TLS ?= false
VENV_BASE ?= /var/lib/awx/venv
@@ -52,7 +57,12 @@ DEV_DOCKER_OWNER ?= ansible
# Docker will only accept lowercase, so github names like Paul need to be paul
DEV_DOCKER_OWNER_LOWER = $(shell echo $(DEV_DOCKER_OWNER) | tr A-Z a-z)
DEV_DOCKER_TAG_BASE ?= ghcr.io/$(DEV_DOCKER_OWNER_LOWER)
DEVEL_IMAGE_NAME ?= $(DEV_DOCKER_TAG_BASE)/awx_devel:$(COMPOSE_TAG)
DEVEL_IMAGE_NAME ?= $(DEV_DOCKER_TAG_BASE)/$(GIT_REPO_NAME)_devel:$(COMPOSE_TAG)
IMAGE_KUBE_DEV=$(DEV_DOCKER_TAG_BASE)/$(GIT_REPO_NAME)_kube_devel:$(COMPOSE_TAG)
IMAGE_KUBE=$(DEV_DOCKER_TAG_BASE)/$(GIT_REPO_NAME):$(COMPOSE_TAG)
# Common command to use for running ansible-playbook
ANSIBLE_PLAYBOOK ?= ansible-playbook -e ansible_python_interpreter=$(PYTHON)
RECEPTOR_IMAGE ?= quay.io/ansible/receptor:devel
@@ -61,7 +71,7 @@ RECEPTOR_IMAGE ?= quay.io/ansible/receptor:devel
SRC_ONLY_PKGS ?= cffi,pycparser,psycopg,twilio
# These should be upgraded in the AWX and Ansible venv before attempting
# to install the actual requirements
VENV_BOOTSTRAP ?= pip==21.2.4 setuptools==65.6.3 setuptools_scm[toml]==7.0.5 wheel==0.38.4
VENV_BOOTSTRAP ?= pip==21.2.4 setuptools==70.3.0 setuptools_scm[toml]==8.1.0 wheel==0.45.1 cython==3.0.11
NAME ?= awx
@@ -73,11 +83,25 @@ SDIST_TAR_FILE ?= $(SDIST_TAR_NAME).tar.gz
I18N_FLAG_FILE = .i18n_built
## PLATFORMS defines the target platforms the manager image is built for, to provide support for multiple architectures
PLATFORMS ?= linux/amd64,linux/arm64 # linux/ppc64le,linux/s390x
# Set up cache variables for image builds, allowing to control whether cache is used or not, ex:
# DOCKER_CACHE=--no-cache make docker-compose-build
ifeq ($(DOCKER_CACHE),)
DOCKER_DEVEL_CACHE_FLAG=--cache-from=$(DEVEL_IMAGE_NAME)
DOCKER_KUBE_DEV_CACHE_FLAG=--cache-from=$(IMAGE_KUBE_DEV)
DOCKER_KUBE_CACHE_FLAG=--cache-from=$(IMAGE_KUBE)
else
DOCKER_DEVEL_CACHE_FLAG=$(DOCKER_CACHE)
DOCKER_KUBE_DEV_CACHE_FLAG=$(DOCKER_CACHE)
DOCKER_KUBE_CACHE_FLAG=$(DOCKER_CACHE)
endif
.PHONY: awx-link clean clean-tmp clean-venv requirements requirements_dev \
develop refresh adduser migrate dbchange \
receiver test test_unit test_coverage coverage_html \
sdist \
ui-release ui-devel \
VERSION PYTHON_VERSION docker-compose-sources \
.git/hooks/pre-commit
@@ -100,7 +124,7 @@ clean-languages:
find ./awx/locale/ -type f -regex '.*\.mo$$' -delete
## Remove temporary build files, compiled Python files.
clean: clean-ui clean-api clean-awxkit clean-dist
clean: clean-api clean-awxkit clean-dist
rm -rf awx/public
rm -rf awx/lib/site-packages
rm -rf awx/job_status
@@ -199,20 +223,12 @@ migrate:
dbchange:
$(MANAGEMENT_COMMAND) makemigrations
supervisor:
@if [ "$(VENV_BASE)" ]; then \
. $(VENV_BASE)/awx/bin/activate; \
fi; \
supervisord --pidfile=/tmp/supervisor_pid -n
collectstatic:
@if [ "$(VENV_BASE)" ]; then \
. $(VENV_BASE)/awx/bin/activate; \
fi; \
$(PYTHON) manage.py collectstatic --clear --noinput > /dev/null 2>&1
DEV_RELOAD_COMMAND ?= supervisorctl restart tower-processes:*
uwsgi: collectstatic
@if [ "$(VENV_BASE)" ]; then \
. $(VENV_BASE)/awx/bin/activate; \
@@ -220,7 +236,7 @@ uwsgi: collectstatic
uwsgi /etc/tower/uwsgi.ini
awx-autoreload:
@/awx_devel/tools/docker-compose/awx-autoreload /awx_devel/awx "$(DEV_RELOAD_COMMAND)"
@/awx_devel/tools/docker-compose/awx-autoreload /awx_devel/awx
daphne:
@if [ "$(VENV_BASE)" ]; then \
@@ -300,7 +316,12 @@ swagger: reports
@if [ "$(VENV_BASE)" ]; then \
. $(VENV_BASE)/awx/bin/activate; \
fi; \
(set -o pipefail && py.test $(PYTEST_ARGS) awx/conf/tests/functional awx/main/tests/functional/api awx/main/tests/docs --release=$(VERSION_TARGET) | tee reports/$@.report)
(set -o pipefail && py.test --cov --cov-report=xml --junitxml=reports/junit.xml $(PYTEST_ARGS) awx/conf/tests/functional awx/main/tests/functional/api awx/main/tests/docs | tee reports/$@.report)
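# Under GitHub Actions, expose the generated report paths as step outputs by
# appending name=value pairs to the file referenced by $GITHUB_OUTPUT, so that
# follow-up workflow steps can pick up the coverage and test-result files.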
@if [ "${GITHUB_ACTIONS}" = "true" ]; \
then \
echo 'cov-report-files=reports/coverage.xml' >> "${GITHUB_OUTPUT}"; \
echo 'test-result-files=reports/junit.xml' >> "${GITHUB_OUTPUT}"; \
fi
check: black
@@ -313,7 +334,7 @@ api-lint:
awx-link:
[ -d "/awx_devel/awx.egg-info" ] || $(PYTHON) /awx_devel/tools/scripts/egg_info_dev
TEST_DIRS ?= awx/main/tests/unit awx/main/tests/functional awx/conf/tests awx/sso/tests
TEST_DIRS ?= awx/main/tests/unit awx/main/tests/functional awx/conf/tests
PYTEST_ARGS ?= -n auto
## Run all API unit tests.
test:
@@ -324,15 +345,29 @@ test:
cd awxkit && $(VENV_BASE)/awx/bin/tox -re py3
awx-manage check_migrations --dry-run --check -n 'missing_migration_file'
live_test:
cd awx/main/tests/live && py.test tests/
## Run all API unit tests with coverage enabled.
test_coverage:
$(MAKE) test PYTEST_ARGS="--create-db --cov --cov-report=xml --junitxml=reports/junit.xml"
@if [ "${GITHUB_ACTIONS}" = "true" ]; \
then \
echo 'cov-report-files=awxkit/coverage.xml,reports/coverage.xml' >> "${GITHUB_OUTPUT}"; \
echo 'test-result-files=awxkit/report.xml,reports/junit.xml' >> "${GITHUB_OUTPUT}"; \
fi
test_migrations:
if [ "$(VENV_BASE)" ]; then \
. $(VENV_BASE)/awx/bin/activate; \
fi; \
PYTHONDONTWRITEBYTECODE=1 py.test -p no:cacheprovider --migrations -m migration_test $(PYTEST_ARGS) $(TEST_DIRS)
PYTHONDONTWRITEBYTECODE=1 py.test -p no:cacheprovider --migrations -m migration_test --create-db --cov=awx --cov-report=xml --junitxml=reports/junit.xml $(PYTEST_ARGS) $(TEST_DIRS)
@if [ "${GITHUB_ACTIONS}" = "true" ]; \
then \
echo 'cov-report-files=reports/coverage.xml' >> "${GITHUB_OUTPUT}"; \
echo 'test-result-files=reports/junit.xml' >> "${GITHUB_OUTPUT}"; \
fi
## Runs AWX_DOCKER_CMD inside a new docker container.
docker-runner:
docker run -u $(shell id -u) --rm -v $(shell pwd):/awx_devel/:Z --workdir=/awx_devel $(DEVEL_IMAGE_NAME) $(AWX_DOCKER_CMD)
docker run -u $(shell id -u) --rm -v $(shell pwd):/awx_devel/:Z $(AWX_DOCKER_ARGS) --workdir=/awx_devel $(DEVEL_IMAGE_NAME) $(AWX_DOCKER_CMD)
test_collection:
rm -f $(shell ls -d $(VENV_BASE)/awx/lib/python* | head -n 1)/no-global-site-packages.txt
@@ -341,7 +376,12 @@ test_collection:
fi && \
if ! [ -x "$(shell command -v ansible-playbook)" ]; then pip install ansible-core; fi
ansible --version
py.test $(COLLECTION_TEST_DIRS) -v
py.test $(COLLECTION_TEST_DIRS) --cov --cov-report=xml --junitxml=reports/junit.xml -v
@if [ "${GITHUB_ACTIONS}" = "true" ]; \
then \
echo 'cov-report-files=reports/coverage.xml' >> "${GITHUB_OUTPUT}"; \
echo 'test-result-files=reports/junit.xml' >> "${GITHUB_OUTPUT}"; \
fi
# The python path needs to be modified so that the tests can find Ansible within the container
# First we will use anything explicitly set as PYTHONPATH
# Second we will load any libraries out of the virtualenv (if it's unspecified that should be ok because python should not load out of an empty directory)
@@ -357,7 +397,7 @@ symlink_collection:
ln -s $(shell pwd)/awx_collection $(COLLECTION_INSTALL)
awx_collection_build: $(shell find awx_collection -type f)
ansible-playbook -i localhost, awx_collection/tools/template_galaxy.yml \
$(ANSIBLE_PLAYBOOK) -i localhost, awx_collection/tools/template_galaxy.yml \
-e collection_package=$(COLLECTION_PACKAGE) \
-e collection_namespace=$(COLLECTION_NAMESPACE) \
-e collection_version=$(COLLECTION_VERSION) \
@@ -376,23 +416,29 @@ test_collection_sanity:
if ! [ -x "$(shell command -v ansible-test)" ]; then pip install ansible-core; fi
ansible --version
COLLECTION_VERSION=1.0.0 $(MAKE) install_collection
cd $(COLLECTION_INSTALL) && ansible-test sanity $(COLLECTION_SANITY_ARGS)
cd $(COLLECTION_INSTALL) && \
ansible-test sanity $(COLLECTION_SANITY_ARGS) --coverage --junit && \
ansible-test coverage xml --requirements --group-by command --group-by version
@if [ "${GITHUB_ACTIONS}" = "true" ]; \
then \
echo cov-report-files="$$(find "$(COLLECTION_INSTALL)/tests/output/reports/" -type f -name 'coverage=sanity*.xml' -print0 | tr '\0' ',' | sed 's#,$$##')" >> "${GITHUB_OUTPUT}"; \
echo test-result-files="$$(find "$(COLLECTION_INSTALL)/tests/output/junit/" -type f -name '*.xml' -print0 | tr '\0' ',' | sed 's#,$$##')" >> "${GITHUB_OUTPUT}"; \
fi
test_collection_integration: install_collection
cd $(COLLECTION_INSTALL) && ansible-test integration -vvv $(COLLECTION_TEST_TARGET)
cd $(COLLECTION_INSTALL) && \
ansible-test integration --coverage -vvv $(COLLECTION_TEST_TARGET) && \
ansible-test coverage xml --requirements --group-by command --group-by version
@if [ "${GITHUB_ACTIONS}" = "true" ]; \
then \
echo cov-report-files="$$(find "$(COLLECTION_INSTALL)/tests/output/reports/" -type f -name 'coverage=integration*.xml' -print0 | tr '\0' ',' | sed 's#,$$##')" >> "${GITHUB_OUTPUT}"; \
fi
test_unit:
@if [ "$(VENV_BASE)" ]; then \
. $(VENV_BASE)/awx/bin/activate; \
fi; \
py.test awx/main/tests/unit awx/conf/tests/unit awx/sso/tests/unit
## Run all API unit tests with coverage enabled.
test_coverage:
@if [ "$(VENV_BASE)" ]; then \
. $(VENV_BASE)/awx/bin/activate; \
fi; \
py.test --create-db --cov=awx --cov-report=xml --junitxml=./reports/junit.xml $(TEST_DIRS)
py.test awx/main/tests/unit awx/conf/tests/unit
## Output test coverage as HTML (into htmlcov directory).
coverage_html:
@@ -411,76 +457,7 @@ bulk_data:
fi; \
$(PYTHON) tools/data_generators/rbac_dummy_data_generator.py --preset=$(DATA_GEN_PRESET)
# UI TASKS
# --------------------------------------
UI_BUILD_FLAG_FILE = awx/ui/.ui-built
clean-ui:
rm -rf node_modules
rm -rf awx/ui/node_modules
rm -rf awx/ui/build
rm -rf awx/ui/src/locales/_build
rm -rf $(UI_BUILD_FLAG_FILE)
# the collectstatic command doesn't like it if this dir doesn't exist.
mkdir -p awx/ui/build/static
awx/ui/node_modules:
NODE_OPTIONS=--max-old-space-size=6144 $(NPM_BIN) --prefix awx/ui --loglevel warn --force ci
$(UI_BUILD_FLAG_FILE):
$(MAKE) awx/ui/node_modules
$(PYTHON) tools/scripts/compilemessages.py
$(NPM_BIN) --prefix awx/ui --loglevel warn run compile-strings
$(NPM_BIN) --prefix awx/ui --loglevel warn run build
touch $@
ui-release: $(UI_BUILD_FLAG_FILE)
ui-devel: awx/ui/node_modules
@$(MAKE) -B $(UI_BUILD_FLAG_FILE)
@if [ -d "/var/lib/awx" ] ; then \
mkdir -p /var/lib/awx/public/static/css; \
mkdir -p /var/lib/awx/public/static/js; \
mkdir -p /var/lib/awx/public/static/media; \
cp -r awx/ui/build/static/css/* /var/lib/awx/public/static/css; \
cp -r awx/ui/build/static/js/* /var/lib/awx/public/static/js; \
cp -r awx/ui/build/static/media/* /var/lib/awx/public/static/media; \
fi
ui-devel-instrumented: awx/ui/node_modules
$(NPM_BIN) --prefix awx/ui --loglevel warn run start-instrumented
ui-devel-test: awx/ui/node_modules
$(NPM_BIN) --prefix awx/ui --loglevel warn run start
ui-lint:
$(NPM_BIN) --prefix awx/ui install
$(NPM_BIN) run --prefix awx/ui lint
$(NPM_BIN) run --prefix awx/ui prettier-check
ui-test:
$(NPM_BIN) --prefix awx/ui install
$(NPM_BIN) run --prefix awx/ui test
ui-test-screens:
$(NPM_BIN) --prefix awx/ui install
$(NPM_BIN) run --prefix awx/ui pretest
$(NPM_BIN) run --prefix awx/ui test-screens --runInBand
ui-test-general:
$(NPM_BIN) --prefix awx/ui install
$(NPM_BIN) run --prefix awx/ui pretest
$(NPM_BIN) run --prefix awx/ui/ test-general --runInBand
# NOTE: The make target ui-next is imported from awx/ui_next/Makefile
HEADLESS ?= no
ifeq ($(HEADLESS), yes)
dist/$(SDIST_TAR_FILE):
else
dist/$(SDIST_TAR_FILE): $(UI_BUILD_FLAG_FILE) ui-next
endif
$(PYTHON) -m build -s
ln -sf $(SDIST_TAR_FILE) dist/awx.tar.gz
@@ -511,32 +488,41 @@ endif
docker-compose-sources: .git/hooks/pre-commit
@if [ $(MINIKUBE_CONTAINER_GROUP) = true ]; then\
ansible-playbook -i tools/docker-compose/inventory -e minikube_setup=$(MINIKUBE_SETUP) tools/docker-compose-minikube/deploy.yml; \
$(ANSIBLE_PLAYBOOK) -i tools/docker-compose/inventory -e minikube_setup=$(MINIKUBE_SETUP) tools/docker-compose-minikube/deploy.yml; \
fi;
ansible-playbook -i tools/docker-compose/inventory tools/docker-compose/ansible/sources.yml \
-e awx_image=$(DEV_DOCKER_TAG_BASE)/awx_devel \
$(ANSIBLE_PLAYBOOK) -i tools/docker-compose/inventory tools/docker-compose/ansible/sources.yml \
-e awx_image=$(DEV_DOCKER_TAG_BASE)/$(GIT_REPO_NAME)_devel \
-e awx_image_tag=$(COMPOSE_TAG) \
-e receptor_image=$(RECEPTOR_IMAGE) \
-e control_plane_node_count=$(CONTROL_PLANE_NODE_COUNT) \
-e execution_node_count=$(EXECUTION_NODE_COUNT) \
-e minikube_container_group=$(MINIKUBE_CONTAINER_GROUP) \
-e enable_pgbouncer=$(PGBOUNCER) \
-e enable_keycloak=$(KEYCLOAK) \
-e enable_ldap=$(LDAP) \
-e enable_splunk=$(SPLUNK) \
-e enable_prometheus=$(PROMETHEUS) \
-e enable_grafana=$(GRAFANA) \
-e enable_vault=$(VAULT) \
-e enable_tacacs=$(TACACS) \
$(EXTRA_SOURCES_ANSIBLE_OPTS)
-e vault_tls=$(VAULT_TLS) \
-e enable_otel=$(OTEL) \
-e enable_loki=$(LOKI) \
-e install_editable_dependencies=$(EDITABLE_DEPENDENCIES) \
-e pg_tls=$(PG_TLS) \
$(EXTRA_SOURCES_ANSIBLE_OPTS)
docker-compose: awx/projects docker-compose-sources
ansible-galaxy install --ignore-certs -r tools/docker-compose/ansible/requirements.yml;
ansible-playbook -i tools/docker-compose/inventory tools/docker-compose/ansible/initialize_containers.yml \
-e enable_vault=$(VAULT);
$(ANSIBLE_PLAYBOOK) -i tools/docker-compose/inventory tools/docker-compose/ansible/initialize_containers.yml \
-e enable_vault=$(VAULT) \
-e vault_tls=$(VAULT_TLS); \
$(MAKE) docker-compose-up
docker-compose-up:
$(DOCKER_COMPOSE) -f tools/docker-compose/_sources/docker-compose.yml $(COMPOSE_OPTS) up $(COMPOSE_UP_OPTS) --remove-orphans
docker-compose-down:
$(DOCKER_COMPOSE) -f tools/docker-compose/_sources/docker-compose.yml $(COMPOSE_OPTS) down --remove-orphans
docker-compose-credential-plugins: awx/projects docker-compose-sources
echo -e "\033[0;31mTo generate a CyberArk Conjur API key: docker exec -it tools_conjur_1 conjurctl account create quick-start\033[0m"
$(DOCKER_COMPOSE) -f tools/docker-compose/_sources/docker-compose.yml -f tools/docker-credential-plugins-override.yml up --no-recreate awx_1 --remove-orphans
@@ -568,7 +554,7 @@ docker-compose-container-group-clean:
.PHONY: Dockerfile.dev
## Generate Dockerfile.dev for awx_devel image
Dockerfile.dev: tools/ansible/roles/dockerfile/templates/Dockerfile.j2
ansible-playbook tools/ansible/dockerfile.yml \
$(ANSIBLE_PLAYBOOK) tools/ansible/dockerfile.yml \
-e dockerfile_name=Dockerfile.dev \
-e build_dev=True \
-e receptor_image=$(RECEPTOR_IMAGE)
@@ -576,41 +562,39 @@ Dockerfile.dev: tools/ansible/roles/dockerfile/templates/Dockerfile.j2
## Build awx_devel image for docker compose development environment
docker-compose-build: Dockerfile.dev
DOCKER_BUILDKIT=1 docker build \
--ssh default=$(SSH_AUTH_SOCK) \
-f Dockerfile.dev \
-t $(DEVEL_IMAGE_NAME) \
--build-arg BUILDKIT_INLINE_CACHE=1 \
--cache-from=$(DEV_DOCKER_TAG_BASE)/awx_devel:$(COMPOSE_TAG) .
$(DOCKER_DEVEL_CACHE_FLAG) .
.PHONY: docker-compose-buildx
## Build awx_devel image for docker compose development environment for multiple architectures
docker-compose-buildx: Dockerfile.dev
- docker buildx create --name docker-compose-buildx
docker buildx use docker-compose-buildx
- docker buildx build \
--ssh default=$(SSH_AUTH_SOCK) \
--push \
--build-arg BUILDKIT_INLINE_CACHE=1 \
$(DOCKER_DEVEL_CACHE_FLAG) \
--platform=$(PLATFORMS) \
--tag $(DEVEL_IMAGE_NAME) \
-f Dockerfile.dev .
- docker buildx rm docker-compose-buildx
docker-clean:
-$(foreach container_id,$(shell docker ps -f name=tools_awx -aq && docker ps -f name=tools_receptor -aq),docker stop $(container_id); docker rm -f $(container_id);)
-$(foreach image_id,$(shell docker images --filter=reference='*/*/*awx_devel*' --filter=reference='*/*awx_devel*' --filter=reference='*awx_devel*' -aq),docker rmi --force $(image_id);)
docker-clean-volumes: docker-compose-clean docker-compose-container-group-clean
docker volume rm -f tools_awx_db tools_vault_1 tools_grafana_storage tools_prometheus_storage $(docker volume ls --filter name=tools_redis_socket_ -q)
docker volume rm -f tools_var_lib_awx tools_awx_db tools_awx_db_15 tools_vault_1 tools_grafana_storage tools_prometheus_storage $(shell docker volume ls --filter name=tools_redis_socket_ -q)
docker-refresh: docker-clean docker-compose
## Docker Development Environment with Elastic Stack Connected
docker-compose-elk: awx/projects docker-compose-sources
$(DOCKER_COMPOSE) -f tools/docker-compose/_sources/docker-compose.yml -f tools/elastic/docker-compose.logstash-link.yml -f tools/elastic/docker-compose.elastic-override.yml up --no-recreate
docker-compose-cluster-elk: awx/projects docker-compose-sources
$(DOCKER_COMPOSE) -f tools/docker-compose/_sources/docker-compose.yml -f tools/elastic/docker-compose.logstash-link-cluster.yml -f tools/elastic/docker-compose.elastic-override.yml up --no-recreate
docker-compose-container-group:
MINIKUBE_CONTAINER_GROUP=true $(MAKE) docker-compose
clean-elk:
docker stop tools_kibana_1
docker stop tools_logstash_1
docker stop tools_elasticsearch_1
docker rm tools_logstash_1
docker rm tools_elasticsearch_1
docker rm tools_kibana_1
psql-container:
docker run -it --net tools_default --rm postgres:12 sh -c 'exec psql -h "postgres" -p "5432" -U postgres'
VERSION:
@echo "awx: $(VERSION)"
@@ -631,22 +615,41 @@ version-for-buildyml:
.PHONY: Dockerfile
## Generate Dockerfile for awx image
Dockerfile: tools/ansible/roles/dockerfile/templates/Dockerfile.j2
ansible-playbook tools/ansible/dockerfile.yml \
$(ANSIBLE_PLAYBOOK) tools/ansible/dockerfile.yml \
-e receptor_image=$(RECEPTOR_IMAGE) \
-e headless=$(HEADLESS)
## Build awx image for deployment on Kubernetes environment.
awx-kube-build: Dockerfile
DOCKER_BUILDKIT=1 docker build -f Dockerfile \
--ssh default=$(SSH_AUTH_SOCK) \
--build-arg VERSION=$(VERSION) \
--build-arg SETUPTOOLS_SCM_PRETEND_VERSION=$(VERSION) \
--build-arg HEADLESS=$(HEADLESS) \
-t $(DEV_DOCKER_TAG_BASE)/awx:$(COMPOSE_TAG) .
$(DOCKER_KUBE_CACHE_FLAG) \
-t $(IMAGE_KUBE) .
## Build multi-arch awx image for deployment on Kubernetes environment.
awx-kube-buildx: Dockerfile
- docker buildx create --name awx-kube-buildx
docker buildx use awx-kube-buildx
- docker buildx build \
--ssh default=$(SSH_AUTH_SOCK) \
--push \
--build-arg VERSION=$(VERSION) \
--build-arg SETUPTOOLS_SCM_PRETEND_VERSION=$(VERSION) \
--build-arg HEADLESS=$(HEADLESS) \
--platform=$(PLATFORMS) \
$(DOCKER_KUBE_CACHE_FLAG) \
--tag $(IMAGE_KUBE) \
-f Dockerfile .
- docker buildx rm awx-kube-buildx
.PHONY: Dockerfile.kube-dev
## Generate Docker.kube-dev for awx_kube_devel image
Dockerfile.kube-dev: tools/ansible/roles/dockerfile/templates/Dockerfile.j2
ansible-playbook tools/ansible/dockerfile.yml \
$(ANSIBLE_PLAYBOOK) tools/ansible/dockerfile.yml \
-e dockerfile_name=Dockerfile.kube-dev \
-e kube_dev=True \
-e template_dest=_build_kube_dev \
@@ -655,27 +658,31 @@ Dockerfile.kube-dev: tools/ansible/roles/dockerfile/templates/Dockerfile.j2
## Build awx_kube_devel image for development on local Kubernetes environment.
awx-kube-dev-build: Dockerfile.kube-dev
DOCKER_BUILDKIT=1 docker build -f Dockerfile.kube-dev \
--ssh default=$(SSH_AUTH_SOCK) \
--build-arg BUILDKIT_INLINE_CACHE=1 \
--cache-from=$(DEV_DOCKER_TAG_BASE)/awx_kube_devel:$(COMPOSE_TAG) \
-t $(DEV_DOCKER_TAG_BASE)/awx_kube_devel:$(COMPOSE_TAG) .
$(DOCKER_KUBE_DEV_CACHE_FLAG) \
-t $(IMAGE_KUBE_DEV) .
## Build and push multi-arch awx_kube_devel image for development on local Kubernetes environment.
awx-kube-dev-buildx: Dockerfile.kube-dev
- docker buildx create --name awx-kube-dev-buildx
docker buildx use awx-kube-dev-buildx
- docker buildx build \
--ssh default=$(SSH_AUTH_SOCK) \
--push \
--build-arg BUILDKIT_INLINE_CACHE=1 \
$(DOCKER_KUBE_DEV_CACHE_FLAG) \
--platform=$(PLATFORMS) \
--tag $(IMAGE_KUBE_DEV) \
-f Dockerfile.kube-dev .
- docker buildx rm awx-kube-dev-buildx
kind-dev-load: awx-kube-dev-build
$(KIND_BIN) load docker-image $(DEV_DOCKER_TAG_BASE)/awx_kube_devel:$(COMPOSE_TAG)
$(KIND_BIN) load docker-image $(IMAGE_KUBE_DEV)
# Translation TASKS
# --------------------------------------
## generate UI .pot file, an empty template of strings yet to be translated
pot: $(UI_BUILD_FLAG_FILE)
$(NPM_BIN) --prefix awx/ui --loglevel warn run extract-template --clean
$(NPM_BIN) --prefix awx/ui_next --loglevel warn run extract-template --clean
## generate UI .po files for each locale (will update translated strings for `en`)
po: $(UI_BUILD_FLAG_FILE)
$(NPM_BIN) --prefix awx/ui --loglevel warn run extract-strings -- --clean
$(NPM_BIN) --prefix awx/ui_next --loglevel warn run extract-strings -- --clean
## generate API django .pot .po
messages:
@if [ "$(VENV_BASE)" ]; then \
@@ -722,6 +729,6 @@ help/generate:
{ lastLine = $$0 }' $(MAKEFILE_LIST) | sort -u
@printf "\n"
## Display help for ui-next targets
help/ui-next:
@$(MAKE) -s help MAKEFILE_LIST="awx/ui_next/Makefile"
## Display help for ui targets
help/ui:
@$(MAKE) -s help MAKEFILE_LIST="awx/ui/Makefile"

README.md

@@ -1,13 +1,24 @@
[![CI](https://github.com/ansible/awx/actions/workflows/ci.yml/badge.svg?branch=devel)](https://github.com/ansible/awx/actions/workflows/ci.yml) [![Code of Conduct](https://img.shields.io/badge/code%20of%20conduct-Ansible-yellow.svg)](https://docs.ansible.com/ansible/latest/community/code_of_conduct.html) [![Apache v2 License](https://img.shields.io/badge/license-Apache%202.0-brightgreen.svg)](https://github.com/ansible/awx/blob/devel/LICENSE.md) [![AWX Mailing List](https://img.shields.io/badge/mailing%20list-AWX-orange.svg)](https://groups.google.com/g/awx-project)
[![CI](https://github.com/ansible/awx/actions/workflows/ci.yml/badge.svg?branch=devel)](https://github.com/ansible/awx/actions/workflows/ci.yml) [![codecov](https://codecov.io/github/ansible/awx/graph/badge.svg?token=4L4GSP9IAR)](https://codecov.io/github/ansible/awx) [![Code of Conduct](https://img.shields.io/badge/code%20of%20conduct-Ansible-yellow.svg)](https://docs.ansible.com/ansible/latest/community/code_of_conduct.html) [![Apache v2 License](https://img.shields.io/badge/license-Apache%202.0-brightgreen.svg)](https://github.com/ansible/awx/blob/devel/LICENSE.md) [![AWX on the Ansible Forum](https://img.shields.io/badge/mailing%20list-AWX-orange.svg)](https://forum.ansible.com/tag/awx)
[![Ansible Matrix](https://img.shields.io/badge/matrix-Ansible%20Community-blueviolet.svg?logo=matrix)](https://chat.ansible.im/#/welcome) [![Ansible Discourse](https://img.shields.io/badge/discourse-Ansible%20Community-yellowgreen.svg?logo=discourse)](https://forum.ansible.com)
<img src="https://raw.githubusercontent.com/ansible/awx-logos/master/awx/ui/client/assets/logo-login.svg?sanitize=true" width=200 alt="AWX" />
> [!CAUTION]
> The last release of this repository was released on Jul 2, 2024.
> **Releases of this project are now paused during a large scale refactoring.**
> For more information, follow [the Forum](https://forum.ansible.com/) and - more specifically - see the various communications on the matter:
>
> * [Blog: Upcoming Changes to the AWX Project](https://www.ansible.com/blog/upcoming-changes-to-the-awx-project/)
> * [Streamlining AWX Releases](https://forum.ansible.com/t/streamlining-awx-releases/6894) (primary update)
> * [Refactoring AWX into a Pluggable, Service-Oriented Architecture](https://forum.ansible.com/t/refactoring-awx-into-a-pluggable-service-oriented-architecture/7404)
> * [Upcoming changes to AWX Operator installation methods](https://forum.ansible.com/t/upcoming-changes-to-awx-operator-installation-methods/7598)
> * [AWX UI and credential types transitioning to the new pluggable architecture](https://forum.ansible.com/t/awx-ui-and-credential-types-transitioning-to-the-new-pluggable-architecture/8027)
AWX provides a web-based user interface, REST API, and task engine built on top of [Ansible](https://github.com/ansible/ansible). It is one of the upstream projects for [Red Hat Ansible Automation Platform](https://www.ansible.com/products/automation-platform).
To install AWX, please view the [Install guide](./INSTALL.md).
To learn more about using AWX, and Tower, view the [Tower docs site](http://docs.ansible.com/ansible-tower/index.html).
To learn more about using AWX, view the [AWX docs site](https://ansible.readthedocs.io/projects/awx/en/latest/).
The AWX Project Frequently Asked Questions can be found [here](https://www.ansible.com/awx-project-faq).
@@ -18,9 +29,9 @@ Contributing
- Refer to the [Contributing guide](./CONTRIBUTING.md) to get started developing, testing, and building AWX.
- All code submissions are made through pull requests against the `devel` branch.
- All contributors must use git commit --signoff for any commit to be merged and agree that usage of --signoff constitutes agreement with the terms of [DCO 1.1](./DCO_1_1.md)
- All contributors must use `git commit --signoff` for any commit to be merged and agree that usage of `--signoff` constitutes agreement with the terms of [DCO 1.1](./DCO_1_1.md)
- Take care to make sure no merge commits are in the submission, and use `git rebase` vs. `git merge` for this reason.
- If submitting a large code change, it's a good idea to join the `#ansible-awx` channel on web.libera.chat and talk about what you would like to do or add first. This not only helps everyone know what's going on, but it also helps save time and effort if the community decides some changes are needed.
- If submitting a large code change, it's a good idea to discuss it via the [Ansible Forum](https://forum.ansible.com/tag/awx). This helps everyone know what's going on, and it also helps save time and effort if the community decides some changes are needed.
Reporting Issues
----------------
@@ -30,12 +41,11 @@ If you're experiencing a problem that you feel is a bug in AWX or have ideas for
Code of Conduct
---------------
We ask all of our community members and contributors to adhere to the [Ansible code of conduct](http://docs.ansible.com/ansible/latest/community/code_of_conduct.html). If you have questions or need assistance, please reach out to our community team at [codeofconduct@ansible.com](mailto:codeofconduct@ansible.com)
We require all of our community members and contributors to adhere to the [Ansible code of conduct](http://docs.ansible.com/ansible/latest/community/code_of_conduct.html). If you have questions or need assistance, please reach out to our community team at [codeofconduct@ansible.com](mailto:codeofconduct@ansible.com)
Get Involved
------------
We welcome your feedback and ideas. Here's how to reach us with feedback and questions:
We welcome your feedback and ideas via the [Ansible Forum](https://forum.ansible.com/tag/awx).
- Join the [Ansible AWX channel on Matrix](https://matrix.to/#/#awx:ansible.com)
- Join the [Ansible Community Forum](https://forum.ansible.com)
For a full list of all the ways to talk with the Ansible Community, see the [AWX Communication guide](https://ansible.readthedocs.io/projects/awx/en/latest/contributor/communication.html).

awx/__init__.py

@@ -5,6 +5,7 @@ from __future__ import absolute_import, unicode_literals
import os
import sys
import warnings
from importlib.metadata import PackageNotFoundError, version as _get_version
def get_version():
@@ -34,10 +35,8 @@ def version_file():
try:
import pkg_resources
__version__ = pkg_resources.get_distribution('awx').version
except pkg_resources.DistributionNotFound:
__version__ = _get_version('awx')
except PackageNotFoundError:
__version__ = get_version()
__all__ = ['__version__']
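# A minimal sketch of the importlib.metadata pattern adopted above, assuming
# 'awx' is the installed distribution name; 'demo_version' and the 'unknown'
# fallback are illustrative only:
#
#     from importlib.metadata import PackageNotFoundError, version
#
#     try:
#         demo_version = version('awx')  # reads installed package metadata
#     except PackageNotFoundError:       # 'awx' not installed, e.g. source tree
#         demo_version = 'unknown'
#
# Unlike the removed pkg_resources call, this avoids scanning every installed
# distribution at import time.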
@@ -61,90 +60,16 @@ else:
from django.db import connection
def find_commands(management_dir):
# Modified version of function from django/core/management/__init__.py.
command_dir = os.path.join(management_dir, 'commands')
commands = []
try:
for f in os.listdir(command_dir):
if f.startswith('_'):
continue
elif f.endswith('.py') and f[:-3] not in commands:
commands.append(f[:-3])
elif f.endswith('.pyc') and f[:-4] not in commands: # pragma: no cover
commands.append(f[:-4])
except OSError:
pass
return commands
def oauth2_getattribute(self, attr):
# Custom method to override
# oauth2_provider.settings.OAuth2ProviderSettings.__getattribute__
from django.conf import settings
from oauth2_provider.settings import DEFAULTS
val = None
if (isinstance(attr, str)) and (attr in DEFAULTS) and (not attr.startswith('_')):
# certain Django OAuth Toolkit migrations actually reference
# setting lookups for references to model classes (e.g.,
# oauth2_settings.REFRESH_TOKEN_MODEL)
# If we're doing an OAuth2 setting lookup *while running* a migration,
# don't do our usual database settings lookup
val = settings.OAUTH2_PROVIDER.get(attr)
if val is None:
val = object.__getattribute__(self, attr)
return val
def prepare_env():
# Update the default settings environment variable based on current mode.
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'awx.settings.%s' % MODE)
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'awx.settings')
os.environ.setdefault('AWX_MODE', MODE)
# Hide DeprecationWarnings when running in production. Need to first load
# settings to apply our filter after Django's own warnings filter.
from django.conf import settings
if not settings.DEBUG: # pragma: no cover
warnings.simplefilter('ignore', DeprecationWarning)
# Monkeypatch Django find_commands to also work with .pyc files.
import django.core.management
django.core.management.find_commands = find_commands
# Monkeypatch Oauth2 toolkit settings class to check for settings
# in django.conf settings each time, not just once during import
import oauth2_provider.settings
oauth2_provider.settings.OAuth2ProviderSettings.__getattribute__ = oauth2_getattribute
# Use the AWX_TEST_DATABASE_* environment variables to specify the test
# database settings to use when management command is run as an external
# program via unit tests.
for opt in ('ENGINE', 'NAME', 'USER', 'PASSWORD', 'HOST', 'PORT'): # pragma: no cover
if os.environ.get('AWX_TEST_DATABASE_%s' % opt, None):
settings.DATABASES['default'][opt] = os.environ['AWX_TEST_DATABASE_%s' % opt]
# Disable capturing all SQL queries in memory when in DEBUG mode.
if settings.DEBUG and not getattr(settings, 'SQL_DEBUG', True):
from django.db.backends.base.base import BaseDatabaseWrapper
from django.db.backends.utils import CursorWrapper
BaseDatabaseWrapper.make_debug_cursor = lambda self, cursor: CursorWrapper(cursor, self)
# Use the default devserver addr/port defined in settings for runserver.
default_addr = getattr(settings, 'DEVSERVER_DEFAULT_ADDR', '127.0.0.1')
default_port = getattr(settings, 'DEVSERVER_DEFAULT_PORT', 8000)
from django.core.management.commands import runserver as core_runserver
original_handle = core_runserver.Command.handle
def handle(self, *args, **options):
if not options.get('addrport'):
options['addrport'] = '%s:%d' % (default_addr, int(default_port))
elif options.get('addrport').isdigit():
options['addrport'] = '%s:%d' % (default_addr, int(options['addrport']))
return original_handle(self, *args, **options)
core_runserver.Command.handle = handle
def manage():
@@ -154,10 +79,12 @@ def manage():
from django.conf import settings
from django.core.management import execute_from_command_line
# enforce the postgres version is equal to 12. if not, then terminate program with exit code of 1
# enforce the postgres version is a minimum of 12 (we need this for partitioning); if not, then terminate program with exit code of 1
# In the future if we require a feature of a version of postgres > 12 this should be updated to reflect that.
# The return of connection.pg_version is something like 120013 (major * 10000 + minor)
if not os.getenv('SKIP_PG_VERSION_CHECK', False) and not MODE == 'development':
if (connection.pg_version // 10000) < 12:
sys.stderr.write("Postgres version 12 is required\n")
sys.stderr.write("At a minimum, postgres version 12 is required\n")
sys.exit(1)
if len(sys.argv) >= 2 and sys.argv[1] in ('version', '--version'): # pragma: no cover

awx/api/authentication.py

@@ -11,9 +11,6 @@ from django.utils.encoding import smart_str
# Django REST Framework
from rest_framework import authentication
# Django-OAuth-Toolkit
from oauth2_provider.contrib.rest_framework import OAuth2Authentication
logger = logging.getLogger('awx.api.authentication')
@@ -36,16 +33,3 @@ class LoggedBasicAuthentication(authentication.BasicAuthentication):
class SessionAuthentication(authentication.SessionAuthentication):
def authenticate_header(self, request):
return 'Session'
class LoggedOAuth2Authentication(OAuth2Authentication):
def authenticate(self, request):
ret = super(LoggedOAuth2Authentication, self).authenticate(request)
if ret:
user, token = ret
username = user.username if user else '<none>'
logger.info(
smart_str(u"User {} performed a {} to {} through the API using OAuth 2 token {}.".format(username, request.method, request.path, token.pk))
)
setattr(user, 'oauth_scopes', [x for x in token.scope.split() if x])
return ret

awx/api/conf.py

@@ -6,9 +6,6 @@ from rest_framework import serializers
# AWX
from awx.conf import fields, register, register_validate
from awx.api.fields import OAuth2ProviderField
from oauth2_provider.settings import oauth2_settings
from awx.sso.common import is_remote_auth_enabled
register(
@@ -35,10 +32,7 @@ register(
'DISABLE_LOCAL_AUTH',
field_class=fields.BooleanField,
label=_('Disable the built-in authentication system'),
help_text=_(
"Controls whether users are prevented from using the built-in authentication system. "
"You probably want to do this if you are using an LDAP or SAML integration."
),
help_text=_("Controls whether users are prevented from using the built-in authentication system. "),
category=_('Authentication'),
category_slug='authentication',
)
@@ -50,41 +44,6 @@ register(
category=_('Authentication'),
category_slug='authentication',
)
register(
'OAUTH2_PROVIDER',
field_class=OAuth2ProviderField,
default={
'ACCESS_TOKEN_EXPIRE_SECONDS': oauth2_settings.ACCESS_TOKEN_EXPIRE_SECONDS,
'AUTHORIZATION_CODE_EXPIRE_SECONDS': oauth2_settings.AUTHORIZATION_CODE_EXPIRE_SECONDS,
'REFRESH_TOKEN_EXPIRE_SECONDS': oauth2_settings.REFRESH_TOKEN_EXPIRE_SECONDS,
},
label=_('OAuth 2 Timeout Settings'),
help_text=_(
'Dictionary for customizing OAuth 2 timeouts, available items are '
'`ACCESS_TOKEN_EXPIRE_SECONDS`, the duration of access tokens in the number '
'of seconds, `AUTHORIZATION_CODE_EXPIRE_SECONDS`, the duration of '
'authorization codes in the number of seconds, and `REFRESH_TOKEN_EXPIRE_SECONDS`, '
'the duration of refresh tokens, after expired access tokens, '
'in the number of seconds.'
),
category=_('Authentication'),
category_slug='authentication',
unit=_('seconds'),
)
register(
'ALLOW_OAUTH2_FOR_EXTERNAL_USERS',
field_class=fields.BooleanField,
default=False,
label=_('Allow External Users to Create OAuth2 Tokens'),
help_text=_(
'For security reasons, users from external auth providers (LDAP, SAML, '
'SSO, Radius, and others) are not allowed to create OAuth2 tokens. '
'To change this behavior, enable this setting. Existing tokens will '
'not be deleted when this setting is toggled off.'
),
category=_('Authentication'),
category_slug='authentication',
)
register(
'LOGIN_REDIRECT_OVERRIDE',
field_class=fields.CharField,
@@ -93,6 +52,7 @@ register(
default='',
label=_('Login redirect override URL'),
help_text=_('URL to which unauthorized users will be redirected to log in. If blank, users will be sent to the login page.'),
warning_text=_('Changing the redirect URL could impact the ability to login if local authentication is also disabled.'),
category=_('Authentication'),
category_slug='authentication',
)
@@ -108,7 +68,7 @@ register(
def authentication_validate(serializer, attrs):
if attrs.get('DISABLE_LOCAL_AUTH', False) and not is_remote_auth_enabled():
if attrs.get('DISABLE_LOCAL_AUTH', False):
raise serializers.ValidationError(_("There are no remote authentication systems configured."))
return attrs

awx/api/fields.py

@@ -9,7 +9,6 @@ from django.core.exceptions import ObjectDoesNotExist
from rest_framework import serializers
# AWX
from awx.conf import fields
from awx.main.models import Credential
__all__ = ['BooleanNullField', 'CharNullField', 'ChoiceNullField', 'VerbatimField']
@@ -79,19 +78,6 @@ class VerbatimField(serializers.Field):
return value
class OAuth2ProviderField(fields.DictField):
default_error_messages = {'invalid_key_names': _('Invalid key names: {invalid_key_names}')}
valid_key_names = {'ACCESS_TOKEN_EXPIRE_SECONDS', 'AUTHORIZATION_CODE_EXPIRE_SECONDS', 'REFRESH_TOKEN_EXPIRE_SECONDS'}
child = fields.IntegerField(min_value=1)
def to_internal_value(self, data):
data = super(OAuth2ProviderField, self).to_internal_value(data)
invalid_flags = set(data.keys()) - self.valid_key_names
if invalid_flags:
self.fail('invalid_key_names', invalid_key_names=', '.join(list(invalid_flags)))
return data
class DeprecatedCredentialField(serializers.IntegerField):
def __init__(self, **kwargs):
kwargs['allow_null'] = True

awx/api/filters.py

@@ -1,450 +0,0 @@
# Copyright (c) 2015 Ansible, Inc.
# All Rights Reserved.
# Python
import re
import json
from functools import reduce
# Django
from django.core.exceptions import FieldError, ValidationError, FieldDoesNotExist
from django.db import models
from django.db.models import Q, CharField, IntegerField, BooleanField, TextField, JSONField
from django.db.models.fields.related import ForeignObjectRel, ManyToManyField, ForeignKey
from django.db.models.functions import Cast
from django.contrib.contenttypes.models import ContentType
from django.contrib.contenttypes.fields import GenericForeignKey
from django.utils.encoding import force_str
from django.utils.translation import gettext_lazy as _
# Django REST Framework
from rest_framework.exceptions import ParseError, PermissionDenied
from rest_framework.filters import BaseFilterBackend
# AWX
from awx.main.utils import get_type_for_model, to_python_boolean
from awx.main.utils.db import get_all_field_names
class TypeFilterBackend(BaseFilterBackend):
"""
Filter on type field now returned with all objects.
"""
def filter_queryset(self, request, queryset, view):
try:
types = None
for key, value in request.query_params.items():
if key == 'type':
if ',' in value:
types = value.split(',')
else:
types = (value,)
if types:
types_map = {}
for ct in ContentType.objects.filter(Q(app_label='main') | Q(app_label='auth', model='user')):
ct_model = ct.model_class()
if not ct_model:
continue
ct_type = get_type_for_model(ct_model)
types_map[ct_type] = ct.pk
model = queryset.model
model_type = get_type_for_model(model)
if 'polymorphic_ctype' in get_all_field_names(model):
types_pks = set([v for k, v in types_map.items() if k in types])
queryset = queryset.filter(polymorphic_ctype_id__in=types_pks)
elif model_type in types:
queryset = queryset
else:
queryset = queryset.none()
return queryset
except FieldError as e:
# Return a 400 for invalid field names.
raise ParseError(*e.args)
def get_fields_from_path(model, path):
"""
Given a Django ORM lookup path (possibly over multiple models)
Returns the fields in the line, and also the revised lookup path
ex., given
model=Organization
path='project__timeout'
returns a tuple of the fields traversed as well as the corrected path,
for special cases we do substitutions
([<IntegerField for timeout>], 'project__timeout')
"""
# Store of all the fields used to detect repeats
field_list = []
new_parts = []
for name in path.split('__'):
if model is None:
raise ParseError(_('No related model for field {}.').format(name))
# HACK: Make project and inventory source filtering by old field names work for backwards compatibility.
if model._meta.object_name in ('Project', 'InventorySource'):
name = {'current_update': 'current_job', 'last_update': 'last_job', 'last_update_failed': 'last_job_failed', 'last_updated': 'last_job_run'}.get(
name, name
)
if name == 'type' and 'polymorphic_ctype' in get_all_field_names(model):
name = 'polymorphic_ctype'
new_parts.append('polymorphic_ctype__model')
else:
new_parts.append(name)
if name in getattr(model, 'PASSWORD_FIELDS', ()):
raise PermissionDenied(_('Filtering on password fields is not allowed.'))
elif name == 'pk':
field = model._meta.pk
else:
name_alt = name.replace("_", "")
if name_alt in model._meta.fields_map.keys():
field = model._meta.fields_map[name_alt]
new_parts.pop()
new_parts.append(name_alt)
else:
field = model._meta.get_field(name)
if isinstance(field, ForeignObjectRel) and getattr(field.field, '__prevent_search__', False):
raise PermissionDenied(_('Filtering on %s is not allowed.' % name))
elif getattr(field, '__prevent_search__', False):
raise PermissionDenied(_('Filtering on %s is not allowed.' % name))
if field in field_list:
# Field traversed twice, could create infinite JOINs, DoSing Tower
raise ParseError(_('Loops not allowed in filters, detected on field {}.').format(field.name))
field_list.append(field)
model = getattr(field, 'related_model', None)
return field_list, '__'.join(new_parts)
def get_field_from_path(model, path):
"""
Given a Django ORM lookup path (possibly over multiple models)
Returns the last field in the line, and the revised lookup path
ex.
(<IntegerField for timeout>, 'project__timeout')
"""
field_list, new_path = get_fields_from_path(model, path)
return (field_list[-1], new_path)
class FieldLookupBackend(BaseFilterBackend):
"""
Filter using field lookups provided via query string parameters.
"""
RESERVED_NAMES = ('page', 'page_size', 'format', 'order', 'order_by', 'search', 'type', 'host_filter', 'count_disabled', 'no_truncate', 'limit')
SUPPORTED_LOOKUPS = (
'exact',
'iexact',
'contains',
'icontains',
'startswith',
'istartswith',
'endswith',
'iendswith',
'regex',
'iregex',
'gt',
'gte',
'lt',
'lte',
'in',
'isnull',
'search',
)
# A list of fields that we know can be filtered on without the possibility
# of introducing duplicates
NO_DUPLICATES_ALLOW_LIST = (CharField, IntegerField, BooleanField, TextField)
def get_fields_from_lookup(self, model, lookup):
if '__' in lookup and lookup.rsplit('__', 1)[-1] in self.SUPPORTED_LOOKUPS:
path, suffix = lookup.rsplit('__', 1)
else:
path = lookup
suffix = 'exact'
if not path:
raise ParseError(_('Query string field name not provided.'))
# FIXME: Could build up a list of models used across relationships, use
# those lookups combined with request.user.get_queryset(Model) to make
# sure user cannot query using objects he could not view.
field_list, new_path = get_fields_from_path(model, path)
new_lookup = '__'.join([new_path, suffix])
return field_list, new_lookup
def get_field_from_lookup(self, model, lookup):
'''Method to match return type of single field, if needed.'''
field_list, new_lookup = self.get_fields_from_lookup(model, lookup)
return (field_list[-1], new_lookup)
def to_python_related(self, value):
value = force_str(value)
if value.lower() in ('none', 'null'):
return None
else:
return int(value)
def value_to_python_for_field(self, field, value):
if isinstance(field, models.BooleanField):
return to_python_boolean(value)
elif isinstance(field, (ForeignObjectRel, ManyToManyField, GenericForeignKey, ForeignKey)):
try:
return self.to_python_related(value)
except ValueError:
raise ParseError(_('Invalid {field_name} id: {field_id}').format(field_name=getattr(field, 'name', 'related field'), field_id=value))
else:
return field.to_python(value)
def value_to_python(self, model, lookup, value):
try:
lookup.encode("ascii")
except UnicodeEncodeError:
raise ValueError("%r is not an allowed field name. Must be ascii encodable." % lookup)
field_list, new_lookup = self.get_fields_from_lookup(model, lookup)
field = field_list[-1]
needs_distinct = not all(isinstance(f, self.NO_DUPLICATES_ALLOW_LIST) for f in field_list)
# Type names are stored without underscores internally, but are presented
# and serialized over the API with underscores, so we remove `_`
# for polymorphic_ctype__model lookups.
if new_lookup.startswith('polymorphic_ctype__model'):
value = value.replace('_', '')
elif new_lookup.endswith('__isnull'):
value = to_python_boolean(value)
elif new_lookup.endswith('__in'):
items = []
if not value:
raise ValueError('cannot provide empty value for __in')
for item in value.split(','):
items.append(self.value_to_python_for_field(field, item))
value = items
elif new_lookup.endswith('__regex') or new_lookup.endswith('__iregex'):
try:
re.compile(value)
except re.error as e:
raise ValueError(e.args[0])
elif new_lookup.endswith('__iexact'):
if not isinstance(field, (CharField, TextField)):
raise ValueError(f'{field.name} is not a text field and cannot be filtered by case-insensitive search')
elif new_lookup.endswith('__search'):
related_model = getattr(field, 'related_model', None)
if not related_model:
raise ValueError('%s is not searchable' % new_lookup[:-8])
new_lookups = []
for rm_field in related_model._meta.fields:
if rm_field.name in ('username', 'first_name', 'last_name', 'email', 'name', 'description', 'playbook'):
new_lookups.append('{}__{}__icontains'.format(new_lookup[:-8], rm_field.name))
return value, new_lookups, needs_distinct
else:
if isinstance(field, JSONField):
new_lookup = new_lookup.replace(field.name, f'{field.name}_as_txt')
value = self.value_to_python_for_field(field, value)
return value, new_lookup, needs_distinct
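# Illustration, not part of the original module, of how lookups are coerced:
#   value_to_python(Host, 'last_job__isnull', 'true')  -> (True, ...)
#   value_to_python(Host, 'id__in', '1,2,3')           -> ([1, 2, 3], ...)
#   value_to_python(Job, 'name__regex', '^deploy')     -> pattern checked via re.compile
# The needs_distinct flag comes back True whenever a traversed field is not a
# plain Char/Integer/Boolean/Text field, signalling the caller to append
# .distinct() so JOINs across related tables cannot duplicate rows.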
def filter_queryset(self, request, queryset, view):
try:
# Apply filters specified via query_params. Each entry in the lists
# below is (negate, field, value).
and_filters = []
or_filters = []
chain_filters = []
role_filters = []
search_filters = {}
needs_distinct = False
# Can only have two values: 'AND', 'OR'
# If 'AND' is used, an item must satisfy all conditions to show up in the results.
# If 'OR' is used, an item just needs to satisfy one condition to appear in results.
search_filter_relation = 'OR'
for key, values in request.query_params.lists():
if key in self.RESERVED_NAMES:
continue
# HACK: make `created` available via API for the Django User ORM model
# to keep compatibility with other objects which expose the `created` attr.
if queryset.model._meta.object_name == 'User' and key.startswith('created'):
key = key.replace('created', 'date_joined')
# HACK: Make job event filtering by host name mostly work even
# when not capturing job event hosts M2M.
if queryset.model._meta.object_name == 'JobEvent' and key.startswith('hosts__name'):
key = key.replace('hosts__name', 'or__host__name')
or_filters.append((False, 'host__name__isnull', True))
# Custom __int filter suffix (internal use only).
q_int = False
if key.endswith('__int'):
key = key[:-5]
q_int = True
# RBAC filtering
if key == 'role_level':
role_filters.append(values[0])
continue
# Search across related objects.
if key.endswith('__search'):
if values and ',' in values[0]:
search_filter_relation = 'AND'
values = reduce(lambda list1, list2: list1 + list2, [i.split(',') for i in values])
for value in values:
search_value, new_keys, _ = self.value_to_python(queryset.model, key, force_str(value))
assert isinstance(new_keys, list)
search_filters[search_value] = new_keys
# by definition, search *only* joins across relations,
# so it _always_ needs a .distinct()
needs_distinct = True
continue
# Custom chain__ and or__ filters, mutually exclusive (both can
# precede not__).
q_chain = False
q_or = False
if key.startswith('chain__'):
key = key[7:]
q_chain = True
elif key.startswith('or__'):
key = key[4:]
q_or = True
# Custom not__ filter prefix.
q_not = False
if key.startswith('not__'):
key = key[5:]
q_not = True
# Convert value(s) to python and add to the appropriate list.
for value in values:
if q_int:
value = int(value)
value, new_key, distinct = self.value_to_python(queryset.model, key, value)
if distinct:
needs_distinct = True
if '_as_txt' in new_key:
fname = next(item for item in new_key.split('__') if item.endswith('_as_txt'))
queryset = queryset.annotate(**{fname: Cast(fname[:-7], output_field=TextField())})
if q_chain:
chain_filters.append((q_not, new_key, value))
elif q_or:
or_filters.append((q_not, new_key, value))
else:
and_filters.append((q_not, new_key, value))
# Now build Q objects for database query filter.
if and_filters or or_filters or chain_filters or role_filters or search_filters:
args = []
for n, k, v in and_filters:
if n:
args.append(~Q(**{k: v}))
else:
args.append(Q(**{k: v}))
for role_name in role_filters:
if not hasattr(queryset.model, 'accessible_pk_qs'):
raise ParseError(_('Cannot apply role_level filter to this list because its model does not use roles for access control.'))
args.append(Q(pk__in=queryset.model.accessible_pk_qs(request.user, role_name)))
if or_filters:
q = Q()
for n, k, v in or_filters:
if n:
q |= ~Q(**{k: v})
else:
q |= Q(**{k: v})
args.append(q)
if search_filters and search_filter_relation == 'OR':
q = Q()
for term, constrains in search_filters.items():
for constrain in constrains:
q |= Q(**{constrain: term})
args.append(q)
elif search_filters and search_filter_relation == 'AND':
for term, constrains in search_filters.items():
q_chain = Q()
for constrain in constrains:
q_chain |= Q(**{constrain: term})
queryset = queryset.filter(q_chain)
for n, k, v in chain_filters:
if n:
q = ~Q(**{k: v})
else:
q = Q(**{k: v})
queryset = queryset.filter(q)
queryset = queryset.filter(*args)
if needs_distinct:
queryset = queryset.distinct()
return queryset
except (FieldError, FieldDoesNotExist, ValueError, TypeError) as e:
raise ParseError(e.args[0])
except ValidationError as e:
raise ParseError(json.dumps(e.messages, ensure_ascii=False))
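# Illustration, not part of the original module: query strings this backend
# understands (names and values are hypothetical):
#   ?name__icontains=web&not__status=failed          AND of both, status negated
#   ?or__name=alpha&or__name=beta                    terms OR'ed into one Q()
#   ?chain__labels__name=prod&chain__labels__name=eu
# Each chain__ term becomes its own .filter() call, so on multi-valued
# relations each term may match a different related row, whereas plain AND
# terms share a single .filter() call and must match the same row. A related
# __search term fans out to icontains matches over name-like fields of the
# related model, and comma-separated search terms switch the relation to AND.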
class OrderByBackend(BaseFilterBackend):
"""
Filter to apply ordering based on query string parameters.
"""
def filter_queryset(self, request, queryset, view):
try:
order_by = None
for key, value in request.query_params.items():
if key in ('order', 'order_by'):
order_by = value
if ',' in value:
order_by = value.split(',')
else:
order_by = (value,)
default_order_by = self.get_default_ordering(view)
# glue the order by and default order by together so that the default is the backup option
order_by = list(order_by or []) + list(default_order_by or [])
if order_by:
order_by = self._validate_ordering_fields(queryset.model, order_by)
# Special handling of the type field for ordering. In this
# case, we're not sorting exactly on the type field, but
# given the limited number of views with multiple types,
# sorting on polymorphic_ctype.model is effectively the same.
new_order_by = []
if 'polymorphic_ctype' in get_all_field_names(queryset.model):
for field in order_by:
if field == 'type':
new_order_by.append('polymorphic_ctype__model')
elif field == '-type':
new_order_by.append('-polymorphic_ctype__model')
else:
new_order_by.append(field)
else:
for field in order_by:
if field not in ('type', '-type'):
new_order_by.append(field)
queryset = queryset.order_by(*new_order_by)
return queryset
except FieldError as e:
# Return a 400 for invalid field names.
raise ParseError(*e.args)
def get_default_ordering(self, view):
ordering = getattr(view, 'ordering', None)
if isinstance(ordering, str):
return (ordering,)
return ordering
def _validate_ordering_fields(self, model, order_by):
for field_name in order_by:
# strip off the negation prefix `-` if it exists
prefix = ''
path = field_name
if field_name[0] == '-':
prefix = field_name[0]
path = field_name[1:]
try:
field, new_path = get_field_from_path(model, path)
new_path = '{}{}'.format(prefix, new_path)
except (FieldError, FieldDoesNotExist) as e:
raise ParseError(e.args[0])
yield new_path
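# Illustration, not part of the original module: ?order_by=-created orders
# descending, ?order_by=status,-finished sorts on several fields, and on
# polymorphic lists ?order_by=type is rewritten to polymorphic_ctype__model
# (or silently dropped on models without a polymorphic_ctype field).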


@@ -13,8 +13,8 @@ from django.contrib.contenttypes.models import ContentType
from django.core.exceptions import FieldDoesNotExist
from django.db import connection, transaction
from django.db.models.fields.related import OneToOneRel
from django.http import QueryDict
from django.shortcuts import get_object_or_404
from django.http import QueryDict, JsonResponse
from django.shortcuts import get_object_or_404, redirect
from django.template.loader import render_to_string
from django.utils.encoding import smart_str
from django.utils.safestring import mark_safe
@@ -30,13 +30,23 @@ from rest_framework.permissions import IsAuthenticated
from rest_framework.renderers import StaticHTMLRenderer
from rest_framework.negotiation import DefaultContentNegotiation
# Shared code for the AWX platform
from awx_plugins.interfaces._temporary_private_licensing_api import detect_server_product_name
# django-ansible-base
from ansible_base.rest_filters.rest_framework.field_lookup_backend import FieldLookupBackend
from ansible_base.lib.utils.models import get_all_field_names
from ansible_base.lib.utils.requests import get_remote_host, is_proxied_request
from ansible_base.rbac.models import RoleEvaluation, RoleDefinition
from ansible_base.rbac.permission_registry import permission_registry
from ansible_base.jwt_consumer.common.util import validate_x_trusted_proxy_header
# AWX
from awx.api.filters import FieldLookupBackend
from awx.main.models import UnifiedJob, UnifiedJobTemplate, User, Role, Credential, WorkflowJobTemplateNode, WorkflowApprovalTemplate
from awx.main.models.rbac import give_creator_permissions
from awx.main.access import optimize_queryset
from awx.main.utils import camelcase_to_underscore, get_search_fields, getattrd, get_object_or_400, decrypt_field, get_awx_version
from awx.main.utils.db import get_all_field_names
from awx.main.utils.licensing import server_product_name
from awx.main.utils.proxy import is_proxy_in_headers, delete_headers_starting_with_http
from awx.main.views import ApiErrorView
from awx.api.serializers import ResourceAccessListElementSerializer, CopySerializer
from awx.api.versioning import URLPathVersioning
@@ -71,7 +81,14 @@ analytics_logger = logging.getLogger('awx.analytics.performance')
class LoggedLoginView(auth_views.LoginView):
def get(self, request, *args, **kwargs):
if is_proxied_request():
next = request.GET.get('next', "")
if next:
next = f"?next={next}"
return redirect(f"/{next}")
# The django.auth.contrib login form doesn't perform the content
# negotiation we've come to expect from DRF; add in code to catch
# situations where Accept != text/html (or */*) and reply with
@@ -87,26 +104,46 @@ class LoggedLoginView(auth_views.LoginView):
return super(LoggedLoginView, self).get(request, *args, **kwargs)
def post(self, request, *args, **kwargs):
if is_proxied_request():
# Return a message telling the user to log in via AAP instead
return JsonResponse(
{
'detail': _('Please log in via Platform Authentication.'),
},
status=status.HTTP_401_UNAUTHORIZED,
)
ret = super(LoggedLoginView, self).post(request, *args, **kwargs)
ip = get_remote_host(request) # request.META.get('REMOTE_ADDR', None)
if request.user.is_authenticated:
logger.info(smart_str(u"User {} logged in from {}".format(self.request.user.username, request.META.get('REMOTE_ADDR', None))))
ret.set_cookie('userLoggedIn', 'true')
logger.info(smart_str(u"User {} logged in from {}".format(self.request.user.username, ip)))
ret.set_cookie(
'userLoggedIn', 'true', secure=getattr(settings, 'SESSION_COOKIE_SECURE', False), samesite=getattr(settings, 'USER_COOKIE_SAMESITE', 'Lax')
)
ret.setdefault('X-API-Session-Cookie-Name', getattr(settings, 'SESSION_COOKIE_NAME', 'awx_sessionid'))
return ret
else:
if 'username' in self.request.POST:
logger.warning(smart_str(u"Login failed for user {} from {}".format(self.request.POST.get('username'), request.META.get('REMOTE_ADDR', None))))
logger.warning(smart_str(u"Login failed for user {} from {}".format(self.request.POST.get('username'), ip)))
ret.status_code = 401
return ret
class LoggedLogoutView(auth_views.LogoutView):
success_url_allowed_hosts = set(settings.LOGOUT_ALLOWED_HOSTS.split(",")) if settings.LOGOUT_ALLOWED_HOSTS else set()
def dispatch(self, request, *args, **kwargs):
if is_proxied_request():
# 1) We intentionally don't obey ?next= here, just always redirect to platform login
# 2) Hack to prevent rewrites of Location header
qs = "?__gateway_no_rewrite__=1&next=/"
return redirect(f"/api/gateway/v1/logout/{qs}")
original_user = getattr(request, 'user', None)
ret = super(LoggedLogoutView, self).dispatch(request, *args, **kwargs)
current_user = getattr(request, 'user', None)
ret.set_cookie('userLoggedIn', 'false')
ret.set_cookie('userLoggedIn', 'false', secure=getattr(settings, 'SESSION_COOKIE_SECURE', False))
if (not current_user or not getattr(current_user, 'pk', True)) and current_user != original_user:
logger.info("User {} logged out.".format(original_user.username))
return ret
@@ -124,10 +161,10 @@ def get_view_description(view, html=False):
def get_default_schema():
if settings.SETTINGS_MODULE == 'awx.settings.development':
from awx.api.swagger import AutoSchema
if settings.DYNACONF.is_development_mode:
from awx.api.swagger import schema_view
return AutoSchema()
return schema_view
else:
return views.APIView.schema
@@ -141,22 +178,23 @@ class APIView(views.APIView):
Store the Django REST Framework Request object as an attribute on the
normal Django request, store time the request started.
"""
remote_headers = ['REMOTE_ADDR', 'REMOTE_HOST']
self.time_started = time.time()
if getattr(settings, 'SQL_DEBUG', False):
self.queries_before = len(connection.queries)
if 'HTTP_X_TRUSTED_PROXY' in request.environ:
if validate_x_trusted_proxy_header(request.environ['HTTP_X_TRUSTED_PROXY']):
remote_headers = settings.REMOTE_HOST_HEADERS
else:
logger.warning("Request appeared to be a trusted upstream proxy but failed to provide a matching shared secret.")
# If there are any custom headers in REMOTE_HOST_HEADERS, make sure
# they respect the allowed proxy list
if all(
[
settings.PROXY_IP_ALLOWED_LIST,
request.environ.get('REMOTE_ADDR') not in settings.PROXY_IP_ALLOWED_LIST,
request.environ.get('REMOTE_HOST') not in settings.PROXY_IP_ALLOWED_LIST,
]
):
for custom_header in settings.REMOTE_HOST_HEADERS:
if custom_header.startswith('HTTP_'):
request.environ.pop(custom_header, None)
if settings.PROXY_IP_ALLOWED_LIST:
if not is_proxy_in_headers(self.request, settings.PROXY_IP_ALLOWED_LIST, remote_headers):
delete_headers_starting_with_http(request, settings.REMOTE_HOST_HEADERS)
drf_request = super(APIView, self).initialize_request(request, *args, **kwargs)
request.drf_request = drf_request
@@ -201,17 +239,21 @@ class APIView(views.APIView):
return response
if response.status_code >= 400:
ip = get_remote_host(request) # request.META.get('REMOTE_ADDR', None)
msg_data = {
'status_code': response.status_code,
'user_name': request.user,
'url_path': request.path,
'remote_addr': request.META.get('REMOTE_ADDR', None),
'remote_addr': ip,
}
if type(response.data) is dict:
msg_data['error'] = response.data.get('error', response.status_text)
elif type(response.data) is list:
msg_data['error'] = ", ".join(list(map(lambda x: x.get('error', response.status_text), response.data)))
if len(response.data) > 0 and isinstance(response.data[0], str):
msg_data['error'] = str(response.data[0])
else:
msg_data['error'] = ", ".join(list(map(lambda x: x.get('error', response.status_text), response.data)))
else:
msg_data['error'] = response.status_text
@@ -225,7 +267,8 @@ class APIView(views.APIView):
if hasattr(self, '__init_request_error__'):
response = self.handle_exception(self.__init_request_error__)
if response.status_code == 401:
response.data['detail'] += _(' To establish a login session, visit') + ' /api/login/.'
if response.data and 'detail' in response.data:
response.data['detail'] += _(' To establish a login session, visit') + ' /api/login/.'
logger.info(status_msg)
else:
logger.warning(status_msg)
@@ -234,7 +277,7 @@ class APIView(views.APIView):
time_started = getattr(self, 'time_started', None)
if request.user.is_authenticated:
response['X-API-Product-Version'] = get_awx_version()
response['X-API-Product-Name'] = server_product_name()
response['X-API-Product-Name'] = detect_server_product_name()
response['X-API-Node'] = settings.CLUSTER_HOST_ID
if time_started:
@@ -331,12 +374,6 @@ class APIView(views.APIView):
kwargs.pop('version')
return super(APIView, self).dispatch(request, *args, **kwargs)
def check_permissions(self, request):
if request.method not in ('GET', 'OPTIONS', 'HEAD'):
if 'write' not in getattr(request.user, 'oauth_scopes', ['write']):
raise PermissionDenied()
return super(APIView, self).check_permissions(request)
class GenericAPIView(generics.GenericAPIView, APIView):
# Base class for all model-based views.
@@ -471,7 +508,11 @@ class ListAPIView(generics.ListAPIView, GenericAPIView):
class ListCreateAPIView(ListAPIView, generics.ListCreateAPIView):
# Base class for a list view that allows creating new objects.
pass
def perform_create(self, serializer):
super().perform_create(serializer)
if serializer.Meta.model in permission_registry.all_registered_models:
if self.request and self.request.user:
give_creator_permissions(self.request.user, serializer.instance)
class ParentMixin(object):
@@ -791,6 +832,7 @@ class RetrieveUpdateDestroyAPIView(RetrieveUpdateAPIView, DestroyAPIView):
class ResourceAccessList(ParentMixin, ListAPIView):
deprecated = True
serializer_class = ResourceAccessListElementSerializer
ordering = ('username',)
@@ -798,6 +840,15 @@ class ResourceAccessList(ParentMixin, ListAPIView):
obj = self.get_parent_object()
content_type = ContentType.objects.get_for_model(obj)
if settings.ANSIBLE_BASE_ROLE_SYSTEM_ACTIVATED:
ancestors = set(RoleEvaluation.objects.filter(content_type_id=content_type.id, object_id=obj.id).values_list('role_id', flat=True))
qs = User.objects.filter(has_roles__in=ancestors) | User.objects.filter(is_superuser=True)
auditor_role = RoleDefinition.objects.filter(name="Controller System Auditor").first()
if auditor_role:
qs |= User.objects.filter(role_assignments__role_definition=auditor_role)
return qs.distinct()
roles = set(Role.objects.filter(content_type=content_type, object_id=obj.id))
ancestors = set()
@@ -957,7 +1008,7 @@ class CopyAPIView(GenericAPIView):
None, None, self.model, obj, request.user, create_kwargs=create_kwargs, copy_name=serializer.validated_data.get('name', '')
)
if hasattr(new_obj, 'admin_role') and request.user not in new_obj.admin_role.members.all():
new_obj.admin_role.members.add(request.user)
give_creator_permissions(request.user, new_obj)
if sub_objs:
permission_check_func = None
if hasattr(type(self), 'deep_copy_permission_check_func'):


@@ -36,11 +36,13 @@ class Metadata(metadata.SimpleMetadata):
field_info = OrderedDict()
field_info['type'] = self.label_lookup[field]
field_info['required'] = getattr(field, 'required', False)
field_info['hidden'] = getattr(field, 'hidden', False)
text_attrs = [
'read_only',
'label',
'help_text',
'warning_text',
'min_length',
'max_length',
'min_value',
@@ -101,7 +103,7 @@ class Metadata(metadata.SimpleMetadata):
default = field.get_default()
if type(default) is UUID:
default = 'xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx'
if field.field_name == 'TOWER_URL_BASE' and default == 'https://towerhost':
if field.field_name == 'TOWER_URL_BASE' and default == 'https://platformhost':
default = '{}://{}'.format(self.request.scheme, self.request.get_host())
field_info['default'] = default
except serializers.SkipField:

File diff suppressed because it is too large


@@ -1,62 +1,54 @@
import warnings
from rest_framework.permissions import AllowAny
from rest_framework.schemas import SchemaGenerator, AutoSchema as DRFAuthSchema
from drf_yasg.views import get_schema_view
from drf_yasg import openapi
from drf_yasg.inspectors import SwaggerAutoSchema
from drf_yasg.views import get_schema_view
class SuperUserSchemaGenerator(SchemaGenerator):
def has_view_permissions(self, path, method, view):
#
# Generate the Swagger schema as if you were a superuser and
# permissions didn't matter; this short-circuits the schema path
# discovery to include _all_ potential paths in the API.
#
return True
class CustomSwaggerAutoSchema(SwaggerAutoSchema):
"""Custom SwaggerAutoSchema to add swagger_topic to tags."""
class AutoSchema(DRFAuthSchema):
def get_link(self, path, method, base_url):
link = super(AutoSchema, self).get_link(path, method, base_url)
def get_tags(self, operation_keys=None):
tags = []
try:
serializer = self.view.get_serializer()
if hasattr(self.view, 'get_serializer'):
serializer = self.view.get_serializer()
else:
serializer = None
except Exception:
serializer = None
warnings.warn(
'{}.get_serializer() raised an exception during '
'schema generation. Serializer fields will not be '
'generated for {} {}.'.format(self.view.__class__.__name__, method, path)
'generated for {}.'.format(self.view.__class__.__name__, operation_keys)
)
link.__dict__['deprecated'] = getattr(self.view, 'deprecated', False)
# auto-generate a topic/tag for the serializer based on its model
if hasattr(self.view, 'swagger_topic'):
link.__dict__['topic'] = str(self.view.swagger_topic).title()
tags.append(str(self.view.swagger_topic).title())
elif serializer and hasattr(serializer, 'Meta'):
link.__dict__['topic'] = str(serializer.Meta.model._meta.verbose_name_plural).title()
tags.append(str(serializer.Meta.model._meta.verbose_name_plural).title())
elif hasattr(self.view, 'model'):
link.__dict__['topic'] = str(self.view.model._meta.verbose_name_plural).title()
tags.append(str(self.view.model._meta.verbose_name_plural).title())
else:
warnings.warn('Could not determine a Swagger tag for path {}'.format(path))
return link
tags = ['api'] # Fallback to default value
def get_description(self, path, method):
setattr(self.view.request, 'swagger_method', method)
description = super(AutoSchema, self).get_description(path, method)
return description
if not tags:
warnings.warn(f'Could not determine tags for {self.view.__class__.__name__}')
return tags
def is_deprecated(self):
"""Return `True` if this operation is to be marked as deprecated."""
return getattr(self.view, 'deprecated', False)
schema_view = get_schema_view(
openapi.Info(
title="Snippets API",
default_version='v1',
description="Test description",
terms_of_service="https://www.google.com/policies/terms/",
contact=openapi.Contact(email="contact@snippets.local"),
license=openapi.License(name="BSD License"),
title='AWX API',
default_version='v2',
description='AWX API Documentation',
terms_of_service='https://www.google.com/policies/terms/',
contact=openapi.Contact(email='contact@snippets.local'),
license=openapi.License(name='Apache License'),
),
public=True,
permission_classes=[AllowAny],


@@ -1,114 +0,0 @@
# Token Handling using OAuth2
This page lists the OAuth 2 utility endpoints used for authorization, token refresh, and revocation.
Note that endpoints other than `/api/o/authorize/` are not meant to be used in browsers and do not
support HTTP GET. The endpoints here strictly follow the
[RFC specification for OAuth 2](https://tools.ietf.org/html/rfc6749), so please refer to it for
detailed reference. Note that the AWX network location defaults to `http://localhost:8013` in the examples:
## Create Token for an Application using Authorization code grant type
Given an application "AuthCodeApp" of grant type `authorization-code`,
from the client app, the user makes a GET to the Authorize endpoint with
* `response_type`
* `client_id`
* `redirect_uris`
* `scope`
AWX will respond with the authorization `code` and `state`
to the `redirect_uri` specified in the application. The client application will then make a POST to the
`/api/o/token/` endpoint on AWX with
* `code`
* `client_id`
* `client_secret`
* `grant_type`
* `redirect_uri`
AWX will respond with the `access_token`, `token_type`, `refresh_token`, and `expires_in`. For more
information on testing this flow, refer to [django-oauth-toolkit](http://django-oauth-toolkit.readthedocs.io/en/latest/tutorial/tutorial_01.html#test-your-authorization-server).
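As a rough sketch (not part of the original document), the final code-for-token exchange can also be
done from Python; every value below is a placeholder:

```python
import requests

# Exchange the authorization code received at the redirect_uri for a token.
resp = requests.post(
    "http://localhost:8013/api/o/token/",
    data={
        "grant_type": "authorization_code",
        "code": "<code returned to the redirect_uri>",
        "client_id": "<client_id>",
        "client_secret": "<client_secret>",
        "redirect_uri": "https://client.example.com/callback",
    },
)
resp.raise_for_status()
print(resp.json()["access_token"])
```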
## Create Token for an Application using Password grant type
Logging in is not required for the `password` grant type, so a simple `curl` command can be used to
acquire a personal access token via `/api/o/token/` with
* `grant_type`: Required to be "password"
* `username`
* `password`
* `client_id`: Associated application must have grant_type "password"
* `client_secret`
For example:
```bash
curl -X POST \
-H "Content-Type: application/x-www-form-urlencoded" \
-d "grant_type=password&username=<username>&password=<password>&scope=read" \
-u "gwSPoasWSdNkMDtBN3Hu2WYQpPWCO9SwUEsKK22l:fI6ZpfocHYBGfm1tP92r0yIgCyfRdDQt0Tos9L8a4fNsJjQQMwp9569e
IaUBsaVDgt2eiwOGe0bg5m5vCSstClZmtdy359RVx2rQK5YlIWyPlrolpt2LEpVeKXWaiybo" \
http://localhost:8013/api/o/token/ -i
```
In the above POST request, the `username` and `password` parameters are the username and password of
the AWX user related to the underlying application, and the authentication information has the format
`<client_id>:<client_secret>`, where `client_id` and `client_secret` are the corresponding fields of the
underlying application.
Upon success, the access token, refresh token, and other information are returned in the response body
in JSON format:
```text
{
"access_token": "9epHOqHhnXUcgYK8QanOmUQPSgX92g",
"token_type": "Bearer",
"expires_in": 31536000000,
"refresh_token": "jMRX6QvzOTf046KHee3TU5mT3nyXsz",
"scope": "read"
}
```
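Once acquired, the access token is presented as a `Bearer` token in the `Authorization` header. A
minimal sketch, reusing the example token above:

```python
import requests

token = "9epHOqHhnXUcgYK8QanOmUQPSgX92g"  # access_token from the response above
resp = requests.get(
    "http://localhost:8013/api/v2/me/",
    headers={"Authorization": f"Bearer {token}"},
)
print(resp.status_code, resp.json())
```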
## Refresh an existing access token
The `/api/o/token/` endpoint is also used for refreshing an access token:
```bash
curl -X POST \
-H "Content-Type: application/x-www-form-urlencoded" \
-d "grant_type=refresh_token&refresh_token=AL0NK9TTpv0qp54dGbC4VUZtsZ9r8z" \
-u "gwSPoasWSdNkMDtBN3Hu2WYQpPWCO9SwUEsKK22l:fI6ZpfocHYBGfm1tP92r0yIgCyfRdDQt0Tos9L8a4fNsJjQQMwp9569eIaUBsaVDgt2eiwOGe0bg5m5vCSstClZmtdy359RVx2rQK5YlIWyPlrolpt2LEpVeKXWaiybo" \
http://localhost:8013/api/o/token/ -i
```
In the above POST request, the `refresh_token` value comes from the `refresh_token` field of the access
token above. The authentication information has the format `<client_id>:<client_secret>`, where
`client_id` and `client_secret` are the corresponding fields of the application underlying the access token.
Upon success, the new (refreshed) access token with the same scope information as the previous one is
given in the response body in JSON format:
```text
{
"access_token": "NDInWxGJI4iZgqpsreujjbvzCfJqgR",
"token_type": "Bearer",
"expires_in": 31536000000,
"refresh_token": "DqOrmz8bx3srlHkZNKmDpqA86bnQkT",
"scope": "read write"
}
```
Internally, the refresh operation deletes the existing token and immediately creates a new one with the
same scope and related application as the original. We can verify this by checking that the new token is
present at the `/api/v2/tokens/` endpoint.
## Revoke an access token
Revoking an access token is the same as deleting the token resource object.
Revoking is done by POSTing to `/api/o/revoke_token/` with the token to revoke as a parameter:
```bash
curl -X POST -d "token=rQONsve372fQwuc2pn76k3IHDCYpi7" \
-H "Content-Type: application/x-www-form-urlencoded" \
-u "gwSPoasWSdNkMDtBN3Hu2WYQpPWCO9SwUEsKK22l:fI6ZpfocHYBGfm1tP92r0yIgCyfRdDQt0Tos9L8a4fNsJjQQMwp9569eIaUBsaVDgt2eiwOGe0bg5m5vCSstClZmtdy359RVx2rQK5YlIWyPlrolpt2LEpVeKXWaiybo" \
http://localhost:8013/api/o/revoke_token/ -i
```
`200 OK` means a successful delete.


@@ -0,0 +1,22 @@
# Bulk Host Delete
This endpoint allows the client to delete multiple hosts from inventories by providing a list of host
IDs to be deleted.
Example:
{
"hosts": [1, 2, 3, 4, 5]
}
Return data:
{
"hosts": {
"1": "The host a1 was deleted",
"2": "The host a2 was deleted",
"3": "The host a3 was deleted",
"4": "The host a4 was deleted",
"5": "The host a5 was deleted",
}
}
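A minimal sketch of calling this endpoint, assuming the `/api/v2/bulk/host_delete/` route registered
later in this changeset and placeholder basic-auth credentials:

```python
import requests

resp = requests.post(
    "http://localhost:8013/api/v2/bulk/host_delete/",
    json={"hosts": [1, 2, 3, 4, 5]},
    auth=("admin", "password"),  # placeholder credentials
)
print(resp.json())
```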


@@ -17,19 +17,18 @@ custom_worksign_public_keyfile: receptor/work_public_key.pem
custom_tls_certfile: receptor/tls/receptor.crt
custom_tls_keyfile: receptor/tls/receptor.key
custom_ca_certfile: receptor/tls/ca/mesh-CA.crt
receptor_protocol: 'tcp'
{% if instance.listener_port %}
{% if listener_port %}
receptor_protocol: {{ listener_protocol }}
receptor_listener: true
receptor_port: {{ instance.listener_port }}
receptor_port: {{ listener_port }}
{% else %}
receptor_listener: false
{% endif %}
{% if peers %}
receptor_peers:
{% for peer in peers %}
- host: {{ peer.host }}
port: {{ peer.port }}
protocol: tcp
- address: {{ peer.address }}
protocol: {{ peer.protocol }}
{% endfor %}
{% endif %}
{% verbatim %}
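To see what the peers block above renders to, here is a small sketch (not part of the changeset) using
jinja2 directly with made-up peer values:

```python
from jinja2 import Template

# Mirrors the receptor_peers loop from the template above.
tmpl = Template(
    "receptor_peers:\n"
    "{% for peer in peers %}"
    "  - address: {{ peer.address }}\n"
    "    protocol: {{ peer.protocol }}\n"
    "{% endfor %}"
)
print(tmpl.render(peers=[{"address": "execnode1:27199", "protocol": "tcp"}]))
# receptor_peers:
#   - address: execnode1:27199
#     protocol: tcp
```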


@@ -2,6 +2,12 @@
- hosts: all
become: yes
tasks:
- name: Create the receptor group
group:
{% verbatim %}
name: "{{ receptor_group }}"
{% endverbatim %}
state: present
- name: Create the receptor user
user:
{% verbatim %}


@@ -1,4 +1,4 @@
---
collections:
- name: ansible.receptor
version: 2.0.2
version: 2.0.3


@@ -10,6 +10,7 @@ from awx.api.views import (
InstanceInstanceGroupsList,
InstanceHealthCheck,
InstancePeersList,
InstanceReceptorAddressesList,
)
from awx.api.views.instance_install_bundle import InstanceInstallBundle
@@ -21,6 +22,7 @@ urls = [
re_path(r'^(?P<pk>[0-9]+)/instance_groups/$', InstanceInstanceGroupsList.as_view(), name='instance_instance_groups_list'),
re_path(r'^(?P<pk>[0-9]+)/health_check/$', InstanceHealthCheck.as_view(), name='instance_health_check'),
re_path(r'^(?P<pk>[0-9]+)/peers/$', InstancePeersList.as_view(), name='instance_peers_list'),
re_path(r'^(?P<pk>[0-9]+)/receptor_addresses/$', InstanceReceptorAddressesList.as_view(), name='instance_receptor_addresses_list'),
re_path(r'^(?P<pk>[0-9]+)/install_bundle/$', InstanceInstallBundle.as_view(), name='instance_install_bundle'),
]


@@ -1,27 +0,0 @@
# Copyright (c) 2017 Ansible, Inc.
# All Rights Reserved.
from django.urls import re_path
from awx.api.views import (
OAuth2ApplicationList,
OAuth2ApplicationDetail,
ApplicationOAuth2TokenList,
OAuth2ApplicationActivityStreamList,
OAuth2TokenList,
OAuth2TokenDetail,
OAuth2TokenActivityStreamList,
)
urls = [
re_path(r'^applications/$', OAuth2ApplicationList.as_view(), name='o_auth2_application_list'),
re_path(r'^applications/(?P<pk>[0-9]+)/$', OAuth2ApplicationDetail.as_view(), name='o_auth2_application_detail'),
re_path(r'^applications/(?P<pk>[0-9]+)/tokens/$', ApplicationOAuth2TokenList.as_view(), name='o_auth2_application_token_list'),
re_path(r'^applications/(?P<pk>[0-9]+)/activity_stream/$', OAuth2ApplicationActivityStreamList.as_view(), name='o_auth2_application_activity_stream_list'),
re_path(r'^tokens/$', OAuth2TokenList.as_view(), name='o_auth2_token_list'),
re_path(r'^tokens/(?P<pk>[0-9]+)/$', OAuth2TokenDetail.as_view(), name='o_auth2_token_detail'),
re_path(r'^tokens/(?P<pk>[0-9]+)/activity_stream/$', OAuth2TokenActivityStreamList.as_view(), name='o_auth2_token_activity_stream_list'),
]
__all__ = ['urls']


@@ -1,45 +0,0 @@
# Copyright (c) 2017 Ansible, Inc.
# All Rights Reserved.
from datetime import timedelta
from django.utils.timezone import now
from django.conf import settings
from django.urls import re_path
from oauthlib import oauth2
from oauth2_provider import views
from awx.main.models import RefreshToken
from awx.api.views.root import ApiOAuthAuthorizationRootView
class TokenView(views.TokenView):
def create_token_response(self, request):
# Django OAuth2 Toolkit has a bug whereby refresh tokens are *never*
# properly expired (ugh):
#
# https://github.com/jazzband/django-oauth-toolkit/issues/746
#
# This code detects and auto-expires them on refresh grant
# requests.
if request.POST.get('grant_type') == 'refresh_token' and 'refresh_token' in request.POST:
refresh_token = RefreshToken.objects.filter(token=request.POST['refresh_token']).first()
if refresh_token:
expire_seconds = settings.OAUTH2_PROVIDER.get('REFRESH_TOKEN_EXPIRE_SECONDS', 0)
if refresh_token.created + timedelta(seconds=expire_seconds) < now():
return request.build_absolute_uri(), {}, 'The refresh token has expired.', '403'
try:
return super(TokenView, self).create_token_response(request)
except oauth2.AccessDeniedError as e:
return request.build_absolute_uri(), {}, str(e), '403'
urls = [
re_path(r'^$', ApiOAuthAuthorizationRootView.as_view(), name='oauth_authorization_root_view'),
re_path(r"^authorize/$", views.AuthorizationView.as_view(), name="authorize"),
re_path(r"^token/$", TokenView.as_view(), name="token"),
re_path(r"^revoke_token/$", views.RevokeTokenView.as_view(), name="revoke-token"),
]
__all__ = ['urls']


@@ -25,7 +25,7 @@ from awx.api.views.organization import (
OrganizationObjectRolesList,
OrganizationAccessList,
)
from awx.api.views import OrganizationCredentialList, OrganizationApplicationList
from awx.api.views import OrganizationCredentialList
urls = [
@@ -66,7 +66,6 @@ urls = [
re_path(r'^(?P<pk>[0-9]+)/galaxy_credentials/$', OrganizationGalaxyCredentialsList.as_view(), name='organization_galaxy_credentials_list'),
re_path(r'^(?P<pk>[0-9]+)/object_roles/$', OrganizationObjectRolesList.as_view(), name='organization_object_roles_list'),
re_path(r'^(?P<pk>[0-9]+)/access_list/$', OrganizationAccessList.as_view(), name='organization_access_list'),
re_path(r'^(?P<pk>[0-9]+)/applications/$', OrganizationApplicationList.as_view(), name='organization_applications_list'),
]
__all__ = ['urls']


@@ -0,0 +1,17 @@
# Copyright (c) 2017 Ansible, Inc.
# All Rights Reserved.
from django.urls import re_path
from awx.api.views import (
ReceptorAddressesList,
ReceptorAddressDetail,
)
urls = [
re_path(r'^$', ReceptorAddressesList.as_view(), name='receptor_addresses_list'),
re_path(r'^(?P<pk>[0-9]+)/$', ReceptorAddressDetail.as_view(), name='receptor_address_detail'),
]
__all__ = ['urls']


@@ -15,7 +15,6 @@ from awx.api.views.root import (
ApiV2AttachView,
)
from awx.api.views import (
AuthView,
UserMeList,
DashboardView,
DashboardJobsGraphView,
@@ -26,16 +25,13 @@ from awx.api.views import (
JobTemplateCredentialsList,
SchedulePreview,
ScheduleZoneInfo,
OAuth2ApplicationList,
OAuth2TokenList,
ApplicationOAuth2TokenList,
OAuth2ApplicationDetail,
HostMetricSummaryMonthlyList,
)
from awx.api.views.bulk import (
BulkView,
BulkHostCreateView,
BulkHostDeleteView,
BulkJobLaunchView,
)
@@ -79,11 +75,10 @@ from .schedule import urls as schedule_urls
from .activity_stream import urls as activity_stream_urls
from .instance import urls as instance_urls
from .instance_group import urls as instance_group_urls
from .oauth2 import urls as oauth2_urls
from .oauth2_root import urls as oauth2_root_urls
from .workflow_approval_template import urls as workflow_approval_template_urls
from .workflow_approval import urls as workflow_approval_urls
from .analytics import urls as analytics_urls
from .receptor_address import urls as receptor_address_urls
v2_urls = [
re_path(r'^$', ApiV2RootView.as_view(), name='api_v2_root_view'),
@@ -94,17 +89,11 @@ v2_urls = [
re_path(r'^job_templates/(?P<pk>[0-9]+)/credentials/$', JobTemplateCredentialsList.as_view(), name='job_template_credentials_list'),
re_path(r'^schedules/preview/$', SchedulePreview.as_view(), name='schedule_rrule'),
re_path(r'^schedules/zoneinfo/$', ScheduleZoneInfo.as_view(), name='schedule_zoneinfo'),
re_path(r'^applications/$', OAuth2ApplicationList.as_view(), name='o_auth2_application_list'),
re_path(r'^applications/(?P<pk>[0-9]+)/$', OAuth2ApplicationDetail.as_view(), name='o_auth2_application_detail'),
re_path(r'^applications/(?P<pk>[0-9]+)/tokens/$', ApplicationOAuth2TokenList.as_view(), name='application_o_auth2_token_list'),
re_path(r'^tokens/$', OAuth2TokenList.as_view(), name='o_auth2_token_list'),
re_path(r'^', include(oauth2_urls)),
re_path(r'^metrics/$', MetricsView.as_view(), name='metrics_view'),
re_path(r'^ping/$', ApiV2PingView.as_view(), name='api_v2_ping_view'),
re_path(r'^config/$', ApiV2ConfigView.as_view(), name='api_v2_config_view'),
re_path(r'^config/subscriptions/$', ApiV2SubscriptionView.as_view(), name='api_v2_subscription_view'),
re_path(r'^config/attach/$', ApiV2AttachView.as_view(), name='api_v2_attach_view'),
re_path(r'^auth/$', AuthView.as_view()),
re_path(r'^me/$', UserMeList.as_view(), name='user_me_list'),
re_path(r'^dashboard/$', DashboardView.as_view(), name='dashboard_view'),
re_path(r'^dashboard/graphs/jobs/$', DashboardJobsGraphView.as_view(), name='dashboard_jobs_graph_view'),
@@ -152,7 +141,9 @@ v2_urls = [
re_path(r'^workflow_approvals/', include(workflow_approval_urls)),
re_path(r'^bulk/$', BulkView.as_view(), name='bulk'),
re_path(r'^bulk/host_create/$', BulkHostCreateView.as_view(), name='bulk_host_create'),
re_path(r'^bulk/host_delete/$', BulkHostDeleteView.as_view(), name='bulk_host_delete'),
re_path(r'^bulk/job_launch/$', BulkJobLaunchView.as_view(), name='bulk_job_launch'),
re_path(r'^receptor_addresses/', include(receptor_address_urls)),
]
@@ -162,7 +153,6 @@ urlpatterns = [
re_path(r'^(?P<version>(v2))/', include(v2_urls)),
re_path(r'^login/$', LoggedLoginView.as_view(template_name='rest_framework/login.html', extra_context={'inside_login_context': True}), name='login'),
re_path(r'^logout/$', LoggedLogoutView.as_view(next_page='/api/', redirect_field_name='next'), name='logout'),
re_path(r'^o/', include(oauth2_root_urls)),
]
if MODE == 'development':
# Only include these if we are in the development environment


@@ -14,10 +14,6 @@ from awx.api.views import (
UserRolesList,
UserActivityStreamList,
UserAccessList,
OAuth2ApplicationList,
OAuth2UserTokenList,
UserPersonalTokenList,
UserAuthorizedTokenList,
)
urls = [
@@ -31,10 +27,6 @@ urls = [
re_path(r'^(?P<pk>[0-9]+)/roles/$', UserRolesList.as_view(), name='user_roles_list'),
re_path(r'^(?P<pk>[0-9]+)/activity_stream/$', UserActivityStreamList.as_view(), name='user_activity_stream_list'),
re_path(r'^(?P<pk>[0-9]+)/access_list/$', UserAccessList.as_view(), name='user_access_list'),
re_path(r'^(?P<pk>[0-9]+)/applications/$', OAuth2ApplicationList.as_view(), name='o_auth2_application_list'),
re_path(r'^(?P<pk>[0-9]+)/tokens/$', OAuth2UserTokenList.as_view(), name='o_auth2_token_list'),
re_path(r'^(?P<pk>[0-9]+)/authorized_tokens/$', UserAuthorizedTokenList.as_view(), name='user_authorized_token_list'),
re_path(r'^(?P<pk>[0-9]+)/personal_tokens/$', UserPersonalTokenList.as_view(), name='user_personal_token_list'),
]
__all__ = ['urls']


@@ -1,10 +1,11 @@
from django.urls import re_path
from awx.api.views.webhooks import WebhookKeyView, GithubWebhookReceiver, GitlabWebhookReceiver
from awx.api.views.webhooks import WebhookKeyView, GithubWebhookReceiver, GitlabWebhookReceiver, BitbucketDcWebhookReceiver
urlpatterns = [
re_path(r'^webhook_key/$', WebhookKeyView.as_view(), name='webhook_key'),
re_path(r'^github/$', GithubWebhookReceiver.as_view(), name='webhook_receiver_github'),
re_path(r'^gitlab/$', GitlabWebhookReceiver.as_view(), name='webhook_receiver_gitlab'),
re_path(r'^bitbucket_dc/$', BitbucketDcWebhookReceiver.as_view(), name='webhook_receiver_bitbucket_dc'),
]


@@ -2,28 +2,21 @@
# All Rights Reserved.
from django.conf import settings
from django.urls import NoReverseMatch
from rest_framework.reverse import _reverse
from rest_framework.reverse import reverse as drf_reverse
from rest_framework.versioning import URLPathVersioning as BaseVersioning
def drf_reverse(viewname, args=None, kwargs=None, request=None, format=None, **extra):
"""
Copy and monkey-patch `rest_framework.reverse.reverse` to prevent adding unwarranted
query string parameters.
"""
scheme = getattr(request, 'versioning_scheme', None)
if scheme is not None:
try:
url = scheme.reverse(viewname, args, kwargs, request, format, **extra)
except NoReverseMatch:
# In case the versioning scheme reversal fails, fallback to the
# default implementation
url = _reverse(viewname, args, kwargs, request, format, **extra)
else:
url = _reverse(viewname, args, kwargs, request, format, **extra)
def is_optional_api_urlpattern_prefix_request(request):
if settings.OPTIONAL_API_URLPATTERN_PREFIX and request:
if request.path.startswith(f"/api/{settings.OPTIONAL_API_URLPATTERN_PREFIX}"):
return True
return False
def transform_optional_api_urlpattern_prefix_url(request, url):
if is_optional_api_urlpattern_prefix_request(request):
url = url.replace('/api', f"/api/{settings.OPTIONAL_API_URLPATTERN_PREFIX}")
return url
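# Illustration, not part of the changeset: with
# settings.OPTIONAL_API_URLPATTERN_PREFIX = "controller", a request arriving
# under /api/controller/... has generated URLs such as /api/v2/ping/
# rewritten to /api/controller/v2/ping/ by the helper above.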


@@ -33,11 +33,10 @@ from django.http import HttpResponse, HttpResponseRedirect
from django.contrib.contenttypes.models import ContentType
from django.utils.translation import gettext_lazy as _
# Django REST Framework
from rest_framework.exceptions import APIException, PermissionDenied, ParseError, NotFound
from rest_framework.parsers import FormParser
from rest_framework.permissions import AllowAny, IsAuthenticated
from rest_framework.permissions import IsAuthenticated
from rest_framework.renderers import JSONRenderer, StaticHTMLRenderer
from rest_framework.response import Response
from rest_framework.settings import api_settings
@@ -48,18 +47,17 @@ from rest_framework import status
from rest_framework_yaml.parsers import YAMLParser
from rest_framework_yaml.renderers import YAMLRenderer
# ANSIConv
import ansiconv
# Python Social Auth
from social_core.backends.utils import load_backends
# Django OAuth Toolkit
from oauth2_provider.models import get_access_token_model
# ansi2html
from ansi2html import Ansi2HTMLConverter
import pytz
from wsgiref.util import FileWrapper
# django-ansible-base
from ansible_base.lib.utils.requests import get_remote_hosts
from ansible_base.rbac.models import RoleEvaluation, ObjectRole
from ansible_base.resource_registry.shared_types import OrganizationType, TeamType, UserType
# AWX
from awx.main.tasks.system import send_notifications, update_inventory_computed_fields
from awx.main.access import get_user_queryset
@@ -87,6 +85,7 @@ from awx.api.generics import (
from awx.api.views.labels import LabelSubListCreateAttachDetachView
from awx.api.versioning import reverse
from awx.main import models
from awx.main.models.rbac import get_role_definition
from awx.main.utils import (
camelcase_to_underscore,
extract_ansible_vars,
@@ -98,6 +97,7 @@ from awx.main.utils import (
)
from awx.main.utils.encryption import encrypt_value
from awx.main.utils.filters import SmartFilter
from awx.main.utils.plugins import compute_cloud_inventory_sources
from awx.main.redact import UriCleaner
from awx.api.permissions import (
JobTemplateCallbackPermission,
@@ -272,16 +272,24 @@ class DashboardJobsGraphView(APIView):
success_query = user_unified_jobs.filter(status='successful')
failed_query = user_unified_jobs.filter(status='failed')
canceled_query = user_unified_jobs.filter(status='canceled')
error_query = user_unified_jobs.filter(status='error')
if job_type == 'inv_sync':
success_query = success_query.filter(instance_of=models.InventoryUpdate)
failed_query = failed_query.filter(instance_of=models.InventoryUpdate)
canceled_query = canceled_query.filter(instance_of=models.InventoryUpdate)
error_query = error_query.filter(instance_of=models.InventoryUpdate)
elif job_type == 'playbook_run':
success_query = success_query.filter(instance_of=models.Job)
failed_query = failed_query.filter(instance_of=models.Job)
canceled_query = canceled_query.filter(instance_of=models.Job)
error_query = error_query.filter(instance_of=models.Job)
elif job_type == 'scm_update':
success_query = success_query.filter(instance_of=models.ProjectUpdate)
failed_query = failed_query.filter(instance_of=models.ProjectUpdate)
canceled_query = canceled_query.filter(instance_of=models.ProjectUpdate)
error_query = error_query.filter(instance_of=models.ProjectUpdate)
end = now()
interval = 'day'
@@ -297,10 +305,12 @@ class DashboardJobsGraphView(APIView):
else:
return Response({'error': _('Unknown period "%s"') % str(period)}, status=status.HTTP_400_BAD_REQUEST)
dashboard_data = {"jobs": {"successful": [], "failed": []}}
dashboard_data = {"jobs": {"successful": [], "failed": [], "canceled": [], "error": []}}
succ_list = dashboard_data['jobs']['successful']
fail_list = dashboard_data['jobs']['failed']
canceled_list = dashboard_data['jobs']['canceled']
error_list = dashboard_data['jobs']['error']
qs_s = (
success_query.filter(finished__range=(start, end))
@@ -318,6 +328,22 @@ class DashboardJobsGraphView(APIView):
.annotate(agg=Count('id', distinct=True))
)
data_f = {item['d']: item['agg'] for item in qs_f}
qs_c = (
canceled_query.filter(finished__range=(start, end))
.annotate(d=Trunc('finished', interval, tzinfo=end.tzinfo))
.order_by()
.values('d')
.annotate(agg=Count('id', distinct=True))
)
data_c = {item['d']: item['agg'] for item in qs_c}
qs_e = (
error_query.filter(finished__range=(start, end))
.annotate(d=Trunc('finished', interval, tzinfo=end.tzinfo))
.order_by()
.values('d')
.annotate(agg=Count('id', distinct=True))
)
data_e = {item['d']: item['agg'] for item in qs_e}
start_date = start.replace(hour=0, minute=0, second=0, microsecond=0)
for d in itertools.count():
@@ -326,6 +352,8 @@ class DashboardJobsGraphView(APIView):
break
succ_list.append([time.mktime(date.timetuple()), data_s.get(date, 0)])
fail_list.append([time.mktime(date.timetuple()), data_f.get(date, 0)])
canceled_list.append([time.mktime(date.timetuple()), data_c.get(date, 0)])
error_list.append([time.mktime(date.timetuple()), data_e.get(date, 0)])
return Response(dashboard_data)
@@ -337,12 +365,20 @@ class InstanceList(ListCreateAPIView):
search_fields = ('hostname',)
ordering = ('id',)
def get_queryset(self):
qs = super().get_queryset().prefetch_related('receptor_addresses')
return qs
class InstanceDetail(RetrieveUpdateAPIView):
name = _("Instance Detail")
model = models.Instance
serializer_class = serializers.InstanceSerializer
def get_queryset(self):
qs = super().get_queryset().prefetch_related('receptor_addresses')
return qs
def update_raw_data(self, data):
# these fields are only valid on creation of an instance, so they unwanted on detail view
data.pop('node_type', None)
@@ -375,13 +411,37 @@ class InstanceUnifiedJobsList(SubListAPIView):
class InstancePeersList(SubListAPIView):
name = _("Instance Peers")
name = _("Peers")
model = models.ReceptorAddress
serializer_class = serializers.ReceptorAddressSerializer
parent_model = models.Instance
model = models.Instance
serializer_class = serializers.InstanceSerializer
parent_access = 'read'
search_fields = {'hostname'}
relationship = 'peers'
search_fields = ('address',)
class InstanceReceptorAddressesList(SubListAPIView):
name = _("Receptor Addresses")
model = models.ReceptorAddress
parent_key = 'instance'
parent_model = models.Instance
serializer_class = serializers.ReceptorAddressSerializer
search_fields = ('address',)
class ReceptorAddressesList(ListAPIView):
name = _("Receptor Addresses")
model = models.ReceptorAddress
serializer_class = serializers.ReceptorAddressSerializer
search_fields = ('address',)
class ReceptorAddressDetail(RetrieveAPIView):
name = _("Receptor Address Detail")
model = models.ReceptorAddress
serializer_class = serializers.ReceptorAddressSerializer
parent_model = models.Instance
relationship = 'receptor_addresses'
class InstanceInstanceGroupsList(InstanceGroupMembershipMixin, SubListCreateAttachDetachAPIView):
@@ -476,6 +536,7 @@ class InstanceGroupAccessList(ResourceAccessList):
class InstanceGroupObjectRolesList(SubListAPIView):
deprecated = True
model = models.Role
serializer_class = serializers.RoleSerializer
parent_model = models.InstanceGroup
@@ -610,51 +671,81 @@ class ScheduleUnifiedJobsList(SubListAPIView):
name = _('Schedule Jobs List')
class AuthView(APIView):
'''List enabled single-sign-on endpoints'''
authentication_classes = []
permission_classes = (AllowAny,)
swagger_topic = 'System Configuration'
def get(self, request):
from rest_framework.reverse import reverse
data = OrderedDict()
err_backend, err_message = request.session.get('social_auth_error', (None, None))
auth_backends = list(load_backends(settings.AUTHENTICATION_BACKENDS, force_load=True).items())
# Return auth backends in consistent order: Google, GitHub, SAML.
auth_backends.sort(key=lambda x: 'g' if x[0] == 'google-oauth2' else x[0])
for name, backend in auth_backends:
login_url = reverse('social:begin', args=(name,))
complete_url = request.build_absolute_uri(reverse('social:complete', args=(name,)))
backend_data = {'login_url': login_url, 'complete_url': complete_url}
if name == 'saml':
backend_data['metadata_url'] = reverse('sso:saml_metadata')
for idp in sorted(settings.SOCIAL_AUTH_SAML_ENABLED_IDPS.keys()):
saml_backend_data = dict(backend_data.items())
saml_backend_data['login_url'] = '%s?idp=%s' % (login_url, idp)
full_backend_name = '%s:%s' % (name, idp)
if (err_backend == full_backend_name or err_backend == name) and err_message:
saml_backend_data['error'] = err_message
data[full_backend_name] = saml_backend_data
else:
if err_backend == name and err_message:
backend_data['error'] = err_message
data[name] = backend_data
return Response(data)
def immutablesharedfields(cls):
'''
Class decorator to prevent modifying shared resources when ALLOW_LOCAL_RESOURCE_MANAGEMENT setting is set to False.
Works by overriding these view methods:
- create
- delete
- perform_update
create and delete are overridden to raise a PermissionDenied exception.
perform_update is overridden to check if any shared fields are being modified,
and raise a PermissionDenied exception if so.
'''
# create instead of perform_create because some of our views
# override create instead of perform_create
if hasattr(cls, 'create'):
cls.original_create = cls.create
@functools.wraps(cls.create)
def create_wrapper(*args, **kwargs):
if settings.ALLOW_LOCAL_RESOURCE_MANAGEMENT:
return cls.original_create(*args, **kwargs)
raise PermissionDenied({'detail': _('Creation of this resource is not allowed. Create this resource via the platform ingress.')})
cls.create = create_wrapper
if hasattr(cls, 'delete'):
cls.original_delete = cls.delete
@functools.wraps(cls.delete)
def delete_wrapper(*args, **kwargs):
if settings.ALLOW_LOCAL_RESOURCE_MANAGEMENT:
return cls.original_delete(*args, **kwargs)
raise PermissionDenied({'detail': _('Deletion of this resource is not allowed. Delete this resource via the platform ingress.')})
cls.delete = delete_wrapper
if hasattr(cls, 'perform_update'):
cls.original_perform_update = cls.perform_update
@functools.wraps(cls.perform_update)
def update_wrapper(*args, **kwargs):
if not settings.ALLOW_LOCAL_RESOURCE_MANAGEMENT:
view, serializer = args
instance = view.get_object()
if instance:
if isinstance(instance, models.Organization):
shared_fields = OrganizationType._declared_fields.keys()
elif isinstance(instance, models.User):
shared_fields = UserType._declared_fields.keys()
elif isinstance(instance, models.Team):
shared_fields = TeamType._declared_fields.keys()
attrs = serializer.validated_data
for field in shared_fields:
if field in attrs and getattr(instance, field) != attrs[field]:
raise PermissionDenied({field: _(f"Cannot change shared field '{field}'. Alter this field via the platform ingress.")})
return cls.original_perform_update(*args, **kwargs)
cls.perform_update = update_wrapper
return cls
@immutablesharedfields
class TeamList(ListCreateAPIView):
model = models.Team
serializer_class = serializers.TeamSerializer
@immutablesharedfields
class TeamDetail(RetrieveUpdateDestroyAPIView):
model = models.Team
serializer_class = serializers.TeamSerializer
@immutablesharedfields
class TeamUsersList(BaseUsersList):
model = models.User
serializer_class = serializers.UserSerializer
@@ -664,6 +755,7 @@ class TeamUsersList(BaseUsersList):
class TeamRolesList(SubListAttachDetachAPIView):
deprecated = True
model = models.Role
serializer_class = serializers.RoleSerializerWithParentAccess
metadata_class = RoleMetadata
@@ -703,10 +795,12 @@ class TeamRolesList(SubListAttachDetachAPIView):
class TeamObjectRolesList(SubListAPIView):
deprecated = True
model = models.Role
serializer_class = serializers.RoleSerializer
parent_model = models.Team
search_fields = ('role_field', 'content_type__model')
deprecated = True
def get_queryset(self):
po = self.get_parent_object()
@@ -724,8 +818,15 @@ class TeamProjectsList(SubListAPIView):
self.check_parent_access(team)
model_ct = ContentType.objects.get_for_model(self.model)
parent_ct = ContentType.objects.get_for_model(self.parent_model)
proj_roles = models.Role.objects.filter(Q(ancestors__content_type=parent_ct) & Q(ancestors__object_id=team.pk), content_type=model_ct)
return self.model.accessible_objects(self.request.user, 'read_role').filter(pk__in=[t.content_object.pk for t in proj_roles])
rd = get_role_definition(team.member_role)
role = ObjectRole.objects.filter(object_id=team.id, content_type=parent_ct, role_definition=rd).first()
if role is None:
# Team has no permissions, therefore team has no projects
return self.model.objects.none()
else:
project_qs = self.model.accessible_objects(self.request.user, 'read_role')
return project_qs.filter(id__in=RoleEvaluation.objects.filter(content_type_id=model_ct.id, role=role).values_list('object_id'))
class TeamActivityStreamList(SubListAPIView):
@@ -740,10 +841,23 @@ class TeamActivityStreamList(SubListAPIView):
self.check_parent_access(parent)
qs = self.request.user.get_queryset(self.model)
return qs.filter(
Q(team=parent)
| Q(project__in=models.Project.accessible_objects(parent, 'read_role'))
| Q(credential__in=models.Credential.accessible_objects(parent, 'read_role'))
| Q(
project__in=RoleEvaluation.objects.filter(
role__in=parent.has_roles.all(), content_type_id=ContentType.objects.get_for_model(models.Project).id, codename='view_project'
)
.values_list('object_id')
.distinct()
)
| Q(
credential__in=RoleEvaluation.objects.filter(
role__in=parent.has_roles.all(), content_type_id=ContentType.objects.get_for_model(models.Credential).id, codename='view_credential'
)
.values_list('object_id')
.distinct()
)
)
@@ -995,10 +1109,12 @@ class ProjectAccessList(ResourceAccessList):
class ProjectObjectRolesList(SubListAPIView):
deprecated = True
model = models.Role
serializer_class = serializers.RoleSerializer
parent_model = models.Project
search_fields = ('role_field', 'content_type__model')
deprecated = True
def get_queryset(self):
po = self.get_parent_object()
@@ -1011,6 +1127,7 @@ class ProjectCopy(CopyAPIView):
copy_return_serializer_class = serializers.ProjectSerializer
@immutablesharedfields
class UserList(ListCreateAPIView):
model = models.User
serializer_class = serializers.UserSerializer
@@ -1028,121 +1145,6 @@ class UserMeList(ListAPIView):
return self.model.objects.filter(pk=self.request.user.pk)
class OAuth2ApplicationList(ListCreateAPIView):
name = _("OAuth 2 Applications")
model = models.OAuth2Application
serializer_class = serializers.OAuth2ApplicationSerializer
swagger_topic = 'Authentication'
class OAuth2ApplicationDetail(RetrieveUpdateDestroyAPIView):
name = _("OAuth 2 Application Detail")
model = models.OAuth2Application
serializer_class = serializers.OAuth2ApplicationSerializer
swagger_topic = 'Authentication'
def update_raw_data(self, data):
data.pop('client_secret', None)
return super(OAuth2ApplicationDetail, self).update_raw_data(data)
class ApplicationOAuth2TokenList(SubListCreateAPIView):
name = _("OAuth 2 Application Tokens")
model = models.OAuth2AccessToken
serializer_class = serializers.OAuth2TokenSerializer
parent_model = models.OAuth2Application
relationship = 'oauth2accesstoken_set'
parent_key = 'application'
swagger_topic = 'Authentication'
class OAuth2ApplicationActivityStreamList(SubListAPIView):
model = models.ActivityStream
serializer_class = serializers.ActivityStreamSerializer
parent_model = models.OAuth2Application
relationship = 'activitystream_set'
swagger_topic = 'Authentication'
search_fields = ('changes',)
class OAuth2TokenList(ListCreateAPIView):
name = _("OAuth2 Tokens")
model = models.OAuth2AccessToken
serializer_class = serializers.OAuth2TokenSerializer
swagger_topic = 'Authentication'
class OAuth2UserTokenList(SubListCreateAPIView):
name = _("OAuth2 User Tokens")
model = models.OAuth2AccessToken
serializer_class = serializers.OAuth2TokenSerializer
parent_model = models.User
relationship = 'main_oauth2accesstoken'
parent_key = 'user'
swagger_topic = 'Authentication'
class UserAuthorizedTokenList(SubListCreateAPIView):
name = _("OAuth2 User Authorized Access Tokens")
model = models.OAuth2AccessToken
serializer_class = serializers.UserAuthorizedTokenSerializer
parent_model = models.User
relationship = 'oauth2accesstoken_set'
parent_key = 'user'
swagger_topic = 'Authentication'
def get_queryset(self):
return get_access_token_model().objects.filter(application__isnull=False, user=self.request.user)
class OrganizationApplicationList(SubListCreateAPIView):
name = _("Organization OAuth2 Applications")
model = models.OAuth2Application
serializer_class = serializers.OAuth2ApplicationSerializer
parent_model = models.Organization
relationship = 'applications'
parent_key = 'organization'
swagger_topic = 'Authentication'
class UserPersonalTokenList(SubListCreateAPIView):
name = _("OAuth2 Personal Access Tokens")
model = models.OAuth2AccessToken
serializer_class = serializers.UserPersonalTokenSerializer
parent_model = models.User
relationship = 'main_oauth2accesstoken'
parent_key = 'user'
swagger_topic = 'Authentication'
def get_queryset(self):
return get_access_token_model().objects.filter(application__isnull=True, user=self.request.user)
class OAuth2TokenDetail(RetrieveUpdateDestroyAPIView):
name = _("OAuth Token Detail")
model = models.OAuth2AccessToken
serializer_class = serializers.OAuth2TokenDetailSerializer
swagger_topic = 'Authentication'
class OAuth2TokenActivityStreamList(SubListAPIView):
model = models.ActivityStream
serializer_class = serializers.ActivityStreamSerializer
parent_model = models.OAuth2AccessToken
relationship = 'activitystream_set'
swagger_topic = 'Authentication'
search_fields = ('changes',)
class UserTeamsList(SubListAPIView):
model = models.Team
serializer_class = serializers.TeamSerializer
@@ -1156,6 +1158,7 @@ class UserTeamsList(SubListAPIView):
class UserRolesList(SubListAttachDetachAPIView):
deprecated = True
model = models.Role
serializer_class = serializers.RoleSerializerWithParentAccess
metadata_class = RoleMetadata
@@ -1180,7 +1183,16 @@ class UserRolesList(SubListAttachDetachAPIView):
user = get_object_or_400(models.User, pk=self.kwargs['pk'])
role = get_object_or_400(models.Role, pk=sub_id)
credential_content_type = ContentType.objects.get_for_model(models.Credential)
content_types = ContentType.objects.get_for_models(models.Organization, models.Team, models.Credential) # dict of {model: content_type}
# Prevent a user from being associated with a team/org when ALLOW_LOCAL_RESOURCE_MANAGEMENT is False
if not settings.ALLOW_LOCAL_RESOURCE_MANAGEMENT:
for model in [models.Organization, models.Team]:
ct = content_types[model]
if role.content_type == ct and role.role_field in ['member_role', 'admin_role']:
data = dict(msg=_(f"Cannot directly modify user membership to {ct.model}. Direct shared resource management disabled"))
return Response(data, status=status.HTTP_403_FORBIDDEN)
credential_content_type = content_types[models.Credential]
if role.content_type == credential_content_type:
if 'disassociate' not in request.data and role.content_object.organization and user not in role.content_object.organization.member_role:
data = dict(msg=_("You cannot grant credential access to a user not in the credentials' organization"))
@@ -1252,6 +1264,7 @@ class UserActivityStreamList(SubListAPIView):
return qs.filter(Q(actor=parent) | Q(user__in=[parent]))
@immutablesharedfields
class UserDetail(RetrieveUpdateDestroyAPIView):
model = models.User
serializer_class = serializers.UserSerializer
@@ -1397,7 +1410,7 @@ class OrganizationCredentialList(SubListCreateAPIView):
self.check_parent_access(organization)
user_visible = models.Credential.accessible_objects(self.request.user, 'read_role').all()
org_set = models.Credential.accessible_objects(organization.admin_role, 'read_role').all()
org_set = models.Credential.objects.filter(organization=organization)
if self.request.user.is_superuser or self.request.user.is_system_auditor:
return org_set
@@ -1430,10 +1443,12 @@ class CredentialAccessList(ResourceAccessList):
class CredentialObjectRolesList(SubListAPIView):
deprecated = True
model = models.Role
serializer_class = serializers.RoleSerializer
parent_model = models.Credential
search_fields = ('role_field', 'content_type__model')
def get_queryset(self):
po = self.get_parent_object()
@@ -2064,9 +2079,9 @@ class InventorySourceNotificationTemplatesAnyList(SubListCreateAttachDetachAPIVi
def post(self, request, *args, **kwargs):
parent = self.get_parent_object()
if parent.source not in models.CLOUD_INVENTORY_SOURCES:
if parent.source not in compute_cloud_inventory_sources():
return Response(
dict(msg=_("Notification Templates can only be assigned when source is one of {}.").format(models.CLOUD_INVENTORY_SOURCES, parent.source)),
dict(msg=_("Notification Templates can only be assigned when source is one of {}.").format(compute_cloud_inventory_sources(), parent.source)),
status=status.HTTP_400_BAD_REQUEST,
)
return super(InventorySourceNotificationTemplatesAnyList, self).post(request, *args, **kwargs)
@@ -2220,12 +2235,16 @@ class JobTemplateList(ListCreateAPIView):
serializer_class = serializers.JobTemplateSerializer
always_allow_superuser = False
def post(self, request, *args, **kwargs):
ret = super(JobTemplateList, self).post(request, *args, **kwargs)
if ret.status_code == 201:
job_template = models.JobTemplate.objects.get(id=ret.data['id'])
job_template.admin_role.members.add(request.user)
return ret
def check_permissions(self, request):
if request.method == 'POST':
if request.user.is_anonymous:
self.permission_denied(request)
else:
can_access, messages = request.user.can_access_with_errors(self.model, 'add', request.data)
if not can_access:
self.permission_denied(request, message=messages)
super(JobTemplateList, self).check_permissions(request)
class JobTemplateDetail(RelatedJobsPreventDeleteMixin, RetrieveUpdateDestroyAPIView):
@@ -2606,12 +2625,7 @@ class JobTemplateCallback(GenericAPIView):
host for the current request.
"""
# Find the list of remote host names/IPs to check.
remote_hosts = set()
for header in settings.REMOTE_HOST_HEADERS:
for value in self.request.META.get(header, '').split(','):
value = value.strip()
if value:
remote_hosts.add(value)
remote_hosts = set(get_remote_hosts(self.request))
# Add the reverse lookup of IP addresses.
for rh in list(remote_hosts):
try:
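The inlined header walk becomes a get_remote_hosts() helper. Judging from the removed lines, the helper presumably does the equivalent of this sketch (not necessarily its actual body):
from django.conf import settings

def get_remote_hosts(request):
    # Collect every candidate host/IP from the configured headers, splitting
    # comma-separated values such as X-Forwarded-For chains.
    remote_hosts = set()
    for header in settings.REMOTE_HOST_HEADERS:
        for value in request.META.get(header, '').split(','):
            value = value.strip()
            if value:
                remote_hosts.add(value)
    return remote_hosts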
@@ -2772,10 +2786,12 @@ class JobTemplateAccessList(ResourceAccessList):
class JobTemplateObjectRolesList(SubListAPIView):
deprecated = True
model = models.Role
serializer_class = serializers.RoleSerializer
parent_model = models.JobTemplate
search_fields = ('role_field', 'content_type__model')
def get_queryset(self):
po = self.get_parent_object()
@@ -2949,6 +2965,17 @@ class WorkflowJobTemplateList(ListCreateAPIView):
serializer_class = serializers.WorkflowJobTemplateSerializer
always_allow_superuser = False
def check_permissions(self, request):
if request.method == 'POST':
if request.user.is_anonymous:
self.permission_denied(request)
else:
can_access, messages = request.user.can_access_with_errors(self.model, 'add', request.data)
if not can_access:
self.permission_denied(request, message=messages)
super(WorkflowJobTemplateList, self).check_permissions(request)
class WorkflowJobTemplateDetail(RelatedJobsPreventDeleteMixin, RetrieveUpdateDestroyAPIView):
model = models.WorkflowJobTemplate
@@ -3158,10 +3185,12 @@ class WorkflowJobTemplateAccessList(ResourceAccessList):
class WorkflowJobTemplateObjectRolesList(SubListAPIView):
deprecated = True
model = models.Role
serializer_class = serializers.RoleSerializer
parent_model = models.WorkflowJobTemplate
search_fields = ('role_field', 'content_type__model')
def get_queryset(self):
po = self.get_parent_object()
@@ -3406,6 +3435,7 @@ class JobRelaunch(RetrieveAPIView):
copy_kwargs = {}
retry_hosts = serializer.validated_data.get('hosts', None)
job_type = serializer.validated_data.get('job_type', None)
if retry_hosts and retry_hosts != 'all':
if obj.status in ACTIVE_STATES:
return Response(
@@ -3426,6 +3456,8 @@ class JobRelaunch(RetrieveAPIView):
)
copy_kwargs['limit'] = ','.join(retry_host_list)
if job_type:
copy_kwargs['job_type'] = job_type
new_job = obj.copy_unified_job(**copy_kwargs)
result = new_job.signal_start(**serializer.validated_data['credential_passwords'])
if not result:
@@ -4025,7 +4057,8 @@ class UnifiedJobStdout(RetrieveAPIView):
# Remove any ANSI escape sequences containing job event data.
content = re.sub(r'\x1b\[K(?:[A-Za-z0-9+/=]+\x1b\[\d+D)+\x1b\[K', '', content)
body = ansiconv.to_html(html.escape(content))
conv = Ansi2HTMLConverter()
body = conv.convert(html.escape(content))
context = {'title': get_view_name(self.__class__), 'body': mark_safe(body), 'dark': dark_bg, 'content_only': content_only}
data = render_to_string('api/stdout.html', context).strip()
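ansiconv is swapped for the ansi2html package here. A minimal usage sketch of Ansi2HTMLConverter, escaping first as the hunk does so that only the ANSI codes are interpreted:
import html
from ansi2html import Ansi2HTMLConverter

conv = Ansi2HTMLConverter()
# convert() returns a full HTML document by default; full=False yields just the fragment.
print(conv.convert(html.escape('\x1b[31mfailed\x1b[0m')))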
@@ -4170,6 +4203,7 @@ class ActivityStreamDetail(RetrieveAPIView):
class RoleList(ListAPIView):
deprecated = True
model = models.Role
serializer_class = serializers.RoleSerializer
permission_classes = (IsAuthenticated,)
@@ -4177,11 +4211,13 @@ class RoleList(ListAPIView):
class RoleDetail(RetrieveAPIView):
deprecated = True
model = models.Role
serializer_class = serializers.RoleSerializer
class RoleUsersList(SubListAttachDetachAPIView):
deprecated = True
model = models.User
serializer_class = serializers.UserSerializer
parent_model = models.Role
@@ -4202,7 +4238,15 @@ class RoleUsersList(SubListAttachDetachAPIView):
user = get_object_or_400(models.User, pk=sub_id)
role = self.get_parent_object()
credential_content_type = ContentType.objects.get_for_model(models.Credential)
content_types = ContentType.objects.get_for_models(models.Organization, models.Team, models.Credential) # dict of {model: content_type}
if not settings.ALLOW_LOCAL_RESOURCE_MANAGEMENT:
for model in [models.Organization, models.Team]:
ct = content_types[model]
if role.content_type == ct and role.role_field in ['member_role', 'admin_role']:
data = dict(msg=_(f"Cannot directly modify user membership to {ct.model}. Direct shared resource management disabled"))
return Response(data, status=status.HTTP_403_FORBIDDEN)
credential_content_type = content_types[models.Credential]
if role.content_type == credential_content_type:
if 'disassociate' not in request.data and role.content_object.organization and user not in role.content_object.organization.member_role:
data = dict(msg=_("You cannot grant credential access to a user not in the credentials' organization"))
@@ -4216,6 +4260,7 @@ class RoleUsersList(SubListAttachDetachAPIView):
class RoleTeamsList(SubListAttachDetachAPIView):
deprecated = True
model = models.Team
serializer_class = serializers.TeamSerializer
parent_model = models.Role
@@ -4260,10 +4305,12 @@ class RoleTeamsList(SubListAttachDetachAPIView):
team.member_role.children.remove(role)
else:
team.member_role.children.add(role)
return Response(status=status.HTTP_204_NO_CONTENT)
class RoleParentsList(SubListAPIView):
deprecated = True
model = models.Role
serializer_class = serializers.RoleSerializer
parent_model = models.Role
@@ -4277,6 +4324,7 @@ class RoleParentsList(SubListAPIView):
class RoleChildrenList(SubListAPIView):
deprecated = True
model = models.Role
serializer_class = serializers.RoleSerializer
parent_model = models.Role

View File

@@ -10,6 +10,7 @@ from awx.api.generics import APIView, Response
from awx.api.permissions import AnalyticsPermission
from awx.api.versioning import reverse
from awx.main.utils import get_awx_version
from awx.main.utils.analytics_proxy import OIDCClient, DEFAULT_OIDC_TOKEN_ENDPOINT
from rest_framework import status
from collections import OrderedDict
@@ -48,23 +49,23 @@ class AnalyticsRootView(APIView):
def get(self, request, format=None):
data = OrderedDict()
data['authorized'] = reverse('api:analytics_authorized')
data['reports'] = reverse('api:analytics_reports_list')
data['report_options'] = reverse('api:analytics_report_options_list')
data['adoption_rate'] = reverse('api:analytics_adoption_rate')
data['adoption_rate_options'] = reverse('api:analytics_adoption_rate_options')
data['event_explorer'] = reverse('api:analytics_event_explorer')
data['event_explorer_options'] = reverse('api:analytics_event_explorer_options')
data['host_explorer'] = reverse('api:analytics_host_explorer')
data['host_explorer_options'] = reverse('api:analytics_host_explorer_options')
data['job_explorer'] = reverse('api:analytics_job_explorer')
data['job_explorer_options'] = reverse('api:analytics_job_explorer_options')
data['probe_templates'] = reverse('api:analytics_probe_templates_explorer')
data['probe_templates_options'] = reverse('api:analytics_probe_templates_options')
data['probe_template_for_hosts'] = reverse('api:analytics_probe_template_for_hosts_explorer')
data['probe_template_for_hosts_options'] = reverse('api:analytics_probe_template_for_hosts_options')
data['roi_templates'] = reverse('api:analytics_roi_templates_explorer')
data['roi_templates_options'] = reverse('api:analytics_roi_templates_options')
data['authorized'] = reverse('api:analytics_authorized', request=request)
data['reports'] = reverse('api:analytics_reports_list', request=request)
data['report_options'] = reverse('api:analytics_report_options_list', request=request)
data['adoption_rate'] = reverse('api:analytics_adoption_rate', request=request)
data['adoption_rate_options'] = reverse('api:analytics_adoption_rate_options', request=request)
data['event_explorer'] = reverse('api:analytics_event_explorer', request=request)
data['event_explorer_options'] = reverse('api:analytics_event_explorer_options', request=request)
data['host_explorer'] = reverse('api:analytics_host_explorer', request=request)
data['host_explorer_options'] = reverse('api:analytics_host_explorer_options', request=request)
data['job_explorer'] = reverse('api:analytics_job_explorer', request=request)
data['job_explorer_options'] = reverse('api:analytics_job_explorer_options', request=request)
data['probe_templates'] = reverse('api:analytics_probe_templates_explorer', request=request)
data['probe_templates_options'] = reverse('api:analytics_probe_templates_options', request=request)
data['probe_template_for_hosts'] = reverse('api:analytics_probe_template_for_hosts_explorer', request=request)
data['probe_template_for_hosts_options'] = reverse('api:analytics_probe_template_for_hosts_options', request=request)
data['roi_templates'] = reverse('api:analytics_roi_templates_explorer', request=request)
data['roi_templates_options'] = reverse('api:analytics_roi_templates_options', request=request)
return Response(data)
@@ -179,28 +180,48 @@ class AnalyticsGenericView(APIView):
return Response(response.content, status=response.status_code)
@staticmethod
def _base_auth_request(request: requests.Request, method: str, url: str, user: str, pw: str, headers: dict[str, str]) -> requests.Response:
response = requests.request(
method,
url,
auth=(user, pw),
verify=settings.INSIGHTS_CERT_PATH,
params=getattr(request, 'query_params', {}),
headers=headers,
json=getattr(request, 'data', {}),
timeout=(31, 31),
)
return response
def _send_to_analytics(self, request, method):
try:
headers = self._request_headers(request)
self._get_setting('INSIGHTS_TRACKING_STATE', False, ERROR_UPLOAD_NOT_ENABLED)
url = self._get_analytics_url(request.path)
rh_user = self._get_setting('REDHAT_USERNAME', None, ERROR_MISSING_USER)
rh_password = self._get_setting('REDHAT_PASSWORD', None, ERROR_MISSING_PASSWORD)
if method not in ["GET", "POST", "OPTIONS"]:
return self._error_response(ERROR_UNSUPPORTED_METHOD, method, remote=False, status_code=status.HTTP_500_INTERNAL_SERVER_ERROR)
else:
response = requests.request(
url = self._get_analytics_url(request.path)
try:
rh_user = self._get_setting('REDHAT_USERNAME', None, ERROR_MISSING_USER)
rh_password = self._get_setting('REDHAT_PASSWORD', None, ERROR_MISSING_PASSWORD)
client = OIDCClient(rh_user, rh_password, DEFAULT_OIDC_TOKEN_ENDPOINT, ['api.console'])
response = client.make_request(
method,
url,
auth=(rh_user, rh_password),
verify=settings.INSIGHTS_CERT_PATH,
params=request.query_params,
headers=headers,
json=request.data,
verify=settings.INSIGHTS_CERT_PATH,
params=getattr(request, 'query_params', {}),
json=getattr(request, 'data', {}),
timeout=(31, 31),
)
except requests.RequestException:
logger.error("Automation Analytics API request failed, trying base auth method")
response = self._base_auth_request(request, method, url, rh_user, rh_password, headers)
except MissingSettings:
rh_user = self._get_setting('SUBSCRIPTIONS_USERNAME', None, ERROR_MISSING_USER)
rh_password = self._get_setting('SUBSCRIPTIONS_PASSWORD', None, ERROR_MISSING_PASSWORD)
response = self._base_auth_request(request, method, url, rh_user, rh_password, headers)
#
# Missing or wrong user/pass
#
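The resulting control flow is: try OIDC client-credentials first, drop to basic auth if the OIDC request fails, and use the SUBSCRIPTIONS_* credentials when the REDHAT_* settings are missing entirely. A runnable sketch of that chain, with a stand-in MissingSettings and callables replacing the OIDCClient/_base_auth_request plumbing:
import requests

class MissingSettings(Exception):
    """Stand-in for the error _get_setting raises when a setting is unset."""

def fetch_with_fallback(get_setting, oidc_request, basic_request):
    try:
        user, pw = get_setting('REDHAT_USERNAME'), get_setting('REDHAT_PASSWORD')
        try:
            return oidc_request(user, pw)   # preferred path: OIDC bearer token
        except requests.RequestException:
            return basic_request(user, pw)  # token/network failure: basic auth
    except MissingSettings:
        user, pw = get_setting('SUBSCRIPTIONS_USERNAME'), get_setting('SUBSCRIPTIONS_PASSWORD')
        return basic_request(user, pw)      # no REDHAT_* creds: subscriptions creds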

View File

@@ -34,6 +34,7 @@ class BulkView(APIView):
'''List top level resources'''
data = OrderedDict()
data['host_create'] = reverse('api:bulk_host_create', request=request)
data['host_delete'] = reverse('api:bulk_host_delete', request=request)
data['job_launch'] = reverse('api:bulk_job_launch', request=request)
return Response(data)
@@ -72,3 +73,20 @@ class BulkHostCreateView(GenericAPIView):
result = serializer.create(serializer.validated_data)
return Response(result, status=status.HTTP_201_CREATED)
return Response(serializer.errors, status=status.HTTP_400_BAD_REQUEST)
class BulkHostDeleteView(GenericAPIView):
permission_classes = [IsAuthenticated]
model = Host
serializer_class = serializers.BulkHostDeleteSerializer
allowed_methods = ['GET', 'POST', 'OPTIONS']
def get(self, request):
return Response({"detail": "Bulk delete hosts with this endpoint"}, status=status.HTTP_200_OK)
def post(self, request):
serializer = serializers.BulkHostDeleteSerializer(data=request.data, context={'request': request})
if serializer.is_valid():
result = serializer.delete(serializer.validated_data)
return Response(result, status=status.HTTP_201_CREATED)
return Response(serializer.errors, status=status.HTTP_400_BAD_REQUEST)
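A hypothetical client call against the new endpoint; the payload shape, a list of host ids under "hosts", is an assumption based on the serializer's role, and the host and token are placeholders:
import requests

resp = requests.post(
    'https://awx.example.org/api/v2/bulk/host_delete/',  # hypothetical AWX host
    headers={'Authorization': 'Bearer <token>'},
    json={'hosts': [42, 43, 44]},  # assumed payload shape
)
print(resp.status_code, resp.json())  # note the view replies 201 even for deletes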

View File

@@ -124,10 +124,19 @@ def generate_inventory_yml(instance_obj):
def generate_group_vars_all_yml(instance_obj):
# get peers
peers = []
for instance in instance_obj.peers.all():
peers.append(dict(host=instance.hostname, port=instance.listener_port))
all_yaml = render_to_string("instance_install_bundle/group_vars/all.yml", context=dict(instance=instance_obj, peers=peers))
for addr in instance_obj.peers.select_related('instance'):
peers.append(dict(address=addr.get_full_address(), protocol=addr.protocol))
context = dict(instance=instance_obj, peers=peers)
canonical_addr = instance_obj.canonical_address
if canonical_addr:
context['listener_port'] = canonical_addr.port
protocol = canonical_addr.protocol if canonical_addr.protocol != 'wss' else 'ws'
context['listener_protocol'] = protocol
all_yaml = render_to_string("instance_install_bundle/group_vars/all.yml", context=context)
# collapse consecutive newlines into a single newline
return re.sub(r'\n+', '\n', all_yaml)

View File

@@ -152,6 +152,7 @@ class InventoryObjectRolesList(SubListAPIView):
serializer_class = RoleSerializer
parent_model = Inventory
search_fields = ('role_field', 'content_type__model')
deprecated = True
def get_queryset(self):
po = self.get_parent_object()

View File

@@ -17,7 +17,7 @@ class MeshVisualizer(APIView):
def get(self, request, format=None):
data = {
'nodes': InstanceNodeSerializer(Instance.objects.all(), many=True).data,
'links': InstanceLinkSerializer(InstanceLink.objects.select_related('target', 'source'), many=True).data,
'links': InstanceLinkSerializer(InstanceLink.objects.select_related('target__instance', 'source'), many=True).data,
}
return Response(data)

View File

@@ -15,6 +15,7 @@ from rest_framework.response import Response
from rest_framework import status
from awx.main.constants import ACTIVE_STATES
from awx.main.models import Organization
from awx.main.utils import get_object_or_400
from awx.main.models.ha import Instance, InstanceGroup, schedule_policy_task
from awx.main.models.organization import Team
@@ -60,6 +61,21 @@ class UnifiedJobDeletionMixin(object):
return Response(status=status.HTTP_204_NO_CONTENT)
class OrganizationInstanceGroupMembershipMixin(object):
"""
This mixin overloads attach/detach so that it calls Organization.save(),
to ensure instance group updates are persisted
"""
def unattach(self, request, *args, **kwargs):
with transaction.atomic():
organization_queryset = Organization.objects.select_for_update()
organization = organization_queryset.get(pk=self.get_parent_object().id)
response = super(OrganizationInstanceGroupMembershipMixin, self).unattach(request, *args, **kwargs)
organization.save()
return response
class InstanceGroupMembershipMixin(object):
"""
This mixin overloads attach/detach so that it calls InstanceGroup.save(),

View File

@@ -52,16 +52,19 @@ from awx.api.serializers import (
WorkflowJobTemplateSerializer,
CredentialSerializer,
)
from awx.api.views.mixin import RelatedJobsPreventDeleteMixin, OrganizationCountsMixin
from awx.api.views.mixin import RelatedJobsPreventDeleteMixin, OrganizationCountsMixin, OrganizationInstanceGroupMembershipMixin
from awx.api.views import immutablesharedfields
logger = logging.getLogger('awx.api.views.organization')
@immutablesharedfields
class OrganizationList(OrganizationCountsMixin, ListCreateAPIView):
model = Organization
serializer_class = OrganizationSerializer
@immutablesharedfields
class OrganizationDetail(RelatedJobsPreventDeleteMixin, RetrieveUpdateDestroyAPIView):
model = Organization
serializer_class = OrganizationSerializer
@@ -104,6 +107,7 @@ class OrganizationInventoriesList(SubListAPIView):
relationship = 'inventories'
@immutablesharedfields
class OrganizationUsersList(BaseUsersList):
model = User
serializer_class = UserSerializer
@@ -112,6 +116,7 @@ class OrganizationUsersList(BaseUsersList):
ordering = ('username',)
@immutablesharedfields
class OrganizationAdminsList(BaseUsersList):
model = User
serializer_class = UserSerializer
@@ -150,6 +155,7 @@ class OrganizationWorkflowJobTemplatesList(SubListCreateAPIView):
parent_key = 'organization'
@immutablesharedfields
class OrganizationTeamsList(SubListCreateAttachDetachAPIView):
model = Team
serializer_class = TeamSerializer
@@ -196,7 +202,7 @@ class OrganizationNotificationTemplatesApprovalList(OrganizationNotificationTemp
relationship = 'notification_templates_approvals'
class OrganizationInstanceGroupsList(SubListAttachDetachAPIView):
class OrganizationInstanceGroupsList(OrganizationInstanceGroupMembershipMixin, SubListAttachDetachAPIView):
model = InstanceGroup
serializer_class = InstanceGroupSerializer
parent_model = Organization
@@ -226,6 +232,7 @@ class OrganizationObjectRolesList(SubListAPIView):
serializer_class = RoleSerializer
parent_model = Organization
search_fields = ('role_field', 'content_type__model')
deprecated = True
def get_queryset(self):
po = self.get_parent_object()

View File

@@ -13,6 +13,7 @@ from django.utils.decorators import method_decorator
from django.views.decorators.csrf import ensure_csrf_cookie
from django.template.loader import render_to_string
from django.utils.translation import gettext_lazy as _
from django.urls import reverse as django_reverse
from rest_framework.permissions import AllowAny, IsAuthenticated
from rest_framework.response import Response
@@ -27,7 +28,7 @@ from awx.main.analytics import all_collectors
from awx.main.ha import is_ha_environment
from awx.main.utils import get_awx_version, get_custom_venv_choices
from awx.main.utils.licensing import validate_entitlement_manifest
from awx.api.versioning import reverse, drf_reverse
from awx.api.versioning import URLPathVersioning, reverse, drf_reverse
from awx.main.constants import PRIVILEGE_ESCALATION_METHODS
from awx.main.models import Project, Organization, Instance, InstanceGroup, JobTemplate
from awx.main.utils import set_environ
@@ -39,19 +40,17 @@ logger = logging.getLogger('awx.api.views.root')
class ApiRootView(APIView):
permission_classes = (AllowAny,)
name = _('REST API')
versioning_class = None
versioning_class = URLPathVersioning
swagger_topic = 'Versioning'
@method_decorator(ensure_csrf_cookie)
def get(self, request, format=None):
'''List supported API versions'''
v2 = reverse('api:api_v2_root_view', kwargs={'version': 'v2'})
v2 = reverse('api:api_v2_root_view', request=request, kwargs={'version': 'v2'})
data = OrderedDict()
data['description'] = _('AWX REST API')
data['current_version'] = v2
data['available_versions'] = dict(v2=v2)
data['oauth2'] = drf_reverse('api:oauth_authorization_root_view')
data['custom_logo'] = settings.CUSTOM_LOGO
data['custom_login_info'] = settings.CUSTOM_LOGIN_INFO
data['login_redirect_override'] = settings.LOGIN_REDIRECT_OVERRIDE
@@ -60,20 +59,6 @@ class ApiRootView(APIView):
return Response(data)
class ApiOAuthAuthorizationRootView(APIView):
permission_classes = (AllowAny,)
name = _("API OAuth 2 Authorization Root")
versioning_class = None
swagger_topic = 'Authentication'
def get(self, request, format=None):
data = OrderedDict()
data['authorize'] = drf_reverse('api:authorize')
data['token'] = drf_reverse('api:token')
data['revoke_token'] = drf_reverse('api:revoke-token')
return Response(data)
class ApiVersionRootView(APIView):
permission_classes = (AllowAny,)
swagger_topic = 'Versioning'
@@ -84,6 +69,7 @@ class ApiVersionRootView(APIView):
data['ping'] = reverse('api:api_v2_ping_view', request=request)
data['instances'] = reverse('api:instance_list', request=request)
data['instance_groups'] = reverse('api:instance_group_list', request=request)
data['receptor_addresses'] = reverse('api:receptor_addresses_list', request=request)
data['config'] = reverse('api:api_v2_config_view', request=request)
data['settings'] = reverse('api:setting_category_list', request=request)
data['me'] = reverse('api:user_me_list', request=request)
@@ -97,8 +83,6 @@ class ApiVersionRootView(APIView):
data['credentials'] = reverse('api:credential_list', request=request)
data['credential_types'] = reverse('api:credential_type_list', request=request)
data['credential_input_sources'] = reverse('api:credential_input_source_list', request=request)
data['applications'] = reverse('api:o_auth2_application_list', request=request)
data['tokens'] = reverse('api:o_auth2_token_list', request=request)
data['metrics'] = reverse('api:metrics_view', request=request)
data['inventory'] = reverse('api:inventory_list', request=request)
data['constructed_inventory'] = reverse('api:constructed_inventory_list', request=request)
@@ -129,6 +113,10 @@ class ApiVersionRootView(APIView):
data['mesh_visualizer'] = reverse('api:mesh_visualizer_view', request=request)
data['bulk'] = reverse('api:bulk', request=request)
data['analytics'] = reverse('api:analytics_root_view', request=request)
data['service_index'] = django_reverse('service-index-root')
data['role_definitions'] = django_reverse('roledefinition-list')
data['role_user_assignments'] = django_reverse('roleuserassignment-list')
data['role_team_assignments'] = django_reverse('roleteamassignment-list')
return Response(data)
@@ -279,9 +267,6 @@ class ApiV2ConfigView(APIView):
pendo_state = settings.PENDO_TRACKING_STATE if settings.PENDO_TRACKING_STATE in ('off', 'anonymous', 'detailed') else 'off'
# Guarding against settings.UI_NEXT being set to a non-boolean value
ui_next_state = settings.UI_NEXT if settings.UI_NEXT in (True, False) else False
data = dict(
time_zone=settings.TIME_ZONE,
license_info=license_data,
@@ -290,18 +275,8 @@ class ApiV2ConfigView(APIView):
analytics_status=pendo_state,
analytics_collectors=all_collectors(),
become_methods=PRIVILEGE_ESCALATION_METHODS,
ui_next=ui_next_state,
)
# If LDAP is enabled, user_ldap_fields will return a list of field
# names that are managed by LDAP and should be read-only for users with
# a non-empty ldap_dn attribute.
if getattr(settings, 'AUTH_LDAP_SERVER_URI', None):
user_ldap_fields = ['username', 'password']
user_ldap_fields.extend(getattr(settings, 'AUTH_LDAP_USER_ATTR_MAP', {}).keys())
user_ldap_fields.extend(getattr(settings, 'AUTH_LDAP_USER_FLAGS_BY_GROUP', {}).keys())
data['user_ldap_fields'] = user_ldap_fields
if (
request.user.is_superuser
or request.user.is_system_auditor

View File

@@ -1,4 +1,4 @@
from hashlib import sha1
from hashlib import sha1, sha256
import hmac
import logging
import urllib.parse
@@ -99,14 +99,31 @@ class WebhookReceiverBase(APIView):
def get_signature(self):
raise NotImplementedError
def must_check_signature(self):
return True
def is_ignored_request(self):
return False
def check_signature(self, obj):
if not obj.webhook_key:
raise PermissionDenied
if not self.must_check_signature():
logger.debug("skipping signature validation")
return
mac = hmac.new(force_bytes(obj.webhook_key), msg=force_bytes(self.request.body), digestmod=sha1)
logger.debug("header signature: %s", self.get_signature())
hash_alg, expected_digest = self.get_signature()
if hash_alg == 'sha1':
mac = hmac.new(force_bytes(obj.webhook_key), msg=force_bytes(self.request.body), digestmod=sha1)
elif hash_alg == 'sha256':
mac = hmac.new(force_bytes(obj.webhook_key), msg=force_bytes(self.request.body), digestmod=sha256)
else:
logger.debug("Unsupported signature type, supported: sha1, sha256, received: {}".format(hash_alg))
raise PermissionDenied
logger.debug("header signature: %s", expected_digest)
logger.debug("calculated signature: %s", force_bytes(mac.hexdigest()))
if not hmac.compare_digest(force_bytes(mac.hexdigest()), self.get_signature()):
if not hmac.compare_digest(force_bytes(mac.hexdigest()), expected_digest):
raise PermissionDenied
@csrf_exempt
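With sha256 now accepted alongside sha1, a matching signature for a test request against this receiver can be produced like so (a sketch; the alg=hexdigest header format is what get_signature() splits on):
import hmac
from hashlib import sha256

webhook_key = b'shared-secret'
body = b'{"eventKey": "repo:refs_changed"}'
digest = hmac.new(webhook_key, msg=body, digestmod=sha256).hexdigest()
headers = {'X-Hub-Signature': 'sha256=' + digest}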
@@ -118,6 +135,10 @@ class WebhookReceiverBase(APIView):
obj = self.get_object()
self.check_signature(obj)
if self.is_ignored_request():
# This was an ignored request type (e.g. ping), don't act on it
return Response({'message': _("Webhook ignored")}, status=status.HTTP_200_OK)
event_type = self.get_event_type()
event_guid = self.get_event_guid()
event_ref = self.get_event_ref()
@@ -186,7 +207,7 @@ class GithubWebhookReceiver(WebhookReceiverBase):
if hash_alg != 'sha1':
logger.debug("Unsupported signature type, expected: sha1, received: {}".format(hash_alg))
raise PermissionDenied
return force_bytes(signature)
return hash_alg, force_bytes(signature)
class GitlabWebhookReceiver(WebhookReceiverBase):
@@ -214,15 +235,73 @@ class GitlabWebhookReceiver(WebhookReceiverBase):
return "{}://{}/api/v4/projects/{}/statuses/{}".format(parsed.scheme, parsed.netloc, project['id'], self.get_event_ref())
def get_signature(self):
return force_bytes(self.request.META.get('HTTP_X_GITLAB_TOKEN') or '')
def check_signature(self, obj):
if not obj.webhook_key:
raise PermissionDenied
token_from_request = force_bytes(self.request.META.get('HTTP_X_GITLAB_TOKEN') or '')
# GitLab only returns the secret token, not an hmac hash. Use
# the hmac `compare_digest` helper function to prevent timing
# analysis by attackers.
if not hmac.compare_digest(force_bytes(obj.webhook_key), self.get_signature()):
if not hmac.compare_digest(force_bytes(obj.webhook_key), token_from_request):
raise PermissionDenied
class BitbucketDcWebhookReceiver(WebhookReceiverBase):
service = 'bitbucket_dc'
ref_keys = {
'repo:refs_changed': 'changes.0.toHash',
'mirror:repo_synchronized': 'changes.0.toHash',
'pr:opened': 'pullRequest.toRef.latestCommit',
'pr:from_ref_updated': 'pullRequest.toRef.latestCommit',
'pr:modified': 'pullRequest.toRef.latestCommit',
}
def get_event_type(self):
return self.request.META.get('HTTP_X_EVENT_KEY')
def get_event_guid(self):
return self.request.META.get('HTTP_X_REQUEST_ID')
def get_event_status_api(self):
# https://<bitbucket-base-url>/rest/build-status/1.0/commits/<commit-hash>
if self.get_event_type() not in self.ref_keys.keys():
return
if self.get_event_ref() is None:
return
any_url = None
if 'actor' in self.request.data:
any_url = self.request.data['actor'].get('links', {}).get('self')
if any_url is None and 'repository' in self.request.data:
any_url = self.request.data['repository'].get('links', {}).get('self')
if any_url is None:
return
any_url = any_url[0].get('href')
if any_url is None:
return
parsed = urllib.parse.urlparse(any_url)
return "{}://{}/rest/build-status/1.0/commits/{}".format(parsed.scheme, parsed.netloc, self.get_event_ref())
def is_ignored_request(self):
return self.get_event_type() not in [
'repo:refs_changed',
'mirror:repo_synchronized',
'pr:opened',
'pr:from_ref_updated',
'pr:modified',
]
def must_check_signature(self):
# Bitbucket does not sign ping requests...
return self.get_event_type() != 'diagnostics:ping'
def get_signature(self):
header_sig = self.request.META.get('HTTP_X_HUB_SIGNATURE')
if not header_sig:
logger.debug("Expected signature missing from header key HTTP_X_HUB_SIGNATURE")
raise PermissionDenied
hash_alg, signature = header_sig.split('=')
return hash_alg, force_bytes(signature)

View File

@@ -55,6 +55,7 @@ register(
# Optional; category_slug will be slugified version of category if not
# explicitly provided.
category_slug='cows',
hidden=True,
)

View File

@@ -61,6 +61,10 @@ class StringListBooleanField(ListField):
def to_representation(self, value):
try:
if isinstance(value, str):
# https://github.com/encode/django-rest-framework/commit/a180bde0fd965915718b070932418cabc831cee1
# DRF changed truthy and falsy lists to be capitalized
value = value.lower()
if isinstance(value, (list, tuple)):
return super(StringListBooleanField, self).to_representation(value)
elif value in BooleanField.TRUE_VALUES:
@@ -78,6 +82,8 @@ class StringListBooleanField(ListField):
def to_internal_value(self, data):
try:
if isinstance(data, str):
data = data.lower()
if isinstance(data, (list, tuple)):
return super(StringListBooleanField, self).to_internal_value(data)
elif data in BooleanField.TRUE_VALUES:
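The lowercase normalization works because DRF matches BooleanField.TRUE_VALUES/FALSE_VALUES by exact membership; the linked DRF commit added capitalized variants, and folding input to lowercase keeps the field's behavior stable across DRF versions. A tiny check:
from rest_framework.fields import BooleanField

assert 'true' in BooleanField.TRUE_VALUES
assert 'TRUE'.lower() in BooleanField.TRUE_VALUES  # what to_internal_value now relies on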

View File

@@ -1,13 +1,11 @@
# -*- coding: utf-8 -*-
from __future__ import unicode_literals
# AWX
from awx.conf.migrations._ldap_group_type import fill_ldap_group_type_params
from django.db import migrations
class Migration(migrations.Migration):
dependencies = [('conf', '0005_v330_rename_two_session_settings')]
operations = [migrations.RunPython(fill_ldap_group_type_params)]
# this migration does nothing; it is kept to preserve the integrity of the migration history
operations = []

View File

@@ -0,0 +1,115 @@
from django.db import migrations
LDAP_AUTH_CONF_KEYS = [
'AUTH_LDAP_SERVER_URI',
'AUTH_LDAP_BIND_DN',
'AUTH_LDAP_BIND_PASSWORD',
'AUTH_LDAP_START_TLS',
'AUTH_LDAP_CONNECTION_OPTIONS',
'AUTH_LDAP_USER_SEARCH',
'AUTH_LDAP_USER_DN_TEMPLATE',
'AUTH_LDAP_USER_ATTR_MAP',
'AUTH_LDAP_GROUP_SEARCH',
'AUTH_LDAP_GROUP_TYPE',
'AUTH_LDAP_GROUP_TYPE_PARAMS',
'AUTH_LDAP_REQUIRE_GROUP',
'AUTH_LDAP_DENY_GROUP',
'AUTH_LDAP_USER_FLAGS_BY_GROUP',
'AUTH_LDAP_ORGANIZATION_MAP',
'AUTH_LDAP_TEAM_MAP',
'AUTH_LDAP_1_SERVER_URI',
'AUTH_LDAP_1_BIND_DN',
'AUTH_LDAP_1_BIND_PASSWORD',
'AUTH_LDAP_1_START_TLS',
'AUTH_LDAP_1_CONNECTION_OPTIONS',
'AUTH_LDAP_1_USER_SEARCH',
'AUTH_LDAP_1_USER_DN_TEMPLATE',
'AUTH_LDAP_1_USER_ATTR_MAP',
'AUTH_LDAP_1_GROUP_SEARCH',
'AUTH_LDAP_1_GROUP_TYPE',
'AUTH_LDAP_1_GROUP_TYPE_PARAMS',
'AUTH_LDAP_1_REQUIRE_GROUP',
'AUTH_LDAP_1_DENY_GROUP',
'AUTH_LDAP_1_USER_FLAGS_BY_GROUP',
'AUTH_LDAP_1_ORGANIZATION_MAP',
'AUTH_LDAP_1_TEAM_MAP',
'AUTH_LDAP_2_SERVER_URI',
'AUTH_LDAP_2_BIND_DN',
'AUTH_LDAP_2_BIND_PASSWORD',
'AUTH_LDAP_2_START_TLS',
'AUTH_LDAP_2_CONNECTION_OPTIONS',
'AUTH_LDAP_2_USER_SEARCH',
'AUTH_LDAP_2_USER_DN_TEMPLATE',
'AUTH_LDAP_2_USER_ATTR_MAP',
'AUTH_LDAP_2_GROUP_SEARCH',
'AUTH_LDAP_2_GROUP_TYPE',
'AUTH_LDAP_2_GROUP_TYPE_PARAMS',
'AUTH_LDAP_2_REQUIRE_GROUP',
'AUTH_LDAP_2_DENY_GROUP',
'AUTH_LDAP_2_USER_FLAGS_BY_GROUP',
'AUTH_LDAP_2_ORGANIZATION_MAP',
'AUTH_LDAP_2_TEAM_MAP',
'AUTH_LDAP_3_SERVER_URI',
'AUTH_LDAP_3_BIND_DN',
'AUTH_LDAP_3_BIND_PASSWORD',
'AUTH_LDAP_3_START_TLS',
'AUTH_LDAP_3_CONNECTION_OPTIONS',
'AUTH_LDAP_3_USER_SEARCH',
'AUTH_LDAP_3_USER_DN_TEMPLATE',
'AUTH_LDAP_3_USER_ATTR_MAP',
'AUTH_LDAP_3_GROUP_SEARCH',
'AUTH_LDAP_3_GROUP_TYPE',
'AUTH_LDAP_3_GROUP_TYPE_PARAMS',
'AUTH_LDAP_3_REQUIRE_GROUP',
'AUTH_LDAP_3_DENY_GROUP',
'AUTH_LDAP_3_USER_FLAGS_BY_GROUP',
'AUTH_LDAP_3_ORGANIZATION_MAP',
'AUTH_LDAP_3_TEAM_MAP',
'AUTH_LDAP_4_SERVER_URI',
'AUTH_LDAP_4_BIND_DN',
'AUTH_LDAP_4_BIND_PASSWORD',
'AUTH_LDAP_4_START_TLS',
'AUTH_LDAP_4_CONNECTION_OPTIONS',
'AUTH_LDAP_4_USER_SEARCH',
'AUTH_LDAP_4_USER_DN_TEMPLATE',
'AUTH_LDAP_4_USER_ATTR_MAP',
'AUTH_LDAP_4_GROUP_SEARCH',
'AUTH_LDAP_4_GROUP_TYPE',
'AUTH_LDAP_4_GROUP_TYPE_PARAMS',
'AUTH_LDAP_4_REQUIRE_GROUP',
'AUTH_LDAP_4_DENY_GROUP',
'AUTH_LDAP_4_USER_FLAGS_BY_GROUP',
'AUTH_LDAP_4_ORGANIZATION_MAP',
'AUTH_LDAP_4_TEAM_MAP',
'AUTH_LDAP_5_SERVER_URI',
'AUTH_LDAP_5_BIND_DN',
'AUTH_LDAP_5_BIND_PASSWORD',
'AUTH_LDAP_5_START_TLS',
'AUTH_LDAP_5_CONNECTION_OPTIONS',
'AUTH_LDAP_5_USER_SEARCH',
'AUTH_LDAP_5_USER_DN_TEMPLATE',
'AUTH_LDAP_5_USER_ATTR_MAP',
'AUTH_LDAP_5_GROUP_SEARCH',
'AUTH_LDAP_5_GROUP_TYPE',
'AUTH_LDAP_5_GROUP_TYPE_PARAMS',
'AUTH_LDAP_5_REQUIRE_GROUP',
'AUTH_LDAP_5_DENY_GROUP',
'AUTH_LDAP_5_USER_FLAGS_BY_GROUP',
'AUTH_LDAP_5_ORGANIZATION_MAP',
'AUTH_LDAP_5_TEAM_MAP',
]
def remove_ldap_auth_conf(apps, scheme_editor):
setting = apps.get_model('conf', 'Setting')
setting.objects.filter(key__in=LDAP_AUTH_CONF_KEYS).delete()
class Migration(migrations.Migration):
dependencies = [
('conf', '0010_change_to_JSONField'),
]
operations = [
migrations.RunPython(remove_ldap_auth_conf),
]
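This migration and the four that follow share one shape: fetch the historical Setting model through apps.get_model(), so the data migration does not depend on current model code, then delete rows by key. A generic template; the reverse_code=migrations.RunPython.noop argument is an addition of this sketch (the migrations as shown omit it) to keep the step formally reversible:
from django.db import migrations

KEYS_TO_REMOVE = ['SOME_AUTH_SETTING']  # e.g. the LDAP key list above

def remove_conf(apps, schema_editor):
    # Historical model: safe even if awx.conf's Setting class changes later.
    Setting = apps.get_model('conf', 'Setting')
    Setting.objects.filter(key__in=KEYS_TO_REMOVE).delete()

class Migration(migrations.Migration):
    dependencies = [('conf', '0010_change_to_JSONField')]
    operations = [migrations.RunPython(remove_conf, reverse_code=migrations.RunPython.noop)]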

View File

@@ -0,0 +1,20 @@
# Generated by Django 4.2.10 on 2024-08-27 19:31
from django.db import migrations
OIDC_AUTH_CONF_KEYS = ['SOCIAL_AUTH_OIDC_KEY', 'SOCIAL_AUTH_OIDC_SECRET', 'SOCIAL_AUTH_OIDC_OIDC_ENDPOINT', 'SOCIAL_AUTH_OIDC_VERIFY_SSL']
def remove_oidc_auth_conf(apps, scheme_editor):
setting = apps.get_model('conf', 'Setting')
setting.objects.filter(key__in=OIDC_AUTH_CONF_KEYS).delete()
class Migration(migrations.Migration):
dependencies = [
('conf', '0011_remove_ldap_auth_conf'),
]
operations = [
migrations.RunPython(remove_oidc_auth_conf),
]

View File

@@ -0,0 +1,22 @@
from django.db import migrations
RADIUS_AUTH_CONF_KEYS = [
'RADIUS_SERVER',
'RADIUS_PORT',
'RADIUS_SECRET',
]
def remove_radius_auth_conf(apps, scheme_editor):
setting = apps.get_model('conf', 'Setting')
setting.objects.filter(key__in=RADIUS_AUTH_CONF_KEYS).delete()
class Migration(migrations.Migration):
dependencies = [
('conf', '0012_remove_oidc_auth_conf'),
]
operations = [
migrations.RunPython(remove_radius_auth_conf),
]

View File

@@ -0,0 +1,39 @@
# Generated by Django 4.2.10 on 2024-08-27 14:20
from django.db import migrations
SAML_AUTH_CONF_KEYS = [
'SAML_AUTO_CREATE_OBJECTS',
'SOCIAL_AUTH_SAML_CALLBACK_URL',
'SOCIAL_AUTH_SAML_METADATA_URL',
'SOCIAL_AUTH_SAML_SP_ENTITY_ID',
'SOCIAL_AUTH_SAML_SP_PUBLIC_CERT',
'SOCIAL_AUTH_SAML_SP_PRIVATE_KEY',
'SOCIAL_AUTH_SAML_ORG_INFO',
'SOCIAL_AUTH_SAML_TECHNICAL_CONTACT',
'SOCIAL_AUTH_SAML_SUPPORT_CONTACT',
'SOCIAL_AUTH_SAML_ENABLED_IDPS',
'SOCIAL_AUTH_SAML_SECURITY_CONFIG',
'SOCIAL_AUTH_SAML_SP_EXTRA',
'SOCIAL_AUTH_SAML_EXTRA_DATA',
'SOCIAL_AUTH_SAML_ORGANIZATION_MAP',
'SOCIAL_AUTH_SAML_TEAM_MAP',
'SOCIAL_AUTH_SAML_ORGANIZATION_ATTR',
'SOCIAL_AUTH_SAML_TEAM_ATTR',
'SOCIAL_AUTH_SAML_USER_FLAGS_BY_ATTR',
]
def remove_saml_auth_conf(apps, scheme_editor):
setting = apps.get_model('conf', 'Setting')
setting.objects.filter(key__in=SAML_AUTH_CONF_KEYS).delete()
class Migration(migrations.Migration):
dependencies = [
('conf', '0013_remove_radius_auth_conf'),
]
operations = [
migrations.RunPython(remove_saml_auth_conf),
]

View File

@@ -0,0 +1,81 @@
# Generated by Django 4.2.10 on 2024-08-13 11:14
from django.db import migrations
SOCIAL_OAUTH_CONF_KEYS = [
# MICROSOFT AZURE ACTIVE DIRECTORY SETTINGS
'SOCIAL_AUTH_AZUREAD_OAUTH2_CALLBACK_URL',
'SOCIAL_AUTH_AZUREAD_OAUTH2_KEY',
'SOCIAL_AUTH_AZUREAD_OAUTH2_SECRET',
'SOCIAL_AUTH_AZUREAD_OAUTH2_ORGANIZATION_MAP',
'SOCIAL_AUTH_AZUREAD_OAUTH2_TEAM_MAP',
# GOOGLE OAUTH2 AUTHENTICATION SETTINGS
'SOCIAL_AUTH_GOOGLE_OAUTH2_CALLBACK_URL',
'SOCIAL_AUTH_GOOGLE_OAUTH2_KEY',
'SOCIAL_AUTH_GOOGLE_OAUTH2_SECRET',
'SOCIAL_AUTH_GOOGLE_OAUTH2_WHITELISTED_DOMAINS',
'SOCIAL_AUTH_GOOGLE_OAUTH2_AUTH_EXTRA_ARGUMENTS',
'SOCIAL_AUTH_GOOGLE_OAUTH2_ORGANIZATION_MAP',
'SOCIAL_AUTH_GOOGLE_OAUTH2_TEAM_MAP',
# GITHUB OAUTH2 AUTHENTICATION SETTINGS
'SOCIAL_AUTH_GITHUB_CALLBACK_URL',
'SOCIAL_AUTH_GITHUB_KEY',
'SOCIAL_AUTH_GITHUB_SECRET',
'SOCIAL_AUTH_GITHUB_ORGANIZATION_MAP',
'SOCIAL_AUTH_GITHUB_TEAM_MAP',
# GITHUB ORG OAUTH2 AUTHENTICATION SETTINGS
'SOCIAL_AUTH_GITHUB_ORG_CALLBACK_URL',
'SOCIAL_AUTH_GITHUB_ORG_KEY',
'SOCIAL_AUTH_GITHUB_ORG_SECRET',
'SOCIAL_AUTH_GITHUB_ORG_NAME',
'SOCIAL_AUTH_GITHUB_ORG_ORGANIZATION_MAP',
'SOCIAL_AUTH_GITHUB_ORG_TEAM_MAP',
# GITHUB TEAM OAUTH2 AUTHENTICATION SETTINGS
'SOCIAL_AUTH_GITHUB_TEAM_CALLBACK_URL',
'SOCIAL_AUTH_GITHUB_TEAM_KEY',
'SOCIAL_AUTH_GITHUB_TEAM_SECRET',
'SOCIAL_AUTH_GITHUB_TEAM_ID',
'SOCIAL_AUTH_GITHUB_TEAM_ORGANIZATION_MAP',
'SOCIAL_AUTH_GITHUB_TEAM_TEAM_MAP',
# GITHUB ENTERPRISE OAUTH2 AUTHENTICATION SETTINGS
'SOCIAL_AUTH_GITHUB_ENTERPRISE_CALLBACK_URL',
'SOCIAL_AUTH_GITHUB_ENTERPRISE_URL',
'SOCIAL_AUTH_GITHUB_ENTERPRISE_API_URL',
'SOCIAL_AUTH_GITHUB_ENTERPRISE_KEY',
'SOCIAL_AUTH_GITHUB_ENTERPRISE_SECRET',
'SOCIAL_AUTH_GITHUB_ENTERPRISE_ORGANIZATION_MAP',
'SOCIAL_AUTH_GITHUB_ENTERPRISE_TEAM_MAP',
# GITHUB ENTERPRISE ORG OAUTH2 AUTHENTICATION SETTINGS
'SOCIAL_AUTH_GITHUB_ENTERPRISE_ORG_CALLBACK_URL',
'SOCIAL_AUTH_GITHUB_ENTERPRISE_ORG_URL',
'SOCIAL_AUTH_GITHUB_ENTERPRISE_ORG_API_URL',
'SOCIAL_AUTH_GITHUB_ENTERPRISE_ORG_KEY',
'SOCIAL_AUTH_GITHUB_ENTERPRISE_ORG_SECRET',
'SOCIAL_AUTH_GITHUB_ENTERPRISE_ORG_NAME',
'SOCIAL_AUTH_GITHUB_ENTERPRISE_ORG_ORGANIZATION_MAP',
'SOCIAL_AUTH_GITHUB_ENTERPRISE_ORG_TEAM_MAP',
# GITHUB ENTERPRISE TEAM OAUTH2 AUTHENTICATION SETTINGS
'SOCIAL_AUTH_GITHUB_ENTERPRISE_TEAM_CALLBACK_URL',
'SOCIAL_AUTH_GITHUB_ENTERPRISE_TEAM_URL',
'SOCIAL_AUTH_GITHUB_ENTERPRISE_TEAM_API_URL',
'SOCIAL_AUTH_GITHUB_ENTERPRISE_TEAM_KEY',
'SOCIAL_AUTH_GITHUB_ENTERPRISE_TEAM_SECRET',
'SOCIAL_AUTH_GITHUB_ENTERPRISE_TEAM_ID',
'SOCIAL_AUTH_GITHUB_ENTERPRISE_TEAM_ORGANIZATION_MAP',
'SOCIAL_AUTH_GITHUB_ENTERPRISE_TEAM_TEAM_MAP',
]
def remove_social_oauth_conf(apps, scheme_editor):
setting = apps.get_model('conf', 'Setting')
setting.objects.filter(key__in=SOCIAL_OAUTH_CONF_KEYS).delete()
class Migration(migrations.Migration):
dependencies = [
('conf', '0014_remove_saml_auth_conf'),
]
operations = [
migrations.RunPython(remove_social_oauth_conf),
]

View File

@@ -0,0 +1,25 @@
from django.db import migrations
TACACS_PLUS_AUTH_CONF_KEYS = [
'TACACSPLUS_HOST',
'TACACSPLUS_PORT',
'TACACSPLUS_SECRET',
'TACACSPLUS_SESSION_TIMEOUT',
'TACACSPLUS_AUTH_PROTOCOL',
'TACACSPLUS_REM_ADDR',
]
def remove_tacacs_plus_auth_conf(apps, scheme_editor):
setting = apps.get_model('conf', 'Setting')
setting.objects.filter(key__in=TACACS_PLUS_AUTH_CONF_KEYS).delete()
class Migration(migrations.Migration):
dependencies = [
('conf', '0015_remove_social_oauth_conf'),
]
operations = [
migrations.RunPython(remove_tacacs_plus_auth_conf),
]

View File

@@ -1,31 +0,0 @@
import inspect
from django.conf import settings
import logging
logger = logging.getLogger('awx.conf.migrations')
def fill_ldap_group_type_params(apps, schema_editor):
group_type = getattr(settings, 'AUTH_LDAP_GROUP_TYPE', None)
Setting = apps.get_model('conf', 'Setting')
group_type_params = {'name_attr': 'cn', 'member_attr': 'member'}
qs = Setting.objects.filter(key='AUTH_LDAP_GROUP_TYPE_PARAMS')
entry = None
if qs.exists():
entry = qs[0]
group_type_params = entry.value
else:
return # for new installs we prefer to use the default value
init_attrs = set(inspect.getfullargspec(group_type.__init__).args[1:])
for k in list(group_type_params.keys()):
if k not in init_attrs:
del group_type_params[k]
entry.value = group_type_params
logger.warning(f'Migration updating AUTH_LDAP_GROUP_TYPE_PARAMS with value {entry.value}')
entry.save()

View File

@@ -7,8 +7,10 @@ import json
# Django
from django.db import models
from ansible_base.lib.utils.models import prevent_search
# AWX
from awx.main.models.base import CreatedModifiedModel, prevent_search
from awx.main.models.base import CreatedModifiedModel
from awx.main.utils import encrypt_field
from awx.conf import settings_registry

View File

@@ -127,6 +127,8 @@ class SettingsRegistry(object):
encrypted = bool(field_kwargs.pop('encrypted', False))
defined_in_file = bool(field_kwargs.pop('defined_in_file', False))
unit = field_kwargs.pop('unit', None)
hidden = field_kwargs.pop('hidden', False)
warning_text = field_kwargs.pop('warning_text', None)
if getattr(field_kwargs.get('child', None), 'source', None) is not None:
field_kwargs['child'].source = None
field_instance = field_class(**field_kwargs)
@@ -134,12 +136,14 @@ class SettingsRegistry(object):
field_instance.category = category
field_instance.depends_on = depends_on
field_instance.unit = unit
field_instance.hidden = hidden
if placeholder is not empty:
field_instance.placeholder = placeholder
field_instance.defined_in_file = defined_in_file
if field_instance.defined_in_file:
field_instance.help_text = str(_('This value has been set manually in a settings file.')) + '\n\n' + str(field_instance.help_text)
field_instance.encrypted = encrypted
field_instance.warning_text = warning_text
original_field_instance = field_instance
if field_class != original_field_class:
original_field_instance = original_field_class(**field_kwargs)

View File

@@ -1,6 +1,7 @@
# Python
import contextlib
import logging
import psycopg
import threading
import time
import os
@@ -13,7 +14,7 @@ from django.conf import settings, UserSettingsHolder
from django.core.cache import cache as django_cache
from django.core.exceptions import ImproperlyConfigured, SynchronousOnlyOperation
from django.db import transaction, connection
from django.db.utils import Error as DBError, ProgrammingError
from django.db.utils import DatabaseError, ProgrammingError
from django.utils.functional import cached_property
# Django REST Framework
@@ -80,18 +81,29 @@ def _ctit_db_wrapper(trans_safe=False):
logger.debug('Obtaining database settings in spite of broken transaction.')
transaction.set_rollback(False)
yield
except DBError as exc:
except ProgrammingError as e:
# Exception raised for programming errors
# Examples may be table not found or already exists,
# this generally means we can't fetch Tower configuration
# because the database hasn't actually finished migrating yet;
# this is usually a sign that a service in a container (such as ws_broadcast)
# has come up *before* the database has finished migrating, and
# especially that the conf.settings table doesn't exist yet
# syntax error in the SQL statement, wrong number of parameters specified, etc.
if trans_safe:
level = logger.warning
if isinstance(exc, ProgrammingError):
if 'relation' in str(exc) and 'does not exist' in str(exc):
# this generally means we can't fetch Tower configuration
# because the database hasn't actually finished migrating yet;
# this is usually a sign that a service in a container (such as ws_broadcast)
# has come up *before* the database has finished migrating, and
# especially that the conf.settings table doesn't exist yet
level = logger.debug
level(f'Database settings are not available, using defaults. error: {str(exc)}')
logger.debug(f'Database settings are not available, using defaults. error: {str(e)}')
else:
logger.exception('Error modifying something related to database settings.')
except DatabaseError as e:
if trans_safe:
cause = e.__cause__
sqlstate = getattr(cause, 'sqlstate', None)
if cause and sqlstate:
sqlstate = cause.sqlstate
sqlstate_str = psycopg.errors.lookup(sqlstate)
logger.error('SQL Error state: {} - {}'.format(sqlstate, sqlstate_str))
else:
logger.error(f'Error reading something related to database settings: {str(e)}.')
else:
logger.exception('Error modifying something related to database settings.')
finally:

View File

@@ -61,18 +61,3 @@ def on_post_delete_setting(sender, **kwargs):
key = getattr(instance, '_saved_key_', None)
if key:
handle_setting_change(key, True)
@receiver(setting_changed)
def disable_local_auth(**kwargs):
if (kwargs['setting'], kwargs['value']) == ('DISABLE_LOCAL_AUTH', True):
from django.contrib.auth.models import User
from oauth2_provider.models import RefreshToken
from awx.main.models.oauth import OAuth2AccessToken
from awx.main.management.commands.revoke_oauth2_tokens import revoke_tokens
logger.warning("Triggering token invalidation for local users.")
qs = User.objects.filter(profile__ldap_dn='', enterprise_auth__isnull=True, social_auth__isnull=True)
revoke_tokens(RefreshToken.objects.filter(revoked=None, user__in=qs))
revoke_tokens(OAuth2AccessToken.objects.filter(user__in=qs))

View File

@@ -8,7 +8,6 @@ from awx.main.utils.encryption import decrypt_field
from awx.conf import fields
from awx.conf.registry import settings_registry
from awx.conf.models import Setting
from awx.sso import fields as sso_fields
@pytest.fixture
@@ -103,24 +102,6 @@ def test_setting_singleton_update(api_request, dummy_setting):
assert response.data['FOO_BAR'] == 4
@pytest.mark.django_db
def test_setting_singleton_update_hybriddictfield_with_forbidden(api_request, dummy_setting):
# Some HybridDictField subclasses have a child of _Forbidden,
# indicating that only the defined fields can be filled in. Make
# sure that the _Forbidden validator doesn't get used for the
# fields. See also https://github.com/ansible/awx/issues/4099.
with dummy_setting('FOO_BAR', field_class=sso_fields.SAMLOrgAttrField, category='FooBar', category_slug='foobar'), mock.patch(
'awx.conf.views.clear_setting_cache'
):
api_request(
'patch',
reverse('api:setting_singleton_detail', kwargs={'category_slug': 'foobar'}),
data={'FOO_BAR': {'saml_admin_attr': 'Admins', 'saml_attr': 'Orgs'}},
)
response = api_request('get', reverse('api:setting_singleton_detail', kwargs={'category_slug': 'foobar'}))
assert response.data['FOO_BAR'] == {'saml_admin_attr': 'Admins', 'saml_attr': 'Orgs'}
@pytest.mark.django_db
def test_setting_singleton_update_dont_change_readonly_fields(api_request, dummy_setting):
with dummy_setting('FOO_BAR', field_class=fields.IntegerField, read_only=True, default=4, category='FooBar', category_slug='foobar'), mock.patch(

View File

@@ -1,25 +0,0 @@
import pytest
from awx.conf.migrations._ldap_group_type import fill_ldap_group_type_params
from awx.conf.models import Setting
from django.apps import apps
@pytest.mark.django_db
def test_fill_group_type_params_no_op():
fill_ldap_group_type_params(apps, 'dont-use-me')
assert Setting.objects.count() == 0
@pytest.mark.django_db
def test_keep_old_setting_with_default_value():
Setting.objects.create(key='AUTH_LDAP_GROUP_TYPE', value={'name_attr': 'cn', 'member_attr': 'member'})
fill_ldap_group_type_params(apps, 'dont-use-me')
assert Setting.objects.count() == 1
s = Setting.objects.first()
assert s.value == {'name_attr': 'cn', 'member_attr': 'member'}
# NOTE: would be good to test the removal of attributes by migration
# but this requires fighting with the validator and is not done here

View File

@@ -111,7 +111,6 @@ class TestURLField:
@pytest.mark.parametrize(
"url,schemes,regex, allow_numbers_in_top_level_domain, expect_no_error",
[
("ldap://www.example.org42", "ldap", None, True, True),
("https://www.example.org42", "https", None, False, False),
("https://www.example.org", None, regex, None, True),
("https://www.example3.org", None, regex, None, False),

View File

@@ -130,9 +130,9 @@ def test_default_setting(settings, mocker):
settings.registry.register('AWX_SOME_SETTING', field_class=fields.CharField, category=_('System'), category_slug='system', default='DEFAULT')
settings_to_cache = mocker.Mock(**{'order_by.return_value': []})
with mocker.patch('awx.conf.models.Setting.objects.filter', return_value=settings_to_cache):
assert settings.AWX_SOME_SETTING == 'DEFAULT'
assert settings.cache.get('AWX_SOME_SETTING') == 'DEFAULT'
mocker.patch('awx.conf.models.Setting.objects.filter', return_value=settings_to_cache)
assert settings.AWX_SOME_SETTING == 'DEFAULT'
assert settings.cache.get('AWX_SOME_SETTING') == 'DEFAULT'
@pytest.mark.defined_in_file(AWX_SOME_SETTING='DEFAULT')
@@ -146,9 +146,9 @@ def test_setting_is_not_from_setting_file(settings, mocker):
settings.registry.register('AWX_SOME_SETTING', field_class=fields.CharField, category=_('System'), category_slug='system', default='DEFAULT')
settings_to_cache = mocker.Mock(**{'order_by.return_value': []})
with mocker.patch('awx.conf.models.Setting.objects.filter', return_value=settings_to_cache):
assert settings.AWX_SOME_SETTING == 'DEFAULT'
assert settings.registry.get_setting_field('AWX_SOME_SETTING').defined_in_file is False
mocker.patch('awx.conf.models.Setting.objects.filter', return_value=settings_to_cache)
assert settings.AWX_SOME_SETTING == 'DEFAULT'
assert settings.registry.get_setting_field('AWX_SOME_SETTING').defined_in_file is False
def test_empty_setting(settings, mocker):
@@ -156,10 +156,10 @@ def test_empty_setting(settings, mocker):
settings.registry.register('AWX_SOME_SETTING', field_class=fields.CharField, category=_('System'), category_slug='system')
mocks = mocker.Mock(**{'order_by.return_value': mocker.Mock(**{'__iter__': lambda self: iter([]), 'first.return_value': None})})
with mocker.patch('awx.conf.models.Setting.objects.filter', return_value=mocks):
with pytest.raises(AttributeError):
settings.AWX_SOME_SETTING
assert settings.cache.get('AWX_SOME_SETTING') == SETTING_CACHE_NOTSET
mocker.patch('awx.conf.models.Setting.objects.filter', return_value=mocks)
with pytest.raises(AttributeError):
settings.AWX_SOME_SETTING
assert settings.cache.get('AWX_SOME_SETTING') == SETTING_CACHE_NOTSET
def test_setting_from_db(settings, mocker):
@@ -168,9 +168,9 @@ def test_setting_from_db(settings, mocker):
setting_from_db = mocker.Mock(key='AWX_SOME_SETTING', value='FROM_DB')
mocks = mocker.Mock(**{'order_by.return_value': mocker.Mock(**{'__iter__': lambda self: iter([setting_from_db]), 'first.return_value': setting_from_db})})
with mocker.patch('awx.conf.models.Setting.objects.filter', return_value=mocks):
assert settings.AWX_SOME_SETTING == 'FROM_DB'
assert settings.cache.get('AWX_SOME_SETTING') == 'FROM_DB'
mocker.patch('awx.conf.models.Setting.objects.filter', return_value=mocks)
assert settings.AWX_SOME_SETTING == 'FROM_DB'
assert settings.cache.get('AWX_SOME_SETTING') == 'FROM_DB'
@pytest.mark.defined_in_file(AWX_SOME_SETTING='DEFAULT')
@@ -205,8 +205,8 @@ def test_db_setting_update(settings, mocker):
existing_setting = mocker.Mock(key='AWX_SOME_SETTING', value='FROM_DB')
setting_list = mocker.Mock(**{'order_by.return_value.first.return_value': existing_setting})
with mocker.patch('awx.conf.models.Setting.objects.filter', return_value=setting_list):
settings.AWX_SOME_SETTING = 'NEW-VALUE'
mocker.patch('awx.conf.models.Setting.objects.filter', return_value=setting_list)
settings.AWX_SOME_SETTING = 'NEW-VALUE'
assert existing_setting.value == 'NEW-VALUE'
existing_setting.save.assert_called_with(update_fields=['value'])
@@ -217,8 +217,8 @@ def test_db_setting_deletion(settings, mocker):
settings.registry.register('AWX_SOME_SETTING', field_class=fields.CharField, category=_('System'), category_slug='system')
existing_setting = mocker.Mock(key='AWX_SOME_SETTING', value='FROM_DB')
with mocker.patch('awx.conf.models.Setting.objects.filter', return_value=[existing_setting]):
del settings.AWX_SOME_SETTING
mocker.patch('awx.conf.models.Setting.objects.filter', return_value=[existing_setting])
del settings.AWX_SOME_SETTING
assert existing_setting.delete.call_count == 1
@@ -283,10 +283,10 @@ def test_sensitive_cache_data_is_encrypted(settings, mocker):
# use its primary key as part of the encryption key
setting_from_db = mocker.Mock(pk=123, key='AWX_ENCRYPTED', value='SECRET!')
mocks = mocker.Mock(**{'order_by.return_value': mocker.Mock(**{'__iter__': lambda self: iter([setting_from_db]), 'first.return_value': setting_from_db})})
with mocker.patch('awx.conf.models.Setting.objects.filter', return_value=mocks):
cache.set('AWX_ENCRYPTED', 'SECRET!')
assert cache.get('AWX_ENCRYPTED') == 'SECRET!'
assert native_cache.get('AWX_ENCRYPTED') == 'FRPERG!'
mocker.patch('awx.conf.models.Setting.objects.filter', return_value=mocks)
cache.set('AWX_ENCRYPTED', 'SECRET!')
assert cache.get('AWX_ENCRYPTED') == 'SECRET!'
assert native_cache.get('AWX_ENCRYPTED') == 'FRPERG!'
def test_readonly_sensitive_cache_data_is_encrypted(settings):
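These hunks all make the same change: pytest-mock's mocker.patch is no longer wrapped in a with block. A minimal sketch of the adopted pattern, assuming pytest-mock's documented behavior (the patch applies immediately, is undone automatically at test teardown, and recent releases reject context-manager use):

    import os

    def test_example(mocker):
        fake = mocker.patch('os.getcwd', return_value='/tmp')  # applied immediately
        assert os.getcwd() == '/tmp'  # patch stays active for the rest of the test
        assert fake.call_count == 1   # cleanup happens automatically at teardown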

View File

@@ -17,14 +17,15 @@ from django.core.exceptions import ObjectDoesNotExist, FieldDoesNotExist
# Django REST Framework
from rest_framework.exceptions import ParseError, PermissionDenied
# Django OAuth Toolkit
from awx.main.models.oauth import OAuth2Application, OAuth2AccessToken
# django-ansible-base
from ansible_base.lib.utils.validation import to_python_boolean
from ansible_base.rbac.models import RoleEvaluation
from ansible_base.rbac import permission_registry
# AWX
from awx.main.utils import (
get_object_or_400,
get_pk_from_dict,
to_python_boolean,
get_licenser,
)
from awx.main.models import (
@@ -56,6 +57,7 @@ from awx.main.models import (
Project,
ProjectUpdate,
ProjectUpdateEvent,
ReceptorAddress,
Role,
Schedule,
SystemJob,
@@ -70,8 +72,6 @@ from awx.main.models import (
WorkflowJobTemplateNode,
WorkflowApproval,
WorkflowApprovalTemplate,
ROLE_SINGLETON_SYSTEM_ADMINISTRATOR,
ROLE_SINGLETON_SYSTEM_AUDITOR,
)
from awx.main.models.mixins import ResourceMixin
@@ -239,9 +239,10 @@ class BaseAccess(object):
return qs
def filtered_queryset(self):
# Override in subclasses
# filter objects according to user's read access
return self.model.objects.none()
if permission_registry.is_registered(self.model):
return self.model.access_qs(self.user, 'view')
else:
raise NotImplementedError('Filtered queryset for model is not written')
def can_read(self, obj):
return bool(obj and self.get_queryset().filter(pk=obj.pk).exists())
@@ -262,7 +263,11 @@ class BaseAccess(object):
return self.can_change(obj, data)
def can_delete(self, obj):
return self.user.is_superuser
if self.user.is_superuser:
return True
if obj._meta.model_name in [cls._meta.model_name for cls in permission_registry.all_registered_models]:
return self.user.has_obj_perm(obj, 'delete')
return False
def can_copy(self, obj):
return self.can_add({'reference_obj': obj})
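A minimal sketch of the dispatch pattern that filtered_queryset and can_delete now follow, using the same django-ansible-base calls as above (function name invented):

    from ansible_base.rbac import permission_registry

    def visible_objects(model, user):
        # Registered models get their queryset from DAB RBAC role evaluations;
        # anything else must still provide its own implementation.
        if permission_registry.is_registered(model):
            return model.access_qs(user, 'view')
        raise NotImplementedError(f'Filtered queryset for {model} is not written')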
@@ -433,10 +438,7 @@ class BaseAccess(object):
# Actions not possible for reason unrelated to RBAC
# Cannot copy with validation errors, or update a manual group/project
if 'write' not in getattr(self.user, 'oauth_scopes', ['write']):
user_capabilities[display_method] = False # Read tokens cannot take any actions
continue
elif display_method in ['copy', 'start', 'schedule'] and isinstance(obj, JobTemplate):
if display_method in ['copy', 'start', 'schedule'] and isinstance(obj, JobTemplate):
if obj.validation_errors:
user_capabilities[display_method] = False
continue
@@ -591,7 +593,7 @@ class InstanceGroupAccess(BaseAccess):
- a superuser
- admin role on the Instance group
I can add/delete Instance Groups:
- a superuser (system administrator)
- a superuser (system administrator), because these are not org-scoped
I can use Instance Groups when I have:
- use_role on the instance group
"""
@@ -599,9 +601,6 @@ class InstanceGroupAccess(BaseAccess):
model = InstanceGroup
prefetch_related = ('instances',)
def filtered_queryset(self):
return self.model.accessible_objects(self.user, 'read_role')
@check_superuser
def can_use(self, obj):
return self.user in obj.use_role
@@ -620,7 +619,7 @@ class InstanceGroupAccess(BaseAccess):
def can_delete(self, obj):
if obj.name in [settings.DEFAULT_EXECUTION_QUEUE_NAME, settings.DEFAULT_CONTROL_PLANE_QUEUE_NAME]:
return False
return self.user.is_superuser
return self.user.has_obj_perm(obj, 'delete')
class UserAccess(BaseAccess):
@@ -637,18 +636,16 @@ class UserAccess(BaseAccess):
"""
model = User
prefetch_related = ('profile',)
prefetch_related = ('resource',)
def filtered_queryset(self):
if settings.ORG_ADMINS_CAN_SEE_ALL_USERS and (self.user.admin_of_organizations.exists() or self.user.auditor_of_organizations.exists()):
qs = User.objects.all()
else:
qs = (
User.objects.filter(pk__in=Organization.accessible_objects(self.user, 'read_role').values('member_role__members'))
User.objects.filter(pk__in=Organization.access_qs(self.user, 'view').values('member_role__members'))
| User.objects.filter(pk=self.user.id)
| User.objects.filter(
pk__in=Role.objects.filter(singleton_name__in=[ROLE_SINGLETON_SYSTEM_ADMINISTRATOR, ROLE_SINGLETON_SYSTEM_AUDITOR]).values('members')
)
| User.objects.filter(is_superuser=True)
).distinct()
return qs
@@ -663,7 +660,7 @@ class UserAccess(BaseAccess):
return True
if not settings.MANAGE_ORGANIZATION_AUTH:
return False
return Organization.accessible_objects(self.user, 'admin_role').exists()
return Organization.access_qs(self.user, 'change').exists()
def can_change(self, obj, data):
if data is not None and ('is_superuser' in data or 'is_system_auditor' in data):
@@ -683,7 +680,7 @@ class UserAccess(BaseAccess):
"""
Returns all organizations that count `u` as a member
"""
return Organization.accessible_objects(u, 'member_role')
return Organization.access_qs(u, 'member')
def is_all_org_admin(self, u):
"""
@@ -706,6 +703,15 @@ class UserAccess(BaseAccess):
if not allow_orphans:
# in these cases only superusers can modify orphan users
return False
if settings.ANSIBLE_BASE_ROLE_SYSTEM_ACTIVATED:
# Permission granted if the user has all permissions that the target user has
target_perms = set(
RoleEvaluation.objects.filter(role__in=obj.has_roles.all()).values_list('object_id', 'content_type_id', 'codename').distinct()
)
user_perms = set(
RoleEvaluation.objects.filter(role__in=self.user.has_roles.all()).values_list('object_id', 'content_type_id', 'codename').distinct()
)
return not (target_perms - user_perms)
return not obj.roles.all().exclude(ancestors__in=self.user.roles.all()).exists()
else:
return self.is_all_org_admin(obj)
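When the DAB role system is active, can_admin reduces to a superset test over flattened permission tuples. A small illustration with plain sets standing in for the RoleEvaluation rows (values invented):

    # (object_id, content_type_id, codename) triples, as in the values_list above
    target_perms = {(1, 10, 'view_inventory'), (1, 10, 'change_inventory')}
    user_perms = {(1, 10, 'view_inventory'), (1, 10, 'change_inventory'), (2, 10, 'view_inventory')}

    can_admin = not (target_perms - user_perms)  # True: user holds every permission the target has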
@@ -741,82 +747,6 @@ class UserAccess(BaseAccess):
return False
class OAuth2ApplicationAccess(BaseAccess):
"""
I can read, change or delete OAuth 2 applications when:
- I am a superuser.
- I am the admin of the organization of the user of the application.
- I am a user in the organization of the application.
I can create OAuth 2 applications when:
- I am a superuser.
- I am the admin of the organization of the application.
"""
model = OAuth2Application
select_related = ('user',)
prefetch_related = ('organization', 'oauth2accesstoken_set')
def filtered_queryset(self):
org_access_qs = Organization.accessible_objects(self.user, 'member_role')
return self.model.objects.filter(organization__in=org_access_qs)
def can_change(self, obj, data):
return self.user.is_superuser or self.check_related('organization', Organization, data, obj=obj, role_field='admin_role', mandatory=True)
def can_delete(self, obj):
return self.user.is_superuser or obj.organization in self.user.admin_of_organizations
def can_add(self, data):
if self.user.is_superuser:
return True
if not data:
return Organization.accessible_objects(self.user, 'admin_role').exists()
return self.check_related('organization', Organization, data, role_field='admin_role', mandatory=True)
class OAuth2TokenAccess(BaseAccess):
"""
I can read, change or delete an app token when:
- I am a superuser.
- I am the admin of the organization of the application of the token.
- I am the user of the token.
I can create an OAuth2 app token when:
- I have the read permission of the related application.
I can read, change or delete a personal token when:
- I am the user of the token
- I am the superuser
I can create an OAuth2 Personal Access Token when:
- I am a user. But I can only create a PAT for myself.
"""
model = OAuth2AccessToken
select_related = ('user', 'application')
prefetch_related = ('refresh_token',)
def filtered_queryset(self):
org_access_qs = Organization.objects.filter(Q(admin_role__members=self.user) | Q(auditor_role__members=self.user))
return self.model.objects.filter(application__organization__in=org_access_qs) | self.model.objects.filter(user__id=self.user.pk)
def can_delete(self, obj):
if (self.user.is_superuser) | (obj.user == self.user):
return True
elif not obj.application:
return False
return self.user in obj.application.organization.admin_role
def can_change(self, obj, data):
return self.can_delete(obj)
def can_add(self, data):
if 'application' in data:
app = get_object_from_data('application', OAuth2Application, data)
if app is None:
return True
return OAuth2ApplicationAccess(self.user).can_read(app)
return True
class OrganizationAccess(NotificationAttachMixin, BaseAccess):
"""
I can see organizations when:
@@ -833,13 +763,11 @@ class OrganizationAccess(NotificationAttachMixin, BaseAccess):
prefetch_related = (
'created_by',
'modified_by',
'resource', # dab_resource_registry
)
# organization admin_role is not a parent of organization auditor_role
notification_attach_roles = ['admin_role', 'auditor_role']
def filtered_queryset(self):
return self.model.accessible_objects(self.user, 'read_role')
@check_superuser
def can_change(self, obj, data):
if data and data.get('default_environment'):
@@ -907,9 +835,6 @@ class InventoryAccess(BaseAccess):
Prefetch('labels', queryset=Label.objects.all().order_by('name')),
)
def filtered_queryset(self, allowed=None, ad_hoc=None):
return self.model.accessible_objects(self.user, 'read_role')
@check_superuser
def can_use(self, obj):
return self.user in obj.use_role
@@ -918,7 +843,7 @@ class InventoryAccess(BaseAccess):
def can_add(self, data):
# If no data is specified, just checking for generic add permission?
if not data:
return Organization.accessible_objects(self.user, 'inventory_admin_role').exists()
return Organization.access_qs(self.user, 'add_inventory').exists()
return self.check_related('organization', Organization, data, role_field='inventory_admin_role')
@check_superuser
@@ -943,9 +868,6 @@ class InventoryAccess(BaseAccess):
def can_update(self, obj):
return self.user in obj.update_role
def can_delete(self, obj):
return self.can_admin(obj, None)
def can_run_ad_hoc_commands(self, obj):
return self.user in obj.adhoc_role
@@ -983,7 +905,7 @@ class HostAccess(BaseAccess):
def can_add(self, data):
if not data: # So the browseable API will work
return Inventory.accessible_objects(self.user, 'admin_role').exists()
return Inventory.access_qs(self.user, 'change').exists()
# Checks for admin or change permission on inventory.
if not self.check_related('inventory', Inventory, data):
@@ -1045,7 +967,7 @@ class GroupAccess(BaseAccess):
def can_add(self, data):
if not data: # So the browseable API will work
return Inventory.accessible_objects(self.user, 'admin_role').exists()
return Inventory.access_qs(self.user, 'change').exists()
if 'inventory' not in data:
return False
# Checks for admin or change permission on inventory.
@@ -1087,7 +1009,7 @@ class InventorySourceAccess(NotificationAttachMixin, UnifiedCredentialsMixin, Ba
def can_add(self, data):
if not data or 'inventory' not in data:
return Inventory.accessible_objects(self.user, 'admin_role').exists()
return Inventory.access_qs(self.user, 'change').exists()
if not self.check_related('source_project', Project, data, role_field='use_role'):
return False
@@ -1201,9 +1123,6 @@ class CredentialAccess(BaseAccess):
)
prefetch_related = ('admin_role', 'use_role', 'read_role', 'admin_role__parents', 'admin_role__members', 'credential_type', 'organization')
def filtered_queryset(self):
return self.model.accessible_objects(self.user, 'read_role')
@check_superuser
def can_add(self, data):
if not data: # So the browseable API will work
@@ -1301,6 +1220,7 @@ class TeamAccess(BaseAccess):
'created_by',
'modified_by',
'organization',
'resource', # dab_resource_registry
)
def filtered_queryset(self):
@@ -1313,7 +1233,7 @@ class TeamAccess(BaseAccess):
@check_superuser
def can_add(self, data):
if not data: # So the browseable API will work
return Organization.accessible_objects(self.user, 'admin_role').exists()
return Organization.access_qs(self.user, 'view').exists()
if not settings.MANAGE_ORGANIZATION_AUTH:
return False
return self.check_related('organization', Organization, data)
@@ -1371,12 +1291,11 @@ class TeamAccess(BaseAccess):
class ExecutionEnvironmentAccess(BaseAccess):
"""
I can see an execution environment when:
- I'm a superuser
- I'm a member of the same organization
- it is a global ExecutionEnvironment
- I can see its organization
- It is a global ExecutionEnvironment
I can create/change an execution environment when:
- I'm a superuser
- I'm an admin for the organization(s)
- I have an organization or object role that gives access
"""
model = ExecutionEnvironment
@@ -1385,26 +1304,34 @@ class ExecutionEnvironmentAccess(BaseAccess):
def filtered_queryset(self):
return ExecutionEnvironment.objects.filter(
Q(organization__in=Organization.accessible_pk_qs(self.user, 'read_role')) | Q(organization__isnull=True)
Q(organization__in=Organization.access_ids_qs(self.user, 'view'))
| Q(organization__isnull=True)
| Q(id__in=ExecutionEnvironment.access_ids_qs(self.user, 'change'))
).distinct()
@check_superuser
def can_add(self, data):
if not data: # So the browseable API will work
return Organization.accessible_objects(self.user, 'execution_environment_admin_role').exists()
return Organization.access_qs(self.user, 'add_executionenvironment').exists()
return self.check_related('organization', Organization, data, mandatory=True, role_field='execution_environment_admin_role')
@check_superuser
def can_change(self, obj, data):
if obj and obj.organization_id is None:
raise PermissionDenied
if self.user not in obj.organization.execution_environment_admin_role:
raise PermissionDenied
if data and 'organization' in data:
new_org = get_object_from_data('organization', Organization, data, obj=obj)
if not new_org or self.user not in new_org.execution_environment_admin_role:
if settings.ANSIBLE_BASE_ROLE_SYSTEM_ACTIVATED:
if not self.user.has_obj_perm(obj, 'change'):
return False
return self.check_related('organization', Organization, data, obj=obj, mandatory=True, role_field='execution_environment_admin_role')
else:
if self.user not in obj.organization.execution_environment_admin_role:
raise PermissionDenied
if not self.check_related('organization', Organization, data, obj=obj, role_field='execution_environment_admin_role'):
return False
# Special case that check_related does not catch, org users can not remove the organization from the EE
if data and ('organization' in data or 'organization_id' in data):
if (not data.get('organization')) and (not data.get('organization_id')):
return False
return True
def can_delete(self, obj):
if obj.managed:
@@ -1434,13 +1361,10 @@ class ProjectAccess(NotificationAttachMixin, BaseAccess):
prefetch_related = ('modified_by', 'created_by', 'organization', 'last_job', 'current_job')
notification_attach_roles = ['admin_role']
def filtered_queryset(self):
return self.model.accessible_objects(self.user, 'read_role')
@check_superuser
def can_add(self, data):
if not data: # So the browseable API will work
return Organization.accessible_objects(self.user, 'project_admin_role').exists()
return Organization.access_qs(self.user, 'add_project').exists()
if data.get('default_environment'):
ee = get_object_from_data('default_environment', ExecutionEnvironment, data)
@@ -1536,9 +1460,6 @@ class JobTemplateAccess(NotificationAttachMixin, UnifiedCredentialsMixin, BaseAc
Prefetch('last_job', queryset=UnifiedJob.objects.non_polymorphic()),
)
def filtered_queryset(self):
return self.model.accessible_objects(self.user, 'read_role')
def can_add(self, data):
"""
a user can create a job template if
@@ -1551,7 +1472,7 @@ class JobTemplateAccess(NotificationAttachMixin, UnifiedCredentialsMixin, BaseAc
Users who are able to create deploy jobs can also run normal and check (dry run) jobs.
"""
if not data: # So the browseable API will work
return Project.accessible_objects(self.user, 'use_role').exists()
return Project.access_qs(self.user, 'use_project').exists()
# if reference_obj is provided, determine if it can be copied
reference_obj = data.get('reference_obj', None)
@@ -1576,6 +1497,8 @@ class JobTemplateAccess(NotificationAttachMixin, UnifiedCredentialsMixin, BaseAc
inventory = get_value(Inventory, 'inventory')
if inventory:
if self.user not in inventory.use_role:
if self.save_messages:
self.messages['inventory'] = [_('You do not have use permission on Inventory')]
return False
if not self.check_related('execution_environment', ExecutionEnvironment, data, role_field='read_role'):
@@ -1584,11 +1507,16 @@ class JobTemplateAccess(NotificationAttachMixin, UnifiedCredentialsMixin, BaseAc
project = get_value(Project, 'project')
# If the user has admin access to the project (as an org admin), should
# be able to proceed without additional checks.
if project:
return self.user in project.use_role
else:
if not project:
return False
if self.user not in project.use_role:
if self.save_messages:
self.messages['project'] = [_('You do not have use permission on Project')]
return False
return True
@check_superuser
def can_copy_related(self, obj):
"""
@@ -1735,13 +1663,13 @@ class JobAccess(BaseAccess):
def filtered_queryset(self):
qs = self.model.objects
qs_jt = qs.filter(job_template__in=JobTemplate.accessible_objects(self.user, 'read_role'))
qs_jt = qs.filter(job_template__in=JobTemplate.access_qs(self.user, 'view'))
org_access_qs = Organization.objects.filter(Q(admin_role__members=self.user) | Q(auditor_role__members=self.user))
if not org_access_qs.exists():
return qs_jt
return qs.filter(Q(job_template__in=JobTemplate.accessible_objects(self.user, 'read_role')) | Q(organization__in=org_access_qs)).distinct()
return qs.filter(Q(job_template__in=JobTemplate.access_qs(self.user, 'view')) | Q(organization__in=org_access_qs)).distinct()
def can_add(self, data, validate_license=True):
raise NotImplementedError('Direct job creation not possible in v2 API')
@@ -1830,6 +1758,11 @@ class SystemJobTemplateAccess(BaseAccess):
model = SystemJobTemplate
def filtered_queryset(self):
if self.user.is_superuser or self.user.is_system_auditor:
return self.model.objects.all()
return self.model.objects.none()
@check_superuser
def can_start(self, obj, validate_license=True):
'''Only a superuser can start a job from a SystemJobTemplate'''
@@ -1843,6 +1776,11 @@ class SystemJobAccess(BaseAccess):
model = SystemJob
def filtered_queryset(self):
if self.user.is_superuser or self.user.is_system_auditor:
return self.model.objects.all()
return self.model.objects.none()
def can_start(self, obj, validate_license=True):
return False # no relaunching of system jobs
@@ -1942,7 +1880,7 @@ class WorkflowJobTemplateNodeAccess(UnifiedCredentialsMixin, BaseAccess):
prefetch_related = ('success_nodes', 'failure_nodes', 'always_nodes', 'unified_job_template', 'workflow_job_template')
def filtered_queryset(self):
return self.model.objects.filter(workflow_job_template__in=WorkflowJobTemplate.accessible_objects(self.user, 'read_role'))
return self.model.objects.filter(workflow_job_template__in=WorkflowJobTemplate.access_qs(self.user, 'view'))
@check_superuser
def can_add(self, data):
@@ -2057,9 +1995,6 @@ class WorkflowJobTemplateAccess(NotificationAttachMixin, BaseAccess):
'read_role',
)
def filtered_queryset(self):
return self.model.accessible_objects(self.user, 'read_role')
@check_superuser
def can_add(self, data):
"""
@@ -2070,13 +2005,25 @@ class WorkflowJobTemplateAccess(NotificationAttachMixin, BaseAccess):
Users who are able to create deploy jobs can also run normal and check (dry run) jobs.
"""
if not data: # So the browseable API will work
return Organization.accessible_objects(self.user, 'workflow_admin_role').exists()
return Organization.access_qs(self.user, 'add_workflowjobtemplate').exists()
return bool(
self.check_related('organization', Organization, data, role_field='workflow_admin_role', mandatory=True)
and self.check_related('inventory', Inventory, data, role_field='use_role')
and self.check_related('execution_environment', ExecutionEnvironment, data, role_field='read_role')
)
if not self.check_related('organization', Organization, data, role_field='workflow_admin_role', mandatory=True):
if data.get('organization', None) is None:
if self.save_messages:
self.messages['organization'] = [_('An organization is required to create a workflow job template for a normal user')]
return False
if not self.check_related('inventory', Inventory, data, role_field='use_role'):
if self.save_messages:
self.messages['inventory'] = [_('You do not have use_role on the inventory')]
return False
if not self.check_related('execution_environment', ExecutionEnvironment, data, role_field='read_role'):
if self.save_messages:
self.messages['execution_environment'] = [_('You do not have read_role on the execution environment')]
return False
return True
def can_copy(self, obj):
if self.save_messages:
@@ -2151,7 +2098,7 @@ class WorkflowJobAccess(BaseAccess):
def filtered_queryset(self):
return WorkflowJob.objects.filter(
Q(unified_job_template__in=UnifiedJobTemplate.accessible_pk_qs(self.user, 'read_role'))
| Q(organization__in=Organization.objects.filter(Q(admin_role__members=self.user)), is_bulk_job=True)
| Q(organization__in=Organization.accessible_pk_qs(self.user, 'auditor_role'))
)
def can_read(self, obj):
@@ -2429,6 +2376,29 @@ class InventoryUpdateEventAccess(BaseAccess):
return False
class ReceptorAddressAccess(BaseAccess):
"""
I can see receptor address records whenever I can access the instance
"""
model = ReceptorAddress
def filtered_queryset(self):
return self.model.objects.filter(Q(instance__in=Instance.accessible_pk_qs(self.user, 'read_role')))
@check_superuser
def can_add(self, data):
return False
@check_superuser
def can_change(self, obj, data):
return False
@check_superuser
def can_delete(self, obj):
return False
class SystemJobEventAccess(BaseAccess):
"""
I can only see/manage System Job events if I'm a superuser
@@ -2526,12 +2496,11 @@ class UnifiedJobAccess(BaseAccess):
def filtered_queryset(self):
inv_pk_qs = Inventory._accessible_pk_qs(Inventory, self.user, 'read_role')
org_auditor_qs = Organization.objects.filter(Q(admin_role__members=self.user) | Q(auditor_role__members=self.user))
qs = self.model.objects.filter(
Q(unified_job_template_id__in=UnifiedJobTemplate.accessible_pk_qs(self.user, 'read_role'))
| Q(inventoryupdate__inventory_source__inventory__id__in=inv_pk_qs)
| Q(adhoccommand__inventory__id__in=inv_pk_qs)
| Q(organization__in=org_auditor_qs)
| Q(organization__in=Organization.accessible_pk_qs(self.user, 'auditor_role'))
)
return qs
@@ -2562,6 +2531,8 @@ class ScheduleAccess(UnifiedCredentialsMixin, BaseAccess):
if not JobLaunchConfigAccess(self.user).can_add(data):
return False
if not data:
if settings.ANSIBLE_BASE_ROLE_SYSTEM_ACTIVATED:
return self.user.has_roles.filter(permission_partials__codename__in=['execute_jobtemplate', 'update_project', 'update_inventory']).exists()
return Role.objects.filter(role_field__in=['update_role', 'execute_role'], ancestors__in=self.user.roles.all()).exists()
return self.check_related('unified_job_template', UnifiedJobTemplate, data, role_field='execute_role', mandatory=True)
@@ -2583,29 +2554,28 @@ class ScheduleAccess(UnifiedCredentialsMixin, BaseAccess):
class NotificationTemplateAccess(BaseAccess):
"""
I can see/use a notification_template if I have permission to
Run standard logic from DAB RBAC
"""
model = NotificationTemplate
prefetch_related = ('created_by', 'modified_by', 'organization')
def filtered_queryset(self):
if settings.ANSIBLE_BASE_ROLE_SYSTEM_ACTIVATED:
return self.model.access_qs(self.user, 'view')
return self.model.objects.filter(
Q(organization__in=Organization.accessible_objects(self.user, 'notification_admin_role')) | Q(organization__in=self.user.auditor_of_organizations)
Q(organization__in=Organization.access_qs(self.user, 'add_notificationtemplate')) | Q(organization__in=self.user.auditor_of_organizations)
).distinct()
@check_superuser
def can_add(self, data):
if not data:
return Organization.accessible_objects(self.user, 'notification_admin_role').exists()
return Organization.access_qs(self.user, 'add_notificationtemplate').exists()
return self.check_related('organization', Organization, data, role_field='notification_admin_role', mandatory=True)
@check_superuser
def can_change(self, obj, data):
if obj.organization is None:
# only superusers are allowed to edit orphan notification templates
return False
return self.check_related('organization', Organization, data, obj=obj, role_field='notification_admin_role', mandatory=True)
return self.user.has_obj_perm(obj, 'change') and self.check_related('organization', Organization, data, obj=obj, role_field='notification_admin_role')
def can_admin(self, obj, data):
return self.can_change(obj, data)
@@ -2615,9 +2585,7 @@ class NotificationTemplateAccess(BaseAccess):
@check_superuser
def can_start(self, obj, validate_license=True):
if obj.organization is None:
return False
return self.user in obj.organization.notification_admin_role
return self.can_change(obj, None)
class NotificationAccess(BaseAccess):
@@ -2630,7 +2598,7 @@ class NotificationAccess(BaseAccess):
def filtered_queryset(self):
return self.model.objects.filter(
Q(notification_template__organization__in=Organization.accessible_objects(self.user, 'notification_admin_role'))
Q(notification_template__organization__in=Organization.access_qs(self.user, 'add_notificationtemplate'))
| Q(notification_template__organization__in=self.user.auditor_of_organizations)
).distinct()
@@ -2690,8 +2658,6 @@ class ActivityStreamAccess(BaseAccess):
'credential_type',
'team',
'ad_hoc_command',
'o_auth2_application',
'o_auth2_access_token',
'notification_template',
'notification',
'label',
@@ -2746,11 +2712,7 @@ class ActivityStreamAccess(BaseAccess):
if credential_set:
q |= Q(credential__in=credential_set)
auditing_orgs = (
(Organization.accessible_objects(self.user, 'admin_role') | Organization.accessible_objects(self.user, 'auditor_role'))
.distinct()
.values_list('id', flat=True)
)
auditing_orgs = (Organization.access_qs(self.user, 'change') | Organization.access_qs(self.user, 'audit')).distinct().values_list('id', flat=True)
if auditing_orgs:
q |= (
Q(user__in=auditing_orgs.values('member_role__members'))
@@ -2758,7 +2720,7 @@ class ActivityStreamAccess(BaseAccess):
| Q(notification_template__organization__in=auditing_orgs)
| Q(notification__notification_template__organization__in=auditing_orgs)
| Q(label__organization__in=auditing_orgs)
| Q(role__in=Role.objects.filter(ancestors__in=self.user.roles.all()) if auditing_orgs else [])
| Q(role__in=Role.visible_roles(self.user) if auditing_orgs else [])
)
project_set = Project.accessible_pk_qs(self.user, 'read_role')
@@ -2781,14 +2743,6 @@ class ActivityStreamAccess(BaseAccess):
if team_set:
q |= Q(team__in=team_set)
app_set = OAuth2ApplicationAccess(self.user).filtered_queryset()
if app_set:
q |= Q(o_auth2_application__in=app_set)
token_set = OAuth2TokenAccess(self.user).filtered_queryset()
if token_set:
q |= Q(o_auth2_access_token__in=token_set)
return qs.filter(q).distinct()
def can_add(self, data):
@@ -2815,13 +2769,10 @@ class RoleAccess(BaseAccess):
def filtered_queryset(self):
result = Role.visible_roles(self.user)
# Sanity check: is the requesting user an orphaned non-admin/auditor?
# if yes, make system admin/auditor mandatorily visible.
if not self.user.is_superuser and not self.user.is_system_auditor and not self.user.organizations.exists():
mandatories = ('system_administrator', 'system_auditor')
super_qs = Role.objects.filter(singleton_name__in=mandatories)
result = result | super_qs
return result
# Make system admin/auditor mandatorily visible.
mandatories = ('system_administrator', 'system_auditor')
super_qs = Role.objects.filter(singleton_name__in=mandatories)
return result | super_qs
def can_add(self, obj, data):
# Unsupported for now

View File

@@ -2,7 +2,7 @@
import logging
# AWX
from awx.main.analytics.subsystem_metrics import Metrics
from awx.main.analytics.subsystem_metrics import DispatcherMetrics, CallbackReceiverMetrics
from awx.main.dispatch.publish import task
from awx.main.dispatch import get_task_queuename
@@ -11,4 +11,5 @@ logger = logging.getLogger('awx.main.scheduler')
@task(queue=get_task_queuename)
def send_subsystem_metrics():
Metrics().send_metrics()
DispatcherMetrics().send_metrics()
CallbackReceiverMetrics().send_metrics()

View File

@@ -66,10 +66,8 @@ class FixedSlidingWindow:
class RelayWebsocketStatsManager:
def __init__(self, event_loop, local_hostname):
def __init__(self, local_hostname):
self._local_hostname = local_hostname
self._event_loop = event_loop
self._stats = dict()
self._redis_key = BROADCAST_WEBSOCKET_REDIS_KEY_NAME
@@ -94,7 +92,10 @@ class RelayWebsocketStatsManager:
self.start()
def start(self):
self.async_task = self._event_loop.create_task(self.run_loop())
self.async_task = asyncio.get_running_loop().create_task(
self.run_loop(),
name='RelayWebsocketStatsManager.run_loop',
)
return self.async_task
@classmethod
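start() now asks for the currently running loop instead of using one captured at construction time. A minimal illustration of why this requires being called from async context, since asyncio.get_running_loop() raises outside a running loop (names invented):

    import asyncio

    async def main():
        # Works only while a loop is running, i.e. inside a coroutine or callback.
        task = asyncio.get_running_loop().create_task(asyncio.sleep(0), name='demo.run_loop')
        await task

    asyncio.run(main())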

View File

@@ -419,7 +419,7 @@ def _events_table(since, full_path, until, tbl, where_column, project_job_create
resolved_action,
resolved_role,
-- '-' operator listed here:
-- https://www.postgresql.org/docs/12/functions-json.html
-- https://www.postgresql.org/docs/15/functions-json.html
-- note that the '-' operator is only supported by jsonb objects
-- https://www.postgresql.org/docs/current/datatype-json.html
(CASE WHEN event = 'playbook_on_stats' THEN {event_data} - 'artifact_data' END) as playbook_on_stats,
@@ -444,11 +444,6 @@ def _events_table(since, full_path, until, tbl, where_column, project_job_create
return _copy_table(table='events', query=query(fr"replace({tbl}.event_data, '\u', '\u005cu')::jsonb"), path=full_path)
@register('events_table', '1.5', format='csv', description=_('Automation task records'), expensive=four_hour_slicing)
def events_table_unpartitioned(since, full_path, until, **kwargs):
return _events_table(since, full_path, until, '_unpartitioned_main_jobevent', 'created', **kwargs)
@register('events_table', '1.5', format='csv', description=_('Automation task records'), expensive=four_hour_slicing)
def events_table_partitioned_modified(since, full_path, until, **kwargs):
return _events_table(since, full_path, until, 'main_jobevent', 'modified', project_job_created=True, **kwargs)
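For readers unfamiliar with the jsonb '-' operator referenced in the comment, a Python analogue of what the CASE expression produces (values invented):

    # jsonb_column - 'artifact_data' drops a single top-level key
    event_data = {'artifact_data': {'secret': 'x'}, 'changed': True}
    playbook_on_stats = {k: v for k, v in event_data.items() if k != 'artifact_data'}
    # -> {'changed': True}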

View File

@@ -16,10 +16,13 @@ from rest_framework.exceptions import PermissionDenied
import requests
from awx.conf.license import get_license
from ansible_base.lib.utils.db import advisory_lock
from awx.main.models import Job
from awx.main.access import access_registry
from awx.main.utils import get_awx_http_client_headers, set_environ, datetime_hook
from awx.main.utils.pglock import advisory_lock
from awx.main.utils.analytics_proxy import OIDCClient, DEFAULT_OIDC_TOKEN_ENDPOINT
__all__ = ['register', 'gather', 'ship']
@@ -181,7 +184,10 @@ def gather(dest=None, module=None, subset=None, since=None, until=None, collecti
logger.log(log_level, "Automation Analytics not enabled. Use --dry-run to gather locally without sending.")
return None
if not (settings.AUTOMATION_ANALYTICS_URL and settings.REDHAT_USERNAME and settings.REDHAT_PASSWORD):
if not (
settings.AUTOMATION_ANALYTICS_URL
and ((settings.REDHAT_USERNAME and settings.REDHAT_PASSWORD) or (settings.SUBSCRIPTIONS_USERNAME and settings.SUBSCRIPTIONS_PASSWORD))
):
logger.log(log_level, "Not gathering analytics, configuration is invalid. Use --dry-run to gather locally without sending.")
return None
@@ -361,21 +367,35 @@ def ship(path):
if not url:
logger.error('AUTOMATION_ANALYTICS_URL is not set')
return False
rh_user = getattr(settings, 'REDHAT_USERNAME', None)
rh_password = getattr(settings, 'REDHAT_PASSWORD', None)
if not rh_user:
logger.error('REDHAT_USERNAME is not set')
return False
if not rh_password:
logger.error('REDHAT_PASSWORD is not set')
return False
with open(path, 'rb') as f:
files = {'file': (os.path.basename(path), f, settings.INSIGHTS_AGENT_MIME)}
s = requests.Session()
s.headers = get_awx_http_client_headers()
s.headers.pop('Content-Type')
with set_environ(**settings.AWX_TASK_ENV):
response = s.post(url, files=files, verify=settings.INSIGHTS_CERT_PATH, auth=(rh_user, rh_password), headers=s.headers, timeout=(31, 31))
if rh_user and rh_password:
try:
client = OIDCClient(rh_user, rh_password, DEFAULT_OIDC_TOKEN_ENDPOINT, ['api.console'])
response = client.make_request("POST", url, headers=s.headers, files=files, verify=settings.INSIGHTS_CERT_PATH, timeout=(31, 31))
except requests.RequestException:
logger.error("Automation Analytics API request failed, trying basic auth method")
response = s.post(url, files=files, verify=settings.INSIGHTS_CERT_PATH, auth=(rh_user, rh_password), headers=s.headers, timeout=(31, 31))
elif not rh_user or not rh_password:
logger.info('REDHAT_USERNAME and REDHAT_PASSWORD are not set, using SUBSCRIPTIONS_USERNAME and SUBSCRIPTIONS_PASSWORD')
rh_user = getattr(settings, 'SUBSCRIPTIONS_USERNAME', None)
rh_password = getattr(settings, 'SUBSCRIPTIONS_PASSWORD', None)
if rh_user and rh_password:
response = s.post(url, files=files, verify=settings.INSIGHTS_CERT_PATH, auth=(rh_user, rh_password), headers=s.headers, timeout=(31, 31))
elif not rh_user:
logger.error('REDHAT_USERNAME and SUBSCRIPTIONS_USERNAME are not set')
return False
elif not rh_password:
logger.error('REDHAT_PASSWORD and SUBSCRIPTIONS_PASSWORD are not set')
return False
# Accept 2XX status_codes
if response.status_code >= 300:
logger.error('Upload failed with status {}, {}'.format(response.status_code, response.text))
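Condensed, the credential selection that ship() now performs looks roughly like this (helper name invented; the real code additionally tries OIDC first for REDHAT credentials and falls back to basic auth if that request fails):

    def pick_credentials(settings):
        user = getattr(settings, 'REDHAT_USERNAME', None)
        password = getattr(settings, 'REDHAT_PASSWORD', None)
        if user and password:
            return 'oidc-then-basic', user, password
        user = getattr(settings, 'SUBSCRIPTIONS_USERNAME', None)
        password = getattr(settings, 'SUBSCRIPTIONS_PASSWORD', None)
        if user and password:
            return 'basic', user, password
        return None  # nothing usable; ship() logs an error and returns False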

View File

@@ -1,10 +1,16 @@
import itertools
import redis
import json
import time
import logging
import prometheus_client
from prometheus_client.core import GaugeMetricFamily, HistogramMetricFamily
from prometheus_client.registry import CollectorRegistry
from django.conf import settings
from django.apps import apps
from django.http import HttpRequest
import redis.exceptions
from rest_framework.request import Request
from awx.main.consumers import emit_channel_notification
from awx.main.utils import is_testing
@@ -13,6 +19,30 @@ root_key = settings.SUBSYSTEM_METRICS_REDIS_KEY_PREFIX
logger = logging.getLogger('awx.main.analytics')
class MetricsNamespace:
def __init__(self, namespace):
self._namespace = namespace
class MetricsServerSettings(MetricsNamespace):
def port(self):
return settings.METRICS_SUBSYSTEM_CONFIG['server'][self._namespace]['port']
class MetricsServer(MetricsServerSettings):
def __init__(self, namespace, registry):
MetricsNamespace.__init__(self, namespace)
self._registry = registry
def start(self):
try:
# TODO: addr for ipv6 ?
prometheus_client.start_http_server(self.port(), addr='localhost', registry=self._registry)
except Exception:
logger.error(f"MetricsServer failed to start for service '{self._namespace}'.")
raise
class BaseM:
def __init__(self, field, help_text):
self.field = field
@@ -148,76 +178,40 @@ class HistogramM(BaseM):
return output_text
class Metrics:
def __init__(self, auto_pipe_execute=False, instance_name=None):
class Metrics(MetricsNamespace):
# metric name, help_text
METRICSLIST = []
_METRICSLIST = [
FloatM('subsystem_metrics_pipe_execute_seconds', 'Time spent saving metrics to redis'),
IntM('subsystem_metrics_pipe_execute_calls', 'Number of calls to pipe_execute'),
FloatM('subsystem_metrics_send_metrics_seconds', 'Time spent sending metrics to other nodes'),
]
def __init__(self, namespace, auto_pipe_execute=False, instance_name=None, metrics_have_changed=True, **kwargs):
MetricsNamespace.__init__(self, namespace)
self.pipe = redis.Redis.from_url(settings.BROKER_URL).pipeline()
self.conn = redis.Redis.from_url(settings.BROKER_URL)
self.last_pipe_execute = time.time()
# track if metrics have been modified since last saved to redis
# start with True so that we get an initial save to redis
self.metrics_have_changed = True
self.metrics_have_changed = metrics_have_changed
self.pipe_execute_interval = settings.SUBSYSTEM_METRICS_INTERVAL_SAVE_TO_REDIS
self.send_metrics_interval = settings.SUBSYSTEM_METRICS_INTERVAL_SEND_METRICS
# auto pipe execute will commit transaction of metric data to redis
# at a regular interval (pipe_execute_interval). If set to False,
# the calling function should call .pipe_execute() explicitly
self.auto_pipe_execute = auto_pipe_execute
Instance = apps.get_model('main', 'Instance')
if instance_name:
self.instance_name = instance_name
elif is_testing():
self.instance_name = "awx_testing"
else:
self.instance_name = Instance.objects.my_hostname()
self.instance_name = settings.CLUSTER_HOST_ID  # same as Instance.objects.my_hostname(), but avoids importing Instance
# metric name, help_text
METRICSLIST = [
SetIntM('callback_receiver_events_queue_size_redis', 'Current number of events in redis queue'),
IntM('callback_receiver_events_popped_redis', 'Number of events popped from redis'),
IntM('callback_receiver_events_in_memory', 'Current number of events in memory (in transfer from redis to db)'),
IntM('callback_receiver_batch_events_errors', 'Number of times batch insertion failed'),
FloatM('callback_receiver_events_insert_db_seconds', 'Total time spent saving events to database'),
IntM('callback_receiver_events_insert_db', 'Number of events batch inserted into database'),
IntM('callback_receiver_events_broadcast', 'Number of events broadcast to other control plane nodes'),
HistogramM(
'callback_receiver_batch_events_insert_db', 'Number of events batch inserted into database', settings.SUBSYSTEM_METRICS_BATCH_INSERT_BUCKETS
),
SetFloatM('callback_receiver_event_processing_avg_seconds', 'Average processing time per event per callback receiver batch'),
FloatM('subsystem_metrics_pipe_execute_seconds', 'Time spent saving metrics to redis'),
IntM('subsystem_metrics_pipe_execute_calls', 'Number of calls to pipe_execute'),
FloatM('subsystem_metrics_send_metrics_seconds', 'Time spent sending metrics to other nodes'),
SetFloatM('task_manager_get_tasks_seconds', 'Time spent in loading tasks from db'),
SetFloatM('task_manager_start_task_seconds', 'Time spent starting task'),
SetFloatM('task_manager_process_running_tasks_seconds', 'Time spent processing running tasks'),
SetFloatM('task_manager_process_pending_tasks_seconds', 'Time spent processing pending tasks'),
SetFloatM('task_manager__schedule_seconds', 'Time spent in running the entire _schedule'),
IntM('task_manager__schedule_calls', 'Number of calls to _schedule, after lock is acquired'),
SetFloatM('task_manager_recorded_timestamp', 'Unix timestamp when metrics were last recorded'),
SetIntM('task_manager_tasks_started', 'Number of tasks started'),
SetIntM('task_manager_running_processed', 'Number of running tasks processed'),
SetIntM('task_manager_pending_processed', 'Number of pending tasks processed'),
SetIntM('task_manager_tasks_blocked', 'Number of tasks blocked from running'),
SetFloatM('task_manager_commit_seconds', 'Time spent in db transaction, including on_commit calls'),
SetFloatM('dependency_manager_get_tasks_seconds', 'Time spent loading pending tasks from db'),
SetFloatM('dependency_manager_generate_dependencies_seconds', 'Time spent generating dependencies for pending tasks'),
SetFloatM('dependency_manager__schedule_seconds', 'Time spent in running the entire _schedule'),
IntM('dependency_manager__schedule_calls', 'Number of calls to _schedule, after lock is acquired'),
SetFloatM('dependency_manager_recorded_timestamp', 'Unix timestamp when metrics were last recorded'),
SetIntM('dependency_manager_pending_processed', 'Number of pending tasks processed'),
SetFloatM('workflow_manager__schedule_seconds', 'Time spent in running the entire _schedule'),
IntM('workflow_manager__schedule_calls', 'Number of calls to _schedule, after lock is acquired'),
SetFloatM('workflow_manager_recorded_timestamp', 'Unix timestamp when metrics were last recorded'),
SetFloatM('workflow_manager_spawn_workflow_graph_jobs_seconds', 'Time spent spawning workflow tasks'),
SetFloatM('workflow_manager_get_tasks_seconds', 'Time spent loading workflow tasks from db'),
# dispatcher subsystem metrics
SetIntM('dispatcher_pool_scale_up_events', 'Number of times local dispatcher scaled up a worker since startup'),
SetIntM('dispatcher_pool_active_task_count', 'Number of active tasks in the worker pool when last task was submitted'),
SetIntM('dispatcher_pool_max_worker_count', 'Highest number of workers in worker pool in last collection interval, about 20s'),
SetFloatM('dispatcher_availability', 'Fraction of time (in last collection interval) dispatcher was able to receive messages'),
]
# turn metric list into dictionary with the metric name as a key
self.METRICS = {}
for m in METRICSLIST:
for m in itertools.chain(self.METRICSLIST, self._METRICSLIST):
self.METRICS[m.field] = m
# track last time metrics were sent to other nodes
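The base class now carries only the shared bookkeeping metrics in _METRICSLIST, while each service subclass contributes its own METRICSLIST; a minimal sketch of that merge (class and field names invented):

    import itertools

    class Base:
        METRICSLIST = []  # provided by subclasses
        _METRICSLIST = ['subsystem_metrics_pipe_execute_seconds']  # shared bookkeeping

        def __init__(self):
            self.fields = list(itertools.chain(self.METRICSLIST, self._METRICSLIST))

    class Dispatcher(Base):
        METRICSLIST = ['task_manager_get_tasks_seconds']

    assert Dispatcher().fields == ['task_manager_get_tasks_seconds', 'subsystem_metrics_pipe_execute_seconds']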
@@ -230,7 +224,7 @@ class Metrics:
m.reset_value(self.conn)
self.metrics_have_changed = True
self.conn.delete(root_key + "_lock")
for m in self.conn.scan_iter(root_key + '_instance_*'):
for m in self.conn.scan_iter(root_key + '-' + self._namespace + '_instance_*'):
self.conn.delete(m)
def inc(self, field, value):
@@ -297,8 +291,12 @@ class Metrics:
def send_metrics(self):
# more than one thread could be calling this at the same time, so should
# acquire redis lock before sending metrics
lock = self.conn.lock(root_key + '_lock')
if not lock.acquire(blocking=False):
try:
lock = self.conn.lock(root_key + '-' + self._namespace + '_lock')
if not lock.acquire(blocking=False):
return
except redis.exceptions.ConnectionError as exc:
logger.warning(f'Connection error in send_metrics: {exc}')
return
try:
current_time = time.time()
@@ -307,9 +305,10 @@ class Metrics:
payload = {
'instance': self.instance_name,
'metrics': serialized_metrics,
'metrics_namespace': self._namespace,
}
# store the serialized data locally as well, so that load_other_metrics will read it
self.conn.set(root_key + '_instance_' + self.instance_name, serialized_metrics)
self.conn.set(root_key + '-' + self._namespace + '_instance_' + self.instance_name, serialized_metrics)
emit_channel_notification("metrics", payload)
self.previous_send_metrics.set(current_time)
@@ -331,14 +330,14 @@ class Metrics:
instances_filter = request.query_params.getlist("node")
# get a sorted list of instance names
instance_names = [self.instance_name]
for m in self.conn.scan_iter(root_key + '_instance_*'):
for m in self.conn.scan_iter(root_key + '-' + self._namespace + '_instance_*'):
instance_names.append(m.decode('UTF-8').split('_instance_')[1])
instance_names.sort()
# load data, including data from this local instance
instance_data = {}
for instance in instance_names:
if len(instances_filter) == 0 or instance in instances_filter:
instance_data_from_redis = self.conn.get(root_key + '_instance_' + instance)
instance_data_from_redis = self.conn.get(root_key + '-' + self._namespace + '_instance_' + instance)
# data from other instances may not be available. That is OK.
if instance_data_from_redis:
instance_data[instance] = json.loads(instance_data_from_redis.decode('UTF-8'))
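The redis keys are now namespaced per service, so dispatcher and callback-receiver data no longer collide. A sketch of the resulting key shapes (prefix value assumed to match the settings default):

    root_key = 'awx_metrics'  # settings.SUBSYSTEM_METRICS_REDIS_KEY_PREFIX, value assumed
    namespace = 'dispatcher'
    hostname = 'node1.example.org'

    lock_key = root_key + '-' + namespace + '_lock'
    data_key = root_key + '-' + namespace + '_instance_' + hostname
    # -> 'awx_metrics-dispatcher_instance_node1.example.org'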
@@ -357,6 +356,120 @@ class Metrics:
return output_text
class DispatcherMetrics(Metrics):
METRICSLIST = [
SetFloatM('task_manager_get_tasks_seconds', 'Time spent in loading tasks from db'),
SetFloatM('task_manager_start_task_seconds', 'Time spent starting task'),
SetFloatM('task_manager_process_running_tasks_seconds', 'Time spent processing running tasks'),
SetFloatM('task_manager_process_pending_tasks_seconds', 'Time spent processing pending tasks'),
SetFloatM('task_manager__schedule_seconds', 'Time spent in running the entire _schedule'),
IntM('task_manager__schedule_calls', 'Number of calls to _schedule, after lock is acquired'),
SetFloatM('task_manager_recorded_timestamp', 'Unix timestamp when metrics were last recorded'),
SetIntM('task_manager_tasks_started', 'Number of tasks started'),
SetIntM('task_manager_running_processed', 'Number of running tasks processed'),
SetIntM('task_manager_pending_processed', 'Number of pending tasks processed'),
SetIntM('task_manager_tasks_blocked', 'Number of tasks blocked from running'),
SetFloatM('task_manager_commit_seconds', 'Time spent in db transaction, including on_commit calls'),
SetFloatM('dependency_manager_get_tasks_seconds', 'Time spent loading pending tasks from db'),
SetFloatM('dependency_manager_generate_dependencies_seconds', 'Time spent generating dependencies for pending tasks'),
SetFloatM('dependency_manager__schedule_seconds', 'Time spent in running the entire _schedule'),
IntM('dependency_manager__schedule_calls', 'Number of calls to _schedule, after lock is acquired'),
SetFloatM('dependency_manager_recorded_timestamp', 'Unix timestamp when metrics were last recorded'),
SetIntM('dependency_manager_pending_processed', 'Number of pending tasks processed'),
SetFloatM('workflow_manager__schedule_seconds', 'Time spent in running the entire _schedule'),
IntM('workflow_manager__schedule_calls', 'Number of calls to _schedule, after lock is acquired'),
SetFloatM('workflow_manager_recorded_timestamp', 'Unix timestamp when metrics were last recorded'),
SetFloatM('workflow_manager_spawn_workflow_graph_jobs_seconds', 'Time spent spawning workflow tasks'),
SetFloatM('workflow_manager_get_tasks_seconds', 'Time spent loading workflow tasks from db'),
# dispatcher subsystem metrics
SetIntM('dispatcher_pool_scale_up_events', 'Number of times local dispatcher scaled up a worker since startup'),
SetIntM('dispatcher_pool_active_task_count', 'Number of active tasks in the worker pool when last task was submitted'),
SetIntM('dispatcher_pool_max_worker_count', 'Highest number of workers in worker pool in last collection interval, about 20s'),
SetFloatM('dispatcher_availability', 'Fraction of time (in last collection interval) dispatcher was able to receive messages'),
]
def __init__(self, *args, **kwargs):
super().__init__(settings.METRICS_SERVICE_DISPATCHER, *args, **kwargs)
class CallbackReceiverMetrics(Metrics):
METRICSLIST = [
SetIntM('callback_receiver_events_queue_size_redis', 'Current number of events in redis queue'),
IntM('callback_receiver_events_popped_redis', 'Number of events popped from redis'),
IntM('callback_receiver_events_in_memory', 'Current number of events in memory (in transfer from redis to db)'),
IntM('callback_receiver_batch_events_errors', 'Number of times batch insertion failed'),
FloatM('callback_receiver_events_insert_db_seconds', 'Total time spent saving events to database'),
IntM('callback_receiver_events_insert_db', 'Number of events batch inserted into database'),
IntM('callback_receiver_events_broadcast', 'Number of events broadcast to other control plane nodes'),
HistogramM(
'callback_receiver_batch_events_insert_db', 'Number of events batch inserted into database', settings.SUBSYSTEM_METRICS_BATCH_INSERT_BUCKETS
),
SetFloatM('callback_receiver_event_processing_avg_seconds', 'Average processing time per event per callback receiver batch'),
]
def __init__(self, *args, **kwargs):
super().__init__(settings.METRICS_SERVICE_CALLBACK_RECEIVER, *args, **kwargs)
def metrics(request):
m = Metrics()
return m.generate_metrics(request)
output_text = ''
for m in [DispatcherMetrics(), CallbackReceiverMetrics()]:
output_text += m.generate_metrics(request)
return output_text
class CustomToPrometheusMetricsCollector(prometheus_client.registry.Collector):
"""
Takes the metric data from redis -> our custom metric fields -> prometheus
library metric fields.
The plan is to get rid of the use of redis, our custom metric fields, and
to switch fully to the prometheus library. At that point, this translation
code will be deleted.
"""
def __init__(self, metrics_obj, *args, **kwargs):
super().__init__(*args, **kwargs)
self._metrics = metrics_obj
def collect(self):
my_hostname = settings.CLUSTER_HOST_ID
instance_data = self._metrics.load_other_metrics(Request(HttpRequest()))
if not instance_data:
logger.debug(f"No metric data found in redis for metric namespace '{self._metrics._namespace}'")
return None
host_metrics = instance_data.get(my_hostname)
for _, metric in self._metrics.METRICS.items():
entry = host_metrics.get(metric.field)
if not entry:
logger.debug(f"{self._metrics._namespace} metric '{metric.field}' not found in redis data payload {json.dumps(instance_data, indent=2)}")
continue
if isinstance(metric, HistogramM):
buckets = list(zip(metric.buckets, entry['counts']))
buckets = [[str(i[0]), str(i[1])] for i in buckets]
yield HistogramMetricFamily(metric.field, metric.help_text, buckets=buckets, sum_value=entry['sum'])
else:
yield GaugeMetricFamily(metric.field, metric.help_text, value=entry)
class CallbackReceiverMetricsServer(MetricsServer):
def __init__(self):
registry = CollectorRegistry(auto_describe=True)
registry.register(CustomToPrometheusMetricsCollector(CallbackReceiverMetrics(metrics_have_changed=False)))
super().__init__(settings.METRICS_SERVICE_CALLBACK_RECEIVER, registry)
class DispatcherMetricsServer(MetricsServer):
def __init__(self):
registry = CollectorRegistry(auto_describe=True)
registry.register(CustomToPrometheusMetricsCollector(DispatcherMetrics(metrics_have_changed=False)))
super().__init__(settings.METRICS_SERVICE_DISPATCHER, registry)
class WebsocketsMetricsServer(MetricsServer):
def __init__(self):
registry = CollectorRegistry(auto_describe=True)
# registry.register()
super().__init__(settings.METRICS_SERVICE_WEBSOCKETS, registry)
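The collector classes above bridge redis-backed metric fields into prometheus_client. A self-contained sketch of the same registration pattern (metric name, value, and port invented):

    import prometheus_client
    from prometheus_client.core import GaugeMetricFamily
    from prometheus_client.registry import Collector, CollectorRegistry

    class StaticCollector(Collector):
        def collect(self):
            # A real collector would read its values from redis, as above.
            yield GaugeMetricFamily('example_gauge', 'An example value', value=42)

    registry = CollectorRegistry(auto_describe=True)
    registry.register(StaticCollector())
    prometheus_client.start_http_server(9090, addr='localhost', registry=registry)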

View File

@@ -1,7 +1,90 @@
import os
from django.apps import AppConfig
from django.utils.translation import gettext_lazy as _
from awx.main.utils.common import bypass_in_test, load_all_entry_points_for
from awx.main.utils.migration import is_database_synchronized
from awx.main.utils.named_url_graph import _customize_graph, generate_graph
from awx.conf import register, fields
from awx_plugins.interfaces._temporary_private_licensing_api import detect_server_product_name
class MainConfig(AppConfig):
name = 'awx.main'
verbose_name = _('Main')
def load_named_url_feature(self):
models = [m for m in self.get_models() if hasattr(m, 'get_absolute_url')]
generate_graph(models)
_customize_graph()
register(
'NAMED_URL_FORMATS',
field_class=fields.DictField,
read_only=True,
label=_('Formats of all available named urls'),
help_text=_('Read-only list of key-value pairs that shows the standard format of all available named URLs.'),
category=_('Named URL'),
category_slug='named-url',
)
register(
'NAMED_URL_GRAPH_NODES',
field_class=fields.DictField,
read_only=True,
label=_('List of all named url graph nodes.'),
help_text=_(
'Read-only list of key-value pairs that exposes named URL graph topology.'
' Use this list to programmatically generate named URLs for resources'
),
category=_('Named URL'),
category_slug='named-url',
)
def _load_credential_types_feature(self):
"""
Create CredentialType records for any discovered credentials.
Note that Django docs advise _against_ interacting with the database using
the ORM models in the ready() path, specifically during testing.
However, we explicitly use the @bypass_in_test decorator to avoid calling this
method during testing.
Django also advises against this pattern because ready() runs everywhere, i.e.
in every management command. We use an advisory lock to ensure correctness, and
we will deal with performance if it becomes an issue.
"""
from awx.main.models.credential import CredentialType
if is_database_synchronized():
CredentialType.setup_tower_managed_defaults(app_config=self)
@bypass_in_test
def load_credential_types_feature(self):
from awx.main.models.credential import load_credentials
load_credentials()
return self._load_credential_types_feature()
def load_inventory_plugins(self):
from awx.main.models.inventory import InventorySourceOptions
is_awx = detect_server_product_name() == 'AWX'
extra_entry_point_groups = () if is_awx else ('inventory.supported',)
entry_points = load_all_entry_points_for(['inventory', *extra_entry_point_groups])
for entry_point_name, entry_point in entry_points.items():
cls = entry_point.load()
InventorySourceOptions.injectors[entry_point_name] = cls
def ready(self):
super().ready()
"""
Credential loading triggers database operations, but there are cases where we want to call
awx-manage collectstatic without a database. All management commands invoke the ready() code
path. Reading settings.AWX_SKIP_CREDENTIAL_TYPES_DISCOVER (instead of the environment
variable) _could_ itself invoke a database operation, since AWX settings may be database-backed.
"""
if not os.environ.get('AWX_SKIP_CREDENTIAL_TYPES_DISCOVER', None):
self.load_credential_types_feature()
self.load_named_url_feature()
self.load_inventory_plugins()
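ready() gates credential discovery on a raw environment variable rather than a Django setting, since (per the docstring above) even reading an AWX setting could touch the database. A minimal sketch of the gate (helper name invented):

    import os

    if not os.environ.get('AWX_SKIP_CREDENTIAL_TYPES_DISCOVER', None):
        discover_credential_types()  # hypothetical stand-in for load_credential_types_feature()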

View File

@@ -2,6 +2,7 @@
import logging
# Django
from django.core.checks import Error
from django.utils.translation import gettext_lazy as _
# Django REST Framework
@@ -45,10 +46,7 @@ register(
'MANAGE_ORGANIZATION_AUTH',
field_class=fields.BooleanField,
label=_('Organization Admins Can Manage Users and Teams'),
help_text=_(
'Controls whether any Organization Admin has the privileges to create and manage users and teams. '
'You may want to disable this ability if you are using an LDAP or SAML integration.'
),
help_text=_('Controls whether any Organization Admin has the privileges to create and manage users and teams.'),
category=_('System'),
category_slug='system',
)
@@ -92,6 +90,7 @@ register(
),
category=_('System'),
category_slug='system',
required=False,
)
register(
@@ -593,7 +592,7 @@ register(
register(
'LOG_AGGREGATOR_LOGGERS',
field_class=fields.StringListField,
default=['awx', 'activity_stream', 'job_events', 'system_tracking', 'broadcast_websocket'],
default=['awx', 'activity_stream', 'job_events', 'system_tracking', 'broadcast_websocket', 'job_lifecycle'],
label=_('Loggers Sending Data to Log Aggregator Form'),
help_text=_(
'List of loggers that will send HTTP logs to the collector, these can '
@@ -603,6 +602,7 @@ register(
'job_events - callback data from Ansible job events\n'
'system_tracking - facts gathered from scan jobs\n'
'broadcast_websocket - errors pertaining to websockets broadcast metrics\n'
'job_lifecycle - logs related to processing of a job\n'
),
category=_('Logging'),
category_slug='logging',
@@ -774,6 +774,8 @@ register(
allow_null=True,
category=_('System'),
category_slug='system',
required=False,
hidden=True,
)
register(
'AUTOMATION_ANALYTICS_LAST_ENTRIES',
@@ -815,6 +817,7 @@ register(
help_text=_('Max jobs to allow bulk jobs to launch'),
category=_('Bulk Actions'),
category_slug='bulk',
hidden=True,
)
register(
@@ -825,23 +828,26 @@ register(
help_text=_('Max number of hosts to allow to be created in a single bulk action'),
category=_('Bulk Actions'),
category_slug='bulk',
hidden=True,
)
register(
'UI_NEXT',
field_class=fields.BooleanField,
default=False,
label=_('Enable Preview of New User Interface'),
help_text=_('Enable preview of new user interface.'),
category=_('System'),
category_slug='system',
'BULK_HOST_MAX_DELETE',
field_class=fields.IntegerField,
default=250,
label=_('Max number of hosts to allow to be deleted in a single bulk action'),
help_text=_('Max number of hosts to allow to be deleted in a single bulk action'),
category=_('Bulk Actions'),
category_slug='bulk',
hidden=True,
)
register(
'SUBSCRIPTION_USAGE_MODEL',
field_class=fields.ChoiceField,
choices=[
('', _('Default model for AWX - no subscription. Deletion of host_metrics will not be considered for purposes of managed host counting')),
('', _('No subscription. Deletion of host_metrics will not be considered for purposes of managed host counting')),
(
SUBSCRIPTION_USAGE_MODEL_UNIQUE_HOSTS,
_('Usage based on unique managed nodes in a large historical time frame and delete functionality for no longer used managed nodes'),
@@ -861,6 +867,7 @@ register(
allow_null=True,
category=_('System'),
category_slug='system',
hidden=True,
)
register(
@@ -870,6 +877,7 @@ register(
allow_null=True,
category=_('System'),
category_slug='system',
hidden=True,
)
register(
@@ -912,6 +920,16 @@ register(
category_slug='debug',
)
register(
'RECEPTOR_KEEP_WORK_ON_ERROR',
field_class=fields.BooleanField,
label=_('Keep receptor work on error'),
default=False,
help_text=_('Prevent receptor work from being released when an error is detected'),
category=_('Debug'),
category_slug='debug',
)
def logging_validate(serializer, attrs):
if not serializer.instance or not hasattr(serializer.instance, 'LOG_AGGREGATOR_HOST') or not hasattr(serializer.instance, 'LOG_AGGREGATOR_TYPE'):
@@ -938,3 +956,27 @@ def logging_validate(serializer, attrs):
register_validate('logging', logging_validate)
def csrf_trusted_origins_validate(serializer, attrs):
if not serializer.instance or not hasattr(serializer.instance, 'CSRF_TRUSTED_ORIGINS'):
return attrs
if 'CSRF_TRUSTED_ORIGINS' not in attrs:
return attrs
errors = []
for origin in attrs['CSRF_TRUSTED_ORIGINS']:
if "://" not in origin:
errors.append(
Error(
"As of Django 4.0, the values in the CSRF_TRUSTED_ORIGINS "
"setting must start with a scheme (usually http:// or "
"https://) but found %s. See the release notes for details." % origin,
)
)
if errors:
error_messages = [error.msg for error in errors]
raise serializers.ValidationError(_('\n'.join(error_messages)))
return attrs
register_validate('system', csrf_trusted_origins_validate)
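For context, the check above enforces Django 4.0's rule that every CSRF_TRUSTED_ORIGINS entry must carry a scheme. A minimal standalone sketch of the same rule, outside the DRF serializer plumbing (the helper name check_origins is illustrative, not part of this diff):
# Illustrative only: mirrors the scheme check in csrf_trusted_origins_validate.
def check_origins(origins):
    bad = [origin for origin in origins if "://" not in origin]
    if bad:
        raise ValueError(
            "CSRF_TRUSTED_ORIGINS entries must start with a scheme "
            "(usually http:// or https://); offending values: %s" % ", ".join(bad)
        )

check_origins(["https://tower.example.com"])  # passes silently
# check_origins(["tower.example.com"])        # would raise ValueError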

View File

@@ -6,7 +6,6 @@ import re
from django.utils.translation import gettext_lazy as _
__all__ = [
'CLOUD_PROVIDERS',
'PRIVILEGE_ESCALATION_METHODS',
'ANSI_SGR_PATTERN',
'CAN_CANCEL',
@@ -14,7 +13,6 @@ __all__ = [
'STANDARD_INVENTORY_UPDATE_ENV',
]
CLOUD_PROVIDERS = ('azure_rm', 'ec2', 'gce', 'vmware', 'openstack', 'rhv', 'satellite6', 'controller', 'insights')
PRIVILEGE_ESCALATION_METHODS = [
('sudo', _('Sudo')),
('su', _('Su')),
@@ -43,6 +41,7 @@ STANDARD_INVENTORY_UPDATE_ENV = {
}
CAN_CANCEL = ('new', 'pending', 'waiting', 'running')
ACTIVE_STATES = CAN_CANCEL
ERROR_STATES = ('error',)
MINIMAL_EVENTS = set(['playbook_on_play_start', 'playbook_on_task_start', 'playbook_on_stats', 'EOF'])
CENSOR_VALUE = '************'
ENV_BLOCKLIST = frozenset(
@@ -114,3 +113,28 @@ SUBSCRIPTION_USAGE_MODEL_UNIQUE_HOSTS = 'unique_managed_hosts'
# Shared prefetch to use for creating a queryset for the purpose of writing or saving facts
HOST_FACTS_FIELDS = ('name', 'ansible_facts', 'ansible_facts_modified', 'modified', 'inventory_id')
# Data for RBAC compatibility layer
role_name_to_perm_mapping = {
'adhoc_role': ['adhoc_'],
'approval_role': ['approve_'],
'auditor_role': ['audit_'],
'admin_role': ['change_', 'add_', 'delete_'],
'execute_role': ['execute_'],
'read_role': ['view_'],
'update_role': ['update_'],
'member_role': ['member_'],
'use_role': ['use_'],
}
org_role_to_permission = {
'notification_admin_role': 'add_notificationtemplate',
'project_admin_role': 'add_project',
'execute_role': 'execute_jobtemplate',
'inventory_admin_role': 'add_inventory',
'credential_admin_role': 'add_credential',
'workflow_admin_role': 'add_workflowjobtemplate',
'job_template_admin_role': 'change_jobtemplate', # TODO: this doesn't really work, solution not clear
'execution_environment_admin_role': 'add_executionenvironment',
'auditor_role': 'view_project', # TODO: also doesn't really work
}
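A quick sketch of how these mappings could be applied; the helper and the 'jobtemplate' model name are assumptions for illustration, not code from this diff:
# Illustrative only: expands an old-style role name into permission
# codenames using the prefix table above.
def role_to_codenames(role_name, model_name):
    prefixes = role_name_to_perm_mapping.get(role_name, [])
    return [prefix + model_name for prefix in prefixes]

print(role_to_codenames('admin_role', 'jobtemplate'))
# -> ['change_jobtemplate', 'add_jobtemplate', 'delete_jobtemplate']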

View File

@@ -106,7 +106,7 @@ class RelayConsumer(AsyncJsonWebsocketConsumer):
if group == "metrics":
message = json.loads(message['text'])
conn = redis.Redis.from_url(settings.BROKER_URL)
conn.set(settings.SUBSYSTEM_METRICS_REDIS_KEY_PREFIX + "_instance_" + message['instance'], message['metrics'])
conn.set(settings.SUBSYSTEM_METRICS_REDIS_KEY_PREFIX + "-" + message['metrics_namespace'] + "_instance_" + message['instance'], message['metrics'])
else:
await self.channel_layer.group_send(group, message)
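The change above namespaces the per-instance Redis key for subsystem metrics. A sketch of the resulting key shapes, with hypothetical values:
# Illustrative only: old vs. new key format for the metrics entry.
prefix = "awx_metrics"            # settings.SUBSYSTEM_METRICS_REDIS_KEY_PREFIX
namespace = "callback_receiver"   # message['metrics_namespace']
instance = "node1.example.com"    # message['instance']

old_key = prefix + "_instance_" + instance
new_key = prefix + "-" + namespace + "_instance_" + instance
print(old_key)  # awx_metrics_instance_node1.example.com
print(new_key)  # awx_metrics-callback_receiver_instance_node1.example.com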

View File

@@ -1,122 +0,0 @@
from .plugin import CredentialPlugin, CertFiles, raise_for_status
from urllib.parse import quote, urlencode, urljoin
from django.utils.translation import gettext_lazy as _
import requests
aim_inputs = {
'fields': [
{
'id': 'url',
'label': _('CyberArk CCP URL'),
'type': 'string',
'format': 'url',
},
{
'id': 'webservice_id',
'label': _('Web Service ID'),
'type': 'string',
'help_text': _('The CCP Web Service ID. Leave blank to default to AIMWebService.'),
},
{
'id': 'app_id',
'label': _('Application ID'),
'type': 'string',
'secret': True,
},
{
'id': 'client_key',
'label': _('Client Key'),
'type': 'string',
'secret': True,
'multiline': True,
},
{
'id': 'client_cert',
'label': _('Client Certificate'),
'type': 'string',
'secret': True,
'multiline': True,
},
{
'id': 'verify',
'label': _('Verify SSL Certificates'),
'type': 'boolean',
'default': True,
},
],
'metadata': [
{
'id': 'object_query',
'label': _('Object Query'),
'type': 'string',
'help_text': _('Lookup query for the object. Ex: Safe=TestSafe;Object=testAccountName123'),
},
{'id': 'object_query_format', 'label': _('Object Query Format'), 'type': 'string', 'default': 'Exact', 'choices': ['Exact', 'Regexp']},
{
'id': 'object_property',
'label': _('Object Property'),
'type': 'string',
'help_text': _('The property of the object to return. Default: Content. Ex: Username, Address, etc.'),
},
{
'id': 'reason',
'label': _('Reason'),
'type': 'string',
'help_text': _('Object request reason. This is only needed if it is required by the object\'s policy.'),
},
],
'required': ['url', 'app_id', 'object_query'],
}
def aim_backend(**kwargs):
url = kwargs['url']
client_cert = kwargs.get('client_cert', None)
client_key = kwargs.get('client_key', None)
verify = kwargs['verify']
webservice_id = kwargs.get('webservice_id', '')
app_id = kwargs['app_id']
object_query = kwargs['object_query']
object_query_format = kwargs['object_query_format']
object_property = kwargs.get('object_property', '')
reason = kwargs.get('reason', None)
if webservice_id == '':
webservice_id = 'AIMWebService'
query_params = {
'AppId': app_id,
'Query': object_query,
'QueryFormat': object_query_format,
}
if reason:
query_params['reason'] = reason
request_qs = '?' + urlencode(query_params, quote_via=quote)
request_url = urljoin(url, '/'.join([webservice_id, 'api', 'Accounts']))
with CertFiles(client_cert, client_key) as cert:
res = requests.get(
request_url + request_qs,
timeout=30,
cert=cert,
verify=verify,
allow_redirects=False,
)
raise_for_status(res)
# CCP returns the property name capitalized, username is camel case
# so we need to handle that case
if object_property == '':
object_property = 'Content'
elif object_property.lower() == 'username':
object_property = 'UserName'
elif object_property not in res:
raise KeyError('Property {} not found in object'.format(object_property))
else:
object_property = object_property.capitalize()
return res.json()[object_property]
aim_plugin = CredentialPlugin('CyberArk Central Credential Provider Lookup', inputs=aim_inputs, backend=aim_backend)
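For reference, a sketch of how this backend could be called directly; every value below is a hypothetical placeholder, and in AWX the CredentialPlugin machinery supplies these kwargs from the stored credential inputs:
# Illustrative only: direct invocation of the CCP lookup.
secret = aim_backend(
    url='https://ccp.example.com',
    webservice_id='',    # empty string falls back to 'AIMWebService'
    app_id='MyAppID',
    object_query='Safe=TestSafe;Object=testAccountName123',
    object_query_format='Exact',
    object_property='',  # empty string falls back to 'Content'
    verify=True,
)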

View File

@@ -1,65 +0,0 @@
import boto3
from botocore.exceptions import ClientError
from .plugin import CredentialPlugin
from django.utils.translation import gettext_lazy as _
secrets_manager_inputs = {
'fields': [
{
'id': 'aws_access_key',
'label': _('AWS Access Key'),
'type': 'string',
},
{
'id': 'aws_secret_key',
'label': _('AWS Secret Key'),
'type': 'string',
'secret': True,
},
],
'metadata': [
{
'id': 'region_name',
'label': _('AWS Secrets Manager Region'),
'type': 'string',
'help_text': _('Region in which the secrets manager is located'),
},
{
'id': 'secret_name',
'label': _('AWS Secret Name'),
'type': 'string',
},
],
'required': ['aws_access_key', 'aws_secret_key', 'region_name', 'secret_name'],
}
def aws_secretsmanager_backend(**kwargs):
secret_name = kwargs['secret_name']
region_name = kwargs['region_name']
aws_secret_access_key = kwargs['aws_secret_key']
aws_access_key_id = kwargs['aws_access_key']
session = boto3.session.Session()
client = session.client(
service_name='secretsmanager', region_name=region_name, aws_secret_access_key=aws_secret_access_key, aws_access_key_id=aws_access_key_id
)
try:
get_secret_value_response = client.get_secret_value(SecretId=secret_name)
except ClientError as e:
raise e
# Secrets Manager decrypts the secret value using the associated KMS CMK
# Depending on whether the secret was a string or binary, only one of these fields will be populated
if 'SecretString' in get_secret_value_response:
secret = get_secret_value_response['SecretString']
else:
secret = get_secret_value_response['SecretBinary']
return secret
aws_secretmanager_plugin = CredentialPlugin('AWS Secrets Manager lookup', inputs=secrets_manager_inputs, backend=aws_secretsmanager_backend)
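A usage sketch with hypothetical credentials; AWX normally routes these kwargs through the CredentialPlugin interface rather than calling the backend directly:
# Illustrative only: fetch one secret value from AWS Secrets Manager.
value = aws_secretsmanager_backend(
    aws_access_key='AKIAEXAMPLE',        # placeholder
    aws_secret_key='secret-key-value',   # placeholder
    region_name='us-east-1',
    secret_name='prod/db/password',
)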

View File

@@ -1,75 +0,0 @@
from .plugin import CredentialPlugin
from django.utils.translation import gettext_lazy as _
from azure.keyvault import KeyVaultClient, KeyVaultAuthentication
from azure.common.credentials import ServicePrincipalCredentials
from msrestazure import azure_cloud
# https://github.com/Azure/msrestazure-for-python/blob/master/msrestazure/azure_cloud.py
clouds = [vars(azure_cloud)[n] for n in dir(azure_cloud) if n.startswith("AZURE_") and n.endswith("_CLOUD")]
default_cloud = vars(azure_cloud)["AZURE_PUBLIC_CLOUD"]
azure_keyvault_inputs = {
'fields': [
{
'id': 'url',
'label': _('Vault URL (DNS Name)'),
'type': 'string',
'format': 'url',
},
{'id': 'client', 'label': _('Client ID'), 'type': 'string'},
{
'id': 'secret',
'label': _('Client Secret'),
'type': 'string',
'secret': True,
},
{'id': 'tenant', 'label': _('Tenant ID'), 'type': 'string'},
{
'id': 'cloud_name',
'label': _('Cloud Environment'),
'help_text': _('Specify which azure cloud environment to use.'),
'choices': list(set([default_cloud.name] + [c.name for c in clouds])),
'default': default_cloud.name,
},
],
'metadata': [
{
'id': 'secret_field',
'label': _('Secret Name'),
'type': 'string',
'help_text': _('The name of the secret to look up.'),
},
{
'id': 'secret_version',
'label': _('Secret Version'),
'type': 'string',
'help_text': _('Used to specify a specific secret version (if left empty, the latest version will be used).'),
},
],
'required': ['url', 'client', 'secret', 'tenant', 'secret_field'],
}
def azure_keyvault_backend(**kwargs):
url = kwargs['url']
[cloud] = [c for c in clouds if c.name == kwargs.get('cloud_name', default_cloud.name)]
def auth_callback(server, resource, scope):
credentials = ServicePrincipalCredentials(
url=url,
client_id=kwargs['client'],
secret=kwargs['secret'],
tenant=kwargs['tenant'],
resource=f"https://{cloud.suffixes.keyvault_dns.split('.', 1).pop()}",
)
token = credentials.token
return token['token_type'], token['access_token']
kv = KeyVaultClient(KeyVaultAuthentication(auth_callback))
return kv.get_secret(url, kwargs['secret_field'], kwargs.get('secret_version', '')).value
azure_keyvault_plugin = CredentialPlugin('Microsoft Azure Key Vault', inputs=azure_keyvault_inputs, backend=azure_keyvault_backend)
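A usage sketch with hypothetical service-principal values; cloud_name is omitted and therefore defaults to the public cloud:
# Illustrative only: look up one secret from an Azure Key Vault.
value = azure_keyvault_backend(
    url='https://myvault.vault.azure.net/',
    client='00000000-0000-0000-0000-000000000000',   # placeholder
    secret='client-secret-value',                    # placeholder
    tenant='11111111-1111-1111-1111-111111111111',   # placeholder
    secret_field='db-password',
)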

View File

@@ -1,115 +0,0 @@
from .plugin import CredentialPlugin, raise_for_status
from django.utils.translation import gettext_lazy as _
from urllib.parse import urljoin
import requests
pas_inputs = {
'fields': [
{
'id': 'url',
'label': _('Centrify Tenant URL'),
'type': 'string',
'help_text': _('Centrify Tenant URL'),
'format': 'url',
},
{
'id': 'client_id',
'label': _('Centrify API User'),
'type': 'string',
'help_text': _('Centrify API user with the necessary permissions, as mentioned in the support doc'),
},
{
'id': 'client_password',
'label': _('Centrify API Password'),
'type': 'string',
'help_text': _('Password of Centrify API User with necessary permissions'),
'secret': True,
},
{
'id': 'oauth_application_id',
'label': _('OAuth2 Application ID'),
'type': 'string',
'help_text': _('Application ID of the configured OAuth2 Client (defaults to \'awx\')'),
'default': 'awx',
},
{
'id': 'oauth_scope',
'label': _('OAuth2 Scope'),
'type': 'string',
'help_text': _('Scope of the configured OAuth2 Client (defaults to \'awx\')'),
'default': 'awx',
},
],
'metadata': [
{
'id': 'account-name',
'label': _('Account Name'),
'type': 'string',
'help_text': _('Local system account or Domain account name enrolled in Centrify Vault, e.g. (root or DOMAIN/Administrator)'),
},
{
'id': 'system-name',
'label': _('System Name'),
'type': 'string',
'help_text': _('Machine Name enrolled in the Centrify Portal'),
},
],
'required': ['url', 'account-name', 'system-name', 'client_id', 'client_password'],
}
# generate bearer token to authenticate with PAS portal, Input : Client ID, Client Secret
def handle_auth(**kwargs):
post_data = {"grant_type": "client_credentials", "scope": kwargs['oauth_scope']}
response = requests.post(kwargs['endpoint'], data=post_data, auth=(kwargs['client_id'], kwargs['client_password']), verify=True, timeout=(5, 30))
raise_for_status(response)
try:
return response.json()['access_token']
except KeyError:
raise RuntimeError('OAuth request to tenant was unsuccessful')
# fetch the ID of system with RedRock query, Input : System Name, Account Name
def get_ID(**kwargs):
endpoint = urljoin(kwargs['url'], '/Redrock/query')
name = " Name='{0}' and User='{1}'".format(kwargs['system_name'], kwargs['acc_name'])
query = 'Select ID from VaultAccount where {0}'.format(name)
post_headers = {"Authorization": "Bearer " + kwargs['access_token'], "X-CENTRIFY-NATIVE-CLIENT": "true"}
response = requests.post(endpoint, json={'Script': query}, headers=post_headers, verify=True, timeout=(5, 30))
raise_for_status(response)
try:
result_str = response.json()["Result"]["Results"]
return result_str[0]["Row"]["ID"]
except (IndexError, KeyError):
raise RuntimeError("Error Detected!! Check the Inputs")
# CheckOut Password from Centrify Vault, Input : ID
def get_passwd(**kwargs):
endpoint = urljoin(kwargs['url'], '/ServerManage/CheckoutPassword')
post_headers = {"Authorization": "Bearer " + kwargs['access_token'], "X-CENTRIFY-NATIVE-CLIENT": "true"}
response = requests.post(endpoint, json={'ID': kwargs['acc_id']}, headers=post_headers, verify=True, timeout=(5, 30))
raise_for_status(response)
try:
return response.json()["Result"]["Password"]
except KeyError:
raise RuntimeError("Password Not Found")
def centrify_backend(**kwargs):
url = kwargs.get('url')
acc_name = kwargs.get('account-name')
system_name = kwargs.get('system-name')
client_id = kwargs.get('client_id')
client_password = kwargs.get('client_password')
app_id = kwargs.get('oauth_application_id', 'awx')
endpoint = urljoin(url, f'/oauth2/token/{app_id}')
endpoint = {'endpoint': endpoint, 'client_id': client_id, 'client_password': client_password, 'oauth_scope': kwargs.get('oauth_scope', 'awx')}
token = handle_auth(**endpoint)
get_id_args = {'system_name': system_name, 'acc_name': acc_name, 'url': url, 'access_token': token}
acc_id = get_ID(**get_id_args)
get_pwd_args = {'url': url, 'acc_id': acc_id, 'access_token': token}
return get_passwd(**get_pwd_args)
centrify_plugin = CredentialPlugin('Centrify Vault Credential Provider Lookup', inputs=pas_inputs, backend=centrify_backend)
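Because the metadata keys contain hyphens, a direct call has to expand a dict rather than pass keyword literals. All values below are hypothetical:
# Illustrative only: checkout of a vaulted password from Centrify PAS.
params = {
    'url': 'https://tenant.my.centrify.net',   # placeholder
    'client_id': 'apiuser@tenant',             # placeholder
    'client_password': 'api-password',         # placeholder
    'account-name': 'root',
    'system-name': 'webserver01',
}
password = centrify_backend(**params)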

View File

@@ -1,112 +0,0 @@
from .plugin import CredentialPlugin, CertFiles, raise_for_status
from urllib.parse import urljoin, quote
from django.utils.translation import gettext_lazy as _
import requests
import base64
import binascii
conjur_inputs = {
'fields': [
{
'id': 'url',
'label': _('Conjur URL'),
'type': 'string',
'format': 'url',
},
{
'id': 'api_key',
'label': _('API Key'),
'type': 'string',
'secret': True,
},
{
'id': 'account',
'label': _('Account'),
'type': 'string',
},
{
'id': 'username',
'label': _('Username'),
'type': 'string',
},
{'id': 'cacert', 'label': _('Public Key Certificate'), 'type': 'string', 'multiline': True},
],
'metadata': [
{
'id': 'secret_path',
'label': _('Secret Identifier'),
'type': 'string',
'help_text': _('The identifier for the secret e.g., /some/identifier'),
},
{
'id': 'secret_version',
'label': _('Secret Version'),
'type': 'string',
'help_text': _('Used to specify a specific secret version (if left empty, the latest version will be used).'),
},
],
'required': ['url', 'api_key', 'account', 'username'],
}
def _is_base64(s: str) -> bool:
try:
return base64.b64encode(base64.b64decode(s.encode("utf-8"))) == s.encode("utf-8")
except binascii.Error:
return False
def conjur_backend(**kwargs):
url = kwargs['url']
api_key = kwargs['api_key']
account = quote(kwargs['account'], safe='')
username = quote(kwargs['username'], safe='')
secret_path = quote(kwargs['secret_path'], safe='')
version = kwargs.get('secret_version')
cacert = kwargs.get('cacert', None)
auth_kwargs = {
'headers': {'Content-Type': 'text/plain', 'Accept-Encoding': 'base64'},
'data': api_key,
'allow_redirects': False,
}
with CertFiles(cacert) as cert:
# https://www.conjur.org/api.html#authentication-authenticate-post
auth_kwargs['verify'] = cert
try:
resp = requests.post(urljoin(url, '/'.join(['authn', account, username, 'authenticate'])), **auth_kwargs)
resp.raise_for_status()
except requests.exceptions.HTTPError:
resp = requests.post(urljoin(url, '/'.join(['api', 'authn', account, username, 'authenticate'])), **auth_kwargs)
raise_for_status(resp)
token = resp.content.decode('utf-8')
lookup_kwargs = {
'headers': {'Authorization': 'Token token="{}"'.format(token if _is_base64(token) else base64.b64encode(token.encode('utf-8')).decode('utf-8'))},
'allow_redirects': False,
}
# https://www.conjur.org/api.html#secrets-retrieve-a-secret-get
path = urljoin(url, '/'.join(['secrets', account, 'variable', secret_path]))
path_conjurcloud = urljoin(url, '/'.join(['api', 'secrets', account, 'variable', secret_path]))
if version:
ver = "version={}".format(version)
path = '?'.join([path, ver])
path_conjurcloud = '?'.join([path_conjurcloud, ver])
with CertFiles(cacert) as cert:
lookup_kwargs['verify'] = cert
try:
resp = requests.get(path, timeout=30, **lookup_kwargs)
resp.raise_for_status()
except requests.exceptions.HTTPError:
resp = requests.get(path_conjurcloud, timeout=30, **lookup_kwargs)
raise_for_status(resp)
return resp.text
conjur_plugin = CredentialPlugin('CyberArk Conjur Secrets Manager Lookup', inputs=conjur_inputs, backend=conjur_backend)
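A usage sketch with hypothetical values; note that the backend first tries the self-hosted endpoints and falls back to the '/api/...' paths used by Conjur Cloud:
# Illustrative only: retrieve one Conjur variable.
value = conjur_backend(
    url='https://conjur.example.com',
    api_key='api-key-value',        # placeholder
    account='myorg',
    username='host/ansible/awx',
    secret_path='prod/db/password',
)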

View File

@@ -1,94 +0,0 @@
from .plugin import CredentialPlugin
from django.conf import settings
from django.utils.translation import gettext_lazy as _
from delinea.secrets.vault import PasswordGrantAuthorizer, SecretsVault
from base64 import b64decode
dsv_inputs = {
'fields': [
{
'id': 'tenant',
'label': _('Tenant'),
'help_text': _('The tenant e.g. "ex" when the URL is https://ex.secretsvaultcloud.com'),
'type': 'string',
},
{
'id': 'tld',
'label': _('Top-level Domain (TLD)'),
'help_text': _('The TLD of the tenant e.g. "com" when the URL is https://ex.secretsvaultcloud.com'),
'choices': ['ca', 'com', 'com.au', 'eu'],
'default': 'com',
},
{
'id': 'client_id',
'label': _('Client ID'),
'type': 'string',
},
{
'id': 'client_secret',
'label': _('Client Secret'),
'type': 'string',
'secret': True,
},
],
'metadata': [
{
'id': 'path',
'label': _('Secret Path'),
'type': 'string',
'help_text': _('The secret path e.g. /test/secret1'),
},
{
'id': 'secret_field',
'label': _('Secret Field'),
'help_text': _('The field to extract from the secret'),
'type': 'string',
},
{
'id': 'secret_decoding',
'label': _('Should the secret be base64 decoded?'),
'help_text': _('Specify whether the secret should be base64 decoded, typically used for storing files, such as SSH keys'),
'choices': ['No Decoding', 'Decode Base64'],
'type': 'string',
'default': 'No Decoding',
},
],
'required': ['tenant', 'client_id', 'client_secret', 'path', 'secret_field', 'secret_decoding'],
}
if settings.DEBUG:
dsv_inputs['fields'].append(
{
'id': 'url_template',
'label': _('URL template'),
'type': 'string',
'default': 'https://{}.secretsvaultcloud.{}',
}
)
def dsv_backend(**kwargs):
tenant_name = kwargs['tenant']
tenant_tld = kwargs.get('tld', 'com')
tenant_url_template = kwargs.get('url_template', 'https://{}.secretsvaultcloud.{}')
client_id = kwargs['client_id']
client_secret = kwargs['client_secret']
secret_path = kwargs['path']
secret_field = kwargs['secret_field']
# providing a default value to remain backward compatible for secrets that have not specified this option
secret_decoding = kwargs.get('secret_decoding', 'No Decoding')
tenant_url = tenant_url_template.format(tenant_name, tenant_tld.strip("."))
authorizer = PasswordGrantAuthorizer(tenant_url, client_id, client_secret)
dsv_secret = SecretsVault(tenant_url, authorizer).get_secret(secret_path)
# files may be uploaded to DSV base64-encoded, so decode only when asked to
if secret_decoding == 'Decode Base64':
return b64decode(dsv_secret['data'][secret_field]).decode()
return dsv_secret['data'][secret_field]
dsv_plugin = CredentialPlugin(name='Thycotic DevOps Secrets Vault', inputs=dsv_inputs, backend=dsv_backend)
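A usage sketch with hypothetical tenant and client credentials; secret_decoding is left at its backward-compatible default:
# Illustrative only: read one field from a DSV secret.
value = dsv_backend(
    tenant='ex',
    tld='com',
    client_id='client-id-value',          # placeholder
    client_secret='client-secret-value',  # placeholder
    path='/test/secret1',
    secret_field='password',
    secret_decoding='No Decoding',
)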

View File

@@ -1,318 +0,0 @@
import copy
import os
import pathlib
import time
from urllib.parse import urljoin
from .plugin import CredentialPlugin, CertFiles, raise_for_status
import requests
from django.utils.translation import gettext_lazy as _
base_inputs = {
'fields': [
{
'id': 'url',
'label': _('Server URL'),
'type': 'string',
'format': 'url',
'help_text': _('The URL to the HashiCorp Vault'),
},
{
'id': 'token',
'label': _('Token'),
'type': 'string',
'secret': True,
'help_text': _('The access token used to authenticate to the Vault server'),
},
{
'id': 'cacert',
'label': _('CA Certificate'),
'type': 'string',
'multiline': True,
'help_text': _('The CA certificate used to verify the SSL certificate of the Vault server'),
},
{'id': 'role_id', 'label': _('AppRole role_id'), 'type': 'string', 'multiline': False, 'help_text': _('The Role ID for AppRole Authentication')},
{
'id': 'secret_id',
'label': _('AppRole secret_id'),
'type': 'string',
'multiline': False,
'secret': True,
'help_text': _('The Secret ID for AppRole Authentication'),
},
{
'id': 'namespace',
'label': _('Namespace name (Vault Enterprise only)'),
'type': 'string',
'multiline': False,
'help_text': _('Name of the namespace to use when authenticating and retrieving secrets'),
},
{
'id': 'kubernetes_role',
'label': _('Kubernetes role'),
'type': 'string',
'multiline': False,
'help_text': _(
'The Role for Kubernetes Authentication.'
' This is the named role, configured in Vault server, for AWX pod auth policies.'
' see https://www.vaultproject.io/docs/auth/kubernetes#configuration'
),
},
{
'id': 'default_auth_path',
'label': _('Path to Auth'),
'type': 'string',
'multiline': False,
'default': 'approle',
'help_text': _('The Authentication path to use if one isn\'t provided in the metadata when linking to an input field. Defaults to \'approle\''),
},
],
'metadata': [
{
'id': 'secret_path',
'label': _('Path to Secret'),
'type': 'string',
'help_text': _(
(
'The path to the secret stored in the secret backend e.g., /some/secret/. It is recommended'
' that you use the secret backend field to identify the storage backend and to use this field'
' for locating a specific secret within that store. However, if you prefer to fully identify'
' both the secret backend and one of its secrets using only this field, join their locations'
' into a single path without any additional separators, e.g., /location/of/backend/some/secret.'
)
),
},
{
'id': 'auth_path',
'label': _('Path to Auth'),
'type': 'string',
'multiline': False,
'help_text': _('The path where the Authentication method is mounted e.g., approle'),
},
],
'required': ['url', 'secret_path'],
}
hashi_kv_inputs = copy.deepcopy(base_inputs)
hashi_kv_inputs['fields'].append(
{
'id': 'api_version',
'label': _('API Version'),
'choices': ['v1', 'v2'],
'help_text': _('API v1 is for static key/value lookups. API v2 is for versioned key/value lookups.'),
'default': 'v1',
}
)
hashi_kv_inputs['metadata'] = (
[
{
'id': 'secret_backend',
'label': _('Name of Secret Backend'),
'type': 'string',
'help_text': _('The name of the kv secret backend (if left empty, the first segment of the secret path will be used).'),
}
]
+ hashi_kv_inputs['metadata']
+ [
{
'id': 'secret_key',
'label': _('Key Name'),
'type': 'string',
'help_text': _('The name of the key to look up in the secret.'),
},
{
'id': 'secret_version',
'label': _('Secret Version (v2 only)'),
'type': 'string',
'help_text': _('Used to specify a specific secret version (if left empty, the latest version will be used).'),
},
]
)
hashi_kv_inputs['required'].extend(['api_version', 'secret_key'])
hashi_ssh_inputs = copy.deepcopy(base_inputs)
hashi_ssh_inputs['metadata'] = (
[
{
'id': 'public_key',
'label': _('Unsigned Public Key'),
'type': 'string',
'multiline': True,
}
]
+ hashi_ssh_inputs['metadata']
+ [
{'id': 'role', 'label': _('Role Name'), 'type': 'string', 'help_text': _('The name of the role used to sign.')},
{
'id': 'valid_principals',
'label': _('Valid Principals'),
'type': 'string',
'help_text': _('Valid principals (either usernames or hostnames) that the certificate should be signed for.'),
},
]
)
hashi_ssh_inputs['required'].extend(['public_key', 'role'])
def handle_auth(**kwargs):
token = None
if kwargs.get('token'):
token = kwargs['token']
elif kwargs.get('role_id') and kwargs.get('secret_id'):
token = method_auth(**kwargs, auth_param=approle_auth(**kwargs))
elif kwargs.get('kubernetes_role'):
token = method_auth(**kwargs, auth_param=kubernetes_auth(**kwargs))
else:
raise Exception('Either token or AppRole/Kubernetes authentication parameters must be set')
return token
def approle_auth(**kwargs):
return {'role_id': kwargs['role_id'], 'secret_id': kwargs['secret_id']}
def kubernetes_auth(**kwargs):
jwt_file = pathlib.Path('/var/run/secrets/kubernetes.io/serviceaccount/token')
with jwt_file.open('r') as jwt_fo:
jwt = jwt_fo.read().rstrip()
return {'role': kwargs['kubernetes_role'], 'jwt': jwt}
def method_auth(**kwargs):
# get auth method specific params
request_kwargs = {'json': kwargs['auth_param'], 'timeout': 30}
# we first try to use the 'auth_path' from the metadata
# if not found we try to fetch the 'default_auth_path' from inputs
auth_path = kwargs.get('auth_path') or kwargs['default_auth_path']
url = urljoin(kwargs['url'], 'v1')
cacert = kwargs.get('cacert', None)
sess = requests.Session()
# Namespace support
if kwargs.get('namespace'):
sess.headers['X-Vault-Namespace'] = kwargs['namespace']
request_url = '/'.join([url, 'auth', auth_path, 'login']).rstrip('/')
with CertFiles(cacert) as cert:
request_kwargs['verify'] = cert
resp = sess.post(request_url, **request_kwargs)
resp.raise_for_status()
token = resp.json()['auth']['client_token']
return token
def kv_backend(**kwargs):
token = handle_auth(**kwargs)
url = kwargs['url']
secret_path = kwargs['secret_path']
secret_backend = kwargs.get('secret_backend', None)
secret_key = kwargs.get('secret_key', None)
cacert = kwargs.get('cacert', None)
api_version = kwargs['api_version']
request_kwargs = {
'timeout': 30,
'allow_redirects': False,
}
sess = requests.Session()
sess.headers['Authorization'] = 'Bearer {}'.format(token)
# Compatibility header for older installs of Hashicorp Vault
sess.headers['X-Vault-Token'] = token
if kwargs.get('namespace'):
sess.headers['X-Vault-Namespace'] = kwargs['namespace']
if api_version == 'v2':
if kwargs.get('secret_version'):
request_kwargs['params'] = {'version': kwargs['secret_version']}
if secret_backend:
path_segments = [secret_backend, 'data', secret_path]
else:
try:
mount_point, *path = pathlib.Path(secret_path.lstrip(os.sep)).parts
'/'.join(path)
except Exception:
mount_point, path = secret_path, []
# https://www.vaultproject.io/api/secret/kv/kv-v2.html#read-secret-version
path_segments = [mount_point, 'data'] + path
else:
if secret_backend:
path_segments = [secret_backend, secret_path]
else:
path_segments = [secret_path]
request_url = urljoin(url, '/'.join(['v1'] + path_segments)).rstrip('/')
with CertFiles(cacert) as cert:
request_kwargs['verify'] = cert
request_retries = 0
while request_retries < 5:
response = sess.get(request_url, **request_kwargs)
# https://developer.hashicorp.com/vault/docs/enterprise/consistency
if response.status_code == 412:
request_retries += 1
time.sleep(1)
else:
break
raise_for_status(response)
json = response.json()
if api_version == 'v2':
json = json['data']
if secret_key:
try:
if (secret_key != 'data') and (secret_key not in json['data']) and ('data' in json['data']):
return json['data']['data'][secret_key]
return json['data'][secret_key]
except KeyError:
raise RuntimeError('{} is not present at {}'.format(secret_key, secret_path))
return json['data']
def ssh_backend(**kwargs):
token = handle_auth(**kwargs)
url = urljoin(kwargs['url'], 'v1')
secret_path = kwargs['secret_path']
role = kwargs['role']
cacert = kwargs.get('cacert', None)
request_kwargs = {
'timeout': 30,
'allow_redirects': False,
}
request_kwargs['json'] = {'public_key': kwargs['public_key']}
if kwargs.get('valid_principals'):
request_kwargs['json']['valid_principals'] = kwargs['valid_principals']
sess = requests.Session()
sess.headers['Authorization'] = 'Bearer {}'.format(token)
if kwargs.get('namespace'):
sess.headers['X-Vault-Namespace'] = kwargs['namespace']
# Compatibility header for older installs of Hashicorp Vault
sess.headers['X-Vault-Token'] = token
# https://www.vaultproject.io/api/secret/ssh/index.html#sign-ssh-key
request_url = '/'.join([url, secret_path, 'sign', role]).rstrip('/')
with CertFiles(cacert) as cert:
request_kwargs['verify'] = cert
request_retries = 0
while request_retries < 5:
resp = sess.post(request_url, **request_kwargs)
# https://developer.hashicorp.com/vault/docs/enterprise/consistency
if resp.status_code == 412:
request_retries += 1
time.sleep(1)
else:
break
raise_for_status(resp)
return resp.json()['data']['signed_key']
hashivault_kv_plugin = CredentialPlugin('HashiCorp Vault Secret Lookup', inputs=hashi_kv_inputs, backend=kv_backend)
hashivault_ssh_plugin = CredentialPlugin('HashiCorp Vault Signed SSH', inputs=hashi_ssh_inputs, backend=ssh_backend)
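A usage sketch for the KV lookup using plain token auth against a KV v2 mount; the host, token, mount, and paths are hypothetical:
# Illustrative only: read one key from a KV v2 secret.
value = kv_backend(
    url='https://vault.example.com',
    token='s.xxxxxxxxxxxx',   # placeholder; AppRole/Kubernetes auth also works
    api_version='v2',
    secret_backend='secret',  # name of the kv mount
    secret_path='myapp/config',
    secret_key='db_password',
)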

Some files were not shown because too many files have changed in this diff