Compare commits

...

284 Commits

Author SHA1 Message Date
Seth Foster
0d4f653794 Fix up ansible-test sanity checks due to ansible 2.17 release (#15208)
* Fix up ansible sanity checks

* Fix awx-collection test failure

* Add ignore for ansible-test 2.17 

---------

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
Co-authored-by: Hao Liu <44379968+TheRealHaoLiu@users.noreply.github.com>
2024-05-21 15:05:59 -04:00
Alan Rominger
8de8f6dce2 Update a few dev requirements (#15203)
* Update a few dev requirements

* Fix test failures due to upgrade

* Update patterns for mocker usage
2024-05-20 23:37:02 +00:00
Hao Liu
fc9064e27f Allow wsrelay to fail without FATAL (#15191)
We have not identified the root cause of the wsrelay failure, but attempting to make wsrelay restart itself resulted in postgres and redis connection leaks. We were not able to fully identify where the redis connection leak comes from, so we are reverting back to failing, and removing startsecs 30 will prevent wsrelay from going FATAL
2024-05-20 23:34:12 +00:00
TVo
7de350dc3e Added docs for new RBAC changes (#15150)
* Added docs for new RBAC changes

* Added UI changes with screens and API endpoints with sample commands.

* Update docs/docsite/rst/userguide/rbac.rst

Co-authored-by: Vidya Nambiar <43621546+vidyanambiar@users.noreply.github.com>

* Incorporated review feedback from @vidyanambiar.

---------

Co-authored-by: Vidya Nambiar <43621546+vidyanambiar@users.noreply.github.com>
2024-05-17 20:10:16 -04:00
Michael Anstis
d4bdaad4d8 Fix success_url_allowed_hosts set instantiation (#15196)
Co-authored-by: Michael Anstis <manstis@redhat.com>
2024-05-16 12:08:50 -04:00
Bikouo Aubin
a9b2ffa3e9 Fix terraform backend credential issue (#15141)
Fix issue introduced by PR #15055
2024-05-15 15:19:18 -04:00
Sean Sullivan
1b8d409043 Add skip authorization option to collection application module (#15190) 2024-05-15 09:29:00 -04:00
dependabot[bot]
da2bccf5a8 Bump jinja2 from 3.1.3 to 3.1.4 in /docs/docsite (#15168)
Bumps [jinja2](https://github.com/pallets/jinja) from 3.1.3 to 3.1.4.
- [Release notes](https://github.com/pallets/jinja/releases)
- [Changelog](https://github.com/pallets/jinja/blob/main/CHANGES.rst)
- [Commits](https://github.com/pallets/jinja/compare/3.1.3...3.1.4)

---
updated-dependencies:
- dependency-name: jinja2
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-05-14 14:26:26 -04:00
Hao Liu
a2f083bd8e Fix podman failure in development environment (#15188)
```
ERRO[0000] path "/var/lib/awx/.config" exists and it is not owned by the current user
```
started happening with podman 5

It seems the config files are no longer needed; removing them fixes the problem
2024-05-14 14:18:48 -04:00
Michael Anstis
4d641b6cf5 Support Django logout redirects (#15148)
* Allowed hosts for logout redirects can now be set via the LOGOUT_ALLOWED_HOSTS setting

Authored-by: Michael Anstis <manstis@redhat.com>
Co-authored-by: Hao Liu <44379968+TheRealHaoLiu@users.noreply.github.com>
2024-05-13 13:03:27 -04:00
Elijah DeLee
439c3f0c23 Skip 3 expensive calls for jobs saving in 'waiting' status on UnifiedJob (#15174)
Skip the update-parent logic for 'waiting' on UnifiedJob.

By not looking up "status_before" from the previous instance,
we save 2 to 3 expensive calls (the self lookup of the old state, the lookup
of the parent, and the update to the parent if allow_simultaneous == False or status == 'waiting')
2024-05-13 10:26:03 -04:00
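A minimal illustrative sketch of the save-time short-circuit described above (the model and `_update_parent` helper are simplified stand-ins, not AWX's actual code):

```python
from django.db import models


class UnifiedJob(models.Model):
    status = models.CharField(max_length=32)
    allow_simultaneous = models.BooleanField(default=False)

    class Meta:
        app_label = 'main'  # illustrative

    def _update_parent(self):
        """Stand-in for the (expensive) parent lookup + update."""

    def save(self, *args, **kwargs):
        # A job saving in 'waiting' never triggers the parent update, so
        # skip looking up "status_before" (the old row) entirely.
        if self.pk and self.status != 'waiting':
            status_before = type(self).objects.values_list('status', flat=True).get(pk=self.pk)
            if status_before != self.status and not self.allow_simultaneous:
                self._update_parent()
        super().save(*args, **kwargs)
```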
jessicamack
946bbe3560 Clean up settings file (#15135)
remove unneeded settings
2024-05-10 11:25:15 -04:00
James
20f054d600 Expose websockets on api prefix v2 2024-05-01 10:44:51 -04:00
Alan Rominger
918d5b3565 Do some aesthetic adjustments to role presentation fields (#15153)
* Do some aesthetic adjustments to role presentation fields

* Correctly test managed setup

* Minor migration adjustments
2024-04-29 17:11:10 -04:00
Hao Liu
158314af50 Delete deprecated Cypress UI e2e_test.yml (#15155)
Delete e2e_test.yml

Remove because it's no longer being maintained
2024-04-29 12:58:10 -04:00
Seth Foster
4754819a09 awx modules wait on event processing finished (#15152)
This change makes "wait: true" for jobs and syncs
look at event_processing_finished instead of
the finished field.

Right now there is a race condition where
a module might try to delete an inventory, but the events
for an inventory sync have not yet finished. We have a
RelatedJobsPreventDeleteMixin that checks for this condition.

Bulk jobs don't have event_processing_finished, so we just
use the finished field in that case.
2024-04-26 17:33:34 -04:00
Seth Foster
78fc23138a Pin openssl 3.0.7 (#15147)
Follow-up to PR #15142

This commit pins openssl in the awx image,
not just the builder image.
2024-04-26 12:29:22 -04:00
Alan Rominger
014534bfa5 Upgrade DRF (#15144)
* Upgrade DRF

* Fix failures caused by DRF upgrade
2024-04-25 15:37:08 -04:00
Seth Foster
2502e7c7d8 Temporarily downgrade openssl (#15142)
openssl 3.2.0 has incompatibility issues with
the libpq version we are using, and causes
some C runtime errors:
"double free or corruption (out)"

see awx issue #15136

also this issue

github.com/conan-io/conan-center-index/pull/22615

once the libpq libraries on centos stream9 are
updated with the patch, we can unpin openssl

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2024-04-25 14:01:03 -04:00
Jeff Bradberry
fb237e3834 Stop pre-caching every resource in the system upon import
If we don't have something in the cache when we call
get_by_natural_key, do an actual filtered query for it and cache the
results.  We'll get more overall API calls this way, but they'll be
smaller and will happen while we are importing, not upfront.
2024-04-24 17:04:40 -04:00
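A minimal generic sketch of the lazy cache-on-miss pattern described above (the module-level cache and the `fetch` callable are illustrative, not the importer's actual internals):

```python
from typing import Any, Callable, Dict, Hashable

_cache: Dict[Hashable, Any] = {}


def get_by_natural_key(natural_key: Hashable, fetch: Callable[[Hashable], Any]) -> Any:
    """Return the resource for natural_key, querying only on a cache miss."""
    if natural_key not in _cache:
        # One small filtered query while importing, instead of one big
        # pre-caching pass over every resource upfront.
        _cache[natural_key] = fetch(natural_key)
    return _cache[natural_key]
```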
irozet12
e4646ae611 Add help message for expiration tokens (#15076) (#15077)
Co-authored-by: Ирина Розет <irozet@astralinux.ru>
2024-04-24 19:58:09 +00:00
Bruno Sanchez
7dc77546f4 Adding CSRF Validation for schemas (#15027)
* Adding CSRF Validation for schemas

* Changing retrieve of scheme to avoid importing new library

* check if CSRF_TRUSTED_ORIGINS exists before accessing it

---------

Signed-off-by: Bruno Sanchez <brsanche@redhat.com>
2024-04-24 15:47:03 -04:00
Michael Tipton
f5f85666c8 Add ability to set SameSite policy for userLoggedIn cookie (#15100)
* Add ability to set SameSite policy for userLoggedIn cookie

* reformat line for linter
2024-04-24 15:44:31 -04:00
Alan Rominger
47a061eb39 Fix and test data migration error from DAB RBAC (#15138)
* Fix and test data migration error from DAB RBAC

* Fix up migration test

* Fix custom method bug

* Fix another fat fingered bug
2024-04-24 15:14:03 -04:00
Alan Rominger
c760577855 Adjust test for stricter DAB user view permission enforcement (#15130) 2024-04-23 15:21:06 -04:00
TVo
814ceb0d06 Backports previously approved corrections. (#15121)
* Backports previously approved corrections.

* Deleted a blank line in inventories line 100
2024-04-22 09:55:19 -06:00
Seth Foster
f178c84728 Fix instance peering pagination (#15108)
Currently the association box displays a
list of available instances/addresses that can
be peered to.

The pagination issue arises as follows:

- fetch 5 instances (based on page_size)
- filter these instances down based on some
criteria (like is_internal: false)
- show results

Filtering down the results inside of the fetch
method results in pagination errors (it may return fewer than
5, for example)

Instead, do the filtering via API queries. That way the
pagination count will be correct.

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2024-04-18 15:17:10 -04:00
Jeff Bradberry
c0f71801f6 Use $(shell ...) to filter the redis docker volumes
Makefiles use $() for variable templating, so trying to use it
directly as a shell subcommand doesn't work.
2024-04-17 16:45:23 -04:00
Jan-Piet Mens
4e8e1398d7 Omit using -X when not needed, and don't default to demonstrating -k
It's the year 2024: using -k by default in https URL schemes should be deprecated. (I have left one mention of it possibly being required if no CA is available.) Furthermore, neither -XGET nor -XPOST is required, as curl(1) well knows when to use which method.
2024-04-17 15:18:57 -04:00
STEVEN ADAMS
3d6a8fd4ef chore: remove repetitive words (#15101)
Signed-off-by: hugehope <cmm7@sina.cn>
2024-04-17 19:18:25 +00:00
Hao Liu
e873bb1304 Fix wsrelay connection leak (#15113)
- when re-establishing connection to db close old connection
- re-initialize WebSocketRelayManager when restarting asyncio.run
- log and ignore error in cleanup_offline_host (this might come back to bite us)
- cleanup connection when WebSocketRelayManager crash
2024-04-16 14:54:36 -04:00
lucas-benedito
672f1eb745 Fixed missing fstring from wsrelay logging (#15094)
Fixed missing fstring from wsrelay logging
2024-04-16 13:32:34 -04:00
abikouo
199507c6f1 Support Google credentials on Terraform credentials type 2024-04-16 10:34:38 -04:00
jessicamack
a176c04c14 Update LDAP/SAML config dump command (#15106)
* update LDAP config dump

* return missing fields if any

* update test, remove unused import

* return bool and fields. check for missing_fields
2024-04-15 12:26:57 -04:00
Alan Rominger
e3af658f82 Use released version of django-radius (#15103) 2024-04-12 16:34:23 -04:00
Hao Liu
e8a3b96482 Use latest awx-ee in devel CI (#15098) 2024-04-12 14:30:39 -04:00
Hao Liu
c015e8413e Store molecule debug output to github artifacts (#15107)
Related to https://github.com/ansible/awx-operator/pull/1823
2024-04-12 13:56:25 -04:00
Alan Rominger
390c2d8907 [RBAC] Update related name to reflect upstream DAB change (#15093)
Update related name to reflect upstream DAB change
2024-04-11 14:59:09 -04:00
Alan Rominger
97605c5f19 Make custom urls work with RBAC 2024-04-11 14:59:09 -04:00
Alan Rominger
818c326160 [RBAC] Rename managed role definitions, and move migration logic here (#15087)
* Rename managed role definitions, and move migration logic here

* Fix naming capitalization
2024-04-11 14:59:09 -04:00
Alan Rominger
c98727d83e [RBAC] Fix bug where team could not be given read_role to other team (#15067)
* Fix bug where team could not be given read_role to other team

* Avoid unwanted triggers of parentage granting

* Restructure signal structure

* Fix another bug unmasked by team member permission fix

* Changes to live with test writing

* Use equality as opposed to string "in"

from Seth in review comment

Co-authored-by: Seth Foster <fosterseth@users.noreply.github.com>

---------

Co-authored-by: Seth Foster <fosterseth@users.noreply.github.com>
2024-04-11 14:59:09 -04:00
Alan Rominger
a138a92e67 [RBAC] Tweaks to reflect what endpoints are deprecated (#15068)
Tweaks to reflect what endpoints are deprecated
2024-04-11 14:59:09 -04:00
Alan Rominger
7aed19ffda Fix missing role membership when giving creator permissions (#15058) 2024-04-11 14:59:09 -04:00
Seth Foster
3bb559dd09 AWX Collections for DAB RBAC
Adds new modules for CRUD operations on the
following endpoints:

- api/v2/role_definitions
- api/v2/role_user_assignments
- api/v2/role_team_assignments

Note: assignment is Create or Delete only

Additional changes:
- Currently DAB endpoints do not have "type"
field on the resource list items. So this modifies
the create_or_update_if_needed to allow manually
specifying item type.

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2024-04-11 14:59:09 -04:00
Alan Rominger
389a729b75 [RBAC] Fix known issues with backward compatible access_list (#15052)
* Remove duplicate access_list entries for direct team access

* Revert test changes for superuser in access_list
2024-04-11 14:59:09 -04:00
Alan Rominger
2f3c9122fd Generalize can_delete solution, use devel DAB (#15009)
* Generalize can_delete solution, use devel DAB

* Fix bug where model was used instead of model_name

* Linter fixes
2024-04-11 14:59:09 -04:00
Alan Rominger
733478ee19 [RBAC] Fix server error from delete capability of approvals (#15002)
Fix server error from delete capability of approvals
2024-04-11 14:59:09 -04:00
Alan Rominger
41c6337fc1 [RBAC] Fix migration for created and modified field changes (#14999)
Fix migration for created and modified field changes
2024-04-11 14:59:09 -04:00
Alan Rominger
7446da1c2f Bump migration number for RBAC branch 2024-04-11 14:59:09 -04:00
Alan Rominger
c79fca5ceb Adopt internal DAB RBAC Permission model (#14994) 2024-04-11 14:59:09 -04:00
Alan Rominger
dc5f43927a Minor RBAC test fix (#14982) 2024-04-11 14:59:09 -04:00
Alan Rominger
35a5a81e19 Use AWX base view to make unauth requests 401 (#14981) 2024-04-11 14:59:09 -04:00
Alan Rominger
9dcc11d54c [DAB RBAC] Re-implement system auditor as a singleton role in new system (#14963)
* Add new enablement settings from DAB RBAC

* Initial implementation of system auditor as role without testing

* Fix system auditor role, remove duplicate assignments

* Make the system auditor role managed

* Flake8 fix

* Remove another thing from old solution

* Fix a few test failures

* Add extra setting to disable custom system roles via API

* Add test for custom role prohibition
2024-04-11 14:59:09 -04:00
Alan Rominger
74ce21fa54 Bump number of allowed endpoints (#14956) 2024-04-11 14:59:09 -04:00
Alan Rominger
eb93660b36 Cache organization child evaluations and remove hacks 2024-04-11 14:59:09 -04:00
Alan Rominger
f50e597548 Cast ObjectRole object_id to int, very wrong, tmp fix 2024-04-11 14:59:09 -04:00
Alan Rominger
817c3b36b9 Replace role system with permissions-based DB roles
Develop ability to list permissions for existing roles

Create a model registry for RBAC-tracked models

Write the data migration logic for creating
  the preloaded role definitions

Write migration to migrate old Role into ObjectRole model

This loops over the old Role model, knowing it is unique
  on object and role_field

Most of the logic is concerned with identifying the
  needed permissions, and then corresponding role definition

As needed, object roles are created and users then teams
  are assigned

Write re-computation of cache logic for teams
  and then for object role permissions

Migrate new RBAC internals to ansible_base

Migrate tests to ansible_base

Implement solution for visible_roles

Expose URLs for DAB RBAC
2024-04-11 14:59:09 -04:00
Alan Rominger
1859a6ae69 Fix failure from DAB (#15102)
@AlanCoding said to do this 🚌
2024-04-11 17:10:11 +00:00
Chris Meyers
0645d342dd Implement optional url prefix the Django way
* Before, the optional url prefix feature required calling our
  versioning version of reverse(). This worked _ok_ until we added more
  and more urls from 3rd party apps. Those 3rd party apps do not call
  our reverse(), rightfully so.
* This implementation looks at the incoming request path. If it includes
  the special optional prefix url, then we register ALL the urls WITH
  the optional url prefix.
  If the incoming request path does NOT contain the optional url prefix
  then we register ALL the urls WITHOUT the optional url prefix.
* Before this, we were registering BOTH sets of urls and then used reverse()
  + the request as context to decide which url.
2024-04-10 16:03:09 -04:00
Chris Meyers
61ec03e540 Move named url init out of Middleware init
* Middleware classes can be instantiated multiple times in testing. To
  make this a non-issue, move the init code for named urls out of the
  middleware init and into the app init.
* This makes it easier to use other testing facilities, like
  LiveServerTestCase, without having to mock the named url middleware
  init.
2024-04-10 15:46:30 -04:00
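A minimal sketch of the move, assuming a hypothetical `generate_named_url_graph()` initializer: Django's `AppConfig.ready()` runs once per process, unlike a middleware `__init__`, which tests may invoke repeatedly.

```python
from django.apps import AppConfig


def generate_named_url_graph():
    """Stand-in for the named-url initialization code (illustrative)."""


class MainConfig(AppConfig):
    name = 'awx.main'  # illustrative app path

    def ready(self):
        # Runs once at app-registry setup, so re-instantiating the
        # middleware (e.g. under LiveServerTestCase) no longer re-runs
        # -- or requires mocking -- the named url init.
        generate_named_url_graph()
```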
Shane McDonald
09f0a366bf Revert accidental line deletion
I made a mistake in https://github.com/ansible/awx/pull/15096. I realized afterwards that the deleted line must have been consumed by the make target.
2024-04-10 12:19:07 -04:00
Shane McDonald
778961d31e Fix awxkit uploads when re-running promote workflow 2024-04-10 12:12:35 -04:00
Shane McDonald
f962c88df3 Allow for manually restarting promote workflow
The promote workflow recently failed. Since this was just a problem with our automation, it would be nice if we didn't have to do another release just to fix our tooling.
2024-04-10 12:00:30 -04:00
Hao Liu
8db3ffe719 Check galaxy collection with and without redirect 2024-04-10 11:26:42 -04:00
Alan Rominger
cc5d4dd119 Clean the postgres 15 volume (#15083) 2024-04-09 16:06:42 -04:00
Hao Liu
86204cf23b Publish amd64 and arm64 awx image on release (#15053)
* Stage multi-arch awx image

- change CI to use `make awx-kube-build` instead of build playbook
- update staging CI to build and push multiarch awx image
- update doc to use `make awx-kube-build` to build awx image
- remove build playbook (no longer used)
2024-04-09 09:50:09 -04:00
Chris Meyers
468949b899 Remove unneeded drf_reverse override
* `drf_reverse()` was introduced here 1a75b1836e
* There is a comment about monkey patching. I can't find the monkey patch it is referencing.
* AWX `drf_reverse()` is a copy paste of this https://github.com/encode/django-rest-framework/blob/master/rest_framework/reverse.py#L32
  * The only difference is DRF's version calls `preserve_builtin_query_params()`
    * `preserve_builtin_query_params()` only does something if `api_settings.URL_FORMAT_OVERRIDE` is defined.
      * We don't use `REST_FRAMEWORK.URL_FORMAT_OVERRIDE`
2024-04-08 16:14:11 -04:00
TVo
f1d9966224 Added docs for terraform credential/inventory source (#15004)
* Added docs for terraform credential/inventory source

* Updated screen captures for inventories and source to match wfjt example

* Added docs for terraform credential/inventory source

* Updated screen captures for inventories and source to match wfjt example

* Update docs/docsite/rst/userguide/inventories.rst

Co-authored-by: Aoki <lucasaoki@users.noreply.github.com>

* Revised per review feedback.

* Update docs/docsite/rst/userguide/inventories.rst

Co-authored-by: Helen Bailey <hakbailey@gmail.com>

---------

Co-authored-by: Aoki <lucasaoki@users.noreply.github.com>
Co-authored-by: Helen Bailey <hakbailey@gmail.com>
2024-04-05 09:57:27 -06:00
César Francisco San Nicolás Martínez
b022b50966 fix service-index url calling reverse method 2024-04-04 07:48:04 -04:00
Elijah DeLee
e2f4213839 Round out optional url prefix edge cases 2024-04-04 07:48:04 -04:00
Chris Meyers
ae1235b223 Rename container hostname from awx_1 to awx-1
* Django and other webservers that care about proper hostnames don't
  like underscores in them.
2024-04-03 15:58:17 -04:00
Tom Page
c061f59f1c Add tags and skip_tags option to awx.awx.workflow_launch (#15011)
Signed-off-by: Tom Page <tpage@redhat.com>
2024-04-03 15:29:43 -04:00
Jeff Bradberry
3edaaebba2 Adjust the awx-manage script to make use of importlib (#15015)
* Adjust the awx-manage script to make use of importlib

removing the deprecation warning.

* Symlink awx-manage in docker-compose

No longer need to rebuild the docker-compose devel image to load changes for `tools/docker-compose/awx-manage` in the development environment

---------

Co-authored-by: Hao Liu <44379968+TheRealHaoLiu@users.noreply.github.com>
2024-04-02 17:20:05 -04:00
Hao Liu
7cdf1c7f96 Update DOCKER_COMPOSE command to docker compose (#15056)
* Update DOCKER_COMPOSE command

docker-compose will stop being supported soon and this is causing CI flakes; setting the DOCKER_COMPOSE default to `docker compose`

* Give AWX network a static name
2024-04-02 15:13:14 -04:00
Hao Liu
d558204192 Make db password optional for wsrelay (#15046)
* Make db password optional for wsrelay

* Change DB setting copy to deepcopy

safer than copy()

Co-Authored-By: Jeff Bradberry <685957+jbradberry@users.noreply.github.com>

---------

Co-authored-by: Jeff Bradberry <685957+jbradberry@users.noreply.github.com>
2024-04-02 11:47:24 -04:00
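Why `deepcopy` is the safer choice for a nested settings dict: a shallow `copy()` still shares the inner dicts, so mutating the copy leaks into the original. A small self-contained illustration (values are made up):

```python
import copy

DATABASES = {'default': {'NAME': 'awx', 'OPTIONS': {'keepalives': 1}}}

shallow = copy.copy(DATABASES)    # top-level dict copied, inner dicts shared
deep = copy.deepcopy(DATABASES)   # fully independent copy

shallow['default']['OPTIONS']['keepalives'] = 0
assert DATABASES['default']['OPTIONS']['keepalives'] == 0  # original mutated!
assert deep['default']['OPTIONS']['keepalives'] == 1       # deep copy unaffected
```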
Chris Meyers
d06ce8f911 Remove json formatter for job lifecycle
* We didn't really make use of json formatting across the app. Remove
  the special case json formatter. Instead, output all of the meta-data
  associated with a job lifecycle event every time. Before, we tried to
  only output this extra meta data when in DEBUG mode. It turns out this
  information is smaller than we thought and more useful than we thought
  so always output it.
2024-04-02 11:39:34 -04:00
Alan Rominger
4b6f7e0ebe Add link to service-index URL 2024-03-29 10:07:15 +00:00
John Barker
370c567be1 Remove /17 magic number from Forum URL 2024-03-29 10:03:45 +00:00
John Barker
9be64f3de5 Improve social documentation release_process.md 2024-03-29 10:03:45 +00:00
Alan Rominger
30500e5a95 Re-parent DAB views from AWX base 2024-03-29 10:03:12 +00:00
David O Neill
bb323c5710 Loosen up body check on template
https://github.com/ansible/awx/issues/14985
https://github.com/ansible/awx/issues/13983
2024-03-29 10:02:18 +00:00
Chris Meyers
7571df49d5 Pass --exclude="list of exclude dirs like this"
* Previously, the params were passed without quotes and each directory
  was being interpreted as a separate command line flag.
* Added some structure around the error messages returned from
  receptorctl so we can more easily decide how to handle each case. For
  example, releasing the cleanup job from receptor doesn't absolutely
  need to succeed because we have a periodic job that does that. In
  fact, that is the thing that is making it fail .. but I digress.
2024-03-28 14:42:08 -04:00
Jeff Bradberry
1559c21033 Change awx.awx.application to output the OAuth2 client secret
if one was generated.
2024-03-28 10:17:46 -04:00
PabloHiro
d9b81731e9 Fix: broken reference to API url 2024-03-27 20:37:53 +01:00
Adam Miller
2034cca3a9 update playbooks to use fqcn
Signed-off-by: Adam Miller <admiller@redhat.com>
2024-03-27 15:13:43 -04:00
Chris Meyers
0b5e59d9cb Fix websocket relay. Set autocommit so conn.notifies() does not block forever (#15043)
Without autocommit conn.notifies() blocks forever
2024-03-27 15:11:17 -04:00
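A minimal sketch of the LISTEN/NOTIFY loop in question, using psycopg 3 (DSN and channel name are placeholders). Without autocommit, the LISTEN sits inside an open transaction and PostgreSQL never delivers notifications to the session, so the loop blocks forever:

```python
import psycopg

# autocommit=True: LISTEN takes effect immediately, outside of any
# transaction block, so notifications can actually be delivered.
conn = psycopg.connect("dbname=awx", autocommit=True)  # placeholder DSN
conn.execute("LISTEN wsrelay_events")  # placeholder channel name

for notify in conn.notifies():  # generator yielding Notify objects as they arrive
    print(notify.channel, notify.payload)
```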
Alan Rominger
f48b2d1ae5 Add resource and ansible_id to serializers (#15020) 2024-03-26 22:37:15 -04:00
Dimitri Savineau
b44bb98c7e Dockerfile: Fix collectstatic command (#15035)
Recent changes in awx and/or django ansible base cause the django
collectstatic command to fail when using an empty settings file.
Instead, use the defaults settings file from controller via
DJANGO_SETTINGS_MODULE=awx.settings.defaults

[linux/amd64 builder 13/13] RUN AWX_SETTINGS_FILE=/dev/null
SKIP_SECRET_KEY_CHECK=yes SKIP_PG_VERSION_CHECK=yes
/var/lib/awx/venv/awx/bin/awx-manage collectstatic --noinput --clear
Traceback (most recent call last):
(...)
django.core.exceptions.ImproperlyConfigured: settings.DATABASES is improperly
configured. Please supply the ENGINE value. Check settings documentation for
more details.

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
2024-03-26 14:19:51 -04:00
Hao Liu
8cafdf0400 Fix wsrelay KeyboardInterrupt not respected (#15036)
- stop wsrelay on keyboard interruption
- restart wsrelay for any other failure reason
2024-03-26 17:29:15 +00:00
Hao Liu
3f566c8737 Fix wsrelay not retry to establish db connection (#15031)
- run_wsrelay retries running wsrelay forever with a 10 second sleep
- wsrelay restarts via the `on_ws_heartbeat` task if the db connection goes away
2024-03-26 11:56:16 -04:00
Hao Liu
c8021a25bf Fix keycloak doc (#15024) 2024-03-25 15:01:49 -04:00
Matt Martz
934646a0f6 Address first_found skip bug (#15017)
* Address first_found skip bug

* Don't attempt installing project root requirements.yml as v2 collection format
2024-03-22 12:06:43 +01:00
Michael Abashian
9bb97dd658 Fix bug where extra variables were reset on schedule edit
Fix survey prompt presentation inconsistencies

Remove unnecessary conditional

This conditional always returned true.  See the following warning: This condition will always return 'true' since JavaScript compares objects by reference, not value.

Fix schedule edit tests
2024-03-20 10:30:10 -04:00
Hao Liu
7150f5edc6 Editable dependencies in docker compose development environment (#14979)
* Editable dependencies in docker compose development environment
2024-03-19 15:09:15 -04:00
Hao Liu
93da15c0ee Setting modification to address requests from UI_NEXT devs (#14996)
Modification to settings

- Add hidden to indicate to UI_NEXT to hide the field
- Add warning_text to indicate to UI_NEXT to display the warning when a specific setting is modified
- Address some non-required fields being marked as required
2024-03-19 15:08:41 -04:00
Hao Liu
ab593bda45 Add setting for configuring optional URL prefix for /api (#14939)
* Add setting for configuring optional URL prefix for /api

Add OPTIONAL_API_URLPATTERN_PREFIX setting

examples:
- if set to `''` (empty string) API pattern will be `/api`
- if set to 'controller' API pattern will be `/api` AND `/api/controller`
2024-03-19 15:56:33 +00:00
TVo
065bd3ae2a Backported from product-docs PR #2001 (misc doc cleanup) (#14980)
* Backported from product-docs PR #2001 (misc doc cleanup)

* Update docs/docsite/rst/administration/awx-manage.rst
2024-03-15 12:20:02 -06:00
Hao Liu
8ff7260bc6 Add dump_auth_config management cmd (for SAML and LDAP) (#14947)
* Add dump_auth_config management cmd

- Dump SAML config from AWX to DAB authenticator config in json format

* Add dumping of LDAP settings

* add test for command

* Fix is_enabled

* fix command name typo

Co-authored-by: Hao Liu <44379968+TheRealHaoLiu@users.noreply.github.com>

* add fields to config, add name to data

* break out IDP values

* change test fields and value comparison

* edit help text, reformat settings

---------

Co-authored-by: jessicamack <jmack@redhat.com>
2024-03-15 13:47:30 -04:00
Hao Liu
a635445082 Fix failing bulk launch job due to create partition race
https://github.com/ansible/awx/pull/14910/files

introduced a bug where we no longer accept the right exceptions

when 2 jobs launch at the same time and both try to create the jobevent table partition, 1 of the jobs will fail
2024-03-15 10:10:38 -04:00
Hao Liu
949e7efab1 Fix wsrelay hanging after db outage
TCP keepalive settings were moved out from settings.DATABASES to settings.LISTENER_DATABASES and are no longer being respected by wsrelay
2024-03-14 15:51:30 -04:00
Hao Liu
615f09226f Fix awx-manage run_wsrelay --status (#14997)
by not starting the metrics server if --status is passed in
2024-03-14 18:55:05 +00:00
Dave
d903c524f5 Fix for 14924 - Unformatted help text toast message (#14990)
Fix for 14924 - Unformatted help text is popped out when peers for instances are changed

Co-authored-by: David O Neill <daoneill@redhat.com>
2024-03-14 13:24:53 -04:00
Cesar Francisco San Nicolas Martinez
393d9c39c6 Mismatch dependencies version (#14986)
* Fixed mismatch between setuptools version in the makefile and requirements file

* Fix mismatch of versions in makefile and requirements

* Added maturin license
2024-03-14 13:32:56 +01:00
Hao Liu
dfab342bb4 Skip replicas test for awx-operator (#14987)
Speed up CI; also, AWX code changes won't affect that test
2024-03-13 18:01:02 +00:00
Dave
12843eccf7 AAP-13369 Python 3.9 -> 3.11 upgrade (#14771)
* Python 3.9 -> 3.11 upgrade

* Test: updating azure-keyvault to 4.2.0

* Revert "Test: updating azure-keyvault to 4.2.0"

This reverts commit cf0b83699442e0c0de4a1152d4af8543a5e05b88.

* Test: updating azure-keyvault to latest and adding azure-identity

* Fix licenses

* Adding new licenses

* Revert "Fix licenses"

This reverts commit da3876911ef5ebbe7a8adbddd336ced3039b6228.

* Fixing dependencies

* Test: updating azure-keyvault to 4.2.0

* Fix licenses

* Revert "Fix licenses"

This reverts commit da3876911ef5ebbe7a8adbddd336ced3039b6228.

* Fixing dependencies

---------

Co-authored-by: César Francisco San Nicolás Martínez <csannico@redhat.com>
2024-03-13 14:41:40 +01:00
Hao Liu
dd9160135d Prune dangling images periodically (#14957)
Prune dangling images periodically

pairs with https://github.com/ansible/ansible-runner/pull/1342

This fixes the problem of us forcefully removing images when a setting change swaps the ee image that's being used in a job, causing the job to fail
2024-03-12 10:57:57 -04:00
Chris Meyers
ad96a92fa7 Align Origin and Host header (#14970)
* Align Origin and Host header

* Before this change the Host: header was runserver. Seems to be set by
  nginx upstream flow.
* After this change we explicitly set the Host: header
* More about CSRF checks ...
  CSRF checks that Origin == Host. Think about how the browser works.

  <browser goes to awx.com>
  "I'm executing javascript that I downloaded from awx.com (ORIGIN) and
  I'm making an XHR POST request to awx.com (HOST)"
  Server verifies; Host: header == Origin: header; OK!

  vs. the malicious case.

  <hacker injects javascript code into google.com>
  <browser goes to google.com>
  "I'm executing javascript that I downloaded from google.com (ORIGIN)
  and I'm making an XHR POST request to awx.com (HOST)"
  Server verifies; Host: header != Origin: header; NOT OK!

* Update awx/settings/development.py

---------

Co-authored-by: Hao Liu <44379968+TheRealHaoLiu@users.noreply.github.com>
2024-03-11 17:06:09 -04:00
David O Neill
ca8085fe7e English string validation to error code validation 2024-03-11 20:07:16 +00:00
Hao Liu
b076cb00a9 Revert "Implement project pulling from Azure DevOps using Service Pri… (#14977)
Revert "Implement project pulling from Azure DevOps using Service Principals (#14628)"

This reverts commit 2e2cd7f2de.
2024-03-11 14:05:24 +00:00
John Westcott IV
ee9eac15dc Upgrade to postgres:15 (#14230)
* Upgrade to postgres:15
* Changed postgres:15 to quay.io/sclorg/postgresql-15-c9s
2024-03-07 16:27:03 -05:00
Hao Liu
3f2f7b75a6 [developer productivity improvement] Running awx components in vscode debugger (#14942)
Enable VSCode debugger integration when attaching VSCode to the AWX docker-compose development environment container

- add debugpy launch target in `.vscode/launch.json` to enable launching awx processes with debugpy
- add vscode tasks in `.vscode/tasks.json` to facilitate shutting down corresponding supervisord managed processes while launching process with debugpy
- modify nginx conf to add django runserver as fallback to uwsgi (enable launching API server via debugpy)
2024-03-07 19:31:50 +00:00
Dave
b71645f3b1 AAP-12273 remove incorrect sentence conjugation (#14946)
AAP-12273 remove incorrect sentence conjugation

Co-authored-by: David O Neill <daoneill@redhat.com>
2024-03-07 14:04:11 -05:00
Hao Liu
eb300252b8 Fix awx-autoreload in dev environment (#14968)
Fix awx-autoreload; a recent change made autoreload no longer take the command parameter
2024-03-07 16:33:23 +00:00
Patrick Uiterwijk
2e2cd7f2de Implement project pulling from Azure DevOps using Service Principals (#14628)
* Credential Lookup with multiple types
Allow looking up a credential with one of multiple type IDs.

* Allow Azure cred for SCM
Allow selecting an Azure Resource Manager credential for Git-based SCMs.
This is in order to enable using Azure Service Principals for project updates.

* Implement Azure Service Principal Git
This adds support for using an Azure Service Principal for project updates.

---------

Signed-off-by: Patrick Uiterwijk <patrick@puiterwijk.org>
2024-03-07 10:07:03 -05:00
Hao Liu
727278aaa3 Add pip>=21.3 to dev requirement to install django-ansible-base in editable mode (#14961)
Add pip>=21.3 to dev requirements, required for installing django-ansible-base in editable mode

https://peps.python.org/pep-0660/

PEP 660 – Editable installs for pyproject.toml based builds (wheel based)
2024-03-06 21:28:41 -05:00
Michael Abashian
81825ab755 Bump axios UI dep to 1.6.z (#14954)
* Bump axios UI dep to 1.6.z

* Add proxy-from-env license
2024-03-06 16:03:49 -05:00
Helen Bailey
7f2a1b6b03 Add terraform state inventory source (#14840)
* Add terraform state inventory source
* Update inventory source plugin test
Signed-off-by: Helen Bailey <hebailey@redhat.com>
2024-03-06 20:27:52 +00:00
Hao Liu
1b56d94d30 In development environment not auto-reload explicitly STOPPED processes (#14958)
Do not auto-reload explicitly STOPPED processes

In the development/debug workflow we sometimes explicitly STOP processes; this will make sure auto-reload does not start them back up
2024-03-06 20:22:44 +00:00
Shane McDonald
e1e32c971c Allow for manually starting workflow to build devel images (#14955) 2024-03-06 17:53:05 +00:00
Alan Rominger
a4a2fabc01 Add test for utils method is_testing 2024-03-05 11:38:14 +00:00
Hao Liu
b7b7bfa520 Fix tests that fail on rerun due to expecting exact IDs (#14943)
Fix tests that fail on rerun

due to expecting exact IDs
2024-03-01 12:37:17 -05:00
jessicamack
887604317e Integrate resources API in Controller (#14896)
* add resources api to controller

* update setting

models are not the source of truth in AWX

* Force creation of ServiceID object in tests

* fix typo

* settings fix for CI

---------

Co-authored-by: Alan Rominger <arominge@redhat.com>
2024-03-01 11:18:35 -05:00
Chris Meyers
d35d8b6ed7 fails when run with vscode debugger
Tried to dig into why we ever needed this and could not find the answer. We removed it and ran all the tests; the tests passed, so we assume it's no longer needed.
2024-02-29 15:22:41 -05:00
Hao Liu
ec28eff7f7 Convert swagger release fixture to env var (#14940)
`pytest awx/main/tests/docs --release=$(VERSION_TARGET)`
where --release is required breaks test discovery and running in vscode (from within the container)
2024-02-29 14:55:02 -05:00
Cesar Francisco San Nicolas Martinez
a5d17539c6 Removing Podman to use Docker again in the collection ci (#14938)
Removing Podman var to use Docker again in the collection ci
2024-02-28 14:56:49 +01:00
Thanhnguyet Vo
a49d894cf1 Added missing AWS secret management lookup creds. 2024-02-28 04:16:59 +00:00
Chris Meyers
b3466d4449 Make JWT the first auth class and default
* No harm in adding it to the list. If a JWT auth header is provided,
  then process it (valid or not). If a JWT is not provided, move on to
  the next auth.
2024-02-27 15:09:16 -05:00
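In DRF terms, "first auth class" means ordering within `DEFAULT_AUTHENTICATION_CLASSES`: classes are tried in order, and a request with no JWT header simply falls through to the next class. A sketch (class paths are illustrative, not AWX's exact setting):

```python
REST_FRAMEWORK = {
    'DEFAULT_AUTHENTICATION_CLASSES': [
        # Tried first: only engages if a JWT auth header is present,
        # in which case the token must be valid.
        'ansible_base.jwt_consumer.common.auth.JWTAuthentication',  # illustrative path
        # No JWT header? DRF moves on to the remaining classes.
        'rest_framework.authentication.SessionAuthentication',
        'rest_framework.authentication.BasicAuthentication',
    ],
}
```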
Hao Liu
237adc6150 Publish multi-arch versioned awx-ee
dependent on https://github.com/ansible/awx-ee/pull/235
2024-02-27 08:21:51 +00:00
Hao Liu
09b028ee3c Publish multi-arch manifest of awx (#14929)
Promote multi-arch awx manifest
2024-02-26 17:05:57 -05:00
Hao Liu
fb83bfbc31 Fix ui_next banner (#14928) 2024-02-26 14:20:42 -05:00
Hao Liu
88e406e121 Fix CVEs and bump receptorctl (#14925)
CVE-2023-47627
CVE-2023-49083
CVE-2023-41040
CVE-2024-22195
CVE-2023-46137
2024-02-26 15:48:38 +00:00
Thanhnguyet Vo
59d0bcc63f Fixed some misc errors in illustrations and header formatting 2024-02-22 13:04:37 +00:00
Hao Liu
3fb3125bc3 Send QUIT to worker before dying (#14913)
Fix deadlock scenario where a dispatcher child process is stuck in its read-from-queue loop after the dispatcher parent process has decided to quit

Co-authored-by: Alan Rominger <arominge@redhat.com>
2024-02-21 16:08:43 -05:00
Alan Rominger
d70c6b9474 Reset another to test-playbooks 2024-02-21 15:11:12 +00:00
Alan Rominger
5549516a37 Reset these tests back to test-playbooks 2024-02-21 15:11:12 +00:00
Alan Rominger
14ac91a8a2 Swap repos test fix 2024-02-21 15:11:12 +00:00
Alan Rominger
d5753818a0 Stop using the bulky test-playbooks in tests where possible 2024-02-21 15:11:12 +00:00
Chad Ferman
33010a2e02 added # -*-coding:utf-8-*- to test if this fixes issues with users being unable to have Japanese, Chinese and Korean characters in email messages 2024-02-21 14:50:46 +00:00
Alan Rominger
14454cc670 Fix playbook errors found by debugging 2024-02-21 14:36:41 +00:00
Alan Rominger
7ab2bca16e Fix missing var name change 2024-02-21 14:36:41 +00:00
Alan Rominger
f0f655f2c3 Style consistency for task 'when' 2024-02-21 14:36:41 +00:00
Brian Coca
4286d411a7 fix project_update role/collection install
- avoid looping
  - avoid using multiple files, only one should be provided and processed per type
  - use first_found and variables to locate existing file
  - skip if no file exists
2024-02-21 14:36:41 +00:00
James Talton
06ad32ed8e Enhance the dashboard job summary endpoint to contain canceled and error job counts
Signed-off-by: James Talton <jtalton@redhat.com>
2024-02-21 14:12:59 +00:00
Alan Rominger
1ebff23232 Do not rely on unreliable dir output and use exists query 2024-02-21 13:43:54 +00:00
Alan Rominger
700de14c76 Make the migration middleware faster, second attempt 2024-02-21 13:43:54 +00:00
Thanhnguyet Vo
8605e339df Deleted duplicate graphics that were converted to drawio. 2024-02-21 13:10:41 +00:00
Thanhnguyet Vo
e50954ce40 Fixed graphics, illustrations, tables, examples, sizing 2024-02-21 13:10:41 +00:00
Hao Liu
7caca60308 Multi-arch build for AWX images in ghcr.io (#14899)
build amd64 and ARM image for
- awx
- awx_devel
- awx_kube_devel
2024-02-20 17:17:31 -05:00
Cesar Francisco San Nicolas Martinez
f4e13af056 Add support for terraform credentials in awxkit (#14902) 2024-02-20 13:42:25 +00:00
Jérémy Lal
decdb56288 Fix mistake in french message 2024-02-20 12:33:59 +00:00
Fdubois97
bcd4c2e8ef Fix: Ajout de traduction sur la langue FR 2024-02-20 12:32:18 +00:00
Thaïs DOUCET
d663066ac5 Fix typo in french message
Typo in "Launch Template" french message

Signed-off-by: Thaïs DOUCET <tdoucet@amiltone.com>
2024-02-20 12:21:31 +00:00
Sasa Jovicic
1ceebb275c Fix: login rerouting only works on the user's current tab (PR:#14399)
Signed-off-by: Sasa Jovicic <sjovicic@anexia-it.com>
2024-02-20 12:16:13 +00:00
César Francisco San Nicolás Martínez
f78ba282a6 Ability to use awxkit with custom websocket urls 2024-02-20 11:58:18 +00:00
Mikhail Yohman
81d88df757 Add YAML tab for Job Output event modal. 2024-02-19 16:45:46 +00:00
Hao Liu
0bdb01a9e9 Allow dev image to build on fork (#14898)
* Allow dev image to build on fork

Fix uppercase repo owner error in CI. Example:
```
Run docker pull ghcr.io/TheRealHaoLiu/awx_devel:devel
  docker pull ghcr.io/TheRealHaoLiu/awx_devel:devel
  shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0}
  env:
    LC_ALL: C.UTF-8
    CI_GITHUB_TOKEN: ***
    DEV_DOCKER_OWNER: TheRealHaoLiu
    COMPOSE_TAG: devel
    py_version: 3
invalid reference format: repository name must be lowercase
```

---------

Co-authored-by: Rick Elrod <rick@elrod.me>
2024-02-19 16:19:59 +00:00
Alan Rominger
cd91fbf59f Label any changes to requirements folder with dependencies label 2024-02-19 15:56:53 +00:00
Michael Abashian
f240e640e5 Run prettier 2024-02-19 14:48:36 +00:00
ivarmu
46f489185e fixes problems with workflow nodes information section 2024-02-19 14:48:36 +00:00
Thanhnguyet Vo
dbb80fb7e3 Updated release notes so they don't need to be revised so often. 2024-02-19 12:18:57 +00:00
Seth Foster
cb3d357ce1 Disable install_bundle endpoint for ingress node
As we do for control nodes, disable the
install_bundle endpoint for ingress nodes.

This can be done by checking if the instance's
managed field is True.

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2024-02-19 11:00:06 +00:00
Chris Meyers
dfa4db9266 Add tests for websocket endpoints
* authorized/not authorized tests for wsrelay endpoint
* not authorized test for web browser websockets
* skeleton of a test for authorized web browser websockets
2024-02-17 18:37:53 -05:00
Chris Meyers
6906a88dc9 Add pytest-asyncio to test channels websockets 2024-02-17 18:37:53 -05:00
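A minimal sketch of the kind of test pytest-asyncio enables here, using channels' `WebsocketCommunicator` against a stand-in consumer (not AWX's actual consumer):

```python
import pytest
from channels.generic.websocket import AsyncWebsocketConsumer
from channels.testing import WebsocketCommunicator


class EchoConsumer(AsyncWebsocketConsumer):  # stand-in consumer for illustration
    async def receive(self, text_data=None, bytes_data=None):
        await self.send(text_data=text_data)


@pytest.mark.asyncio
async def test_websocket_echo():
    communicator = WebsocketCommunicator(EchoConsumer.as_asgi(), "/websocket/")
    connected, _ = await communicator.connect()
    assert connected
    await communicator.send_to(text_data="ping")
    assert await communicator.receive_from() == "ping"
    await communicator.disconnect()
```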
Alan Rominger
1f7be9258c Remove tower_legacy module_utils that appears unused (#14421)
* Remove tower_legacy module that appears unused

* Update license details
2024-02-16 16:02:09 -05:00
Jake Jackson
dcce024424 Update release doc to check for awxkit tar files. (#14892)
add a step to release process to confirm tar file is published
2024-02-16 18:51:58 +00:00
Jeff Bradberry
79d7179c72 Fix the persistent breakage when cleaning up branches
The github workflow that we have set up for branch deletion doesn't work:

- the `on: delete` event does not support the `branches:` filter
- the `mode` flag for the aws_s3 module does not have `delete` as one
  of the options.  The proper option appears to be `delobj`.
2024-02-15 15:16:18 -05:00
Alan Rominger
4d80f886e0 Revert "Drop cython dep" (#14884)
* Revert "Remove cython lib"

This reverts commit 46f816e7a4.

* Revert "WIP consider droping cython dep"

This reverts commit 54b32c10f0.

* Update Cython comment
2024-02-15 11:58:17 -05:00
TVo
5179333185 Added mesh ingress content to instances chapter. (#14854)
* Added mesh ingress content to instances chapter.

* Changes to incorp feedback from @TheRealHaoLiu.

* Added mesh ingress content to instances chapter.

* Line 75

Co-authored-by: Seth Foster <fosterseth@users.noreply.github.com>

* Line 117

Co-authored-by: Seth Foster <fosterseth@users.noreply.github.com>

* Line 126

Co-authored-by: Seth Foster <fosterseth@users.noreply.github.com>

* Wording changes and update graphics

---------

Co-authored-by: Seth Foster <fosterseth@users.noreply.github.com>
2024-02-15 07:19:52 -07:00
Alan Rominger
362e11aaf2 Respect old downtime setting name if user has already set it 2024-02-15 12:34:24 +00:00
Hao Liu
decff01fa4 Update command for sos-report
wsbroadcast has been renamed to wsrelay
2024-02-15 12:32:05 +00:00
Daniel Gonçalves
a14cc8199d Add python 3.12 dependency (#14869)
Signed-off-by: Daniel Gonçalves <daniel.gonc@lves.fr>
2024-02-14 15:12:51 -05:00
Chris Meyers
b6436826f6 Support websocket per-endpoint auth
* Channels doesn't really give you an interface to support per-endpoint
  auth ... so this adds one.
* The web browser and node <--> node communication have different auth
  needs.
2024-02-14 15:11:52 -05:00
Chris Meyers
2109b5039e Fixes "Task was destroyed but it is pending"
* asyncio.gather expects *tasks .. gotta unpack the list
2024-02-14 15:11:52 -05:00
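The fix is easy to miss: `asyncio.gather` takes awaitables as positional arguments, so passing the list itself fails and leaves every task pending at shutdown. A self-contained illustration:

```python
import asyncio


async def work(n: int) -> int:
    await asyncio.sleep(0)
    return n


async def main() -> None:
    tasks = [asyncio.create_task(work(i)) for i in range(3)]
    # Wrong: await asyncio.gather(tasks) -- passes one list argument,
    # raises TypeError, and the tasks die pending ("Task was destroyed
    # but it is pending!").
    print(await asyncio.gather(*tasks))  # right: unpack the list -> [0, 1, 2]


asyncio.run(main())
```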
root
b6f9b73418 adding iputils 2024-02-14 17:01:15 +00:00
Christian M. Adams
40a8a3cb2f Add dockerx make target for building awx for arm64
Signed-off-by: Christian M. Adams <chadams@redhat.com>
2024-02-14 16:01:36 +00:00
David O Neill
19f80c0a26 GH13983 - Add additional check for bad templates 2024-02-14 15:49:26 +00:00
Christian Adams
5d1bb2125e Update .github/workflows/stage.yml
Co-authored-by: Hao Liu <44379968+TheRealHaoLiu@users.noreply.github.com>
2024-02-14 15:44:21 +00:00
Christian Adams
99c512bcef Update .github/workflows/stage.yml
Co-authored-by: Hao Liu <44379968+TheRealHaoLiu@users.noreply.github.com>
2024-02-14 15:44:21 +00:00
Christian M. Adams
ed0329f5db Build multi-arch awx-operator images when releasing 2024-02-14 15:44:21 +00:00
Chris Meyers
dd53345397 shhhhhhhhhhhhhhh
Hopefully silence some setuptools
2024-02-14 15:38:31 +00:00
Chris Meyers
f66cde51d7 More locked down websocket path
* Previously, the nginx location would match on /foo/websocket... or
  /foo/api/websocket... Now, we require these two paths to start at the
  root i.e. <host>/websocket/... /api/websocket/...
* Note: We now also require an ending / and do NOT support
  <host>/websocket_foobar but DO support <host>/websocket/foobar. This
  was always the intended behavior. We want to keep
  <host>/api/websocket/... "open" and routing to daphne in case we want
  to add more websocket urls in the future.
2024-02-14 13:50:51 +00:00
thedoubl3j
d1c31687fc remove the ldap volume when cleaning all volumes 2024-02-14 12:33:42 +00:00
Jeff Bradberry
38424487f1 Unbreak the pip-compile command when multiple files are passed in (#14875) 2024-02-13 15:59:04 -05:00
Hao Liu
b0565e9937 Switch to docker_compose_v2 in tools playbook (#14872)
Switch to docker_compose_v2

Fix
```
"Configuration error - kwargs_from_env() got an unexpected keyword argument 'ssl_version'"}
```
2024-02-13 13:05:33 -05:00
Hao Liu
44d85b589c Retries on vault unseal (#14873)
Sometimes we tried to unseal when vault is not ready yet
2024-02-13 13:05:23 -05:00
Alan Rominger
46f816e7a4 Remove cython lib 2024-02-13 14:45:28 +00:00
Alan Rominger
54b32c10f0 WIP consider dropping cython dep 2024-02-13 14:45:28 +00:00
Alan Rominger
20202054cc Use lowercase password 2024-02-13 14:36:39 +00:00
Alan Rominger
e84e2962d0 Support DB configs where PASSWORD is not used 2024-02-13 14:36:39 +00:00
Chris Meyers
2259047527 Websockets now use rest_framework configed auth
* Always support cookies, session, and also allow rest_framework
  configured auth methods over the browser websocket.
* The node -> node websocket auth remains locked down and unchanged
2024-02-13 12:05:25 +00:00
Chris Meyers
f429ef6ca7 Allow connecting to websockets via api/websocket/
* Before, we just allowed websockets on <host>/websocket/. With this
  change, they can now come from <host>/api/websocket/
2024-02-13 12:02:44 +00:00
Hao Liu
4b637c1319 gitignore pyenv python-version file
pyenv local can be used to set a directory-specific python version via the `.python-version` file in the directory

Signed-off-by: Hao Liu <44379968+TheRealHaoLiu@users.noreply.github.com>
2024-02-13 12:00:57 +00:00
Alan Rominger
4c41f6b018 Ability to use updater script to pin dev requirements (#14644)
* Add a dev option for updater script to pin CI reqs

* Avoid removing git links for dev requirements

* Add dev to primary options

* Fix up sanitize git switch
2024-02-12 11:57:59 -05:00
Jesse Wattenbarger
3ae72219b4 Change parsing of docker info in dev build
This is a non-functional change. The way os_info is populated with docker info
and grep 'Operating System' breaks on podman and likely in other places. This
makes it work on both podman and docker, and it will continue to return the
exact same strings everywhere else.
2024-02-12 16:40:48 +00:00
jessicamack
402c29dc52 Switch mailing list to forum (#14600)
* Switch mailing list to forum

* add link to community

Co-authored-by: Hao Liu <44379968+TheRealHaoLiu@users.noreply.github.com>

* use correct channel name

Co-authored-by: Hao Liu <44379968+TheRealHaoLiu@users.noreply.github.com>

---------

Co-authored-by: Hao Liu <44379968+TheRealHaoLiu@users.noreply.github.com>
2024-02-12 15:51:31 +00:00
Alan Rominger
8eb4a9a2a0 Update location of logstash build context (#14676) 2024-02-12 15:49:29 +00:00
Jeff Bradberry
36f3b46726 Avoid using SmartFilter during migrations (#14786)
Our migrations that touch roles tend to bring in our real models via
migration_utils.set_current_apps_for_migrations, and that can have
some undesirable side-effects.
2024-02-12 15:48:58 +00:00
Bikouo Aubin
55c6a319dc Add new credential type to support Terraform backend configuration (#14828)
* Add new credential type to support configuration of Terraform Backend

* Fix unit tests
2024-02-12 15:47:24 +00:00
Dave
56b6a07f6e Remove json serialization for notify validation (#14847)
* Remove json serialization for notify validation

* Update serializers.py
2024-02-12 15:43:43 +00:00
Jake Jackson
519fd22bec Add ldap support to vault container in docker dev environment (#14777)
* add ldap_auth mount and configure it

* added in key engines, userpass auth method, still needs testing

* add policies and fix ldap_user

* start awx automation for vault demo and move ldap

* update docs with new flags/new credentials
2024-02-09 15:19:17 -05:00
TVo
2e5306ae8e Added LDAP support for HashiCorp Vault lookup credential (#14833)
* Added LDAP support for HashiCorp Vault lookup credential

* Added LDAP support for HashiCorp Vault lookup credential

* Replaced graphics and updated missing fields.

* Added LDAP support for HashiCorp Vault lookup credential

* Replaced graphics and updated missing fields.

* Incorporated review feedback from @thedoubl3j and @djyasin.
2024-02-09 10:17:14 -07:00
Cesar Francisco San Nicolas Martinez
068e6acbd5 Fix the way we are passing the awxkit base path to resources (#14862) 2024-02-09 15:31:52 +01:00
Seth Foster
f9a23a5645 Remove enable button for hop nodes (#14861)
Enabled/disabled does not apply to hop nodes,
since hop nodes don't run jobs.

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2024-02-08 22:10:10 -05:00
Seth Foster
40150a2be8 Fix UI peers_from_control_nodes (#14858)
* Fix UI peers_from_control_nodes

Fixes bug where peers_from_control_node was
greyed out in UI.

Additional changes:
- Make edit instance button only show for instances
with managed = False
- Make remove instance button only show for instances
with managed = False
- InstanceList selectable only for instances with
managed = False

---------

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2024-02-08 14:13:11 -05:00
TVo
b79aa5b1ed Removed erroneous line for basic auth 2024-02-08 12:45:47 -05:00
Jeff Bradberry
b3aeb962ce Fix the test_export_system_auditor collection test 2024-02-07 15:55:19 -05:00
Jeff Bradberry
2300b8fddf Fix linting problem 2024-02-07 15:55:19 -05:00
Lorenzo Tanganelli
3a3284b5df rework with check if POST exists 2024-02-07 15:55:19 -05:00
Lorenzo Tanganelli
2359004cc1 fix description extraction on export 2024-02-07 15:55:19 -05:00
Lorenzo Tanganelli
694d7e98e7 leave $encrypted$ on export
add removal of $encrypted$ from import when the object does not exist
2024-02-07 14:29:30 -05:00
Julen Landa Alustiza
8c9c02c975 awxkit: allow to modify api base url (#14835)
Signed-off-by: Julen Landa Alustiza <jlanda@redhat.com>
2024-02-07 12:26:42 +01:00
Chris Meyers
8a902debd5 Per-service metrics http server
* Organize metrics into their respective service
* Serve per-service metrics on a per-service http server
* Increase prometheus client usage over our custom metrics fields
2024-02-05 15:17:24 -05:00
Seth Foster
6dcaa09dfb UI rename Endpoints to Listener Addresses
Listener Addresses is a better name to
emphasize these are routable addresses to
reach a listener service on the node.

Also removed expand toggle on the listener
addresses list items, as the expanded mode
had no additional information.

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2024-02-02 10:37:41 -05:00
Seth Foster
21fd6af0f9 InstanceLink unique constraint source and target
Prevent creating InstanceLinks with duplicate
source and target pairings.

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2024-02-02 10:37:41 -05:00
Seth Foster
eeae1d59d4 Disable health check button if managed
Also, update ui screen tests to expect
injecting "listener_port: null" if
listener_port is empty

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2024-02-02 10:37:41 -05:00
Seth Foster
a252d0ae33 Fix UI lint by running npm prettier
Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2024-02-02 10:37:41 -05:00
Seth Foster
48971411cc Protocol blank if no canonical address
Make protocol be blank on instance if there
is no canonical address for this instance.

It was defaulting to "tcp" before.

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2024-02-02 10:37:41 -05:00
Seth Foster
083c05f12a Prevent duplicating instance links
In receptor address post-save method:
- Fixed detecting if address was missing
a link from control nodes
- Use InstanceLink create_or_update to prevent
adding duplicate InstanceLink source and target
peers

In instance serializer create_or_update,
delete receptor addresses first before doing
instance create or update. This ensures that we don't
trigger unnecessary post-save methods that might
attempt to manipulate receptor addresses that
will just be removed later.

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2024-02-02 10:37:41 -05:00
Seth Foster
b558397b67 Remove redundant tests
test_listener_port
test_peers_from_control_nodes
test_peers_from_control_nodes_without_listener_port

are covered in the following tests:

test_no_op
test_creates_canonical_address
test_deletes_canonical_address
test_updates_canonical_address
test_canonical_address_validation_error

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2024-02-02 10:37:41 -05:00
Seth Foster
904c6001e9 If managed, cannot modify peers_from_control_nodes
Adds validation to prevent changing
peers_from_control_nodes if instance managed=True

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2024-02-02 10:37:41 -05:00
Seth Foster
818e11dfdc Test inspect_established_receptor_connections
Add functional test case for inspecting
established receptor connections.

InstanceLink starts in ADDING state, and should
move to ESTABLISHED state if the connection
is detected in the receptor status output.

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2024-02-02 10:37:41 -05:00
Jeff Bradberry
7fc13a0569 Write tests around the two special instance serializer fields
and all of the cases that they might be in.
2024-02-02 10:37:41 -05:00
Jeff Bradberry
92c693f14e Break out peer validation into its own method 2024-02-02 10:37:41 -05:00
Jeff Bradberry
f2417f0ed2 Make the peer validation more compact
and reuse information.
2024-02-02 10:37:41 -05:00
Jeff Bradberry
8f22188116 Use a select_related to build the peers queryset in the install bundle
Since the relationship is ReceptorAddress -> Instance,
prefetch_related isn't necessary.
2024-02-02 10:37:41 -05:00
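The ORM distinction being made: `ReceptorAddress -> Instance` is a single-valued foreign key, so `select_related` can pull the instance in with a SQL JOIN in one query; `prefetch_related` (a second query) is only needed for multi-valued relations. A sketch with simplified stand-in models:

```python
from django.db import models


class Instance(models.Model):
    hostname = models.CharField(max_length=250)

    class Meta:
        app_label = 'main'  # illustrative


class ReceptorAddress(models.Model):
    instance = models.ForeignKey(Instance, on_delete=models.CASCADE)
    address = models.CharField(max_length=250)

    class Meta:
        app_label = 'main'  # illustrative


# One query with a JOIN; peer.instance is populated without extra lookups.
peers = ReceptorAddress.objects.select_related('instance').all()
```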
Jeff Bradberry
05502c0af8 Placeholder FIXMEs for things of concern 2024-02-02 10:37:41 -05:00
Jeff Bradberry
957ce59bf7 Use Counter to find duplicate peer relationships
Gives a bit more readability.
2024-02-02 10:37:41 -05:00
Jeff Bradberry
cc4cc37d46 'managed' is a read-only field on InstanceSerializer
so we don't need complex logic to compare an incoming to existing value.
2024-02-02 10:37:41 -05:00
Jeff Bradberry
1e254c804c The listener port cannot be disabled when setting peers_from_control_nodes
If the port is explicitly set to null (causing any ReceptorAddress to
be deleted), then that's a validation error.

If the port is left off but a ReceptorAddress doesn't already exist,
we should not infer a port number and that is also a validation error.
2024-02-02 10:37:41 -05:00
Seth Foster
1b44bebed3 Peers_from_control_nodes requires listener port
Adds validation and a unit test to ensure:

- peers_from_control_nodes=True should fail if
listener_port is not set
- peers_from_control_nodes=False should be NOOP if
listener_port is not set

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2024-02-02 10:37:41 -05:00
Seth Foster
a4cf55bdba Require receptor collection 2.0.3
This release supports ingress hop nodes, and
is backwards compatible with the previous AWX
version.

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2024-02-02 10:37:41 -05:00
Seth Foster
c333d0e82f Prevent modifying peers on managed node
Add validation to prevent any managed node
from modifying "peers" through the API

Peering from these nodes should be handled
by setting peers_from_control_nodes only.

Managed nodes are control nodes and
ingress hop nodes.

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2024-02-02 10:37:41 -05:00
David O Neill
b093c89a84 Form hardening and node type exclusion
Disabled add/edit/remove for managed nodes
Tightened validation on Peers from control nodes
2024-02-02 10:37:41 -05:00
Jeff Bradberry
f98493aa61 Support wss as ws-listener in the Receptor config 2024-02-02 10:37:41 -05:00
Hao Liu
c36d2b0485 Update requirements.yml 2024-02-02 10:37:41 -05:00
Jeff Bradberry
8ddb604bf1 Template the listener protocol into the receptor install bundle (#14792)
Previously we were hard-coding tcp, now we need to also support ws/wss.
2024-02-02 10:37:41 -05:00
Seth Foster
cd9dd43be7 Make InstanceLink target non-nullable
InstanceLink target should not be null.

Should be safe to set to null=False, because we have
a custom RunPython method to explicitly set
target to a proper key.

Also, add new test to test_migrations
which ensures data integrity after migrating
the receptor address model changes.

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2024-02-02 10:37:41 -05:00
Jeff Bradberry
82323390a7 Reconstitute migration file
Do the things as a proper 3-step process -- add new stuff, copy the data, drop the old.
2024-02-02 10:37:41 -05:00
Seth Foster
4c5ac1d3da Add management command to remove address
Adds remove_receptor_address to delete a
receptor address from the database

Also, enforce that only 1 canonical address
can be added to an instance via
the add_receptor_address command.

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2024-02-02 10:37:41 -05:00
David O Neill
9c06370e33 InstanceAdd sends null for port_listener 2024-02-02 10:37:41 -05:00
David O Neill
449b95d1eb Fix remaining tests, removed unused code 2024-02-02 10:37:41 -05:00
David O Neill
1712540c8e Updates for receptor release to ui for protocol and is_managed 2024-02-02 10:37:41 -05:00
Seth Foster
7cf639d8eb Add migration to support InstanceLink changes
- Add forwards method to create a receptor address
for any existing Instance that has listener_port defined

- Add forwards method to modify each InstanceLink object
that changes target to the newly created receptor addresses

This migration was implemented as follows:

1. Add a target_new to InstanceLink which is a foreign key
to ReceptorAddress
2. create receptor addresses
3. link to these receptor addresses using the target_new field
4. rename target_new to target
5. drop listener_port and peers_from_control_nodes from Instance

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2024-02-02 10:37:41 -05:00
Seth Foster
dbfcc40d7c Only create receptor address if port is defined
If an Instance endpoint is patched with
{"peers_from_control_nodes": true}

but a listener_port is not defined on the instance
and is not part of the patch payload, do not create a
receptor address.

Only update or create a receptor address if listener_port
is set, either in the payload or already on the instance.

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2024-02-02 10:37:41 -05:00
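
A small sketch of that condition, assuming an `update_or_create` on a canonical-address relation (names are illustrative):

```
# Sketch: only touch the receptor address when a listener_port is available.
def sync_receptor_address(instance, attrs):
    port = attrs.get('listener_port', getattr(instance, 'listener_port', None))
    if port is None:
        return None  # no port in the payload or on the instance: do nothing
    addr, _created = instance.receptor_addresses.update_or_create(
        canonical=True,
        defaults={'address': instance.hostname, 'port': port},
    )
    return addr
```
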
Hao Liu
73d2c92ae3 Fix condition for creating receptor_address
If listener_port is not explicitly defined don't create a receptor_address
2024-02-02 10:37:41 -05:00
Seth Foster
24a4242147 Add protocol to receptor address serializer
Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2024-02-02 10:37:41 -05:00
Seth Foster
92ce85b688 Remove unused warnings import
Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2024-02-02 10:37:41 -05:00
Seth Foster
9531f8377a Ensure register_peers target is ReceptorAddress
The command has 3 "targets", each of which should be a
ReceptorAddress object

- peers, disconnect, exact

Add logic to make sure each entry in those lists
is a receptor address.

When creating InstanceLink objects, make sure
target is ReceptorAddress, not an Instance.

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2024-02-02 10:37:41 -05:00
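
An illustrative helper for that coercion; the lookup path is an assumption based on the models described in these commits:

```
# Sketch: map hostnames given to register_peers onto ReceptorAddress rows.
from django.core.management.base import CommandError

def resolve_addresses(hostnames):
    from awx.main.models import ReceptorAddress  # real model lives in AWX
    addresses = []
    for hostname in hostnames:
        addr = ReceptorAddress.objects.filter(instance__hostname=hostname).first()
        if addr is None:
            raise CommandError(f"No receptor address found for {hostname}")
        addresses.append(addr)
    return addresses
```
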
Hao Liu
15a16b3dd1 Update bootstrap_development.sh 2024-02-02 10:37:41 -05:00
Seth Foster
a37e7bf147 Add canonical=True when creating ReceptorAddress in tests
Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2024-02-02 10:37:41 -05:00
Hao Liu
a2fcd2f97a Fix ui-lint error 2024-02-02 10:37:41 -05:00
Hao Liu
c394ffdd19 Fix lint error, remove unused import 2024-02-02 10:37:41 -05:00
Seth Foster
69102cf265 Remove receptor_address module from collection
After removing CRUD from receptor addresses, we need
to remove the module.

- remove receptor_address module
- Add listener_port to instance module
- Add peers_from_control_nodes to instance module

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2024-02-02 10:37:41 -05:00
Seth Foster
a188798543 Make canonical field default to False
Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2024-02-02 10:37:41 -05:00
Seth Foster
60108ebd10 Rename migration dependency
Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2024-02-02 10:37:41 -05:00
Jeff Bradberry
8c7c00451a Join across the InstanceLink.target to the underlying Instance
to avoid having to do a lookup for every instance in the list for the
mesh visualizer endpoint.
2024-02-02 10:37:41 -05:00
Seth Foster
7a1ed406da Remove CRUD for Receptor Addresses
Removes ability to directly create and delete
receptor addresses for a given node.

Instead, receptor addresses are created automatically
if listener_port is set on the Instance.

For example, patching a "hop" instance

with {"listener_port": 6667}

will create a canonical receptor address with port
6667.

Likewise, peers_from_control_nodes on the instance
sets the peers_from_control_nodes on the canonical
address (if listener port is also set).

protocol is a read-only field that simply reflects
the canonical address protocol.

Other Changes:

- rename k8s_routable to is_internal
- add protocol to ReceptorAddress
- remove peers_from_control_nodes and listener_port
from Instance model

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2024-02-02 10:37:41 -05:00
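
For example, the patch described above could be issued like this; the host, instance id, and credentials are placeholders:

```
# Hypothetical API call matching the behavior described in the commit above.
import requests

resp = requests.patch(
    "https://awx.example.org/api/v2/instances/42/",   # placeholder host and id
    json={"listener_port": 6667, "peers_from_control_nodes": True},
    auth=("admin", "password"),                       # example credentials
)
resp.raise_for_status()
# protocol is read-only and reflects the canonical address protocol
print(resp.json().get("protocol"))
```
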
César Francisco San Nicolás Martínez
f916ffe1e9 Adjust migration names and dependencies 2024-02-02 10:37:41 -05:00
César Francisco San Nicolás Martínez
901dbd697e Comment unused dependency 2024-02-02 10:37:41 -05:00
David O Neill
d8b4a9825e Cleanup
- Remove peer selection on add and edit instance
- Added canonical name and ordered column names the same on endpoints and
  peers
- Other cleanup items
- rename name to instance name
2024-02-02 10:37:41 -05:00
Hao Liu
6db66c5f81 Fix provision instance not respecting protocol 2024-02-02 10:37:41 -05:00
David O Neill
82ad7dcf40 Mesh UI support
- add endpoint
- delete endpoint (wip)
- associate
- disassociate
2024-02-02 10:37:41 -05:00
David O Neill
93500f9fea Fix lint trailing whitespace 2024-02-02 10:37:41 -05:00
Seth Foster
9ba70c151d Add canonical receptor address
Creates a non-deletable address that acts as
the "main" address for this instance.

All other addresses for that instance must
be non-canonical.

When listener_port on an instance is set, automatically
create a canonical receptor address where:
  - address is hostname of instance
  - port is listener_port
  - canonical is True

Additionally, protocol field is added to instance to
denote the receptor listener protocol to use (ws, tcp).

The receptor config listener information is derived from
the listener_port and protocol information. Having a
canonical address that mirrors the listener_port ensures that
an address exists that matches the receptor config information.

Other changes:
- Add managed field to receptor address.
If managed is True, no fields on this address can be edited
via the API.
If canonical is True, only the address field cannot be edited.

- Add managed field to instance. If managed is True, users
cannot set node_state to deprovisioning (i.e. cannot delete node)

This changes our mechanism for preventing users from deleting
the mesh ingress hop node.

- Field is_internal is now renamed to k8s_routable

- Add reverse_peers on instance which is a list of instance IDs
that peer to this instance (via an address)

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2024-02-02 10:37:41 -05:00
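
The managed/canonical editability rules could be summarized like this (a sketch, with assumed field names):

```
# Sketch: which ReceptorAddress fields remain editable through the API.
def editable_fields(addr):
    if addr.managed:
        return set()  # managed addresses: nothing is editable via the API
    fields = {'address', 'port', 'protocol', 'websocket_path',
              'peers_from_control_nodes'}
    if addr.canonical:
        fields.discard('address')  # canonical: only the address field is locked
    return fields
```
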
David O Neill
46dc61253f UI Updates for receptor peering 2024-02-02 10:37:41 -05:00
Seth Foster
6cb2cd18b0 Fix proper indent to instance module
Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2024-02-02 10:37:41 -05:00
Seth Foster
5d1dd8ec41 Fix inconsistent tab width
Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2024-02-02 10:37:41 -05:00
Seth Foster
9f69daf787 Add choices to module protocol field
Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2024-02-02 10:37:41 -05:00
Seth Foster
16ece5de7e Remove unused variables and imports
Addresses flake8 and linting failures

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2024-02-02 10:37:41 -05:00
Seth Foster
ab0e9265c5 Add search fields to views
Make receptoraddress list views
searchable by "address"

Other changes:
- Add help text to source and target of the
InstanceLink model

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2024-02-02 10:37:41 -05:00
Seth Foster
04cbbbccfa Update awx_collection to support ReceptorAddress
- Add receptor_address module which allows
users to create addresses for instances

- Update awx_collection functional and integration
tests to support new peering design

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2024-02-02 10:37:41 -05:00
Seth Foster
d1cacf64de Add functional and unit tests
Updated existing tests to support the
ReceptorAddress model

- cannot peer to self
- cannot peer to node that is already peered to me
- cannot peer to node more than once (via 2+ addresses)
- cannot set is_internal True

Other changes:
Change post save signal to only call
schedule_write_receptor_config() when an actual change is detected.

Make functional tests more robust by
checking for the specific validation error in the
response.

I.e. instead of just checking for a 400, check for a 400
and that the error message corresponds to the
validation we are testing for.

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2024-02-02 10:37:41 -05:00
Seth Foster
5385eb0fb3 Add validation when setting peers
- cannot peer to self
- cannot peer to instance that is already peered to self

Other changes:
- ReceptorAddress protocol field restricted to choices: tcp, ws, wss
- fix awx-manage list_instances when instance.last_seen is None
- InstanceLink make source and target unique together
- Add help text to the ReceptorAddress fields

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2024-02-02 10:37:41 -05:00
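
A minimal sketch of the two peer rules, assuming `peers` is a many-to-many to ReceptorAddress as described in the surrounding commits:

```
# Illustrative validation: no self-peering, no reciprocal peering.
from rest_framework import serializers

def validate_peers(instance, peer_addresses):
    for addr in peer_addresses:
        if addr.instance_id == instance.id:
            raise serializers.ValidationError("Instance cannot peer to itself.")
        if addr.instance.peers.filter(instance=instance).exists():
            raise serializers.ValidationError(
                f"{addr.instance.hostname} is already peered to this instance."
            )
```
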
Seth Foster
7d7503279d Add API validation when creating ReceptorAddress
- websocket_path can only be set if protocol is ws
- is_internal must be False
- only 1 address per instance can have
peers_from_control_nodes set to True

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2024-02-02 10:37:41 -05:00
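
Those three checks, sketched as a serializer-style validator (names are assumptions):

```
# Sketch of the creation-time checks listed in the commit above.
from rest_framework import serializers

def validate_new_address(attrs, existing_addresses):
    if attrs.get('websocket_path') and attrs.get('protocol') != 'ws':
        raise serializers.ValidationError("websocket_path requires protocol 'ws'.")
    if attrs.get('is_internal'):
        raise serializers.ValidationError("is_internal must be False.")
    if attrs.get('peers_from_control_nodes') and existing_addresses.filter(
            peers_from_control_nodes=True).exists():
        raise serializers.ValidationError(
            "Only one address per instance may set peers_from_control_nodes."
        )
```
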
Hao Liu
d860d1d91b Temp change to aid dev
Temporarily switch to feature branch for receptor-collection in instance_install_bundle to make dev easier
2024-02-02 10:37:41 -05:00
Seth Foster
3a17c45b64 Register_peers support for receptor_addresses
register_peers has inputs:

source: source instance
peers: list of instances the source should peer to

InstanceLink "target" is now expected to be a ReceptorAddress

For each peer, we can just use the first receptor address. If
multiple receptor addresses exist, throw a command error.

Currently this command is only used on VM-deployments, where
there is only a single receptor address per instance, so this
should work fine.

Other changes:
drop listener_port field from Instance. Listener port is now just
"port" on ReceptorAddress

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2024-02-02 10:37:41 -05:00
Seth Foster
bca68bcdf1 Add install bundle support
group_vars all.yaml changes:
- peer entry has two fields, address and port
- receptor_port is inferred from the first
receptor_address entry that uses protocol tcp

other changes:
ActivityStream now records when receptor_addresses
are peered to

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2024-02-02 10:37:41 -05:00
Seth Foster
c32f234ebb Add peers_from_control_nodes to ReceptorAddress
- write_receptor_config peers to ReceptorAddress entries
that have peers_from_control_nodes enabled

- peers_from_control_nodes and listener_port removed from Instance model

- peers_from_control_nodes added to ReceptorAddress model

- ReceptorAddress is now unique by address and protocol combination

- Write receptor config task is dispatched upon ReceptorAddress creation
or deletion, and when a control node is first created

- InstanceLinkSerializer adds a target_address field and has logic
to grab the instance hostname associated with the peered ReceptorAddress

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2024-02-02 10:37:41 -05:00
Seth Foster
5cb3d3b078 Update receptor conf when address changes
Add post save and post delete hooks to
call write_receptor_config when
a receptor address is added / removed.

Add peers_from_control_nodes to
provision_instance

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2024-02-02 10:37:41 -05:00
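
The hook wiring likely resembles standard Django signal receivers, roughly as below; the receiver name is a stand-in, while schedule_write_receptor_config is the task dispatch these commits mention:

```
# Sketch of the post-save/post-delete wiring (receiver name is a stand-in).
from django.db.models.signals import post_save, post_delete
from django.dispatch import receiver

def schedule_write_receptor_config():
    ...  # dispatch the write_receptor_config task mentioned in these commits

@receiver(post_save, sender='main.ReceptorAddress')
@receiver(post_delete, sender='main.ReceptorAddress')
def on_receptor_address_change(sender, instance, **kwargs):
    # Regenerate receptor config whenever an address is added or removed.
    schedule_write_receptor_config()
```
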
Seth Foster
5199cc5246 Add ReceptorAddress to root urls
- Add database constraints to make sure addresses
are unique:
If port is defined,
address, port, protocol, websocket_path are unique together.

If port is not defined,
address, protocol, websocket_path are unique together.

- Allow deleting address via API
- Add ReceptorAddressAccess to determine permissions
- awx-manage add_receptor_address returns changed: True
if successful
2024-02-02 10:37:41 -05:00
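
Those conditional uniqueness rules map naturally onto Django UniqueConstraints, roughly as follows (model, field, and constraint names are illustrative):

```
# Sketch of the two conditional uniqueness constraints described above.
from django.db import models

class ReceptorAddressSketch(models.Model):
    address = models.CharField(max_length=255)
    port = models.IntegerField(null=True)
    protocol = models.CharField(max_length=10)
    websocket_path = models.CharField(max_length=255, blank=True)

    class Meta:
        app_label = 'main'  # placeholder
        constraints = [
            models.UniqueConstraint(
                fields=['address', 'port', 'protocol', 'websocket_path'],
                condition=models.Q(port__isnull=False),
                name='unique_with_port'),
            models.UniqueConstraint(
                fields=['address', 'protocol', 'websocket_path'],
                condition=models.Q(port__isnull=True),
                name='unique_without_port'),
        ]
```
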
Hao Liu
387e877485 Connect from controlplane node to mesh ingress 2024-02-02 10:37:41 -05:00
Seth Foster
d54c5934ff Add support for inbound hop nodes 2024-02-02 10:37:41 -05:00
Chris Meyers
2fa5116197 Project updates do not run against hosts
* The project update event has no host_id or host_name because project
  updates only run on localhost.
2024-02-01 14:45:16 -05:00
Chris Meyers
527755d986 Always get host from event data
* Regardless of whether there is a host map, get the host that Ansible
  reports and assign it to the event's host_name
2024-01-31 09:42:01 -05:00
394 changed files with 9243 additions and 3360 deletions

View File

@@ -11,6 +11,12 @@ runs:
shell: bash
run: echo py_version=`make PYTHON_VERSION` >> $GITHUB_ENV
- name: Set lower case owner name
shell: bash
run: echo "OWNER_LC=${OWNER,,}" >> $GITHUB_ENV
env:
OWNER: '${{ github.repository_owner }}'
- name: Log in to registry
shell: bash
run: |
@@ -18,11 +24,11 @@ runs:
- name: Pre-pull latest devel image to warm cache
shell: bash
run: docker pull ghcr.io/${{ github.repository_owner }}/awx_devel:${{ github.base_ref }}
run: docker pull ghcr.io/${OWNER_LC}/awx_devel:${{ github.base_ref }}
- name: Build image for current source checkout
shell: bash
run: |
DEV_DOCKER_TAG_BASE=ghcr.io/${{ github.repository_owner }} \
DEV_DOCKER_TAG_BASE=ghcr.io/${OWNER_LC} \
COMPOSE_TAG=${{ github.base_ref }} \
make docker-compose-build

View File

@@ -35,7 +35,7 @@ runs:
- name: Start AWX
shell: bash
run: |
DEV_DOCKER_TAG_BASE=ghcr.io/${{ github.repository_owner }} \
DEV_DOCKER_OWNER=${{ github.repository_owner }} \
COMPOSE_TAG=${{ github.base_ref }} \
COMPOSE_UP_OPTS="-d" \
make docker-compose
@@ -71,7 +71,7 @@ runs:
id: data
shell: bash
run: |
AWX_IP=$(docker inspect -f '{{.NetworkSettings.Networks._sources_awx.IPAddress}}' tools_awx_1)
AWX_IP=$(docker inspect -f '{{.NetworkSettings.Networks.awx.IPAddress}}' tools_awx_1)
ADMIN_TOKEN=$(docker exec -i tools_awx_1 awx-manage create_oauth2_token --user admin)
echo "ip=$AWX_IP" >> $GITHUB_OUTPUT
echo "admin_token=$ADMIN_TOKEN" >> $GITHUB_OUTPUT

View File

@@ -15,5 +15,4 @@
"dependencies":
- any: ["awx/ui/package.json"]
- any: ["requirements/*.txt"]
- any: ["requirements/requirements.in"]
- any: ["requirements/*"]

View File

@@ -66,6 +66,8 @@ jobs:
awx-operator:
runs-on: ubuntu-latest
timeout-minutes: 60
env:
DEBUG_OUTPUT_DIR: /tmp/awx_operator_molecule_test
steps:
- name: Checkout awx
uses: actions/checkout@v3
@@ -94,11 +96,11 @@ jobs:
- name: Build AWX image
working-directory: awx
run: |
ansible-playbook -v tools/ansible/build.yml \
-e headless=yes \
-e awx_image=awx \
-e awx_image_tag=ci \
-e ansible_python_interpreter=$(which python3)
VERSION=`make version-for-buildyml` make awx-kube-build
env:
COMPOSE_TAG: ci
DEV_DOCKER_TAG_BASE: local
HEADLESS: yes
- name: Run test deployment with awx-operator
working-directory: awx-operator
@@ -107,10 +109,19 @@ jobs:
ansible-galaxy collection install -r molecule/requirements.yml
sudo rm -f $(which kustomize)
make kustomize
KUSTOMIZE_PATH=$(readlink -f bin/kustomize) molecule -v test -s kind
KUSTOMIZE_PATH=$(readlink -f bin/kustomize) molecule -v test -s kind -- --skip-tags=replicas
env:
AWX_TEST_IMAGE: awx
AWX_TEST_IMAGE: local/awx
AWX_TEST_VERSION: ci
AWX_EE_TEST_IMAGE: quay.io/ansible/awx-ee:latest
STORE_DEBUG_OUTPUT: true
- name: Upload debug output
if: failure()
uses: actions/upload-artifact@v3
with:
name: awx-operator-debug-output
path: ${{ env.DEBUG_OUTPUT_DIR }}
collection-sanity:
name: awx_collection sanity
@@ -127,10 +138,6 @@ jobs:
- name: Run sanity tests
run: make test_collection_sanity
env:
# needed due to cgroupsv2. This is fixed, but a stable release
# with the fix has not been made yet.
ANSIBLE_TEST_PREFER_PODMAN: 1
collection-integration:
name: awx_collection integration

View File

@@ -3,28 +3,50 @@ name: Build/Push Development Images
env:
LC_ALL: "C.UTF-8" # prevent ERROR: Ansible could not initialize the preferred locale: unsupported locale setting
on:
workflow_dispatch:
push:
branches:
- devel
- release_*
- feature_*
jobs:
push:
if: endsWith(github.repository, '/awx') || startsWith(github.ref, 'refs/heads/release_')
push-development-images:
runs-on: ubuntu-latest
timeout-minutes: 60
timeout-minutes: 120
permissions:
packages: write
contents: read
strategy:
fail-fast: false
matrix:
build-targets:
- image-name: awx_devel
make-target: docker-compose-buildx
- image-name: awx_kube_devel
make-target: awx-kube-dev-buildx
- image-name: awx
make-target: awx-kube-buildx
steps:
- name: Skipping build of awx image for non-awx repository
run: |
echo "Skipping build of awx image for non-awx repository"
exit 0
if: matrix.build-targets.image-name == 'awx' && !endsWith(github.repository, '/awx')
- uses: actions/checkout@v3
- name: Get python version from Makefile
run: echo py_version=`make PYTHON_VERSION` >> $GITHUB_ENV
- name: Set up QEMU
uses: docker/setup-qemu-action@v3
- name: Set lower case owner name
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
- name: Set GITHUB_ENV variables
run: |
echo "OWNER_LC=${OWNER,,}" >>${GITHUB_ENV}
echo "DEV_DOCKER_TAG_BASE=ghcr.io/${OWNER,,}" >> $GITHUB_ENV
echo "COMPOSE_TAG=${GITHUB_REF##*/}" >> $GITHUB_ENV
echo py_version=`make PYTHON_VERSION` >> $GITHUB_ENV
env:
OWNER: '${{ github.repository_owner }}'
@@ -37,23 +59,19 @@ jobs:
run: |
echo "${{ secrets.GITHUB_TOKEN }}" | docker login ghcr.io -u ${{ github.actor }} --password-stdin
- name: Pre-pull image to warm build cache
run: |
docker pull ghcr.io/${OWNER_LC}/awx_devel:${GITHUB_REF##*/} || :
docker pull ghcr.io/${OWNER_LC}/awx_kube_devel:${GITHUB_REF##*/} || :
docker pull ghcr.io/${OWNER_LC}/awx:${GITHUB_REF##*/} || :
- name: Setup node and npm
uses: actions/setup-node@v2
with:
node-version: '16.13.1'
if: matrix.build-targets.image-name == 'awx'
- name: Build images
- name: Prebuild UI for awx image (to speed up build process)
run: |
DEV_DOCKER_TAG_BASE=ghcr.io/${OWNER_LC} COMPOSE_TAG=${GITHUB_REF##*/} make docker-compose-build
DEV_DOCKER_TAG_BASE=ghcr.io/${OWNER_LC} COMPOSE_TAG=${GITHUB_REF##*/} make awx-kube-dev-build
DEV_DOCKER_TAG_BASE=ghcr.io/${OWNER_LC} COMPOSE_TAG=${GITHUB_REF##*/} make awx-kube-build
sudo apt-get install gettext
make ui-release
make ui-next
if: matrix.build-targets.image-name == 'awx'
- name: Push development images
- name: Build and push AWX devel images
run: |
docker push ghcr.io/${OWNER_LC}/awx_devel:${GITHUB_REF##*/}
docker push ghcr.io/${OWNER_LC}/awx_kube_devel:${GITHUB_REF##*/}
- name: Push AWX k8s image, only for upstream and feature branches
run: docker push ghcr.io/${OWNER_LC}/awx:${GITHUB_REF##*/}
if: endsWith(github.repository, '/awx')
make ${{ matrix.build-targets.make-target }}

View File

@@ -1,75 +0,0 @@
---
name: E2E Tests
env:
LC_ALL: "C.UTF-8" # prevent ERROR: Ansible could not initialize the preferred locale: unsupported locale setting
on:
pull_request_target:
types: [labeled]
jobs:
e2e-test:
if: contains(github.event.pull_request.labels.*.name, 'qe:e2e')
runs-on: ubuntu-latest
timeout-minutes: 40
permissions:
packages: write
contents: read
strategy:
matrix:
job: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24]
steps:
- uses: actions/checkout@v3
- uses: ./.github/actions/run_awx_devel
id: awx
with:
build-ui: true
github-token: ${{ secrets.GITHUB_TOKEN }}
- name: Pull awx_cypress_base image
run: |
docker pull quay.io/awx/awx_cypress_base:latest
- name: Checkout test project
uses: actions/checkout@v3
with:
repository: ${{ github.repository_owner }}/tower-qa
ssh-key: ${{ secrets.QA_REPO_KEY }}
path: tower-qa
ref: devel
- name: Build cypress
run: |
cd ${{ secrets.E2E_PROJECT }}/ui-tests/awx-pf-tests
docker build -t awx-pf-tests .
- name: Run E2E tests
env:
CYPRESS_RECORD_KEY: ${{ secrets.CYPRESS_RECORD_KEY }}
run: |
export COMMIT_INFO_BRANCH=$GITHUB_HEAD_REF
export COMMIT_INFO_AUTHOR=$GITHUB_ACTOR
export COMMIT_INFO_SHA=$GITHUB_SHA
export COMMIT_INFO_REMOTE=$GITHUB_REPOSITORY_OWNER
cd ${{ secrets.E2E_PROJECT }}/ui-tests/awx-pf-tests
AWX_IP=${{ steps.awx.outputs.ip }}
printenv > .env
echo "Executing tests:"
docker run \
--network '_sources_default' \
--ipc=host \
--env-file=.env \
-e CYPRESS_baseUrl="https://$AWX_IP:8043" \
-e CYPRESS_AWX_E2E_USERNAME=admin \
-e CYPRESS_AWX_E2E_PASSWORD='password' \
-e COMMAND="npm run cypress-concurrently-gha" \
-v /dev/shm:/dev/shm \
-v $PWD:/e2e \
-w /e2e \
awx-pf-tests run --project .
- uses: ./.github/actions/upload_awx_devel_logs
if: always()
with:
log-filename: e2e-${{ matrix.job }}.log

View File

@@ -2,12 +2,10 @@
name: Feature branch deletion cleanup
env:
LC_ALL: "C.UTF-8" # prevent ERROR: Ansible could not initialize the preferred locale: unsupported locale setting
on:
delete:
branches:
- feature_**
on: delete
jobs:
push:
branch_delete:
if: ${{ github.event.ref_type == 'branch' && startsWith(github.event.ref, 'feature_') }}
runs-on: ubuntu-latest
timeout-minutes: 20
permissions:
@@ -22,6 +20,4 @@ jobs:
run: |
ansible localhost -c local, -m command -a "{{ ansible_python_interpreter + ' -m pip install boto3'}}"
ansible localhost -c local -m aws_s3 \
-a "bucket=awx-public-ci-files object=${GITHUB_REF##*/}/schema.json mode=delete permission=public-read"
-a "bucket=awx-public-ci-files object=${GITHUB_REF##*/}/schema.json mode=delobj permission=public-read"

View File

@@ -7,7 +7,11 @@ env:
on:
release:
types: [published]
workflow_dispatch:
inputs:
tag_name:
description: 'Name for the tag of the release.'
required: true
permissions:
contents: read # to fetch code (actions/checkout)
@@ -17,6 +21,16 @@ jobs:
runs-on: ubuntu-latest
timeout-minutes: 90
steps:
- name: Set GitHub Env vars for workflow_dispatch event
if: ${{ github.event_name == 'workflow_dispatch' }}
run: |
echo "TAG_NAME=${{ github.event.inputs.tag_name }}" >> $GITHUB_ENV
- name: Set GitHub Env vars if release event
if: ${{ github.event_name == 'release' }}
run: |
echo "TAG_NAME=${{ env.TAG_NAME }}" >> $GITHUB_ENV
- name: Checkout awx
uses: actions/checkout@v3
@@ -43,16 +57,18 @@ jobs:
- name: Build collection and publish to galaxy
env:
COLLECTION_NAMESPACE: ${{ env.collection_namespace }}
COLLECTION_VERSION: ${{ github.event.release.tag_name }}
COLLECTION_VERSION: ${{ env.TAG_NAME }}
COLLECTION_TEMPLATE_VERSION: true
run: |
make build_collection
if [ "$(curl -L --head -sw '%{http_code}' https://galaxy.ansible.com/download/${{ env.collection_namespace }}-awx-${{ github.event.release.tag_name }}.tar.gz | tail -1)" == "302" ] ; then \
echo "Galaxy release already done"; \
else \
curl_with_redirects=$(curl --head -sLw '%{http_code}' https://galaxy.ansible.com/download/${{ env.collection_namespace }}-awx-${{ env.TAG_NAME }}.tar.gz | tail -1)
curl_without_redirects=$(curl --head -sw '%{http_code}' https://galaxy.ansible.com/download/${{ env.collection_namespace }}-awx-${{ env.TAG_NAME }}.tar.gz | tail -1)
if [[ "$curl_with_redirects" == "302" ]] || [[ "$curl_without_redirects" == "302" ]]; then
echo "Galaxy release already done";
else
ansible-galaxy collection publish \
--token=${{ secrets.GALAXY_TOKEN }} \
awx_collection_build/${{ env.collection_namespace }}-awx-${{ github.event.release.tag_name }}.tar.gz; \
awx_collection_build/${{ env.collection_namespace }}-awx-${{ env.TAG_NAME }}.tar.gz;
fi
- name: Set official pypi info
@@ -64,6 +80,8 @@ jobs:
if: ${{ github.repository_owner != 'ansible' }}
- name: Build awxkit and upload to pypi
env:
SETUPTOOLS_SCM_PRETEND_VERSION: ${{ env.TAG_NAME }}
run: |
git reset --hard
cd awxkit && python3 setup.py sdist bdist_wheel
@@ -83,11 +101,15 @@ jobs:
- name: Re-tag and promote awx image
run: |
docker pull ghcr.io/${{ github.repository }}:${{ github.event.release.tag_name }}
docker tag ghcr.io/${{ github.repository }}:${{ github.event.release.tag_name }} quay.io/${{ github.repository }}:${{ github.event.release.tag_name }}
docker tag ghcr.io/${{ github.repository }}:${{ github.event.release.tag_name }} quay.io/${{ github.repository }}:latest
docker push quay.io/${{ github.repository }}:${{ github.event.release.tag_name }}
docker push quay.io/${{ github.repository }}:latest
docker pull ghcr.io/${{ github.repository_owner }}/awx-ee:${{ github.event.release.tag_name }}
docker tag ghcr.io/${{ github.repository_owner }}/awx-ee:${{ github.event.release.tag_name }} quay.io/${{ github.repository_owner }}/awx-ee:${{ github.event.release.tag_name }}
docker push quay.io/${{ github.repository_owner }}/awx-ee:${{ github.event.release.tag_name }}
docker buildx imagetools create \
ghcr.io/${{ github.repository }}:${{ env.TAG_NAME }} \
--tag quay.io/${{ github.repository }}:${{ env.TAG_NAME }}
docker buildx imagetools create \
ghcr.io/${{ github.repository }}:${{ env.TAG_NAME }} \
--tag quay.io/${{ github.repository }}:latest
- name: Re-tag and promote awx-ee image
run: |
docker buildx imagetools create \
ghcr.io/${{ github.repository_owner }}/awx-ee:${{ env.TAG_NAME }} \
--tag quay.io/${{ github.repository_owner }}/awx-ee:${{ env.TAG_NAME }}

View File

@@ -49,13 +49,11 @@ jobs:
with:
path: awx
- name: Get python version from Makefile
run: echo py_version=`make PYTHON_VERSION` >> $GITHUB_ENV
- name: Install python ${{ env.py_version }}
uses: actions/setup-python@v4
- name: Checkout awx-operator
uses: actions/checkout@v3
with:
python-version: ${{ env.py_version }}
repository: ${{ github.repository_owner }}/awx-operator
path: awx-operator
- name: Checkout awx-logos
uses: actions/checkout@v3
@@ -63,50 +61,84 @@ jobs:
repository: ansible/awx-logos
path: awx-logos
- name: Checkout awx-operator
uses: actions/checkout@v3
- name: Get python version from Makefile
working-directory: awx
run: echo py_version=`make PYTHON_VERSION` >> $GITHUB_ENV
- name: Install python ${{ env.py_version }}
uses: actions/setup-python@v4
with:
repository: ${{ github.repository_owner }}/awx-operator
path: awx-operator
python-version: ${{ env.py_version }}
- name: Install playbook dependencies
run: |
python3 -m pip install docker
- name: Build and stage AWX
- name: Log into registry ghcr.io
uses: docker/login-action@v3
with:
registry: ghcr.io
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Copy logos for inclusion in sdist for official build
working-directory: awx
run: |
ansible-playbook -v tools/ansible/build.yml \
-e registry=ghcr.io \
-e registry_username=${{ github.actor }} \
-e registry_password=${{ secrets.GITHUB_TOKEN }} \
-e awx_image=${{ github.repository }} \
-e awx_version=${{ github.event.inputs.version }} \
-e ansible_python_interpreter=$(which python3) \
-e push=yes \
-e awx_official=yes
cp ../awx-logos/awx/ui/client/assets/* awx/ui/public/static/media/
- name: Log in to GHCR
run: |
echo ${{ secrets.GITHUB_TOKEN }} | docker login ghcr.io -u ${{ github.actor }} --password-stdin
- name: Setup node and npm
uses: actions/setup-node@v2
with:
node-version: '16.13.1'
- name: Log in to Quay
- name: Prebuild UI for awx image (to speed up build process)
working-directory: awx
run: |
echo ${{ secrets.QUAY_TOKEN }} | docker login quay.io -u ${{ secrets.QUAY_USER }} --password-stdin
sudo apt-get install gettext
make ui-release
make ui-next
- name: Set build env variables
run: |
echo "DEV_DOCKER_TAG_BASE=ghcr.io/${OWNER,,}" >> $GITHUB_ENV
echo "COMPOSE_TAG=${{ github.event.inputs.version }}" >> $GITHUB_ENV
echo "VERSION=${{ github.event.inputs.version }}" >> $GITHUB_ENV
echo "AWX_TEST_VERSION=${{ github.event.inputs.version }}" >> $GITHUB_ENV
echo "AWX_TEST_IMAGE=ghcr.io/${OWNER,,}/awx" >> $GITHUB_ENV
echo "AWX_EE_TEST_IMAGE=ghcr.io/${OWNER,,}/awx-ee:${{ github.event.inputs.version }}" >> $GITHUB_ENV
echo "AWX_OPERATOR_TEST_IMAGE=ghcr.io/${OWNER,,}/awx-operator:${{ github.event.inputs.operator_version }}" >> $GITHUB_ENV
env:
OWNER: ${{ github.repository_owner }}
- name: Build and stage AWX
working-directory: awx
env:
DOCKER_BUILDX_PUSH: true
HEADLESS: false
PLATFORMS: linux/amd64,linux/arm64
run: |
make awx-kube-buildx
- name: tag awx-ee:latest with version input
run: |
docker pull quay.io/ansible/awx-ee:latest
docker tag quay.io/ansible/awx-ee:latest ghcr.io/${{ github.repository_owner }}/awx-ee:${{ github.event.inputs.version }}
docker push ghcr.io/${{ github.repository_owner }}/awx-ee:${{ github.event.inputs.version }}
docker buildx imagetools create \
quay.io/ansible/awx-ee:latest \
--tag ${AWX_EE_TEST_IMAGE}
- name: Build and stage awx-operator
- name: Stage awx-operator image
working-directory: awx-operator
run: |
BUILD_ARGS="--build-arg DEFAULT_AWX_VERSION=${{ github.event.inputs.version }} \
--build-arg OPERATOR_VERSION=${{ github.event.inputs.operator_version }}" \
IMAGE_TAG_BASE=ghcr.io/${{ github.repository_owner }}/awx-operator \
VERSION=${{ github.event.inputs.operator_version }} make docker-build docker-push
BUILD_ARGS="--build-arg DEFAULT_AWX_VERSION=${{ github.event.inputs.version}} \
--build-arg OPERATOR_VERSION=${{ github.event.inputs.operator_version }}" \
IMG=${AWX_OPERATOR_TEST_IMAGE} \
make docker-buildx
- name: Pulling images for test deployment with awx-operator
# awx-operator molecule tests expect to kind-load the image, but buildx exports the image to a registry, not locally
run: |
docker pull ${AWX_OPERATOR_TEST_IMAGE}
docker pull ${AWX_EE_TEST_IMAGE}
docker pull ${AWX_TEST_IMAGE}:${AWX_TEST_VERSION}
- name: Run test deployment with awx-operator
working-directory: awx-operator
@@ -116,10 +148,6 @@ jobs:
sudo rm -f $(which kustomize)
make kustomize
KUSTOMIZE_PATH=$(readlink -f bin/kustomize) molecule test -s kind
env:
AWX_TEST_IMAGE: ${{ github.repository }}
AWX_TEST_VERSION: ${{ github.event.inputs.version }}
AWX_EE_TEST_IMAGE: ghcr.io/${{ github.repository_owner }}/awx-ee:${{ github.event.inputs.version }}
- name: Create draft release for AWX
working-directory: awx

.gitignore vendored
View File

@@ -46,6 +46,11 @@ tools/docker-compose/overrides/
tools/docker-compose-minikube/_sources
tools/docker-compose/keycloak.awx.realm.json
!tools/docker-compose/editable_dependencies
tools/docker-compose/editable_dependencies/*
!tools/docker-compose/editable_dependencies/README.md
!tools/docker-compose/editable_dependencies/install.sh
# Tower setup playbook testing
setup/test/roles/postgresql
**/provision_docker
@@ -169,3 +174,6 @@ awx/ui_next/build
# Docs build stuff
docs/docsite/build/
_readthedocs/
# Pyenv
.python-version

.vscode/launch.json vendored Normal file
View File

@@ -0,0 +1,113 @@
{
"version": "0.2.0",
"configurations": [
{
"name": "run_ws_heartbeat",
"type": "debugpy",
"request": "launch",
"program": "manage.py",
"args": ["run_ws_heartbeat"],
"django": true,
"preLaunchTask": "stop awx-ws-heartbeat",
"postDebugTask": "start awx-ws-heartbeat"
},
{
"name": "run_cache_clear",
"type": "debugpy",
"request": "launch",
"program": "manage.py",
"args": ["run_cache_clear"],
"django": true,
"preLaunchTask": "stop awx-cache-clear",
"postDebugTask": "start awx-cache-clear"
},
{
"name": "run_callback_receiver",
"type": "debugpy",
"request": "launch",
"program": "manage.py",
"args": ["run_callback_receiver"],
"django": true,
"preLaunchTask": "stop awx-receiver",
"postDebugTask": "start awx-receiver"
},
{
"name": "run_dispatcher",
"type": "debugpy",
"request": "launch",
"program": "manage.py",
"args": ["run_dispatcher"],
"django": true,
"preLaunchTask": "stop awx-dispatcher",
"postDebugTask": "start awx-dispatcher"
},
{
"name": "run_rsyslog_configurer",
"type": "debugpy",
"request": "launch",
"program": "manage.py",
"args": ["run_rsyslog_configurer"],
"django": true,
"preLaunchTask": "stop awx-rsyslog-configurer",
"postDebugTask": "start awx-rsyslog-configurer"
},
{
"name": "run_cache_clear",
"type": "debugpy",
"request": "launch",
"program": "manage.py",
"args": ["run_cache_clear"],
"django": true,
"preLaunchTask": "stop awx-cache-clear",
"postDebugTask": "start awx-cache-clear"
},
{
"name": "run_wsrelay",
"type": "debugpy",
"request": "launch",
"program": "manage.py",
"args": ["run_wsrelay"],
"django": true,
"preLaunchTask": "stop awx-wsrelay",
"postDebugTask": "start awx-wsrelay"
},
{
"name": "daphne",
"type": "debugpy",
"request": "launch",
"program": "/var/lib/awx/venv/awx/bin/daphne",
"args": ["-b", "127.0.0.1", "-p", "8051", "awx.asgi:channel_layer"],
"django": true,
"preLaunchTask": "stop awx-daphne",
"postDebugTask": "start awx-daphne"
},
{
"name": "runserver(uwsgi alternative)",
"type": "debugpy",
"request": "launch",
"program": "manage.py",
"args": ["runserver", "127.0.0.1:8052"],
"django": true,
"preLaunchTask": "stop awx-uwsgi",
"postDebugTask": "start awx-uwsgi"
},
{
"name": "runserver_plus(uwsgi alternative)",
"type": "debugpy",
"request": "launch",
"program": "manage.py",
"args": ["runserver_plus", "127.0.0.1:8052"],
"django": true,
"preLaunchTask": "stop awx-uwsgi and install Werkzeug",
"postDebugTask": "start awx-uwsgi"
},
{
"name": "shell_plus",
"type": "debugpy",
"request": "launch",
"program": "manage.py",
"args": ["shell_plus"],
"django": true,
},
]
}

.vscode/tasks.json vendored Normal file
View File

@@ -0,0 +1,100 @@
{
"version": "2.0.0",
"tasks": [
{
"label": "start awx-cache-clear",
"type": "shell",
"command": "supervisorctl start tower-processes:awx-cache-clear"
},
{
"label": "stop awx-cache-clear",
"type": "shell",
"command": "supervisorctl stop tower-processes:awx-cache-clear"
},
{
"label": "start awx-daphne",
"type": "shell",
"command": "supervisorctl start tower-processes:awx-daphne"
},
{
"label": "stop awx-daphne",
"type": "shell",
"command": "supervisorctl stop tower-processes:awx-daphne"
},
{
"label": "start awx-dispatcher",
"type": "shell",
"command": "supervisorctl start tower-processes:awx-dispatcher"
},
{
"label": "stop awx-dispatcher",
"type": "shell",
"command": "supervisorctl stop tower-processes:awx-dispatcher"
},
{
"label": "start awx-receiver",
"type": "shell",
"command": "supervisorctl start tower-processes:awx-receiver"
},
{
"label": "stop awx-receiver",
"type": "shell",
"command": "supervisorctl stop tower-processes:awx-receiver"
},
{
"label": "start awx-rsyslog-configurer",
"type": "shell",
"command": "supervisorctl start tower-processes:awx-rsyslog-configurer"
},
{
"label": "stop awx-rsyslog-configurer",
"type": "shell",
"command": "supervisorctl stop tower-processes:awx-rsyslog-configurer"
},
{
"label": "start awx-rsyslogd",
"type": "shell",
"command": "supervisorctl start tower-processes:awx-rsyslogd"
},
{
"label": "stop awx-rsyslogd",
"type": "shell",
"command": "supervisorctl stop tower-processes:awx-rsyslogd"
},
{
"label": "start awx-uwsgi",
"type": "shell",
"command": "supervisorctl start tower-processes:awx-uwsgi"
},
{
"label": "stop awx-uwsgi",
"type": "shell",
"command": "supervisorctl stop tower-processes:awx-uwsgi"
},
{
"label": "stop awx-uwsgi and install Werkzeug",
"type": "shell",
"command": "pip install Werkzeug; supervisorctl stop tower-processes:awx-uwsgi"
},
{
"label": "start awx-ws-heartbeat",
"type": "shell",
"command": "supervisorctl start tower-processes:awx-ws-heartbeat"
},
{
"label": "stop awx-ws-heartbeat",
"type": "shell",
"command": "supervisorctl stop tower-processes:awx-ws-heartbeat"
},
{
"label": "start awx-wsrelay",
"type": "shell",
"command": "supervisorctl start tower-processes:awx-wsrelay"
},
{
"label": "stop awx-wsrelay",
"type": "shell",
"command": "supervisorctl stop tower-processes:awx-wsrelay"
}
]
}

View File

@@ -80,7 +80,7 @@ If any of those items are missing your pull request will still get the `needs_tr
Currently you can expect awxbot to add common labels such as `state:needs_triage`, `type:bug`, `component:docs`, etc...
These labels are determined by the template data. Please use the template and fill it out as accurately as possible.
The `state:needs_triage` label will will remain on your pull request until a person has looked at it.
The `state:needs_triage` label will remain on your pull request until a person has looked at it.
You can also expect the bot to CC maintainers of specific areas of the code, this will notify them that there is a pull request by placing a comment on the pull request.
The comment will look something like `CC @matburt @wwitzel3 ...`.

View File

@@ -1,8 +1,8 @@
-include awx/ui_next/Makefile
PYTHON := $(notdir $(shell for i in python3.9 python3; do command -v $$i; done|sed 1q))
PYTHON := $(notdir $(shell for i in python3.11 python3; do command -v $$i; done|sed 1q))
SHELL := bash
DOCKER_COMPOSE ?= docker-compose
DOCKER_COMPOSE ?= docker compose
OFFICIAL ?= no
NODE ?= node
NPM_BIN ?= npm
@@ -10,7 +10,7 @@ KIND_BIN ?= $(shell which kind)
CHROMIUM_BIN=/tmp/chrome-linux/chrome
GIT_BRANCH ?= $(shell git rev-parse --abbrev-ref HEAD)
MANAGEMENT_COMMAND ?= awx-manage
VERSION ?= $(shell $(PYTHON) tools/scripts/scm_version.py)
VERSION ?= $(shell $(PYTHON) tools/scripts/scm_version.py 2> /dev/null)
# ansible-test requires a semver compatible version, so we allow overrides to hack it
COLLECTION_VERSION ?= $(shell $(PYTHON) tools/scripts/scm_version.py | cut -d . -f 1-3)
@@ -47,6 +47,8 @@ VAULT ?= false
VAULT_TLS ?= false
# If set to true docker-compose will also start a tacacs+ instance
TACACS ?= false
# If set to true docker-compose will install editable dependencies
EDITABLE_DEPENDENCIES ?= false
VENV_BASE ?= /var/lib/awx/venv
@@ -63,7 +65,7 @@ RECEPTOR_IMAGE ?= quay.io/ansible/receptor:devel
SRC_ONLY_PKGS ?= cffi,pycparser,psycopg,twilio
# These should be upgraded in the AWX and Ansible venv before attempting
# to install the actual requirements
VENV_BOOTSTRAP ?= pip==21.2.4 setuptools==65.6.3 setuptools_scm[toml]==8.0.4 wheel==0.38.4
VENV_BOOTSTRAP ?= pip==21.2.4 setuptools==69.0.2 setuptools_scm[toml]==8.0.4 wheel==0.42.0
NAME ?= awx
@@ -75,6 +77,9 @@ SDIST_TAR_FILE ?= $(SDIST_TAR_NAME).tar.gz
I18N_FLAG_FILE = .i18n_built
## PLATFORMS defines the target platforms for the manager image be build to provide support to multiple
PLATFORMS ?= linux/amd64,linux/arm64 # linux/ppc64le,linux/s390x
.PHONY: awx-link clean clean-tmp clean-venv requirements requirements_dev \
develop refresh adduser migrate dbchange \
receiver test test_unit test_coverage coverage_html \
@@ -213,8 +218,6 @@ collectstatic:
fi; \
$(PYTHON) manage.py collectstatic --clear --noinput > /dev/null 2>&1
DEV_RELOAD_COMMAND ?= supervisorctl restart tower-processes:*
uwsgi: collectstatic
@if [ "$(VENV_BASE)" ]; then \
. $(VENV_BASE)/awx/bin/activate; \
@@ -222,7 +225,7 @@ uwsgi: collectstatic
uwsgi /etc/tower/uwsgi.ini
awx-autoreload:
@/awx_devel/tools/docker-compose/awx-autoreload /awx_devel/awx "$(DEV_RELOAD_COMMAND)"
@/awx_devel/tools/docker-compose/awx-autoreload /awx_devel/awx
daphne:
@if [ "$(VENV_BASE)" ]; then \
@@ -302,7 +305,7 @@ swagger: reports
@if [ "$(VENV_BASE)" ]; then \
. $(VENV_BASE)/awx/bin/activate; \
fi; \
(set -o pipefail && py.test $(PYTEST_ARGS) awx/conf/tests/functional awx/main/tests/functional/api awx/main/tests/docs --release=$(VERSION_TARGET) | tee reports/$@.report)
(set -o pipefail && py.test $(PYTEST_ARGS) awx/conf/tests/functional awx/main/tests/functional/api awx/main/tests/docs | tee reports/$@.report)
check: black
@@ -532,15 +535,23 @@ docker-compose-sources: .git/hooks/pre-commit
-e enable_vault=$(VAULT) \
-e vault_tls=$(VAULT_TLS) \
-e enable_tacacs=$(TACACS) \
$(EXTRA_SOURCES_ANSIBLE_OPTS)
-e install_editable_dependencies=$(EDITABLE_DEPENDENCIES) \
$(EXTRA_SOURCES_ANSIBLE_OPTS)
docker-compose: awx/projects docker-compose-sources
ansible-galaxy install --ignore-certs -r tools/docker-compose/ansible/requirements.yml;
ansible-playbook -i tools/docker-compose/inventory tools/docker-compose/ansible/initialize_containers.yml \
-e enable_vault=$(VAULT) \
-e vault_tls=$(VAULT_TLS);
-e vault_tls=$(VAULT_TLS) \
-e enable_ldap=$(LDAP); \
$(MAKE) docker-compose-up
docker-compose-up:
$(DOCKER_COMPOSE) -f tools/docker-compose/_sources/docker-compose.yml $(COMPOSE_OPTS) up $(COMPOSE_UP_OPTS) --remove-orphans
docker-compose-down:
$(DOCKER_COMPOSE) -f tools/docker-compose/_sources/docker-compose.yml $(COMPOSE_OPTS) down --remove-orphans
docker-compose-credential-plugins: awx/projects docker-compose-sources
echo -e "\033[0;31mTo generate a CyberArk Conjur API key: docker exec -it tools_conjur_1 conjurctl account create quick-start\033[0m"
$(DOCKER_COMPOSE) -f tools/docker-compose/_sources/docker-compose.yml -f tools/docker-credential-plugins-override.yml up --no-recreate awx_1 --remove-orphans
@@ -585,12 +596,27 @@ docker-compose-build: Dockerfile.dev
--build-arg BUILDKIT_INLINE_CACHE=1 \
--cache-from=$(DEV_DOCKER_TAG_BASE)/awx_devel:$(COMPOSE_TAG) .
.PHONY: docker-compose-buildx
## Build awx_devel image for docker compose development environment for multiple architectures
docker-compose-buildx: Dockerfile.dev
- docker buildx create --name docker-compose-buildx
docker buildx use docker-compose-buildx
- docker buildx build \
--push \
--build-arg BUILDKIT_INLINE_CACHE=1 \
--cache-from=$(DEV_DOCKER_TAG_BASE)/awx_devel:$(COMPOSE_TAG) \
--platform=$(PLATFORMS) \
--tag $(DEVEL_IMAGE_NAME) \
-f Dockerfile.dev .
- docker buildx rm docker-compose-buildx
docker-clean:
-$(foreach container_id,$(shell docker ps -f name=tools_awx -aq && docker ps -f name=tools_receptor -aq),docker stop $(container_id); docker rm -f $(container_id);)
-$(foreach image_id,$(shell docker images --filter=reference='*/*/*awx_devel*' --filter=reference='*/*awx_devel*' --filter=reference='*awx_devel*' -aq),docker rmi --force $(image_id);)
docker-clean-volumes: docker-compose-clean docker-compose-container-group-clean
docker volume rm -f tools_awx_db tools_vault_1 tools_grafana_storage tools_prometheus_storage $(docker volume ls --filter name=tools_redis_socket_ -q)
docker volume rm -f tools_var_lib_awx tools_awx_db tools_awx_db_15 tools_vault_1 tools_ldap_1 tools_grafana_storage tools_prometheus_storage $(shell docker volume ls --filter name=tools_redis_socket_ -q)
docker-refresh: docker-clean docker-compose
@@ -612,9 +638,6 @@ clean-elk:
docker rm tools_elasticsearch_1
docker rm tools_kibana_1
psql-container:
docker run -it --net tools_default --rm postgres:12 sh -c 'exec psql -h "postgres" -p "5432" -U postgres'
VERSION:
@echo "awx: $(VERSION)"
@@ -647,6 +670,21 @@ awx-kube-build: Dockerfile
--build-arg HEADLESS=$(HEADLESS) \
-t $(DEV_DOCKER_TAG_BASE)/awx:$(COMPOSE_TAG) .
## Build multi-arch awx image for deployment on Kubernetes environment.
awx-kube-buildx: Dockerfile
- docker buildx create --name awx-kube-buildx
docker buildx use awx-kube-buildx
- docker buildx build \
--push \
--build-arg VERSION=$(VERSION) \
--build-arg SETUPTOOLS_SCM_PRETEND_VERSION=$(VERSION) \
--build-arg HEADLESS=$(HEADLESS) \
--platform=$(PLATFORMS) \
--tag $(DEV_DOCKER_TAG_BASE)/awx:$(COMPOSE_TAG) \
-f Dockerfile .
- docker buildx rm awx-kube-buildx
.PHONY: Dockerfile.kube-dev
## Generate Docker.kube-dev for awx_kube_devel image
Dockerfile.kube-dev: tools/ansible/roles/dockerfile/templates/Dockerfile.j2
@@ -663,6 +701,18 @@ awx-kube-dev-build: Dockerfile.kube-dev
--cache-from=$(DEV_DOCKER_TAG_BASE)/awx_kube_devel:$(COMPOSE_TAG) \
-t $(DEV_DOCKER_TAG_BASE)/awx_kube_devel:$(COMPOSE_TAG) .
## Build and push multi-arch awx_kube_devel image for development on local Kubernetes environment.
awx-kube-dev-buildx: Dockerfile.kube-dev
- docker buildx create --name awx-kube-dev-buildx
docker buildx use awx-kube-dev-buildx
- docker buildx build \
--push \
--build-arg BUILDKIT_INLINE_CACHE=1 \
--cache-from=$(DEV_DOCKER_TAG_BASE)/awx_kube_devel:$(COMPOSE_TAG) \
--platform=$(PLATFORMS) \
--tag $(DEV_DOCKER_TAG_BASE)/awx_kube_devel:$(COMPOSE_TAG) \
-f Dockerfile.kube-dev .
- docker buildx rm awx-kube-dev-buildx
kind-dev-load: awx-kube-dev-build
$(KIND_BIN) load docker-image $(DEV_DOCKER_TAG_BASE)/awx_kube_devel:$(COMPOSE_TAG)

View File

@@ -154,10 +154,12 @@ def manage():
from django.conf import settings
from django.core.management import execute_from_command_line
# enforce the postgres version is equal to 12. if not, then terminate program with exit code of 1
# enforce the postgres version is a minimum of 12 (we need this for partitioning); if not, then terminate program with exit code of 1
# In the future if we require a feature of a version of postgres > 12 this should be updated to reflect that.
# The return of connection.pg_version is something like 12013
if not os.getenv('SKIP_PG_VERSION_CHECK', False) and not MODE == 'development':
if (connection.pg_version // 10000) < 12:
sys.stderr.write("Postgres version 12 is required\n")
sys.stderr.write("At a minimum, postgres version 12 is required\n")
sys.exit(1)
if len(sys.argv) >= 2 and sys.argv[1] in ('version', '--version'): # pragma: no cover

View File

@@ -93,6 +93,7 @@ register(
default='',
label=_('Login redirect override URL'),
help_text=_('URL to which unauthorized users will be redirected to log in. If blank, users will be sent to the login page.'),
warning_text=_('Changing the redirect URL could impact the ability to login if local authentication is also disabled.'),
category=_('Authentication'),
category_slug='authentication',
)

View File

@@ -30,11 +30,15 @@ from rest_framework.permissions import IsAuthenticated
from rest_framework.renderers import StaticHTMLRenderer
from rest_framework.negotiation import DefaultContentNegotiation
# django-ansible-base
from ansible_base.rest_filters.rest_framework.field_lookup_backend import FieldLookupBackend
from ansible_base.lib.utils.models import get_all_field_names
from ansible_base.rbac.models import RoleEvaluation, RoleDefinition
from ansible_base.rbac.permission_registry import permission_registry
# AWX
from awx.main.models import UnifiedJob, UnifiedJobTemplate, User, Role, Credential, WorkflowJobTemplateNode, WorkflowApprovalTemplate
from awx.main.models.rbac import give_creator_permissions
from awx.main.access import optimize_queryset
from awx.main.utils import camelcase_to_underscore, get_search_fields, getattrd, get_object_or_400, decrypt_field, get_awx_version
from awx.main.utils.licensing import server_product_name
@@ -91,7 +95,9 @@ class LoggedLoginView(auth_views.LoginView):
ret = super(LoggedLoginView, self).post(request, *args, **kwargs)
if request.user.is_authenticated:
logger.info(smart_str(u"User {} logged in from {}".format(self.request.user.username, request.META.get('REMOTE_ADDR', None))))
ret.set_cookie('userLoggedIn', 'true', secure=getattr(settings, 'SESSION_COOKIE_SECURE', False))
ret.set_cookie(
'userLoggedIn', 'true', secure=getattr(settings, 'SESSION_COOKIE_SECURE', False), samesite=getattr(settings, 'USER_COOKIE_SAMESITE', 'Lax')
)
ret.setdefault('X-API-Session-Cookie-Name', getattr(settings, 'SESSION_COOKIE_NAME', 'awx_sessionid'))
return ret
@@ -103,6 +109,9 @@ class LoggedLoginView(auth_views.LoginView):
class LoggedLogoutView(auth_views.LogoutView):
success_url_allowed_hosts = set(settings.LOGOUT_ALLOWED_HOSTS.split(",")) if settings.LOGOUT_ALLOWED_HOSTS else set()
def dispatch(self, request, *args, **kwargs):
original_user = getattr(request, 'user', None)
ret = super(LoggedLogoutView, self).dispatch(request, *args, **kwargs)
@@ -472,7 +481,11 @@ class ListAPIView(generics.ListAPIView, GenericAPIView):
class ListCreateAPIView(ListAPIView, generics.ListCreateAPIView):
# Base class for a list view that allows creating new objects.
pass
def perform_create(self, serializer):
super().perform_create(serializer)
if serializer.Meta.model in permission_registry.all_registered_models:
if self.request and self.request.user:
give_creator_permissions(self.request.user, serializer.instance)
class ParentMixin(object):
@@ -792,6 +805,7 @@ class RetrieveUpdateDestroyAPIView(RetrieveUpdateAPIView, DestroyAPIView):
class ResourceAccessList(ParentMixin, ListAPIView):
deprecated = True
serializer_class = ResourceAccessListElementSerializer
ordering = ('username',)
@@ -799,6 +813,15 @@ class ResourceAccessList(ParentMixin, ListAPIView):
obj = self.get_parent_object()
content_type = ContentType.objects.get_for_model(obj)
if settings.ANSIBLE_BASE_ROLE_SYSTEM_ACTIVATED:
ancestors = set(RoleEvaluation.objects.filter(content_type_id=content_type.id, object_id=obj.id).values_list('role_id', flat=True))
qs = User.objects.filter(has_roles__in=ancestors) | User.objects.filter(is_superuser=True)
auditor_role = RoleDefinition.objects.filter(name="System Auditor").first()
if auditor_role:
qs |= User.objects.filter(role_assignments__role_definition=auditor_role)
return qs.distinct()
roles = set(Role.objects.filter(content_type=content_type, object_id=obj.id))
ancestors = set()
@@ -958,7 +981,7 @@ class CopyAPIView(GenericAPIView):
None, None, self.model, obj, request.user, create_kwargs=create_kwargs, copy_name=serializer.validated_data.get('name', '')
)
if hasattr(new_obj, 'admin_role') and request.user not in new_obj.admin_role.members.all():
new_obj.admin_role.members.add(request.user)
give_creator_permissions(request.user, new_obj)
if sub_objs:
permission_check_func = None
if hasattr(type(self), 'deep_copy_permission_check_func'):

View File

@@ -36,11 +36,13 @@ class Metadata(metadata.SimpleMetadata):
field_info = OrderedDict()
field_info['type'] = self.label_lookup[field]
field_info['required'] = getattr(field, 'required', False)
field_info['hidden'] = getattr(field, 'hidden', False)
text_attrs = [
'read_only',
'label',
'help_text',
'warning_text',
'min_length',
'max_length',
'min_value',

View File

@@ -6,7 +6,7 @@ import copy
import json
import logging
import re
from collections import OrderedDict
from collections import Counter, OrderedDict
from datetime import timedelta
from uuid import uuid4
@@ -43,11 +43,14 @@ from rest_framework.utils.serializer_helpers import ReturnList
# Django-Polymorphic
from polymorphic.models import PolymorphicModel
# django-ansible-base
from ansible_base.lib.utils.models import get_type_for_model
from ansible_base.rbac.models import RoleEvaluation, ObjectRole
from ansible_base.rbac import permission_registry
# AWX
from awx.main.access import get_user_capabilities
from awx.main.constants import ACTIVE_STATES, CENSOR_VALUE
from awx.main.constants import ACTIVE_STATES, CENSOR_VALUE, org_role_to_permission
from awx.main.models import (
ActivityStream,
AdHocCommand,
@@ -82,6 +85,7 @@ from awx.main.models import (
Project,
ProjectUpdate,
ProjectUpdateEvent,
ReceptorAddress,
RefreshToken,
Role,
Schedule,
@@ -101,7 +105,7 @@ from awx.main.models import (
CLOUD_INVENTORY_SOURCES,
)
from awx.main.models.base import VERBOSITY_CHOICES, NEW_JOB_TYPE_CHOICES
from awx.main.models.rbac import role_summary_fields_generator, RoleAncestorEntry
from awx.main.models.rbac import role_summary_fields_generator, give_creator_permissions, get_role_codenames, to_permissions, get_role_from_object_role
from awx.main.fields import ImplicitRoleField
from awx.main.utils import (
get_model_for_type,
@@ -190,6 +194,7 @@ SUMMARIZABLE_FK_FIELDS = {
'webhook_credential': DEFAULT_SUMMARY_FIELDS + ('kind', 'cloud', 'credential_type_id'),
'approved_or_denied_by': ('id', 'username', 'first_name', 'last_name'),
'credential_type': DEFAULT_SUMMARY_FIELDS,
'resource': ('ansible_id', 'resource_type'),
}
@@ -636,7 +641,7 @@ class BaseSerializer(serializers.ModelSerializer, metaclass=BaseSerializerMetacl
exclusions = self.get_validation_exclusions(self.instance)
obj = self.instance or self.Meta.model()
for k, v in attrs.items():
if k not in exclusions:
if k not in exclusions and k != 'canonical_address_port':
setattr(obj, k, v)
obj.full_clean(exclude=exclusions)
# full_clean may modify values on the instance; copy those changes
@@ -2761,13 +2766,26 @@ class ResourceAccessListElementSerializer(UserSerializer):
team_content_type = ContentType.objects.get_for_model(Team)
content_type = ContentType.objects.get_for_model(obj)
def get_roles_on_resource(parent_role):
"Returns a string list of the roles a parent_role has for current obj."
return list(
RoleAncestorEntry.objects.filter(ancestor=parent_role, content_type_id=content_type.id, object_id=obj.id)
.values_list('role_field', flat=True)
.distinct()
)
reversed_org_map = {}
for k, v in org_role_to_permission.items():
reversed_org_map[v] = k
reversed_role_map = {}
for k, v in to_permissions.items():
reversed_role_map[v] = k
def get_roles_from_perms(perm_list):
"""given a list of permission codenames return a list of role names"""
role_names = set()
for codename in perm_list:
action = codename.split('_', 1)[0]
if action in reversed_role_map:
role_names.add(reversed_role_map[action])
elif codename in reversed_org_map:
if isinstance(obj, Organization):
role_names.add(reversed_org_map[codename])
if 'view_organization' not in role_names:
role_names.add('read_role')
return list(role_names)
def format_role_perm(role):
role_dict = {'id': role.id, 'name': role.name, 'description': role.description}
@@ -2784,13 +2802,21 @@ class ResourceAccessListElementSerializer(UserSerializer):
else:
# Singleton roles should not be managed from this view, as per copy/edit rework spec
role_dict['user_capabilities'] = {'unattach': False}
return {'role': role_dict, 'descendant_roles': get_roles_on_resource(role)}
model_name = content_type.model
if isinstance(obj, Organization):
descendant_perms = [codename for codename in get_role_codenames(role) if codename.endswith(model_name) or codename.startswith('add_')]
else:
descendant_perms = [codename for codename in get_role_codenames(role) if codename.endswith(model_name)]
return {'role': role_dict, 'descendant_roles': get_roles_from_perms(descendant_perms)}
def format_team_role_perm(naive_team_role, permissive_role_ids):
ret = []
team = naive_team_role.content_object
team_role = naive_team_role
if naive_team_role.role_field == 'admin_role':
team_role = naive_team_role.content_object.member_role
team_role = team.member_role
for role in team_role.children.filter(id__in=permissive_role_ids).all():
role_dict = {
'id': role.id,
@@ -2810,10 +2836,87 @@ class ResourceAccessListElementSerializer(UserSerializer):
else:
# Singleton roles should not be managed from this view, as per copy/edit rework spec
role_dict['user_capabilities'] = {'unattach': False}
ret.append({'role': role_dict, 'descendant_roles': get_roles_on_resource(team_role)})
descendant_perms = list(
RoleEvaluation.objects.filter(role__in=team.has_roles.all(), object_id=obj.id, content_type_id=content_type.id)
.values_list('codename', flat=True)
.distinct()
)
ret.append({'role': role_dict, 'descendant_roles': get_roles_from_perms(descendant_perms)})
return ret
gfk_kwargs = dict(content_type_id=content_type.id, object_id=obj.id)
direct_permissive_role_ids = Role.objects.filter(**gfk_kwargs).values_list('id', flat=True)
if settings.ANSIBLE_BASE_ROLE_SYSTEM_ACTIVATED:
ret['summary_fields']['direct_access'] = []
ret['summary_fields']['indirect_access'] = []
new_roles_seen = set()
all_team_roles = set()
all_permissive_role_ids = set()
for evaluation in RoleEvaluation.objects.filter(role__in=user.has_roles.all(), **gfk_kwargs).prefetch_related('role'):
new_role = evaluation.role
if new_role.id in new_roles_seen:
continue
new_roles_seen.add(new_role.id)
old_role = get_role_from_object_role(new_role)
all_permissive_role_ids.add(old_role.id)
if int(new_role.object_id) == obj.id and new_role.content_type_id == content_type.id:
ret['summary_fields']['direct_access'].append(format_role_perm(old_role))
elif new_role.content_type_id == team_content_type.id:
all_team_roles.add(old_role)
else:
ret['summary_fields']['indirect_access'].append(format_role_perm(old_role))
# Lazy role creation gives us a big problem, where some intermediate roles are not easy to find
# like when a team has indirect permission, so here we get all roles the users teams have
# these contribute to all potential permission-granting roles of the object
user_teams_qs = permission_registry.team_model.objects.filter(member_roles__in=ObjectRole.objects.filter(users=user))
team_obj_roles = ObjectRole.objects.filter(teams__in=user_teams_qs)
for evaluation in RoleEvaluation.objects.filter(role__in=team_obj_roles, **gfk_kwargs).prefetch_related('role'):
new_role = evaluation.role
if new_role.id in new_roles_seen:
continue
new_roles_seen.add(new_role.id)
old_role = get_role_from_object_role(new_role)
all_permissive_role_ids.add(old_role.id)
# In DAB RBAC, superuser is strictly a user flag, and global roles are not in the RoleEvaluation table
if user.is_superuser:
ret['summary_fields'].setdefault('indirect_access', [])
all_role_names = [field.name for field in obj._meta.get_fields() if isinstance(field, ImplicitRoleField)]
ret['summary_fields']['indirect_access'].append(
{
"role": {
"id": None,
"name": _("System Administrator"),
"description": _("Can manage all aspects of the system"),
"user_capabilities": {"unattach": False},
},
"descendant_roles": all_role_names,
}
)
elif user.is_system_auditor:
ret['summary_fields'].setdefault('indirect_access', [])
ret['summary_fields']['indirect_access'].append(
{
"role": {
"id": None,
"name": _("System Auditor"),
"description": _("Can view all aspects of the system"),
"user_capabilities": {"unattach": False},
},
"descendant_roles": ["read_role"],
}
)
ret['summary_fields']['direct_access'].extend([y for x in (format_team_role_perm(r, all_permissive_role_ids) for r in all_team_roles) for y in x])
return ret
direct_permissive_role_ids = Role.objects.filter(content_type=content_type, object_id=obj.id).values_list('id', flat=True)
all_permissive_role_ids = Role.objects.filter(content_type=content_type, object_id=obj.id).values_list('ancestors__id', flat=True)
direct_access_roles = user.roles.filter(id__in=direct_permissive_role_ids).all()
@@ -3082,7 +3185,7 @@ class CredentialSerializerCreate(CredentialSerializer):
credential = super(CredentialSerializerCreate, self).create(validated_data)
if user:
credential.admin_role.members.add(user)
give_creator_permissions(user, credential)
if team:
if not credential.organization or team.organization.id != credential.organization.id:
raise serializers.ValidationError({"detail": _("Credential organization must be set and match before assigning to a team")})
@@ -5176,16 +5279,21 @@ class NotificationTemplateSerializer(BaseSerializer):
body = messages[event].get('body', {})
if body:
try:
rendered_body = (
sandbox.ImmutableSandboxedEnvironment(undefined=DescriptiveUndefined).from_string(body).render(JobNotificationMixin.context_stub())
)
potential_body = json.loads(rendered_body)
if not isinstance(potential_body, dict):
error_list.append(
_("Webhook body for '{}' should be a json dictionary. Found type '{}'.".format(event, type(potential_body).__name__))
)
except json.JSONDecodeError as exc:
error_list.append(_("Webhook body for '{}' is not a valid json dictionary ({}).".format(event, exc)))
sandbox.ImmutableSandboxedEnvironment(undefined=DescriptiveUndefined).from_string(body).render(JobNotificationMixin.context_stub())
# https://github.com/ansible/awx/issues/14410
# When rendering something such as "{{ job.id }}"
# the return type is not a dict, unlike "{{ job_metadata }}" which is a dict
# potential_body = json.loads(rendered_body)
# if not isinstance(potential_body, dict):
# error_list.append(
# _("Webhook body for '{}' should be a json dictionary. Found type '{}'.".format(event, type(potential_body).__name__))
# )
except Exception as exc:
error_list.append(_("Webhook body for '{}' is not valid. The following gave an error ({}).".format(event, exc)))
if error_list:
raise serializers.ValidationError(error_list)
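For context on the commented-out check: rendering a template like "{{ job.id }}" yields a bare scalar, so the old json.loads-must-be-a-dict rule rejected valid bodies (see issue 14410 linked above). A minimal sketch using only the public jinja2 sandbox API; the context stub below is hypothetical, not JobNotificationMixin.context_stub():
import json
from jinja2 import sandbox

env = sandbox.ImmutableSandboxedEnvironment()
context = {"job": {"id": 42}, "job_metadata": json.dumps({"id": 42})}  # hypothetical stub

scalar = env.from_string("{{ job.id }}").render(context)         # "42"
mapping = env.from_string("{{ job_metadata }}").render(context)  # '{"id": 42}'

assert not isinstance(json.loads(scalar), dict)  # the old check rejected this
assert isinstance(json.loads(mapping), dict)     # only this shape passed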
@@ -5458,17 +5566,25 @@ class ScheduleSerializer(LaunchConfigurationBaseSerializer, SchedulePreviewSeria
class InstanceLinkSerializer(BaseSerializer):
class Meta:
model = InstanceLink
fields = ('id', 'url', 'related', 'source', 'target', 'link_state')
fields = ('id', 'related', 'source', 'target', 'target_full_address', 'link_state')
source = serializers.SlugRelatedField(slug_field="hostname", queryset=Instance.objects.all())
target = serializers.SlugRelatedField(slug_field="hostname", queryset=Instance.objects.all())
target = serializers.SerializerMethodField()
target_full_address = serializers.SerializerMethodField()
def get_related(self, obj):
res = super(InstanceLinkSerializer, self).get_related(obj)
res['source_instance'] = self.reverse('api:instance_detail', kwargs={'pk': obj.source.id})
res['target_instance'] = self.reverse('api:instance_detail', kwargs={'pk': obj.target.id})
res['target_address'] = self.reverse('api:receptor_address_detail', kwargs={'pk': obj.target.id})
return res
def get_target(self, obj):
return obj.target.instance.hostname
def get_target_full_address(self, obj):
return obj.target.get_full_address()
class InstanceNodeSerializer(BaseSerializer):
class Meta:
@@ -5476,6 +5592,29 @@ class InstanceNodeSerializer(BaseSerializer):
fields = ('id', 'hostname', 'node_type', 'node_state', 'enabled')
class ReceptorAddressSerializer(BaseSerializer):
full_address = serializers.SerializerMethodField()
class Meta:
model = ReceptorAddress
fields = (
'id',
'url',
'address',
'port',
'protocol',
'websocket_path',
'is_internal',
'canonical',
'instance',
'peers_from_control_nodes',
'full_address',
)
def get_full_address(self, obj):
return obj.get_full_address()
class InstanceSerializer(BaseSerializer):
show_capabilities = ['edit']
@@ -5484,11 +5623,17 @@ class InstanceSerializer(BaseSerializer):
jobs_running = serializers.IntegerField(help_text=_('Count of jobs in the running or waiting state that are targeted for this instance'), read_only=True)
jobs_total = serializers.IntegerField(help_text=_('Count of all jobs that target this instance'), read_only=True)
health_check_pending = serializers.SerializerMethodField()
peers = serializers.SlugRelatedField(many=True, required=False, slug_field="hostname", queryset=Instance.objects.all())
peers = serializers.PrimaryKeyRelatedField(
help_text=_('Primary keys of receptor addresses to peer to.'), many=True, required=False, queryset=ReceptorAddress.objects.all()
)
reverse_peers = serializers.SerializerMethodField()
listener_port = serializers.IntegerField(source='canonical_address_port', required=False, allow_null=True)
peers_from_control_nodes = serializers.BooleanField(source='canonical_address_peers_from_control_nodes', required=False)
protocol = serializers.SerializerMethodField()
class Meta:
model = Instance
read_only_fields = ('ip_address', 'uuid', 'version')
read_only_fields = ('ip_address', 'uuid', 'version', 'managed', 'reverse_peers')
fields = (
'id',
'hostname',
@@ -5519,10 +5664,13 @@ class InstanceSerializer(BaseSerializer):
'managed_by_policy',
'node_type',
'node_state',
'managed',
'ip_address',
'listener_port',
'peers',
'reverse_peers',
'listener_port',
'peers_from_control_nodes',
'protocol',
)
extra_kwargs = {
'node_type': {'initial': Instance.Types.EXECUTION, 'default': Instance.Types.EXECUTION},
@@ -5544,16 +5692,54 @@ class InstanceSerializer(BaseSerializer):
def get_related(self, obj):
res = super(InstanceSerializer, self).get_related(obj)
res['receptor_addresses'] = self.reverse('api:instance_receptor_addresses_list', kwargs={'pk': obj.pk})
res['jobs'] = self.reverse('api:instance_unified_jobs_list', kwargs={'pk': obj.pk})
res['instance_groups'] = self.reverse('api:instance_instance_groups_list', kwargs={'pk': obj.pk})
if obj.node_type in [Instance.Types.EXECUTION, Instance.Types.HOP]:
res['install_bundle'] = self.reverse('api:instance_install_bundle', kwargs={'pk': obj.pk})
res['peers'] = self.reverse('api:instance_peers_list', kwargs={"pk": obj.pk})
res['instance_groups'] = self.reverse('api:instance_instance_groups_list', kwargs={'pk': obj.pk})
if obj.node_type in [Instance.Types.EXECUTION, Instance.Types.HOP] and not obj.managed:
res['install_bundle'] = self.reverse('api:instance_install_bundle', kwargs={'pk': obj.pk})
if self.context['request'].user.is_superuser or self.context['request'].user.is_system_auditor:
if obj.node_type == 'execution':
res['health_check'] = self.reverse('api:instance_health_check', kwargs={'pk': obj.pk})
return res
def create_or_update(self, validated_data, obj=None, create=True):
# create a managed receptor address if listener port is defined
port = validated_data.pop('listener_port', -1)
peers_from_control_nodes = validated_data.pop('peers_from_control_nodes', -1)
# delete the receptor address if the port is explicitly set to None
if obj and port is None:
obj.receptor_addresses.filter(address=obj.hostname).delete()
if create:
instance = super(InstanceSerializer, self).create(validated_data)
else:
instance = super(InstanceSerializer, self).update(obj, validated_data)
instance.refresh_from_db() # instance canonical address lookup is deferred, so needs to be reloaded
# only create or update if port is defined in validated_data or already exists in the
# canonical address
# this prevents creating a receptor address if peers_from_control_nodes is in
# validated_data but a port is not set
if (port is not None and port != -1) or instance.canonical_address_port:
kwargs = {}
if port != -1:
kwargs['port'] = port
if peers_from_control_nodes != -1:
kwargs['peers_from_control_nodes'] = peers_from_control_nodes
if kwargs:
kwargs['canonical'] = True
instance.receptor_addresses.update_or_create(address=instance.hostname, defaults=kwargs)
return instance
def create(self, validated_data):
return self.create_or_update(validated_data, create=True)
def update(self, obj, validated_data):
return self.create_or_update(validated_data, obj, create=False)
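create_or_update() above uses -1 as a sentinel so a field omitted from the request can be told apart from a field explicitly sent as null. A standalone sketch of the pattern, with hypothetical names:
_MISSING = -1  # sentinel: key absent from validated_data (None means explicit null)

def resolve(validated_data, key, current):
    value = validated_data.pop(key, _MISSING)
    if value == _MISSING:
        return current  # not mentioned: keep the existing value
    return value        # mentioned, possibly None: explicit null clears it

assert resolve({}, 'listener_port', 27199) == 27199
assert resolve({'listener_port': None}, 'listener_port', 27199) is None
assert resolve({'listener_port': 6789}, 'listener_port', 27199) == 6789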
def get_summary_fields(self, obj):
summary = super().get_summary_fields(obj)
@@ -5563,6 +5749,16 @@ class InstanceSerializer(BaseSerializer):
return summary
def get_reverse_peers(self, obj):
return Instance.objects.prefetch_related('peers').filter(peers__in=obj.receptor_addresses.all()).values_list('id', flat=True)
def get_protocol(self, obj):
# note: don't create a different query for receptor addresses, as this is prefetched on the View for optimization
for addr in obj.receptor_addresses.all():
if addr.canonical:
return addr.protocol
return ""
def get_consumed_capacity(self, obj):
return obj.consumed_capacity
@@ -5576,47 +5772,20 @@ class InstanceSerializer(BaseSerializer):
return obj.health_check_pending
def validate(self, attrs):
def get_field_from_model_or_attrs(fd):
return attrs.get(fd, self.instance and getattr(self.instance, fd) or None)
def check_peers_changed():
'''
return True if
- 'peers' is present in attrs, and
- the instance's current peers differ from the peers in attrs
'''
return self.instance and 'peers' in attrs and set(self.instance.peers.all()) != set(attrs['peers'])
# Oddly, using 'source' on a DRF field populates attrs with the source name, so we should rename it back
if 'canonical_address_port' in attrs:
attrs['listener_port'] = attrs.pop('canonical_address_port')
if 'canonical_address_peers_from_control_nodes' in attrs:
attrs['peers_from_control_nodes'] = attrs.pop('canonical_address_peers_from_control_nodes')
if not self.instance and not settings.IS_K8S:
raise serializers.ValidationError(_("Can only create instances on Kubernetes or OpenShift."))
node_type = get_field_from_model_or_attrs("node_type")
peers_from_control_nodes = get_field_from_model_or_attrs("peers_from_control_nodes")
listener_port = get_field_from_model_or_attrs("listener_port")
peers = attrs.get('peers', [])
if peers_from_control_nodes and node_type not in (Instance.Types.EXECUTION, Instance.Types.HOP):
raise serializers.ValidationError(_("peers_from_control_nodes can only be enabled for execution or hop nodes."))
if node_type in [Instance.Types.CONTROL, Instance.Types.HYBRID]:
if check_peers_changed():
raise serializers.ValidationError(
_("Setting peers manually for control nodes is not allowed. Enable peers_from_control_nodes on the hop and execution nodes instead.")
)
if not listener_port and peers_from_control_nodes:
raise serializers.ValidationError(_("Field listener_port must be a valid integer when peers_from_control_nodes is enabled."))
if not listener_port and self.instance and self.instance.peers_from.exists():
raise serializers.ValidationError(_("Field listener_port must be a valid integer when other nodes peer to it."))
for peer in peers:
if peer.listener_port is None:
raise serializers.ValidationError(_("Field listener_port must be set on peer ") + peer.hostname + ".")
if not settings.IS_K8S:
if check_peers_changed():
raise serializers.ValidationError(_("Cannot change peers."))
# cannot enable peers_from_control_nodes if listener_port is not set
if attrs.get('peers_from_control_nodes'):
port = attrs.get('listener_port', -1) # -1 denotes missing, None denotes explicit null
if (port is None) or (port == -1 and self.instance and self.instance.canonical_address is None):
raise serializers.ValidationError(_("Cannot enable peers_from_control_nodes if listener_port is not set."))
return super().validate(attrs)
@@ -5636,8 +5805,8 @@ class InstanceSerializer(BaseSerializer):
raise serializers.ValidationError(_("Can only change the state on Kubernetes or OpenShift."))
if value != Instance.States.DEPROVISIONING:
raise serializers.ValidationError(_("Can only change instances to the 'deprovisioning' state."))
if self.instance.node_type not in (Instance.Types.EXECUTION, Instance.Types.HOP):
raise serializers.ValidationError(_("Can only deprovision execution or hop nodes."))
if self.instance.managed:
raise serializers.ValidationError(_("Cannot deprovision managed nodes."))
else:
if value and value != Instance.States.INSTALLED:
raise serializers.ValidationError(_("Can only create instances in the 'installed' state."))
@@ -5656,18 +5825,48 @@ class InstanceSerializer(BaseSerializer):
def validate_listener_port(self, value):
"""
Cannot change listener port, unless going from none to integer, and vice versa
If instance is managed, cannot change listener port at all
"""
if value and self.instance and self.instance.listener_port and self.instance.listener_port != value:
raise serializers.ValidationError(_("Cannot change listener port."))
if self.instance:
canonical_address_port = self.instance.canonical_address_port
if value and canonical_address_port and canonical_address_port != value:
raise serializers.ValidationError(_("Cannot change listener port."))
if self.instance.managed and value != canonical_address_port:
raise serializers.ValidationError(_("Cannot change listener port for managed nodes."))
return value
def validate_peers(self, value):
# cannot peer to an instance more than once
peers_instances = Counter(p.instance_id for p in value)
if any(count > 1 for count in peers_instances.values()):
raise serializers.ValidationError(_("Cannot peer to the same instance more than once."))
if self.instance:
instance_addresses = set(self.instance.receptor_addresses.all())
setting_peers = set(value)
peers_changed = set(self.instance.peers.all()) != setting_peers
if not settings.IS_K8S and peers_changed:
raise serializers.ValidationError(_("Cannot change peers."))
if self.instance.managed and peers_changed:
raise serializers.ValidationError(_("Setting peers manually for managed nodes is not allowed."))
# cannot peer to self
if instance_addresses & setting_peers:
raise serializers.ValidationError(_("Instance cannot peer to its own address."))
# cannot peer to an instance that is already peered to this instance
if instance_addresses:
for p in setting_peers:
if set(p.instance.peers.all()) & instance_addresses:
raise serializers.ValidationError(_(f"Instance {p.instance.hostname} is already peered to this instance."))
return value
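validate_peers() leans on two small idioms: Counter to spot the same instance appearing behind more than one address, and set intersection to spot self-peering. A plain-Python sketch, with ints and strings standing in for ReceptorAddress rows:
from collections import Counter

peer_instance_ids = [10, 11, 10]  # two requested addresses belong to instance 10
assert any(count > 1 for count in Counter(peer_instance_ids).values())  # duplicate -> reject

own_addresses = {'nodeA:27199'}
requested_peers = {'nodeA:27199', 'nodeB:27199'}
assert own_addresses & requested_peers  # non-empty overlap -> cannot peer to itself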
def validate_peers_from_control_nodes(self, value):
"""
Can only enable for K8S based deployments
"""
if value and not settings.IS_K8S:
raise serializers.ValidationError(_("Can only be enabled on Kubernetes or Openshift."))
if self.instance and self.instance.managed and self.instance.canonical_address_peers_from_control_nodes != value:
raise serializers.ValidationError(_("Cannot change peers_from_control_nodes for managed nodes."))
return value

View File

@@ -17,19 +17,18 @@ custom_worksign_public_keyfile: receptor/work_public_key.pem
custom_tls_certfile: receptor/tls/receptor.crt
custom_tls_keyfile: receptor/tls/receptor.key
custom_ca_certfile: receptor/tls/ca/mesh-CA.crt
receptor_protocol: 'tcp'
{% if instance.listener_port %}
{% if listener_port %}
receptor_protocol: {{ listener_protocol }}
receptor_listener: true
receptor_port: {{ instance.listener_port }}
receptor_port: {{ listener_port }}
{% else %}
receptor_listener: false
{% endif %}
{% if peers %}
receptor_peers:
{% for peer in peers %}
- host: {{ peer.host }}
port: {{ peer.port }}
protocol: tcp
- address: {{ peer.address }}
protocol: {{ peer.protocol }}
{% endfor %}
{% endif %}
{% verbatim %}

View File

@@ -1,4 +1,4 @@
---
collections:
- name: ansible.receptor
version: 2.0.2
version: 2.0.3

View File

@@ -10,6 +10,7 @@ from awx.api.views import (
InstanceInstanceGroupsList,
InstanceHealthCheck,
InstancePeersList,
InstanceReceptorAddressesList,
)
from awx.api.views.instance_install_bundle import InstanceInstallBundle
@@ -21,6 +22,7 @@ urls = [
re_path(r'^(?P<pk>[0-9]+)/instance_groups/$', InstanceInstanceGroupsList.as_view(), name='instance_instance_groups_list'),
re_path(r'^(?P<pk>[0-9]+)/health_check/$', InstanceHealthCheck.as_view(), name='instance_health_check'),
re_path(r'^(?P<pk>[0-9]+)/peers/$', InstancePeersList.as_view(), name='instance_peers_list'),
re_path(r'^(?P<pk>[0-9]+)/receptor_addresses/$', InstanceReceptorAddressesList.as_view(), name='instance_receptor_addresses_list'),
re_path(r'^(?P<pk>[0-9]+)/install_bundle/$', InstanceInstallBundle.as_view(), name='instance_install_bundle'),
]

View File

@@ -0,0 +1,17 @@
# Copyright (c) 2017 Ansible, Inc.
# All Rights Reserved.
from django.urls import re_path
from awx.api.views import (
ReceptorAddressesList,
ReceptorAddressDetail,
)
urls = [
re_path(r'^$', ReceptorAddressesList.as_view(), name='receptor_addresses_list'),
re_path(r'^(?P<pk>[0-9]+)/$', ReceptorAddressDetail.as_view(), name='receptor_address_detail'),
]
__all__ = ['urls']

View File

@@ -85,6 +85,7 @@ from .oauth2_root import urls as oauth2_root_urls
from .workflow_approval_template import urls as workflow_approval_template_urls
from .workflow_approval import urls as workflow_approval_urls
from .analytics import urls as analytics_urls
from .receptor_address import urls as receptor_address_urls
v2_urls = [
re_path(r'^$', ApiV2RootView.as_view(), name='api_v2_root_view'),
@@ -155,6 +156,7 @@ v2_urls = [
re_path(r'^bulk/host_create/$', BulkHostCreateView.as_view(), name='bulk_host_create'),
re_path(r'^bulk/host_delete/$', BulkHostDeleteView.as_view(), name='bulk_host_delete'),
re_path(r'^bulk/job_launch/$', BulkJobLaunchView.as_view(), name='bulk_job_launch'),
re_path(r'^receptor_addresses/', include(receptor_address_urls)),
]

View File

@@ -2,28 +2,21 @@
# All Rights Reserved.
from django.conf import settings
from django.urls import NoReverseMatch
from rest_framework.reverse import _reverse
from rest_framework.reverse import reverse as drf_reverse
from rest_framework.versioning import URLPathVersioning as BaseVersioning
def drf_reverse(viewname, args=None, kwargs=None, request=None, format=None, **extra):
"""
Copy and monkey-patch `rest_framework.reverse.reverse` to prevent adding unwarranted
query string parameters.
"""
scheme = getattr(request, 'versioning_scheme', None)
if scheme is not None:
try:
url = scheme.reverse(viewname, args, kwargs, request, format, **extra)
except NoReverseMatch:
# In case the versioning scheme reversal fails, fallback to the
# default implementation
url = _reverse(viewname, args, kwargs, request, format, **extra)
else:
url = _reverse(viewname, args, kwargs, request, format, **extra)
def is_optional_api_urlpattern_prefix_request(request):
if settings.OPTIONAL_API_URLPATTERN_PREFIX and request:
if request.path.startswith(f"/api/{settings.OPTIONAL_API_URLPATTERN_PREFIX}"):
return True
return False
def transform_optional_api_urlpattern_prefix_url(request, url):
if is_optional_api_urlpattern_prefix_request(request):
url = url.replace('/api', f"/api/{settings.OPTIONAL_API_URLPATTERN_PREFIX}")
return url
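A quick illustration of the transform, assuming OPTIONAL_API_URLPATTERN_PREFIX were set to a hypothetical value such as 'controller':
def transform(url, prefix):
    # mirrors transform_optional_api_urlpattern_prefix_url, minus Django settings
    return url.replace('/api', f"/api/{prefix}")

assert transform('/api/v2/ping/', 'controller') == '/api/controller/v2/ping/'
assert transform('https://awx.example.org/api/v2/me/', 'controller') == 'https://awx.example.org/api/controller/v2/me/'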

View File

@@ -60,6 +60,9 @@ from oauth2_provider.models import get_access_token_model
import pytz
from wsgiref.util import FileWrapper
# django-ansible-base
from ansible_base.rbac.models import RoleEvaluation, ObjectRole
# AWX
from awx.main.tasks.system import send_notifications, update_inventory_computed_fields
from awx.main.access import get_user_queryset
@@ -87,6 +90,7 @@ from awx.api.generics import (
from awx.api.views.labels import LabelSubListCreateAttachDetachView
from awx.api.versioning import reverse
from awx.main import models
from awx.main.models.rbac import get_role_definition
from awx.main.utils import (
camelcase_to_underscore,
extract_ansible_vars,
@@ -272,16 +276,24 @@ class DashboardJobsGraphView(APIView):
success_query = user_unified_jobs.filter(status='successful')
failed_query = user_unified_jobs.filter(status='failed')
canceled_query = user_unified_jobs.filter(status='canceled')
error_query = user_unified_jobs.filter(status='error')
if job_type == 'inv_sync':
success_query = success_query.filter(instance_of=models.InventoryUpdate)
failed_query = failed_query.filter(instance_of=models.InventoryUpdate)
canceled_query = canceled_query.filter(instance_of=models.InventoryUpdate)
error_query = error_query.filter(instance_of=models.InventoryUpdate)
elif job_type == 'playbook_run':
success_query = success_query.filter(instance_of=models.Job)
failed_query = failed_query.filter(instance_of=models.Job)
canceled_query = canceled_query.filter(instance_of=models.Job)
error_query = error_query.filter(instance_of=models.Job)
elif job_type == 'scm_update':
success_query = success_query.filter(instance_of=models.ProjectUpdate)
failed_query = failed_query.filter(instance_of=models.ProjectUpdate)
canceled_query = canceled_query.filter(instance_of=models.ProjectUpdate)
error_query = error_query.filter(instance_of=models.ProjectUpdate)
end = now()
interval = 'day'
@@ -297,10 +309,12 @@ class DashboardJobsGraphView(APIView):
else:
return Response({'error': _('Unknown period "%s"') % str(period)}, status=status.HTTP_400_BAD_REQUEST)
dashboard_data = {"jobs": {"successful": [], "failed": []}}
dashboard_data = {"jobs": {"successful": [], "failed": [], "canceled": [], "error": []}}
succ_list = dashboard_data['jobs']['successful']
fail_list = dashboard_data['jobs']['failed']
canceled_list = dashboard_data['jobs']['canceled']
error_list = dashboard_data['jobs']['error']
qs_s = (
success_query.filter(finished__range=(start, end))
@@ -318,6 +332,22 @@ class DashboardJobsGraphView(APIView):
.annotate(agg=Count('id', distinct=True))
)
data_f = {item['d']: item['agg'] for item in qs_f}
qs_c = (
canceled_query.filter(finished__range=(start, end))
.annotate(d=Trunc('finished', interval, tzinfo=end.tzinfo))
.order_by()
.values('d')
.annotate(agg=Count('id', distinct=True))
)
data_c = {item['d']: item['agg'] for item in qs_c}
qs_e = (
error_query.filter(finished__range=(start, end))
.annotate(d=Trunc('finished', interval, tzinfo=end.tzinfo))
.order_by()
.values('d')
.annotate(agg=Count('id', distinct=True))
)
data_e = {item['d']: item['agg'] for item in qs_e}
start_date = start.replace(hour=0, minute=0, second=0, microsecond=0)
for d in itertools.count():
@@ -326,6 +356,8 @@ class DashboardJobsGraphView(APIView):
break
succ_list.append([time.mktime(date.timetuple()), data_s.get(date, 0)])
fail_list.append([time.mktime(date.timetuple()), data_f.get(date, 0)])
canceled_list.append([time.mktime(date.timetuple()), data_c.get(date, 0)])
error_list.append([time.mktime(date.timetuple()), data_e.get(date, 0)])
return Response(dashboard_data)
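The new canceled and error series are filled the same way as successful and failed: aggregate counts keyed by truncated date, then walk each day of the window, defaulting missing days to zero. A standalone sketch of the fill step:
import time
from datetime import date, timedelta

# per-day counts as produced by the Trunc/Count aggregation; quiet days have no key
data_c = {date(2024, 5, 1): 3, date(2024, 5, 3): 1}

series, day = [], date(2024, 5, 1)
for _ in range(4):
    series.append([time.mktime(day.timetuple()), data_c.get(day, 0)])
    day += timedelta(days=1)

assert [point[1] for point in series] == [3, 0, 1, 0]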
@@ -337,12 +369,20 @@ class InstanceList(ListCreateAPIView):
search_fields = ('hostname',)
ordering = ('id',)
def get_queryset(self):
qs = super().get_queryset().prefetch_related('receptor_addresses')
return qs
class InstanceDetail(RetrieveUpdateAPIView):
name = _("Instance Detail")
model = models.Instance
serializer_class = serializers.InstanceSerializer
def get_queryset(self):
qs = super().get_queryset().prefetch_related('receptor_addresses')
return qs
def update_raw_data(self, data):
# these fields are only valid on creation of an instance, so they are unwanted on the detail view
data.pop('node_type', None)
@@ -375,13 +415,37 @@ class InstanceUnifiedJobsList(SubListAPIView):
class InstancePeersList(SubListAPIView):
name = _("Instance Peers")
name = _("Peers")
model = models.ReceptorAddress
serializer_class = serializers.ReceptorAddressSerializer
parent_model = models.Instance
model = models.Instance
serializer_class = serializers.InstanceSerializer
parent_access = 'read'
search_fields = {'hostname'}
relationship = 'peers'
search_fields = ('address',)
class InstanceReceptorAddressesList(SubListAPIView):
name = _("Receptor Addresses")
model = models.ReceptorAddress
parent_key = 'instance'
parent_model = models.Instance
serializer_class = serializers.ReceptorAddressSerializer
search_fields = ('address',)
class ReceptorAddressesList(ListAPIView):
name = _("Receptor Addresses")
model = models.ReceptorAddress
serializer_class = serializers.ReceptorAddressSerializer
search_fields = ('address',)
class ReceptorAddressDetail(RetrieveAPIView):
name = _("Receptor Address Detail")
model = models.ReceptorAddress
serializer_class = serializers.ReceptorAddressSerializer
parent_model = models.Instance
relationship = 'receptor_addresses'
class InstanceInstanceGroupsList(InstanceGroupMembershipMixin, SubListCreateAttachDetachAPIView):
@@ -476,6 +540,7 @@ class InstanceGroupAccessList(ResourceAccessList):
class InstanceGroupObjectRolesList(SubListAPIView):
deprecated = True
model = models.Role
serializer_class = serializers.RoleSerializer
parent_model = models.InstanceGroup
@@ -664,6 +729,7 @@ class TeamUsersList(BaseUsersList):
class TeamRolesList(SubListAttachDetachAPIView):
deprecated = True
model = models.Role
serializer_class = serializers.RoleSerializerWithParentAccess
metadata_class = RoleMetadata
@@ -703,10 +769,12 @@ class TeamRolesList(SubListAttachDetachAPIView):
class TeamObjectRolesList(SubListAPIView):
deprecated = True
model = models.Role
serializer_class = serializers.RoleSerializer
parent_model = models.Team
search_fields = ('role_field', 'content_type__model')
deprecated = True
def get_queryset(self):
po = self.get_parent_object()
@@ -724,8 +792,15 @@ class TeamProjectsList(SubListAPIView):
self.check_parent_access(team)
model_ct = ContentType.objects.get_for_model(self.model)
parent_ct = ContentType.objects.get_for_model(self.parent_model)
proj_roles = models.Role.objects.filter(Q(ancestors__content_type=parent_ct) & Q(ancestors__object_id=team.pk), content_type=model_ct)
return self.model.accessible_objects(self.request.user, 'read_role').filter(pk__in=[t.content_object.pk for t in proj_roles])
rd = get_role_definition(team.member_role)
role = ObjectRole.objects.filter(object_id=team.id, content_type=parent_ct, role_definition=rd).first()
if role is None:
# Team has no permissions, therefore team has no projects
return self.model.objects.none()
else:
project_qs = self.model.accessible_objects(self.request.user, 'read_role')
return project_qs.filter(id__in=RoleEvaluation.objects.filter(content_type_id=model_ct.id, role=role).values_list('object_id'))
class TeamActivityStreamList(SubListAPIView):
@@ -740,10 +815,23 @@ class TeamActivityStreamList(SubListAPIView):
self.check_parent_access(parent)
qs = self.request.user.get_queryset(self.model)
return qs.filter(
Q(team=parent)
| Q(project__in=models.Project.accessible_objects(parent.member_role, 'read_role'))
| Q(credential__in=models.Credential.accessible_objects(parent.member_role, 'read_role'))
| Q(
project__in=RoleEvaluation.objects.filter(
role__in=parent.has_roles.all(), content_type_id=ContentType.objects.get_for_model(models.Project).id, codename='view_project'
)
.values_list('object_id')
.distinct()
)
| Q(
credential__in=RoleEvaluation.objects.filter(
role__in=parent.has_roles.all(), content_type_id=ContentType.objects.get_for_model(models.Credential).id, codename='view_credential'
)
.values_list('object_id')
.distinct()
)
)
@@ -995,10 +1083,12 @@ class ProjectAccessList(ResourceAccessList):
class ProjectObjectRolesList(SubListAPIView):
deprecated = True
model = models.Role
serializer_class = serializers.RoleSerializer
parent_model = models.Project
search_fields = ('role_field', 'content_type__model')
deprecated = True
def get_queryset(self):
po = self.get_parent_object()
@@ -1156,6 +1246,7 @@ class UserTeamsList(SubListAPIView):
class UserRolesList(SubListAttachDetachAPIView):
deprecated = True
model = models.Role
serializer_class = serializers.RoleSerializerWithParentAccess
metadata_class = RoleMetadata
@@ -1430,10 +1521,12 @@ class CredentialAccessList(ResourceAccessList):
class CredentialObjectRolesList(SubListAPIView):
deprecated = True
model = models.Role
serializer_class = serializers.RoleSerializer
parent_model = models.Credential
search_fields = ('role_field', 'content_type__model')
deprecated = True
def get_queryset(self):
po = self.get_parent_object()
@@ -2220,13 +2313,6 @@ class JobTemplateList(ListCreateAPIView):
serializer_class = serializers.JobTemplateSerializer
always_allow_superuser = False
def post(self, request, *args, **kwargs):
ret = super(JobTemplateList, self).post(request, *args, **kwargs)
if ret.status_code == 201:
job_template = models.JobTemplate.objects.get(id=ret.data['id'])
job_template.admin_role.members.add(request.user)
return ret
class JobTemplateDetail(RelatedJobsPreventDeleteMixin, RetrieveUpdateDestroyAPIView):
model = models.JobTemplate
@@ -2772,10 +2858,12 @@ class JobTemplateAccessList(ResourceAccessList):
class JobTemplateObjectRolesList(SubListAPIView):
deprecated = True
model = models.Role
serializer_class = serializers.RoleSerializer
parent_model = models.JobTemplate
search_fields = ('role_field', 'content_type__model')
deprecated = True
def get_queryset(self):
po = self.get_parent_object()
@@ -3158,10 +3246,12 @@ class WorkflowJobTemplateAccessList(ResourceAccessList):
class WorkflowJobTemplateObjectRolesList(SubListAPIView):
deprecated = True
model = models.Role
serializer_class = serializers.RoleSerializer
parent_model = models.WorkflowJobTemplate
search_fields = ('role_field', 'content_type__model')
deprecated = True
def get_queryset(self):
po = self.get_parent_object()
@@ -4170,6 +4260,7 @@ class ActivityStreamDetail(RetrieveAPIView):
class RoleList(ListAPIView):
deprecated = True
model = models.Role
serializer_class = serializers.RoleSerializer
permission_classes = (IsAuthenticated,)
@@ -4177,11 +4268,13 @@ class RoleList(ListAPIView):
class RoleDetail(RetrieveAPIView):
deprecated = True
model = models.Role
serializer_class = serializers.RoleSerializer
class RoleUsersList(SubListAttachDetachAPIView):
deprecated = True
model = models.User
serializer_class = serializers.UserSerializer
parent_model = models.Role
@@ -4216,6 +4309,7 @@ class RoleUsersList(SubListAttachDetachAPIView):
class RoleTeamsList(SubListAttachDetachAPIView):
deprecated = True
model = models.Team
serializer_class = serializers.TeamSerializer
parent_model = models.Role
@@ -4260,10 +4354,12 @@ class RoleTeamsList(SubListAttachDetachAPIView):
team.member_role.children.remove(role)
else:
team.member_role.children.add(role)
return Response(status=status.HTTP_204_NO_CONTENT)
class RoleParentsList(SubListAPIView):
deprecated = True
model = models.Role
serializer_class = serializers.RoleSerializer
parent_model = models.Role
@@ -4277,6 +4373,7 @@ class RoleParentsList(SubListAPIView):
class RoleChildrenList(SubListAPIView):
deprecated = True
model = models.Role
serializer_class = serializers.RoleSerializer
parent_model = models.Role

View File

@@ -48,23 +48,23 @@ class AnalyticsRootView(APIView):
def get(self, request, format=None):
data = OrderedDict()
data['authorized'] = reverse('api:analytics_authorized')
data['reports'] = reverse('api:analytics_reports_list')
data['report_options'] = reverse('api:analytics_report_options_list')
data['adoption_rate'] = reverse('api:analytics_adoption_rate')
data['adoption_rate_options'] = reverse('api:analytics_adoption_rate_options')
data['event_explorer'] = reverse('api:analytics_event_explorer')
data['event_explorer_options'] = reverse('api:analytics_event_explorer_options')
data['host_explorer'] = reverse('api:analytics_host_explorer')
data['host_explorer_options'] = reverse('api:analytics_host_explorer_options')
data['job_explorer'] = reverse('api:analytics_job_explorer')
data['job_explorer_options'] = reverse('api:analytics_job_explorer_options')
data['probe_templates'] = reverse('api:analytics_probe_templates_explorer')
data['probe_templates_options'] = reverse('api:analytics_probe_templates_options')
data['probe_template_for_hosts'] = reverse('api:analytics_probe_template_for_hosts_explorer')
data['probe_template_for_hosts_options'] = reverse('api:analytics_probe_template_for_hosts_options')
data['roi_templates'] = reverse('api:analytics_roi_templates_explorer')
data['roi_templates_options'] = reverse('api:analytics_roi_templates_options')
data['authorized'] = reverse('api:analytics_authorized', request=request)
data['reports'] = reverse('api:analytics_reports_list', request=request)
data['report_options'] = reverse('api:analytics_report_options_list', request=request)
data['adoption_rate'] = reverse('api:analytics_adoption_rate', request=request)
data['adoption_rate_options'] = reverse('api:analytics_adoption_rate_options', request=request)
data['event_explorer'] = reverse('api:analytics_event_explorer', request=request)
data['event_explorer_options'] = reverse('api:analytics_event_explorer_options', request=request)
data['host_explorer'] = reverse('api:analytics_host_explorer', request=request)
data['host_explorer_options'] = reverse('api:analytics_host_explorer_options', request=request)
data['job_explorer'] = reverse('api:analytics_job_explorer', request=request)
data['job_explorer_options'] = reverse('api:analytics_job_explorer_options', request=request)
data['probe_templates'] = reverse('api:analytics_probe_templates_explorer', request=request)
data['probe_templates_options'] = reverse('api:analytics_probe_templates_options', request=request)
data['probe_template_for_hosts'] = reverse('api:analytics_probe_template_for_hosts_explorer', request=request)
data['probe_template_for_hosts_options'] = reverse('api:analytics_probe_template_for_hosts_options', request=request)
data['roi_templates'] = reverse('api:analytics_roi_templates_explorer', request=request)
data['roi_templates_options'] = reverse('api:analytics_roi_templates_options', request=request)
return Response(data)
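Passing request=request is what turns these into absolute URLs: DRF's reverse() builds a fully qualified URI from the request when one is given, and a bare path otherwise. A conceptual sketch of that behavior (not DRF's actual source):
def reverse_sketch(path, request=None):
    if request is not None:
        return request.build_absolute_uri(path)
    return path

class FakeRequest:
    def build_absolute_uri(self, path):
        return 'https://awx.example.org' + path  # hypothetical host

assert reverse_sketch('/api/v2/analytics/reports/') == '/api/v2/analytics/reports/'
assert reverse_sketch('/api/v2/analytics/reports/', FakeRequest()).startswith('https://')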

View File

@@ -124,10 +124,19 @@ def generate_inventory_yml(instance_obj):
def generate_group_vars_all_yml(instance_obj):
# get peers
peers = []
for instance in instance_obj.peers.all():
peers.append(dict(host=instance.hostname, port=instance.listener_port))
all_yaml = render_to_string("instance_install_bundle/group_vars/all.yml", context=dict(instance=instance_obj, peers=peers))
for addr in instance_obj.peers.select_related('instance'):
peers.append(dict(address=addr.get_full_address(), protocol=addr.protocol))
context = dict(instance=instance_obj, peers=peers)
canonical_addr = instance_obj.canonical_address
if canonical_addr:
context['listener_port'] = canonical_addr.port
protocol = canonical_addr.protocol if canonical_addr.protocol != 'wss' else 'ws'
context['listener_protocol'] = protocol
all_yaml = render_to_string("instance_install_bundle/group_vars/all.yml", context=context)
# collapse consecutive newlines into a single newline
return re.sub(r'\n+', '\n', all_yaml)
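The 'wss' to 'ws' mapping presumably reflects that TLS is configured separately in the install bundle, so the receptor listener protocol itself is plain 'ws'; the final re.sub() then collapses the blank lines left behind by the template's {% if %} blocks:
import re

rendered = "receptor_protocol: ws\n\n\nreceptor_listener: true\n\nreceptor_port: 27199\n"
assert re.sub(r'\n+', '\n', rendered) == "receptor_protocol: ws\nreceptor_listener: true\nreceptor_port: 27199\n"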

View File

@@ -152,6 +152,7 @@ class InventoryObjectRolesList(SubListAPIView):
serializer_class = RoleSerializer
parent_model = Inventory
search_fields = ('role_field', 'content_type__model')
deprecated = True
def get_queryset(self):
po = self.get_parent_object()

View File

@@ -17,7 +17,7 @@ class MeshVisualizer(APIView):
def get(self, request, format=None):
data = {
'nodes': InstanceNodeSerializer(Instance.objects.all(), many=True).data,
'links': InstanceLinkSerializer(InstanceLink.objects.select_related('target', 'source'), many=True).data,
'links': InstanceLinkSerializer(InstanceLink.objects.select_related('target__instance', 'source'), many=True).data,
}
return Response(data)

View File

@@ -226,6 +226,7 @@ class OrganizationObjectRolesList(SubListAPIView):
serializer_class = RoleSerializer
parent_model = Organization
search_fields = ('role_field', 'content_type__model')
deprecated = True
def get_queryset(self):
po = self.get_parent_object()

View File

@@ -13,6 +13,7 @@ from django.utils.decorators import method_decorator
from django.views.decorators.csrf import ensure_csrf_cookie
from django.template.loader import render_to_string
from django.utils.translation import gettext_lazy as _
from django.urls import reverse as django_reverse
from rest_framework.permissions import AllowAny, IsAuthenticated
from rest_framework.response import Response
@@ -27,7 +28,7 @@ from awx.main.analytics import all_collectors
from awx.main.ha import is_ha_environment
from awx.main.utils import get_awx_version, get_custom_venv_choices
from awx.main.utils.licensing import validate_entitlement_manifest
from awx.api.versioning import reverse, drf_reverse
from awx.api.versioning import URLPathVersioning, is_optional_api_urlpattern_prefix_request, reverse, drf_reverse
from awx.main.constants import PRIVILEGE_ESCALATION_METHODS
from awx.main.models import Project, Organization, Instance, InstanceGroup, JobTemplate
from awx.main.utils import set_environ
@@ -39,19 +40,19 @@ logger = logging.getLogger('awx.api.views.root')
class ApiRootView(APIView):
permission_classes = (AllowAny,)
name = _('REST API')
versioning_class = None
versioning_class = URLPathVersioning
swagger_topic = 'Versioning'
@method_decorator(ensure_csrf_cookie)
def get(self, request, format=None):
'''List supported API versions'''
v2 = reverse('api:api_v2_root_view', kwargs={'version': 'v2'})
v2 = reverse('api:api_v2_root_view', request=request, kwargs={'version': 'v2'})
data = OrderedDict()
data['description'] = _('AWX REST API')
data['current_version'] = v2
data['available_versions'] = dict(v2=v2)
data['oauth2'] = drf_reverse('api:oauth_authorization_root_view')
if not is_optional_api_urlpattern_prefix_request(request):
data['oauth2'] = drf_reverse('api:oauth_authorization_root_view')
data['custom_logo'] = settings.CUSTOM_LOGO
data['custom_login_info'] = settings.CUSTOM_LOGIN_INFO
data['login_redirect_override'] = settings.LOGIN_REDIRECT_OVERRIDE
@@ -84,6 +85,7 @@ class ApiVersionRootView(APIView):
data['ping'] = reverse('api:api_v2_ping_view', request=request)
data['instances'] = reverse('api:instance_list', request=request)
data['instance_groups'] = reverse('api:instance_group_list', request=request)
data['receptor_addresses'] = reverse('api:receptor_addresses_list', request=request)
data['config'] = reverse('api:api_v2_config_view', request=request)
data['settings'] = reverse('api:setting_category_list', request=request)
data['me'] = reverse('api:user_me_list', request=request)
@@ -129,6 +131,10 @@ class ApiVersionRootView(APIView):
data['mesh_visualizer'] = reverse('api:mesh_visualizer_view', request=request)
data['bulk'] = reverse('api:bulk', request=request)
data['analytics'] = reverse('api:analytics_root_view', request=request)
data['service_index'] = django_reverse('service-index-root')
data['role_definitions'] = django_reverse('roledefinition-list')
data['role_user_assignments'] = django_reverse('roleuserassignment-list')
data['role_team_assignments'] = django_reverse('roleteamassignment-list')
return Response(data)

View File

@@ -55,6 +55,7 @@ register(
# Optional; category_slug will be slugified version of category if not
# explicitly provided.
category_slug='cows',
hidden=True,
)

View File

@@ -61,6 +61,10 @@ class StringListBooleanField(ListField):
def to_representation(self, value):
try:
if isinstance(value, str):
# https://github.com/encode/django-rest-framework/commit/a180bde0fd965915718b070932418cabc831cee1
# DRF changed truthy and falsy lists to be capitalized
value = value.lower()
if isinstance(value, (list, tuple)):
return super(StringListBooleanField, self).to_representation(value)
elif value in BooleanField.TRUE_VALUES:
@@ -78,6 +82,8 @@ class StringListBooleanField(ListField):
def to_internal_value(self, data):
try:
if isinstance(data, str):
data = data.lower()
if isinstance(data, (list, tuple)):
return super(StringListBooleanField, self).to_internal_value(data)
elif data in BooleanField.TRUE_VALUES:

View File

@@ -127,6 +127,8 @@ class SettingsRegistry(object):
encrypted = bool(field_kwargs.pop('encrypted', False))
defined_in_file = bool(field_kwargs.pop('defined_in_file', False))
unit = field_kwargs.pop('unit', None)
hidden = field_kwargs.pop('hidden', False)
warning_text = field_kwargs.pop('warning_text', None)
if getattr(field_kwargs.get('child', None), 'source', None) is not None:
field_kwargs['child'].source = None
field_instance = field_class(**field_kwargs)
@@ -134,12 +136,14 @@ class SettingsRegistry(object):
field_instance.category = category
field_instance.depends_on = depends_on
field_instance.unit = unit
field_instance.hidden = hidden
if placeholder is not empty:
field_instance.placeholder = placeholder
field_instance.defined_in_file = defined_in_file
if field_instance.defined_in_file:
field_instance.help_text = str(_('This value has been set manually in a settings file.')) + '\n\n' + str(field_instance.help_text)
field_instance.encrypted = encrypted
field_instance.warning_text = warning_text
original_field_instance = field_instance
if field_class != original_field_class:
original_field_instance = original_field_class(**field_kwargs)

View File

@@ -1,6 +1,7 @@
# Python
import contextlib
import logging
import psycopg
import threading
import time
import os
@@ -13,7 +14,7 @@ from django.conf import settings, UserSettingsHolder
from django.core.cache import cache as django_cache
from django.core.exceptions import ImproperlyConfigured, SynchronousOnlyOperation
from django.db import transaction, connection
from django.db.utils import Error as DBError, ProgrammingError
from django.db.utils import DatabaseError, ProgrammingError
from django.utils.functional import cached_property
# Django REST Framework
@@ -80,18 +81,26 @@ def _ctit_db_wrapper(trans_safe=False):
logger.debug('Obtaining database settings in spite of broken transaction.')
transaction.set_rollback(False)
yield
except DBError as exc:
except ProgrammingError as e:
# Exception raised for programming errors
# Examples may be table not found or already exists,
# this generally means we can't fetch Tower configuration
# because the database hasn't actually finished migrating yet;
# this is usually a sign that a service in a container (such as ws_broadcast)
# has come up *before* the database has finished migrating, and
# especially that the conf.settings table doesn't exist yet
# syntax error in the SQL statement, wrong number of parameters specified, etc.
if trans_safe:
level = logger.warning
if isinstance(exc, ProgrammingError):
if 'relation' in str(exc) and 'does not exist' in str(exc):
# this generally means we can't fetch Tower configuration
# because the database hasn't actually finished migrating yet;
# this is usually a sign that a service in a container (such as ws_broadcast)
# has come up *before* the database has finished migrating, and
# especially that the conf.settings table doesn't exist yet
level = logger.debug
level(f'Database settings are not available, using defaults. error: {str(exc)}')
logger.debug(f'Database settings are not available, using defaults. error: {str(e)}')
else:
logger.exception('Error modifying something related to database settings.')
except DatabaseError as e:
if trans_safe:
cause = e.__cause__
if cause and hasattr(cause, 'sqlstate'):
sqlstate = cause.sqlstate
sqlstate_str = psycopg.errors.lookup(sqlstate)
logger.error('SQL Error state: {} - {}'.format(sqlstate, sqlstate_str))
else:
logger.exception('Error modifying something related to database settings.')
finally:

View File

@@ -130,9 +130,9 @@ def test_default_setting(settings, mocker):
settings.registry.register('AWX_SOME_SETTING', field_class=fields.CharField, category=_('System'), category_slug='system', default='DEFAULT')
settings_to_cache = mocker.Mock(**{'order_by.return_value': []})
with mocker.patch('awx.conf.models.Setting.objects.filter', return_value=settings_to_cache):
assert settings.AWX_SOME_SETTING == 'DEFAULT'
assert settings.cache.get('AWX_SOME_SETTING') == 'DEFAULT'
mocker.patch('awx.conf.models.Setting.objects.filter', return_value=settings_to_cache)
assert settings.AWX_SOME_SETTING == 'DEFAULT'
assert settings.cache.get('AWX_SOME_SETTING') == 'DEFAULT'
@pytest.mark.defined_in_file(AWX_SOME_SETTING='DEFAULT')
@@ -146,9 +146,9 @@ def test_setting_is_not_from_setting_file(settings, mocker):
settings.registry.register('AWX_SOME_SETTING', field_class=fields.CharField, category=_('System'), category_slug='system', default='DEFAULT')
settings_to_cache = mocker.Mock(**{'order_by.return_value': []})
with mocker.patch('awx.conf.models.Setting.objects.filter', return_value=settings_to_cache):
assert settings.AWX_SOME_SETTING == 'DEFAULT'
assert settings.registry.get_setting_field('AWX_SOME_SETTING').defined_in_file is False
mocker.patch('awx.conf.models.Setting.objects.filter', return_value=settings_to_cache)
assert settings.AWX_SOME_SETTING == 'DEFAULT'
assert settings.registry.get_setting_field('AWX_SOME_SETTING').defined_in_file is False
def test_empty_setting(settings, mocker):
@@ -156,10 +156,10 @@ def test_empty_setting(settings, mocker):
settings.registry.register('AWX_SOME_SETTING', field_class=fields.CharField, category=_('System'), category_slug='system')
mocks = mocker.Mock(**{'order_by.return_value': mocker.Mock(**{'__iter__': lambda self: iter([]), 'first.return_value': None})})
with mocker.patch('awx.conf.models.Setting.objects.filter', return_value=mocks):
with pytest.raises(AttributeError):
settings.AWX_SOME_SETTING
assert settings.cache.get('AWX_SOME_SETTING') == SETTING_CACHE_NOTSET
mocker.patch('awx.conf.models.Setting.objects.filter', return_value=mocks)
with pytest.raises(AttributeError):
settings.AWX_SOME_SETTING
assert settings.cache.get('AWX_SOME_SETTING') == SETTING_CACHE_NOTSET
def test_setting_from_db(settings, mocker):
@@ -168,9 +168,9 @@ def test_setting_from_db(settings, mocker):
setting_from_db = mocker.Mock(key='AWX_SOME_SETTING', value='FROM_DB')
mocks = mocker.Mock(**{'order_by.return_value': mocker.Mock(**{'__iter__': lambda self: iter([setting_from_db]), 'first.return_value': setting_from_db})})
with mocker.patch('awx.conf.models.Setting.objects.filter', return_value=mocks):
assert settings.AWX_SOME_SETTING == 'FROM_DB'
assert settings.cache.get('AWX_SOME_SETTING') == 'FROM_DB'
mocker.patch('awx.conf.models.Setting.objects.filter', return_value=mocks)
assert settings.AWX_SOME_SETTING == 'FROM_DB'
assert settings.cache.get('AWX_SOME_SETTING') == 'FROM_DB'
@pytest.mark.defined_in_file(AWX_SOME_SETTING='DEFAULT')
@@ -205,8 +205,8 @@ def test_db_setting_update(settings, mocker):
existing_setting = mocker.Mock(key='AWX_SOME_SETTING', value='FROM_DB')
setting_list = mocker.Mock(**{'order_by.return_value.first.return_value': existing_setting})
with mocker.patch('awx.conf.models.Setting.objects.filter', return_value=setting_list):
settings.AWX_SOME_SETTING = 'NEW-VALUE'
mocker.patch('awx.conf.models.Setting.objects.filter', return_value=setting_list)
settings.AWX_SOME_SETTING = 'NEW-VALUE'
assert existing_setting.value == 'NEW-VALUE'
existing_setting.save.assert_called_with(update_fields=['value'])
@@ -217,8 +217,8 @@ def test_db_setting_deletion(settings, mocker):
settings.registry.register('AWX_SOME_SETTING', field_class=fields.CharField, category=_('System'), category_slug='system')
existing_setting = mocker.Mock(key='AWX_SOME_SETTING', value='FROM_DB')
with mocker.patch('awx.conf.models.Setting.objects.filter', return_value=[existing_setting]):
del settings.AWX_SOME_SETTING
mocker.patch('awx.conf.models.Setting.objects.filter', return_value=[existing_setting])
del settings.AWX_SOME_SETTING
assert existing_setting.delete.call_count == 1
@@ -283,10 +283,10 @@ def test_sensitive_cache_data_is_encrypted(settings, mocker):
# use its primary key as part of the encryption key
setting_from_db = mocker.Mock(pk=123, key='AWX_ENCRYPTED', value='SECRET!')
mocks = mocker.Mock(**{'order_by.return_value': mocker.Mock(**{'__iter__': lambda self: iter([setting_from_db]), 'first.return_value': setting_from_db})})
with mocker.patch('awx.conf.models.Setting.objects.filter', return_value=mocks):
cache.set('AWX_ENCRYPTED', 'SECRET!')
assert cache.get('AWX_ENCRYPTED') == 'SECRET!'
assert native_cache.get('AWX_ENCRYPTED') == 'FRPERG!'
mocker.patch('awx.conf.models.Setting.objects.filter', return_value=mocks)
cache.set('AWX_ENCRYPTED', 'SECRET!')
assert cache.get('AWX_ENCRYPTED') == 'SECRET!'
assert native_cache.get('AWX_ENCRYPTED') == 'FRPERG!'
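These test rewrites drop the `with` form because pytest-mock's mocker.patch applies the patch immediately and reverts it at test teardown; it is not a context manager, and recent pytest-mock versions raise a ValueError when used in a `with` block. The fixed idiom, as a minimal sketch:
import json

def test_patched_loads(mocker):
    # the patch is active from this call until teardown; no `with` needed
    mocked = mocker.patch('json.loads', return_value={'ok': True})
    assert json.loads('ignored') == {'ok': True}
    assert mocked.call_count == 1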
def test_readonly_sensitive_cache_data_is_encrypted(settings):

View File

@@ -20,7 +20,10 @@ from rest_framework.exceptions import ParseError, PermissionDenied
# Django OAuth Toolkit
from awx.main.models.oauth import OAuth2Application, OAuth2AccessToken
# django-ansible-base
from ansible_base.lib.utils.validation import to_python_boolean
from ansible_base.rbac.models import RoleEvaluation
from ansible_base.rbac import permission_registry
# AWX
from awx.main.utils import (
@@ -57,6 +60,7 @@ from awx.main.models import (
Project,
ProjectUpdate,
ProjectUpdateEvent,
ReceptorAddress,
Role,
Schedule,
SystemJob,
@@ -71,8 +75,6 @@ from awx.main.models import (
WorkflowJobTemplateNode,
WorkflowApproval,
WorkflowApprovalTemplate,
ROLE_SINGLETON_SYSTEM_ADMINISTRATOR,
ROLE_SINGLETON_SYSTEM_AUDITOR,
)
from awx.main.models.mixins import ResourceMixin
@@ -263,7 +265,11 @@ class BaseAccess(object):
return self.can_change(obj, data)
def can_delete(self, obj):
return self.user.is_superuser
if self.user.is_superuser:
return True
if obj._meta.model_name in [cls._meta.model_name for cls in permission_registry.all_registered_models]:
return self.user.has_obj_perm(obj, 'delete')
return False
def can_copy(self, obj):
return self.can_add({'reference_obj': obj})
@@ -638,7 +644,10 @@ class UserAccess(BaseAccess):
"""
model = User
prefetch_related = ('profile',)
prefetch_related = (
'profile',
'resource',
)
def filtered_queryset(self):
if settings.ORG_ADMINS_CAN_SEE_ALL_USERS and (self.user.admin_of_organizations.exists() or self.user.auditor_of_organizations.exists()):
@@ -647,9 +656,7 @@ class UserAccess(BaseAccess):
qs = (
User.objects.filter(pk__in=Organization.accessible_objects(self.user, 'read_role').values('member_role__members'))
| User.objects.filter(pk=self.user.id)
| User.objects.filter(
pk__in=Role.objects.filter(singleton_name__in=[ROLE_SINGLETON_SYSTEM_ADMINISTRATOR, ROLE_SINGLETON_SYSTEM_AUDITOR]).values('members')
)
| User.objects.filter(is_superuser=True)
).distinct()
return qs
@@ -707,6 +714,15 @@ class UserAccess(BaseAccess):
if not allow_orphans:
# in these cases only superusers can modify orphan users
return False
if settings.ANSIBLE_BASE_ROLE_SYSTEM_ACTIVATED:
# Permission granted if the user has all permissions that the target user has
target_perms = set(
RoleEvaluation.objects.filter(role__in=obj.has_roles.all()).values_list('object_id', 'content_type_id', 'codename').distinct()
)
user_perms = set(
RoleEvaluation.objects.filter(role__in=self.user.has_roles.all()).values_list('object_id', 'content_type_id', 'codename').distinct()
)
return not (target_perms - user_perms)
return not obj.roles.all().exclude(ancestors__in=self.user.roles.all()).exists()
else:
return self.is_all_org_admin(obj)
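The superset test above uses set difference: an empty target_perms - user_perms means the acting user holds every (object_id, content_type_id, codename) triple the target user holds. A plain-Python sketch:
target_perms = {(1, 7, 'view_jobtemplate'), (1, 7, 'execute_jobtemplate')}
user_perms = {(1, 7, 'view_jobtemplate'), (1, 7, 'execute_jobtemplate'), (2, 7, 'view_jobtemplate')}
assert not (target_perms - user_perms)  # empty difference -> permission granted

target_perms.add((3, 5, 'view_inventory'))  # one triple the acting user lacks
assert target_perms - user_perms            # non-empty -> denied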
@@ -834,6 +850,7 @@ class OrganizationAccess(NotificationAttachMixin, BaseAccess):
prefetch_related = (
'created_by',
'modified_by',
'resource', # dab_resource_registry
)
# organization admin_role is not a parent of organization auditor_role
notification_attach_roles = ['admin_role', 'auditor_role']
@@ -944,9 +961,6 @@ class InventoryAccess(BaseAccess):
def can_update(self, obj):
return self.user in obj.update_role
def can_delete(self, obj):
return self.can_admin(obj, None)
def can_run_ad_hoc_commands(self, obj):
return self.user in obj.adhoc_role
@@ -1302,6 +1316,7 @@ class TeamAccess(BaseAccess):
'created_by',
'modified_by',
'organization',
'resource', # dab_resource_registry
)
def filtered_queryset(self):
@@ -1399,8 +1414,12 @@ class ExecutionEnvironmentAccess(BaseAccess):
def can_change(self, obj, data):
if obj and obj.organization_id is None:
raise PermissionDenied
if self.user not in obj.organization.execution_environment_admin_role:
raise PermissionDenied
if settings.ANSIBLE_BASE_ROLE_SYSTEM_ACTIVATED:
if not self.user.has_obj_perm(obj, 'change'):
raise PermissionDenied
else:
if self.user not in obj.organization.execution_environment_admin_role:
raise PermissionDenied
if data and 'organization' in data:
new_org = get_object_from_data('organization', Organization, data, obj=obj)
if not new_org or self.user not in new_org.execution_environment_admin_role:
@@ -2430,6 +2449,29 @@ class InventoryUpdateEventAccess(BaseAccess):
return False
class ReceptorAddressAccess(BaseAccess):
"""
I can see receptor address records whenever I can access the instance
"""
model = ReceptorAddress
def filtered_queryset(self):
return self.model.objects.filter(Q(instance__in=Instance.accessible_pk_qs(self.user, 'read_role')))
@check_superuser
def can_add(self, data):
return False
@check_superuser
def can_change(self, obj, data):
return False
@check_superuser
def can_delete(self, obj):
return False
class SystemJobEventAccess(BaseAccess):
"""
I can only see/manage System Job events if I'm a superuser
@@ -2563,6 +2605,8 @@ class ScheduleAccess(UnifiedCredentialsMixin, BaseAccess):
if not JobLaunchConfigAccess(self.user).can_add(data):
return False
if not data:
if settings.ANSIBLE_BASE_ROLE_SYSTEM_ACTIVATED:
return self.user.has_roles.filter(permission_partials__codename__in=['execute_jobtemplate', 'update_project', 'update_inventory']).exists()
return Role.objects.filter(role_field__in=['update_role', 'execute_role'], ancestors__in=self.user.roles.all()).exists()
return self.check_related('unified_job_template', UnifiedJobTemplate, data, role_field='execute_role', mandatory=True)
@@ -2591,6 +2635,8 @@ class NotificationTemplateAccess(BaseAccess):
prefetch_related = ('created_by', 'modified_by', 'organization')
def filtered_queryset(self):
if settings.ANSIBLE_BASE_ROLE_SYSTEM_ACTIVATED:
return self.model.access_qs(self.user, 'view')
return self.model.objects.filter(
Q(organization__in=Organization.accessible_objects(self.user, 'notification_admin_role')) | Q(organization__in=self.user.auditor_of_organizations)
).distinct()
@@ -2759,7 +2805,7 @@ class ActivityStreamAccess(BaseAccess):
| Q(notification_template__organization__in=auditing_orgs)
| Q(notification__notification_template__organization__in=auditing_orgs)
| Q(label__organization__in=auditing_orgs)
| Q(role__in=Role.objects.filter(ancestors__in=self.user.roles.all()) if auditing_orgs else [])
| Q(role__in=Role.visible_roles(self.user) if auditing_orgs else [])
)
project_set = Project.accessible_pk_qs(self.user, 'read_role')
@@ -2816,13 +2862,10 @@ class RoleAccess(BaseAccess):
def filtered_queryset(self):
result = Role.visible_roles(self.user)
# Sanity check: is the requesting user an orphaned non-admin/auditor?
# if yes, make system admin/auditor mandatorily visible.
if not self.user.is_superuser and not self.user.is_system_auditor and not self.user.organizations.exists():
mandatories = ('system_administrator', 'system_auditor')
super_qs = Role.objects.filter(singleton_name__in=mandatories)
result = result | super_qs
return result
# Make system admin/auditor mandatorily visible.
mandatories = ('system_administrator', 'system_auditor')
super_qs = Role.objects.filter(singleton_name__in=mandatories)
return result | super_qs
def can_add(self, obj, data):
# Unsupported for now

View File

@@ -2,7 +2,7 @@
import logging
# AWX
from awx.main.analytics.subsystem_metrics import Metrics
from awx.main.analytics.subsystem_metrics import DispatcherMetrics, CallbackReceiverMetrics
from awx.main.dispatch.publish import task
from awx.main.dispatch import get_task_queuename
@@ -11,4 +11,5 @@ logger = logging.getLogger('awx.main.scheduler')
@task(queue=get_task_queuename)
def send_subsystem_metrics():
Metrics().send_metrics()
DispatcherMetrics().send_metrics()
CallbackReceiverMetrics().send_metrics()

View File

@@ -419,7 +419,7 @@ def _events_table(since, full_path, until, tbl, where_column, project_job_create
resolved_action,
resolved_role,
-- '-' operator listed here:
-- https://www.postgresql.org/docs/12/functions-json.html
-- https://www.postgresql.org/docs/15/functions-json.html
-- note that operator is only supported by jsonb objects
-- https://www.postgresql.org/docs/current/datatype-json.html
(CASE WHEN event = 'playbook_on_stats' THEN {event_data} - 'artifact_data' END) as playbook_on_stats,

View File

@@ -1,10 +1,15 @@
import itertools
import redis
import json
import time
import logging
import prometheus_client
from prometheus_client.core import GaugeMetricFamily, HistogramMetricFamily
from prometheus_client.registry import CollectorRegistry
from django.conf import settings
from django.apps import apps
from django.http import HttpRequest
from rest_framework.request import Request
from awx.main.consumers import emit_channel_notification
from awx.main.utils import is_testing
@@ -13,6 +18,30 @@ root_key = settings.SUBSYSTEM_METRICS_REDIS_KEY_PREFIX
logger = logging.getLogger('awx.main.analytics')
class MetricsNamespace:
def __init__(self, namespace):
self._namespace = namespace
class MetricsServerSettings(MetricsNamespace):
def port(self):
return settings.METRICS_SUBSYSTEM_CONFIG['server'][self._namespace]['port']
class MetricsServer(MetricsServerSettings):
def __init__(self, namespace, registry):
MetricsNamespace.__init__(self, namespace)
self._registry = registry
def start(self):
try:
# TODO: addr for ipv6 ?
prometheus_client.start_http_server(self.port(), addr='localhost', registry=self._registry)
except Exception:
logger.error(f"MetricsServer failed to start for service '{self._namespace}.")
raise
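MetricsServer binds a per-namespace CollectorRegistry to its own port rather than using the global default registry. A minimal standalone sketch of the same prometheus_client pattern (port and metric name are illustrative):
from prometheus_client import CollectorRegistry, Counter, start_http_server

registry = CollectorRegistry()
events = Counter('callback_receiver_events_total', 'Events processed', registry=registry)

# serve only this registry on localhost; the default REGISTRY is untouched
start_http_server(9102, addr='localhost', registry=registry)
events.inc()
# http://localhost:9102/metrics now reports callback_receiver_events_total 1.0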
class BaseM:
def __init__(self, field, help_text):
self.field = field
@@ -148,76 +177,40 @@ class HistogramM(BaseM):
return output_text
class Metrics:
def __init__(self, auto_pipe_execute=False, instance_name=None):
class Metrics(MetricsNamespace):
# metric name, help_text
METRICSLIST = []
_METRICSLIST = [
FloatM('subsystem_metrics_pipe_execute_seconds', 'Time spent saving metrics to redis'),
IntM('subsystem_metrics_pipe_execute_calls', 'Number of calls to pipe_execute'),
FloatM('subsystem_metrics_send_metrics_seconds', 'Time spent sending metrics to other nodes'),
]
def __init__(self, namespace, auto_pipe_execute=False, instance_name=None, metrics_have_changed=True, **kwargs):
MetricsNamespace.__init__(self, namespace)
self.pipe = redis.Redis.from_url(settings.BROKER_URL).pipeline()
self.conn = redis.Redis.from_url(settings.BROKER_URL)
self.last_pipe_execute = time.time()
# track if metrics have been modified since last saved to redis
# start with True so that we get an initial save to redis
self.metrics_have_changed = True
self.metrics_have_changed = metrics_have_changed
self.pipe_execute_interval = settings.SUBSYSTEM_METRICS_INTERVAL_SAVE_TO_REDIS
self.send_metrics_interval = settings.SUBSYSTEM_METRICS_INTERVAL_SEND_METRICS
# auto pipe execute will commit transaction of metric data to redis
# at a regular interval (pipe_execute_interval). If set to False,
# the calling function should call .pipe_execute() explicitly
self.auto_pipe_execute = auto_pipe_execute
Instance = apps.get_model('main', 'Instance')
if instance_name:
self.instance_name = instance_name
elif is_testing():
self.instance_name = "awx_testing"
else:
self.instance_name = Instance.objects.my_hostname()
self.instance_name = settings.CLUSTER_HOST_ID # Same as Instance.objects.my_hostname() BUT we do not need to import Instance
# metric name, help_text
METRICSLIST = [
SetIntM('callback_receiver_events_queue_size_redis', 'Current number of events in redis queue'),
IntM('callback_receiver_events_popped_redis', 'Number of events popped from redis'),
IntM('callback_receiver_events_in_memory', 'Current number of events in memory (in transfer from redis to db)'),
IntM('callback_receiver_batch_events_errors', 'Number of times batch insertion failed'),
FloatM('callback_receiver_events_insert_db_seconds', 'Total time spent saving events to database'),
IntM('callback_receiver_events_insert_db', 'Number of events batch inserted into database'),
IntM('callback_receiver_events_broadcast', 'Number of events broadcast to other control plane nodes'),
HistogramM(
'callback_receiver_batch_events_insert_db', 'Number of events batch inserted into database', settings.SUBSYSTEM_METRICS_BATCH_INSERT_BUCKETS
),
SetFloatM('callback_receiver_event_processing_avg_seconds', 'Average processing time per event per callback receiver batch'),
FloatM('subsystem_metrics_pipe_execute_seconds', 'Time spent saving metrics to redis'),
IntM('subsystem_metrics_pipe_execute_calls', 'Number of calls to pipe_execute'),
FloatM('subsystem_metrics_send_metrics_seconds', 'Time spent sending metrics to other nodes'),
SetFloatM('task_manager_get_tasks_seconds', 'Time spent in loading tasks from db'),
SetFloatM('task_manager_start_task_seconds', 'Time spent starting task'),
SetFloatM('task_manager_process_running_tasks_seconds', 'Time spent processing running tasks'),
SetFloatM('task_manager_process_pending_tasks_seconds', 'Time spent processing pending tasks'),
SetFloatM('task_manager__schedule_seconds', 'Time spent in running the entire _schedule'),
IntM('task_manager__schedule_calls', 'Number of calls to _schedule, after lock is acquired'),
SetFloatM('task_manager_recorded_timestamp', 'Unix timestamp when metrics were last recorded'),
SetIntM('task_manager_tasks_started', 'Number of tasks started'),
SetIntM('task_manager_running_processed', 'Number of running tasks processed'),
SetIntM('task_manager_pending_processed', 'Number of pending tasks processed'),
SetIntM('task_manager_tasks_blocked', 'Number of tasks blocked from running'),
SetFloatM('task_manager_commit_seconds', 'Time spent in db transaction, including on_commit calls'),
SetFloatM('dependency_manager_get_tasks_seconds', 'Time spent loading pending tasks from db'),
SetFloatM('dependency_manager_generate_dependencies_seconds', 'Time spent generating dependencies for pending tasks'),
SetFloatM('dependency_manager__schedule_seconds', 'Time spent in running the entire _schedule'),
IntM('dependency_manager__schedule_calls', 'Number of calls to _schedule, after lock is acquired'),
SetFloatM('dependency_manager_recorded_timestamp', 'Unix timestamp when metrics were last recorded'),
SetIntM('dependency_manager_pending_processed', 'Number of pending tasks processed'),
SetFloatM('workflow_manager__schedule_seconds', 'Time spent in running the entire _schedule'),
IntM('workflow_manager__schedule_calls', 'Number of calls to _schedule, after lock is acquired'),
SetFloatM('workflow_manager_recorded_timestamp', 'Unix timestamp when metrics were last recorded'),
SetFloatM('workflow_manager_spawn_workflow_graph_jobs_seconds', 'Time spent spawning workflow tasks'),
SetFloatM('workflow_manager_get_tasks_seconds', 'Time spent loading workflow tasks from db'),
# dispatcher subsystem metrics
SetIntM('dispatcher_pool_scale_up_events', 'Number of times local dispatcher scaled up a worker since startup'),
SetIntM('dispatcher_pool_active_task_count', 'Number of active tasks in the worker pool when last task was submitted'),
SetIntM('dispatcher_pool_max_worker_count', 'Highest number of workers in worker pool in last collection interval, about 20s'),
SetFloatM('dispatcher_availability', 'Fraction of time (in last collection interval) dispatcher was able to receive messages'),
]
# turn metric list into dictionary with the metric name as a key
self.METRICS = {}
for m in METRICSLIST:
for m in itertools.chain(self.METRICSLIST, self._METRICSLIST):
self.METRICS[m.field] = m
# track last time metrics were sent to other nodes
@@ -230,7 +223,7 @@ class Metrics:
m.reset_value(self.conn)
self.metrics_have_changed = True
self.conn.delete(root_key + "_lock")
for m in self.conn.scan_iter(root_key + '_instance_*'):
for m in self.conn.scan_iter(root_key + '-' + self._namespace + '_instance_*'):
self.conn.delete(m)
def inc(self, field, value):
@@ -297,7 +290,7 @@ class Metrics:
def send_metrics(self):
# more than one thread could be calling this at the same time, so should
# acquire redis lock before sending metrics
lock = self.conn.lock(root_key + '_lock')
lock = self.conn.lock(root_key + '-' + self._namespace + '_lock')
if not lock.acquire(blocking=False):
return
try:
@@ -307,9 +300,10 @@ class Metrics:
payload = {
'instance': self.instance_name,
'metrics': serialized_metrics,
'metrics_namespace': self._namespace,
}
# store the serialized data locally as well, so that load_other_metrics will read it
self.conn.set(root_key + '_instance_' + self.instance_name, serialized_metrics)
self.conn.set(root_key + '-' + self._namespace + '_instance_' + self.instance_name, serialized_metrics)
emit_channel_notification("metrics", payload)
self.previous_send_metrics.set(current_time)
@@ -331,14 +325,14 @@ class Metrics:
instances_filter = request.query_params.getlist("node")
# get a sorted list of instance names
instance_names = [self.instance_name]
for m in self.conn.scan_iter(root_key + '_instance_*'):
for m in self.conn.scan_iter(root_key + '-' + self._namespace + '_instance_*'):
instance_names.append(m.decode('UTF-8').split('_instance_')[1])
instance_names.sort()
# load data, including data from this local instance
instance_data = {}
for instance in instance_names:
if len(instances_filter) == 0 or instance in instances_filter:
instance_data_from_redis = self.conn.get(root_key + '_instance_' + instance)
instance_data_from_redis = self.conn.get(root_key + '-' + self._namespace + '_instance_' + instance)
# data from other instances may not be available. That is OK.
if instance_data_from_redis:
instance_data[instance] = json.loads(instance_data_from_redis.decode('UTF-8'))
@@ -357,6 +351,120 @@ class Metrics:
return output_text
class DispatcherMetrics(Metrics):
METRICSLIST = [
SetFloatM('task_manager_get_tasks_seconds', 'Time spent in loading tasks from db'),
SetFloatM('task_manager_start_task_seconds', 'Time spent starting task'),
SetFloatM('task_manager_process_running_tasks_seconds', 'Time spent processing running tasks'),
SetFloatM('task_manager_process_pending_tasks_seconds', 'Time spent processing pending tasks'),
SetFloatM('task_manager__schedule_seconds', 'Time spent in running the entire _schedule'),
IntM('task_manager__schedule_calls', 'Number of calls to _schedule, after lock is acquired'),
SetFloatM('task_manager_recorded_timestamp', 'Unix timestamp when metrics were last recorded'),
SetIntM('task_manager_tasks_started', 'Number of tasks started'),
SetIntM('task_manager_running_processed', 'Number of running tasks processed'),
SetIntM('task_manager_pending_processed', 'Number of pending tasks processed'),
SetIntM('task_manager_tasks_blocked', 'Number of tasks blocked from running'),
SetFloatM('task_manager_commit_seconds', 'Time spent in db transaction, including on_commit calls'),
SetFloatM('dependency_manager_get_tasks_seconds', 'Time spent loading pending tasks from db'),
SetFloatM('dependency_manager_generate_dependencies_seconds', 'Time spent generating dependencies for pending tasks'),
SetFloatM('dependency_manager__schedule_seconds', 'Time spent in running the entire _schedule'),
IntM('dependency_manager__schedule_calls', 'Number of calls to _schedule, after lock is acquired'),
SetFloatM('dependency_manager_recorded_timestamp', 'Unix timestamp when metrics were last recorded'),
SetIntM('dependency_manager_pending_processed', 'Number of pending tasks processed'),
SetFloatM('workflow_manager__schedule_seconds', 'Time spent in running the entire _schedule'),
IntM('workflow_manager__schedule_calls', 'Number of calls to _schedule, after lock is acquired'),
SetFloatM('workflow_manager_recorded_timestamp', 'Unix timestamp when metrics were last recorded'),
SetFloatM('workflow_manager_spawn_workflow_graph_jobs_seconds', 'Time spent spawning workflow tasks'),
SetFloatM('workflow_manager_get_tasks_seconds', 'Time spent loading workflow tasks from db'),
# dispatcher subsystem metrics
SetIntM('dispatcher_pool_scale_up_events', 'Number of times local dispatcher scaled up a worker since startup'),
SetIntM('dispatcher_pool_active_task_count', 'Number of active tasks in the worker pool when last task was submitted'),
SetIntM('dispatcher_pool_max_worker_count', 'Highest number of workers in worker pool in last collection interval, about 20s'),
SetFloatM('dispatcher_availability', 'Fraction of time (in last collection interval) dispatcher was able to receive messages'),
]
def __init__(self, *args, **kwargs):
super().__init__(settings.METRICS_SERVICE_DISPATCHER, *args, **kwargs)
class CallbackReceiverMetrics(Metrics):
METRICSLIST = [
SetIntM('callback_receiver_events_queue_size_redis', 'Current number of events in redis queue'),
IntM('callback_receiver_events_popped_redis', 'Number of events popped from redis'),
IntM('callback_receiver_events_in_memory', 'Current number of events in memory (in transfer from redis to db)'),
IntM('callback_receiver_batch_events_errors', 'Number of times batch insertion failed'),
FloatM('callback_receiver_events_insert_db_seconds', 'Total time spent saving events to database'),
IntM('callback_receiver_events_insert_db', 'Number of events batch inserted into database'),
IntM('callback_receiver_events_broadcast', 'Number of events broadcast to other control plane nodes'),
HistogramM(
'callback_receiver_batch_events_insert_db', 'Number of events batch inserted into database', settings.SUBSYSTEM_METRICS_BATCH_INSERT_BUCKETS
),
SetFloatM('callback_receiver_event_processing_avg_seconds', 'Average processing time per event per callback receiver batch'),
]
def __init__(self, *args, **kwargs):
super().__init__(settings.METRICS_SERVICE_CALLBACK_RECEIVER, *args, **kwargs)
def metrics(request):
m = Metrics()
return m.generate_metrics(request)
output_text = ''
for m in [DispatcherMetrics(), CallbackReceiverMetrics()]:
output_text += m.generate_metrics(request)
return output_text
class CustomToPrometheusMetricsCollector(prometheus_client.registry.Collector):
"""
Takes the metric data from redis -> our custom metric fields -> prometheus
library metric fields.
The plan is to get rid of the use of redis, our custom metric fields, and
to switch fully to the prometheus library. At that point, this translation
code will be deleted.
"""
def __init__(self, metrics_obj, *args, **kwargs):
super().__init__(*args, **kwargs)
self._metrics = metrics_obj
def collect(self):
my_hostname = settings.CLUSTER_HOST_ID
instance_data = self._metrics.load_other_metrics(Request(HttpRequest()))
if not instance_data:
logger.debug(f"No metric data not found in redis for metric namespace '{self._metrics._namespace}'")
return None
host_metrics = instance_data.get(my_hostname)
for _, metric in self._metrics.METRICS.items():
entry = host_metrics.get(metric.field)
if not entry:
logger.debug(f"{self._metrics._namespace} metric '{metric.field}' not found in redis data payload {json.dumps(instance_data, indent=2)}")
continue
if isinstance(metric, HistogramM):
buckets = list(zip(metric.buckets, entry['counts']))
buckets = [[str(i[0]), str(i[1])] for i in buckets]
yield HistogramMetricFamily(metric.field, metric.help_text, buckets=buckets, sum_value=entry['sum'])
else:
yield GaugeMetricFamily(metric.field, metric.help_text, value=entry)
class CallbackReceiverMetricsServer(MetricsServer):
def __init__(self):
registry = CollectorRegistry(auto_describe=True)
registry.register(CustomToPrometheusMetricsCollector(CallbackReceiverMetrics(metrics_have_changed=False)))
super().__init__(settings.METRICS_SERVICE_CALLBACK_RECEIVER, registry)
class DispatcherMetricsServer(MetricsServer):
def __init__(self):
registry = CollectorRegistry(auto_describe=True)
registry.register(CustomToPrometheusMetricsCollector(DispatcherMetrics(metrics_have_changed=False)))
super().__init__(settings.METRICS_SERVICE_DISPATCHER, registry)
class WebsocketsMetricsServer(MetricsServer):
def __init__(self):
registry = CollectorRegistry(auto_describe=True)
# registry.register()
super().__init__(settings.METRICS_SERVICE_WEBSOCKETS, registry)
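
Taken together, a minimal usage sketch of the refactored classes (the `inc()`/`pipe_execute()` calls mirror methods defined elsewhere in this file; hostnames and values are illustrative, not from this diff):

```python
from awx.main.analytics.subsystem_metrics import (
    CallbackReceiverMetrics,
    CallbackReceiverMetricsServer,
)

# Each subsystem now instantiates its own namespaced Metrics subclass.
metrics = CallbackReceiverMetrics(auto_pipe_execute=False)
metrics.inc('callback_receiver_events_popped_redis', 25)  # accumulate locally
metrics.pipe_execute()  # explicit flush to redis since auto_pipe_execute=False

# Optionally expose the redis snapshot on a local prometheus endpoint.
CallbackReceiverMetricsServer().start()
```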

View File

@@ -1,7 +1,40 @@
from django.apps import AppConfig
from django.utils.translation import gettext_lazy as _
from awx.main.utils.named_url_graph import _customize_graph, generate_graph
from awx.conf import register, fields
class MainConfig(AppConfig):
name = 'awx.main'
verbose_name = _('Main')
def load_named_url_feature(self):
models = [m for m in self.get_models() if hasattr(m, 'get_absolute_url')]
generate_graph(models)
_customize_graph()
register(
'NAMED_URL_FORMATS',
field_class=fields.DictField,
read_only=True,
label=_('Formats of all available named urls'),
help_text=_('Read-only list of key-value pairs that shows the standard format of all available named URLs.'),
category=_('Named URL'),
category_slug='named-url',
)
register(
'NAMED_URL_GRAPH_NODES',
field_class=fields.DictField,
read_only=True,
label=_('List of all named url graph nodes.'),
help_text=_(
'Read-only list of key-value pairs that exposes named URL graph topology.'
' Use this list to programmatically generate named URLs for resources'
),
category=_('Named URL'),
category_slug='named-url',
)
def ready(self):
super().ready()
self.load_named_url_feature()
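
A sketch of consuming the read-only settings registered above, assuming the standard settings endpoint derived from `category_slug='named-url'` (host and token are placeholders):

```python
import requests

resp = requests.get(
    "https://awx.example.org/api/v2/settings/named-url/",
    headers={"Authorization": "Bearer <token>"},
)
print(resp.json()["NAMED_URL_FORMATS"])  # e.g. {"users": "<username>", ...}
```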

View File

@@ -2,6 +2,7 @@
import logging
# Django
from django.core.checks import Error
from django.utils.translation import gettext_lazy as _
# Django REST Framework
@@ -92,6 +93,7 @@ register(
),
category=_('System'),
category_slug='system',
required=False,
)
register(
@@ -774,6 +776,7 @@ register(
allow_null=True,
category=_('System'),
category_slug='system',
required=False,
)
register(
'AUTOMATION_ANALYTICS_LAST_ENTRIES',
@@ -815,6 +818,7 @@ register(
help_text=_('Max jobs to allow bulk jobs to launch'),
category=_('Bulk Actions'),
category_slug='bulk',
hidden=True,
)
register(
@@ -825,6 +829,7 @@ register(
help_text=_('Max number of hosts to allow to be created in a single bulk action'),
category=_('Bulk Actions'),
category_slug='bulk',
hidden=True,
)
register(
@@ -835,6 +840,7 @@ register(
help_text=_('Max number of hosts to allow to be deleted in a single bulk action'),
category=_('Bulk Actions'),
category_slug='bulk',
hidden=True,
)
register(
@@ -845,6 +851,7 @@ register(
help_text=_('Enable preview of new user interface.'),
category=_('System'),
category_slug='system',
hidden=True,
)
register(
@@ -948,3 +955,27 @@ def logging_validate(serializer, attrs):
register_validate('logging', logging_validate)
def csrf_trusted_origins_validate(serializer, attrs):
if not serializer.instance or not hasattr(serializer.instance, 'CSRF_TRUSTED_ORIGINS'):
return attrs
if 'CSRF_TRUSTED_ORIGINS' not in attrs:
return attrs
errors = []
for origin in attrs['CSRF_TRUSTED_ORIGINS']:
if "://" not in origin:
errors.append(
Error(
"As of Django 4.0, the values in the CSRF_TRUSTED_ORIGINS "
"setting must start with a scheme (usually http:// or "
"https://) but found %s. See the release notes for details." % origin,
)
)
if errors:
error_messages = [error.msg for error in errors]
raise serializers.ValidationError(_('\n'.join(error_messages)))
return attrs
register_validate('system', csrf_trusted_origins_validate)
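
For illustration, assumed values showing what the validator above accepts and rejects:

```python
# Accepted: every origin carries a scheme.
CSRF_TRUSTED_ORIGINS = ["https://awx.example.org"]

# Rejected: no "://" present, so csrf_trusted_origins_validate raises
# a ValidationError listing the offending origin.
CSRF_TRUSTED_ORIGINS = ["awx.example.org"]
```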

View File

@@ -14,7 +14,7 @@ __all__ = [
'STANDARD_INVENTORY_UPDATE_ENV',
]
CLOUD_PROVIDERS = ('azure_rm', 'ec2', 'gce', 'vmware', 'openstack', 'rhv', 'satellite6', 'controller', 'insights')
CLOUD_PROVIDERS = ('azure_rm', 'ec2', 'gce', 'vmware', 'openstack', 'rhv', 'satellite6', 'controller', 'insights', 'terraform')
PRIVILEGE_ESCALATION_METHODS = [
('sudo', _('Sudo')),
('su', _('Su')),
@@ -114,3 +114,28 @@ SUBSCRIPTION_USAGE_MODEL_UNIQUE_HOSTS = 'unique_managed_hosts'
# Shared prefetch to use for creating a queryset for the purpose of writing or saving facts
HOST_FACTS_FIELDS = ('name', 'ansible_facts', 'ansible_facts_modified', 'modified', 'inventory_id')
# Data for RBAC compatibility layer
role_name_to_perm_mapping = {
'adhoc_role': ['adhoc_'],
'approval_role': ['approve_'],
'auditor_role': ['audit_'],
'admin_role': ['change_', 'add_', 'delete_'],
'execute_role': ['execute_'],
'read_role': ['view_'],
'update_role': ['update_'],
'member_role': ['member_'],
'use_role': ['use_'],
}
org_role_to_permission = {
'notification_admin_role': 'add_notificationtemplate',
'project_admin_role': 'add_project',
'execute_role': 'execute_jobtemplate',
'inventory_admin_role': 'add_inventory',
'credential_admin_role': 'add_credential',
'workflow_admin_role': 'add_workflowjobtemplate',
'job_template_admin_role': 'change_jobtemplate', # TODO: this doesn't really work, solution not clear
'execution_environment_admin_role': 'add_executionenvironment',
'auditor_role': 'view_project', # TODO: also doesn't really work
}
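
A small sketch (the helper name and model name are hypothetical) of how the compatibility mapping can translate an old role name into Django permission codenames:

```python
def perms_for_role(role_name: str, model_name: str) -> list[str]:
    """Expand an old-style role name into permission codenames for a model."""
    return [f"{prefix}{model_name}" for prefix in role_name_to_perm_mapping.get(role_name, [])]

perms_for_role('admin_role', 'jobtemplate')
# -> ['change_jobtemplate', 'add_jobtemplate', 'delete_jobtemplate']
```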

View File

@@ -106,7 +106,7 @@ class RelayConsumer(AsyncJsonWebsocketConsumer):
if group == "metrics":
message = json.loads(message['text'])
conn = redis.Redis.from_url(settings.BROKER_URL)
conn.set(settings.SUBSYSTEM_METRICS_REDIS_KEY_PREFIX + "_instance_" + message['instance'], message['metrics'])
conn.set(settings.SUBSYSTEM_METRICS_REDIS_KEY_PREFIX + "-" + message['metrics_namespace'] + "_instance_" + message['instance'], message['metrics'])
else:
await self.channel_layer.group_send(group, message)

View File

@@ -1,9 +1,10 @@
from azure.keyvault.secrets import SecretClient
from azure.identity import ClientSecretCredential
from msrestazure import azure_cloud
from .plugin import CredentialPlugin
from django.utils.translation import gettext_lazy as _
from azure.keyvault import KeyVaultClient, KeyVaultAuthentication
from azure.common.credentials import ServicePrincipalCredentials
from msrestazure import azure_cloud
# https://github.com/Azure/msrestazure-for-python/blob/master/msrestazure/azure_cloud.py
@@ -54,22 +55,9 @@ azure_keyvault_inputs = {
def azure_keyvault_backend(**kwargs):
url = kwargs['url']
[cloud] = [c for c in clouds if c.name == kwargs.get('cloud_name', default_cloud.name)]
def auth_callback(server, resource, scope):
credentials = ServicePrincipalCredentials(
url=url,
client_id=kwargs['client'],
secret=kwargs['secret'],
tenant=kwargs['tenant'],
resource=f"https://{cloud.suffixes.keyvault_dns.split('.', 1).pop()}",
)
token = credentials.token
return token['token_type'], token['access_token']
kv = KeyVaultClient(KeyVaultAuthentication(auth_callback))
return kv.get_secret(url, kwargs['secret_field'], kwargs.get('secret_version', '')).value
csc = ClientSecretCredential(tenant_id=kwargs['tenant'], client_id=kwargs['client'], client_secret=kwargs['secret'])
kv = SecretClient(credential=csc, vault_url=kwargs['url'])
return kv.get_secret(name=kwargs['secret_field'], version=kwargs.get('secret_version', '')).value
azure_keyvault_plugin = CredentialPlugin('Microsoft Azure Key Vault', inputs=azure_keyvault_inputs, backend=azure_keyvault_backend)
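
A usage sketch of the rewritten backend (credential values are fabricated placeholders; the input names match `azure_keyvault_inputs` earlier in this file):

```python
# Resolves a secret through ClientSecretCredential + SecretClient, as the
# new code path above does; every value here is a placeholder.
secret_value = azure_keyvault_backend(
    url="https://myvault.vault.azure.net/",
    client="<client-id>",
    secret="<client-secret>",
    tenant="<tenant-id>",
    secret_field="db-password",
)
```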

View File

@@ -1,6 +1,7 @@
import os
import psycopg
import select
from copy import deepcopy
from contextlib import contextmanager
@@ -94,8 +95,8 @@ class PubSub(object):
def create_listener_connection():
conf = settings.DATABASES['default'].copy()
conf['OPTIONS'] = conf.get('OPTIONS', {}).copy()
conf = deepcopy(settings.DATABASES['default'])
conf['OPTIONS'] = deepcopy(conf.get('OPTIONS', {}))
# Modify the application name to distinguish from other connections the process might use
conf['OPTIONS']['application_name'] = get_application_name(settings.CLUSTER_HOST_ID, function='listener')
@@ -105,7 +106,11 @@ def create_listener_connection():
for k, v in settings.LISTENER_DATABASES.get('default', {}).get('OPTIONS', {}).items():
conf['OPTIONS'][k] = v
connection_data = f"dbname={conf['NAME']} host={conf['HOST']} user={conf['USER']} password={conf['PASSWORD']} port={conf['PORT']}"
# Allow password-less authentication
if 'PASSWORD' in conf:
conf['OPTIONS']['password'] = conf.pop('PASSWORD')
connection_data = f"dbname={conf['NAME']} host={conf['HOST']} user={conf['USER']} port={conf['PORT']}"
return psycopg.connect(connection_data, autocommit=True, **conf['OPTIONS'])
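
Why `deepcopy` matters here, in a standalone illustration: a shallow `.copy()` still shares the nested `OPTIONS` dict with `settings.DATABASES`, so mutating it would leak back into settings.

```python
from copy import deepcopy

conf = {'NAME': 'awx', 'OPTIONS': {'sslmode': 'prefer'}}

shallow = conf.copy()
assert shallow['OPTIONS'] is conf['OPTIONS']  # nested dict is shared

deep = deepcopy(conf)
assert deep['OPTIONS'] is not conf['OPTIONS']  # fully independent
deep['OPTIONS']['application_name'] = 'awx-listener'  # safe: original untouched
assert 'application_name' not in conf['OPTIONS']
```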

View File

@@ -162,13 +162,13 @@ class AWXConsumerRedis(AWXConsumerBase):
class AWXConsumerPG(AWXConsumerBase):
def __init__(self, *args, schedule=None, **kwargs):
super().__init__(*args, **kwargs)
self.pg_max_wait = settings.DISPATCHER_DB_DOWNTIME_TOLERANCE
self.pg_max_wait = getattr(settings, 'DISPATCHER_DB_DOWNTOWN_TOLLERANCE', settings.DISPATCHER_DB_DOWNTIME_TOLERANCE)  # the misspelled name is honored for backwards compatibility
# if no successful loops have ran since startup, then we should fail right away
self.pg_is_down = True # set so that we fail if we get database errors on startup
init_time = time.time()
self.pg_down_time = init_time - self.pg_max_wait # allow no grace period
self.last_cleanup = init_time
self.subsystem_metrics = s_metrics.Metrics(auto_pipe_execute=False)
self.subsystem_metrics = s_metrics.DispatcherMetrics(auto_pipe_execute=False)
self.last_metrics_gather = init_time
self.listen_cumulative_time = 0.0
if schedule:
@@ -259,6 +259,12 @@ class AWXConsumerPG(AWXConsumerBase):
current_downtime = time.time() - self.pg_down_time
if current_downtime > self.pg_max_wait:
logger.exception(f"Postgres event consumer has not recovered in {current_downtime} s, exiting")
# Sending QUIT to multiprocess queue to signal workers to exit
for worker in self.pool.workers:
try:
worker.quit()
except Exception:
logger.exception(f"Error sending QUIT to worker {worker}")
raise
# Wait for a second before next attempt, but still listen for any shutdown signals
for i in range(10):
@@ -270,6 +276,12 @@ class AWXConsumerPG(AWXConsumerBase):
except Exception:
# Log unanticipated exception in addition to writing to stderr to get timestamps and other metadata
logger.exception('Encountered unhandled error in dispatcher main loop')
# Sending QUIT to multiprocess queue to signal workers to exit
for worker in self.pool.workers:
try:
worker.quit()
except Exception:
logger.exception(f"Error sending QUIT to worker {worker}")
raise

View File

@@ -72,7 +72,7 @@ class CallbackBrokerWorker(BaseWorker):
def __init__(self):
self.buff = {}
self.redis = redis.Redis.from_url(settings.BROKER_URL)
self.subsystem_metrics = s_metrics.Metrics(auto_pipe_execute=False)
self.subsystem_metrics = s_metrics.CallbackReceiverMetrics(auto_pipe_execute=False)
self.queue_pop = 0
self.queue_name = settings.CALLBACK_QUEUE
self.prof = AWXProfiler("CallbackBrokerWorker")

View File

@@ -5,6 +5,7 @@
import copy
import json
import re
import sys
import urllib.parse
from jinja2 import sandbox, StrictUndefined
@@ -406,11 +407,13 @@ class SmartFilterField(models.TextField):
# https://docs.python.org/2/library/stdtypes.html#truth-value-testing
if not value:
return None
value = urllib.parse.unquote(value)
try:
SmartFilter().query_from_string(value)
except RuntimeError as e:
raise models.base.ValidationError(e)
# avoid doing too much during migrations
if 'migrate' not in sys.argv:
value = urllib.parse.unquote(value)
try:
SmartFilter().query_from_string(value)
except RuntimeError as e:
raise models.base.ValidationError(e)
return super(SmartFilterField, self).get_prep_value(value)

View File

@@ -0,0 +1,53 @@
# Copyright (c) 2015 Ansible, Inc.
# All Rights Reserved
from django.core.management.base import BaseCommand
from awx.main.models import Instance, ReceptorAddress
def add_address(**kwargs):
try:
instance = Instance.objects.get(hostname=kwargs.pop('instance'))
kwargs['instance'] = instance
if kwargs.get('canonical') and instance.receptor_addresses.filter(canonical=True).exclude(address=kwargs['address']).exists():
print(f"Instance {instance.hostname} already has a canonical address, skipping")
return False
# if ReceptorAddress already exists with address, just update
# otherwise, create new ReceptorAddress
addr, _ = ReceptorAddress.objects.update_or_create(address=kwargs.pop('address'), defaults=kwargs)
print(f"Successfully added receptor address {addr.get_full_address()}")
return True
except Exception as e:
print(f"Error adding receptor address: {e}")
return False
class Command(BaseCommand):
"""
Internal controller command.
Register receptor address to an already-registered instance.
"""
help = "Add receptor address to an instance."
def add_arguments(self, parser):
parser.add_argument('--instance', dest='instance', required=True, type=str, help="Instance hostname this address is added to")
parser.add_argument('--address', dest='address', required=True, type=str, help="Receptor address")
parser.add_argument('--port', dest='port', type=int, help="Receptor listener port")
parser.add_argument('--websocket_path', dest='websocket_path', type=str, default="", help="Path for websockets")
parser.add_argument('--is_internal', action='store_true', help="If true, address only resolvable within the Kubernetes cluster")
parser.add_argument('--protocol', type=str, default='tcp', choices=['tcp', 'ws', 'wss'], help="Protocol to use for the Receptor listener")
parser.add_argument('--canonical', action='store_true', help="If true, address is the canonical address for the instance")
parser.add_argument('--peers_from_control_nodes', action='store_true', help="If true, control nodes will peer to this address")
def handle(self, **options):
address_options = {
k: options[k]
for k in ('instance', 'address', 'port', 'websocket_path', 'is_internal', 'protocol', 'peers_from_control_nodes', 'canonical')
if options[k]
}
changed = add_address(**address_options)
if changed:
print("(changed: True)")

View File

@@ -0,0 +1,195 @@
import json
import os
import sys
import re
from typing import Any
from django.core.management.base import BaseCommand
from django.conf import settings
from awx.conf import settings_registry
class Command(BaseCommand):
help = 'Dump the current auth configuration in django_ansible_base.authenticator format, currently supports LDAP and SAML'
DAB_SAML_AUTHENTICATOR_KEYS = {
"SP_ENTITY_ID": True,
"SP_PUBLIC_CERT": True,
"SP_PRIVATE_KEY": True,
"ORG_INFO": True,
"TECHNICAL_CONTACT": True,
"SUPPORT_CONTACT": True,
"SP_EXTRA": False,
"SECURITY_CONFIG": False,
"EXTRA_DATA": False,
"ENABLED_IDPS": True,
"CALLBACK_URL": False,
}
DAB_LDAP_AUTHENTICATOR_KEYS = {
"SERVER_URI": True,
"BIND_DN": False,
"BIND_PASSWORD": False,
"CONNECTION_OPTIONS": False,
"GROUP_TYPE": True,
"GROUP_TYPE_PARAMS": True,
"GROUP_SEARCH": False,
"START_TLS": False,
"USER_DN_TEMPLATE": True,
"USER_ATTR_MAP": True,
"USER_SEARCH": False,
}
def is_enabled(self, settings, keys):
missing_fields = []
for key, required in keys.items():
if required and not settings.get(key):
missing_fields.append(key)
if missing_fields:
return False, missing_fields
return True, None
def get_awx_ldap_settings(self) -> dict[str, dict[str, Any]]:
awx_ldap_settings = {}
for awx_ldap_setting in settings_registry.get_registered_settings(category_slug='ldap'):
key = awx_ldap_setting.removeprefix("AUTH_LDAP_")
value = getattr(settings, awx_ldap_setting, None)
awx_ldap_settings[key] = value
grouped_settings = {}
for key, value in awx_ldap_settings.items():
match = re.search(r'(\d+)', key)
index = int(match.group()) if match else 0
new_key = re.sub(r'\d+_', '', key)
if index not in grouped_settings:
grouped_settings[index] = {}
grouped_settings[index][new_key] = value
if new_key == "GROUP_TYPE" and value:
grouped_settings[index][new_key] = type(value).__name__
if new_key == "SERVER_URI" and value:
value = value.split(", ")
grouped_settings[index][new_key] = value
if type(value).__name__ == "LDAPSearch":
data = []
data.append(value.base_dn)
data.append("SCOPE_SUBTREE")
data.append(value.filterstr)
grouped_settings[index][new_key] = data
return grouped_settings
def get_awx_saml_settings(self) -> dict[str, Any]:
awx_saml_settings = {}
for awx_saml_setting in settings_registry.get_registered_settings(category_slug='saml'):
awx_saml_settings[awx_saml_setting.removeprefix("SOCIAL_AUTH_SAML_")] = getattr(settings, awx_saml_setting, None)
return awx_saml_settings
def format_config_data(self, enabled, awx_settings, type, keys, name):
config = {
"type": f"ansible_base.authentication.authenticator_plugins.{type}",
"name": name,
"enabled": enabled,
"create_objects": True,
"users_unique": False,
"remove_users": True,
"configuration": {},
}
for k in keys:
v = awx_settings.get(k)
config["configuration"].update({k: v})
if type == "saml":
idp_to_key_mapping = {
"url": "IDP_URL",
"x509cert": "IDP_X509_CERT",
"entity_id": "IDP_ENTITY_ID",
"attr_email": "IDP_ATTR_EMAIL",
"attr_groups": "IDP_GROUPS",
"attr_username": "IDP_ATTR_USERNAME",
"attr_last_name": "IDP_ATTR_LAST_NAME",
"attr_first_name": "IDP_ATTR_FIRST_NAME",
"attr_user_permanent_id": "IDP_ATTR_USER_PERMANENT_ID",
}
for idp_name in awx_settings.get("ENABLED_IDPS", {}):
for key in idp_to_key_mapping:
value = awx_settings["ENABLED_IDPS"][idp_name].get(key)
if value is not None:
config["name"] = idp_name
config["configuration"].update({idp_to_key_mapping[key]: value})
return config
def add_arguments(self, parser):
parser.add_argument(
"output_file",
nargs="?",
type=str,
default=None,
help="Output JSON file path",
)
def handle(self, *args, **options):
try:
data = []
# dump SAML settings
awx_saml_settings = self.get_awx_saml_settings()
awx_saml_enabled, saml_missing_fields = self.is_enabled(awx_saml_settings, self.DAB_SAML_AUTHENTICATOR_KEYS)
if awx_saml_enabled:
awx_saml_name = awx_saml_settings["ENABLED_IDPS"]
data.append(
self.format_config_data(
awx_saml_enabled,
awx_saml_settings,
"saml",
self.DAB_SAML_AUTHENTICATOR_KEYS,
awx_saml_name,
)
)
else:
data.append({"SAML_missing_fields": saml_missing_fields})
# dump LDAP settings
awx_ldap_group_settings = self.get_awx_ldap_settings()
for awx_ldap_name, awx_ldap_settings in awx_ldap_group_settings.items():
awx_ldap_enabled, ldap_missing_fields = self.is_enabled(awx_ldap_settings, self.DAB_LDAP_AUTHENTICATOR_KEYS)
if awx_ldap_enabled:
data.append(
self.format_config_data(
awx_ldap_enabled,
awx_ldap_settings,
"ldap",
self.DAB_LDAP_AUTHENTICATOR_KEYS,
f"LDAP_{awx_ldap_name}",
)
)
else:
data.append({f"LDAP_{awx_ldap_name}_missing_fields": ldap_missing_fields})
# write to file if requested
if options["output_file"]:
# Define the path for the output JSON file
output_file = options["output_file"]
# Ensure the directory exists
os.makedirs(os.path.dirname(output_file), exist_ok=True)
# Write data to the JSON file
with open(output_file, "w") as f:
json.dump(data, f, indent=4)
self.stdout.write(self.style.SUCCESS(f"Auth config data dumped to {output_file}"))
else:
self.stdout.write(json.dumps(data, indent=4))
except Exception as e:
self.stdout.write(self.style.ERROR(f"An error occurred: {str(e)}"))
sys.exit(1)
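
For orientation, an abbreviated, assumed example of what the command emits (shown here as the parsed Python equivalent of the printed JSON; real output depends on the configured authenticators):

```python
# Parsed equivalent of the JSON the command prints; values are illustrative.
[
    {
        "type": "ansible_base.authentication.authenticator_plugins.saml",
        "name": "Keycloak",
        "enabled": True,
        "create_objects": True,
        "users_unique": False,
        "remove_users": True,
        "configuration": {"SP_ENTITY_ID": "...", "IDP_URL": "..."},
    },
    {"LDAP_0_missing_fields": ["SERVER_URI", "USER_DN_TEMPLATE"]},
]
```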

View File

@@ -55,7 +55,7 @@ class Command(BaseCommand):
capacity = f' capacity={x.capacity}' if x.node_type != 'hop' else ''
version = f" version={x.version or '?'}" if x.node_type != 'hop' else ''
heartbeat = f' heartbeat="{x.last_seen:%Y-%m-%d %H:%M:%S}"' if x.capacity or x.node_type == 'hop' else ''
heartbeat = f' heartbeat="{x.last_seen:%Y-%m-%d %H:%M:%S}"' if x.last_seen else ''
print(f'\t{color}{x.hostname}{capacity} node_type={x.node_type}{version}{heartbeat}{end_color}')
print()

View File

@@ -25,20 +25,17 @@ class Command(BaseCommand):
def add_arguments(self, parser):
parser.add_argument('--hostname', dest='hostname', type=str, help="Hostname used during provisioning")
parser.add_argument('--listener_port', dest='listener_port', type=int, help="Receptor listener port")
parser.add_argument('--node_type', type=str, default='hybrid', choices=['control', 'execution', 'hop', 'hybrid'], help="Instance Node type")
parser.add_argument('--uuid', type=str, help="Instance UUID")
def _register_hostname(self, hostname, node_type, uuid, listener_port):
def _register_hostname(self, hostname, node_type, uuid):
if not hostname:
if not settings.AWX_AUTO_DEPROVISION_INSTANCES:
raise CommandError('Registering with values from settings only intended for use in K8s installs')
from awx.main.management.commands.register_queue import RegisterQueue
(changed, instance) = Instance.objects.register(
ip_address=os.environ.get('MY_POD_IP'), listener_port=listener_port, node_type='control', node_uuid=settings.SYSTEM_UUID
)
(changed, instance) = Instance.objects.register(ip_address=os.environ.get('MY_POD_IP'), node_type='control', node_uuid=settings.SYSTEM_UUID)
RegisterQueue(settings.DEFAULT_CONTROL_PLANE_QUEUE_NAME, 100, 0, [], is_container_group=False).register()
RegisterQueue(
settings.DEFAULT_EXECUTION_QUEUE_NAME,
@@ -51,16 +48,17 @@ class Command(BaseCommand):
max_concurrent_jobs=settings.DEFAULT_EXECUTION_QUEUE_MAX_CONCURRENT_JOBS,
).register()
else:
(changed, instance) = Instance.objects.register(hostname=hostname, node_type=node_type, node_uuid=uuid, listener_port=listener_port)
(changed, instance) = Instance.objects.register(hostname=hostname, node_type=node_type, node_uuid=uuid)
if changed:
print("Successfully registered instance {}".format(hostname))
else:
print("Instance already registered {}".format(instance.hostname))
self.changed = changed
@transaction.atomic
def handle(self, **options):
self.changed = False
self._register_hostname(options.get('hostname'), options.get('node_type'), options.get('uuid'), options.get('listener_port'))
self._register_hostname(options.get('hostname'), options.get('node_type'), options.get('uuid'))
if self.changed:
print("(changed: True)")

View File

@@ -1,9 +1,7 @@
import warnings
from django.core.management.base import BaseCommand, CommandError
from django.db import transaction
from awx.main.models import Instance, InstanceLink
from awx.main.models import Instance, InstanceLink, ReceptorAddress
class Command(BaseCommand):
@@ -28,7 +26,9 @@ class Command(BaseCommand):
def handle(self, **options):
# provides a mapping of hostname to Instance objects
nodes = Instance.objects.in_bulk(field_name='hostname')
nodes = Instance.objects.all().in_bulk(field_name='hostname')
# provides a mapping of address to ReceptorAddress objects
addresses = ReceptorAddress.objects.all().in_bulk(field_name='address')
if options['source'] not in nodes:
raise CommandError(f"Host {options['source']} is not a registered instance.")
@@ -39,6 +39,14 @@ class Command(BaseCommand):
if options['exact'] is not None and options['disconnect']:
raise CommandError("The option --disconnect may not be used with --exact.")
# make sure each target has a receptor address
peers = options['peers'] or []
disconnect = options['disconnect'] or []
exact = options['exact'] or []
for peer in peers + disconnect + exact:
if peer not in addresses:
raise CommandError(f"Peer {peer} does not have a receptor address.")
# No 1-cycles
for collection in ('peers', 'disconnect', 'exact'):
if options[collection] is not None and options['source'] in options[collection]:
@@ -47,9 +55,12 @@ class Command(BaseCommand):
# No 2-cycles
if options['peers'] or options['exact'] is not None:
peers = set(options['peers'] or options['exact'])
incoming = set(InstanceLink.objects.filter(target=nodes[options['source']]).values_list('source__hostname', flat=True))
if options['source'] in addresses:
incoming = set(InstanceLink.objects.filter(target=addresses[options['source']]).values_list('source__hostname', flat=True))
else:
incoming = set()
if peers & incoming:
warnings.warn(f"Source node {options['source']} should not link to nodes already peering to it: {peers & incoming}.")
raise CommandError(f"Source node {options['source']} should not link to nodes already peering to it: {peers & incoming}.")
if options['peers']:
missing_peers = set(options['peers']) - set(nodes)
@@ -60,7 +71,7 @@ class Command(BaseCommand):
results = 0
for target in options['peers']:
_, created = InstanceLink.objects.update_or_create(
source=nodes[options['source']], target=nodes[target], defaults={'link_state': InstanceLink.States.ESTABLISHED}
source=nodes[options['source']], target=addresses[target], defaults={'link_state': InstanceLink.States.ESTABLISHED}
)
if created:
results += 1
@@ -70,9 +81,9 @@ class Command(BaseCommand):
if options['disconnect']:
results = 0
for target in options['disconnect']:
if target not in nodes: # Be permissive, the node might have already been de-registered.
if target not in addresses: # Be permissive, the node might have already been de-registered.
continue
n, _ = InstanceLink.objects.filter(source=nodes[options['source']], target=nodes[target]).delete()
n, _ = InstanceLink.objects.filter(source=nodes[options['source']], target=addresses[target]).delete()
results += n
print(f"{results} peer links removed from the database.")
@@ -81,11 +92,11 @@ class Command(BaseCommand):
additions = 0
with transaction.atomic():
peers = set(options['exact'])
links = set(InstanceLink.objects.filter(source=nodes[options['source']]).values_list('target__hostname', flat=True))
removals, _ = InstanceLink.objects.filter(source=nodes[options['source']], target__hostname__in=links - peers).delete()
links = set(InstanceLink.objects.filter(source=nodes[options['source']]).values_list('target__address', flat=True))
removals, _ = InstanceLink.objects.filter(source=nodes[options['source']], target__instance__hostname__in=links - peers).delete()
for target in peers - links:
_, created = InstanceLink.objects.update_or_create(
source=nodes[options['source']], target=nodes[target], defaults={'link_state': InstanceLink.States.ESTABLISHED}
source=nodes[options['source']], target=addresses[target], defaults={'link_state': InstanceLink.States.ESTABLISHED}
)
if created:
additions += 1

View File

@@ -0,0 +1,26 @@
# Copyright (c) 2015 Ansible, Inc.
# All Rights Reserved
from django.core.management.base import BaseCommand
from awx.main.models import ReceptorAddress
class Command(BaseCommand):
"""
Internal controller command.
Delete a receptor address.
"""
help = "Add receptor address to an instance."
def add_arguments(self, parser):
parser.add_argument('--address', dest='address', type=str, help="Receptor address to remove")
def handle(self, **options):
deleted = ReceptorAddress.objects.filter(address=options['address']).delete()
if deleted[0]:
print(f"Successfully removed {options['address']}")
print("(changed: True)")
else:
print(f"Did not remove {options['address']}, not found")

View File

@@ -3,6 +3,7 @@
from django.conf import settings
from django.core.management.base import BaseCommand
from awx.main.analytics.subsystem_metrics import CallbackReceiverMetricsServer
from awx.main.dispatch.control import Control
from awx.main.dispatch.worker import AWXConsumerRedis, CallbackBrokerWorker
@@ -25,6 +26,9 @@ class Command(BaseCommand):
print(Control('callback_receiver').status())
return
consumer = None
CallbackReceiverMetricsServer().start()
try:
consumer = AWXConsumerRedis(
'callback_receiver',

View File

@@ -10,6 +10,7 @@ from awx.main.dispatch import get_task_queuename
from awx.main.dispatch.control import Control
from awx.main.dispatch.pool import AutoscalePool
from awx.main.dispatch.worker import AWXConsumerPG, TaskWorker
from awx.main.analytics.subsystem_metrics import DispatcherMetricsServer
logger = logging.getLogger('awx.main.dispatch')
@@ -62,6 +63,8 @@ class Command(BaseCommand):
consumer = None
DispatcherMetricsServer().start()
try:
queues = ['tower_broadcast_all', 'tower_settings_change', get_task_queuename()]
consumer = AWXConsumerPG('dispatcher', TaskWorker(), queues, AutoscalePool(min_workers=4), schedule=settings.CELERYBEAT_SCHEDULE)

View File

@@ -16,6 +16,7 @@ from awx.main.analytics.broadcast_websocket import (
RelayWebsocketStatsManager,
safe_name,
)
from awx.main.analytics.subsystem_metrics import WebsocketsMetricsServer
from awx.main.wsrelay import WebSocketRelayManager
@@ -100,8 +101,9 @@ class Command(BaseCommand):
migrating = bool(executor.migration_plan(executor.loader.graph.leaf_nodes()))
connection.close() # Because of async nature, main loop will use new connection, so close this
except Exception as exc:
logger.warning(f'Error on startup of run_wsrelay (error: {exc}), retry in 10s...')
time.sleep(10)
time.sleep(10) # Prevent supervisor from restarting the service too quickly and the service from entering the FATAL state
# sleep before logging because logging relies on settings, which require a database connection...
logger.warning(f'Error on startup of run_wsrelay (error: {exc}), slept for 10s...')
return
# In containerized deployments, migrations happen in the task container,
@@ -120,13 +122,14 @@ class Command(BaseCommand):
return
try:
my_hostname = Instance.objects.my_hostname()
my_hostname = Instance.objects.my_hostname() # This relies on settings.CLUSTER_HOST_ID, which requires a database connection
logger.info('Active instance with hostname {} is registered.'.format(my_hostname))
except RuntimeError as e:
# the CLUSTER_HOST_ID in the task, and web instance must match and
# ensure network connectivity between the task and web instance
logger.info('Unable to return currently active instance: {}, retry in 5s...'.format(e))
time.sleep(5)
time.sleep(10) # Prevent supervisor from restarting the service too quickly and the service from entering the FATAL state
# sleep before logging because logging relies on settings, which require a database connection...
logger.warning(f"Unable to return currently active instance: {e}, slept for 10s before returning.")
return
if options.get('status'):
@@ -163,8 +166,16 @@ class Command(BaseCommand):
return
WebsocketsMetricsServer().start()
try:
logger.info('Starting Websocket Relayer...')
websocket_relay_manager = WebSocketRelayManager()
asyncio.run(websocket_relay_manager.run())
except KeyboardInterrupt:
logger.info('Terminating Websocket Relayer')
except BaseException as e: # BaseException is used to catch all exceptions including asyncio.CancelledError
time.sleep(10) # Prevent supervisor from restarting the service too quickly and the service from entering the FATAL state
# sleep before logging because logging relies on settings, which require a database connection...
logger.warning(f"Encountered error while running Websocket Relayer: {e}, slept for 10s...")
return

View File

@@ -115,7 +115,14 @@ class InstanceManager(models.Manager):
return node[0]
raise RuntimeError("No instance found with the current cluster host id")
def register(self, node_uuid=None, hostname=None, ip_address="", listener_port=None, node_type='hybrid', defaults=None):
def register(
self,
node_uuid=None,
hostname=None,
ip_address="",
node_type='hybrid',
defaults=None,
):
if not hostname:
hostname = settings.CLUSTER_HOST_ID
@@ -161,9 +168,6 @@ class InstanceManager(models.Manager):
if instance.node_type != node_type:
instance.node_type = node_type
update_fields.append('node_type')
if instance.listener_port != listener_port:
instance.listener_port = listener_port
update_fields.append('listener_port')
if update_fields:
instance.save(update_fields=update_fields)
return (True, instance)
@@ -174,11 +178,13 @@ class InstanceManager(models.Manager):
create_defaults = {
'node_state': Instance.States.INSTALLED,
'capacity': 0,
'managed': True,
}
if defaults is not None:
create_defaults.update(defaults)
uuid_option = {'uuid': node_uuid if node_uuid is not None else uuid.uuid4()}
if node_type == 'execution' and 'version' not in create_defaults:
create_defaults['version'] = RECEPTOR_PENDING
instance = self.create(hostname=hostname, ip_address=ip_address, listener_port=listener_port, node_type=node_type, **create_defaults, **uuid_option)
instance = self.create(hostname=hostname, ip_address=ip_address, node_type=node_type, **create_defaults, **uuid_option)
return (True, instance)

View File

@@ -1,25 +1,25 @@
# Copyright (c) 2015 Ansible, Inc.
# All Rights Reserved.
import functools
import logging
import threading
import time
import urllib.parse
from pathlib import Path
from django.conf import settings
from django.contrib.auth import logout
from django.contrib.auth.models import User
from django.db.migrations.executor import MigrationExecutor
from django.db.migrations.recorder import MigrationRecorder
from django.db import connection
from django.shortcuts import redirect
from django.apps import apps
from django.utils.deprecation import MiddlewareMixin
from django.utils.translation import gettext_lazy as _
from django.urls import reverse, resolve
from awx.main.utils.named_url_graph import generate_graph, GraphNode
from awx.conf import fields, register
from awx.main import migrations
from awx.main.utils.profiling import AWXProfiler
from awx.main.utils.common import memoize
from awx.urls import get_urlpatterns
logger = logging.getLogger('awx.main.middleware')
@@ -97,49 +97,7 @@ class DisableLocalAuthMiddleware(MiddlewareMixin):
logout(request)
def _customize_graph():
from awx.main.models import Instance, Schedule, UnifiedJobTemplate
for model in [Schedule, UnifiedJobTemplate]:
if model in settings.NAMED_URL_GRAPH:
settings.NAMED_URL_GRAPH[model].remove_bindings()
settings.NAMED_URL_GRAPH.pop(model)
if User not in settings.NAMED_URL_GRAPH:
settings.NAMED_URL_GRAPH[User] = GraphNode(User, ['username'], [])
settings.NAMED_URL_GRAPH[User].add_bindings()
if Instance not in settings.NAMED_URL_GRAPH:
settings.NAMED_URL_GRAPH[Instance] = GraphNode(Instance, ['hostname'], [])
settings.NAMED_URL_GRAPH[Instance].add_bindings()
class URLModificationMiddleware(MiddlewareMixin):
def __init__(self, get_response):
models = [m for m in apps.get_app_config('main').get_models() if hasattr(m, 'get_absolute_url')]
generate_graph(models)
_customize_graph()
register(
'NAMED_URL_FORMATS',
field_class=fields.DictField,
read_only=True,
label=_('Formats of all available named urls'),
help_text=_('Read-only list of key-value pairs that shows the standard format of all available named URLs.'),
category=_('Named URL'),
category_slug='named-url',
)
register(
'NAMED_URL_GRAPH_NODES',
field_class=fields.DictField,
read_only=True,
label=_('List of all named url graph nodes.'),
help_text=_(
'Read-only list of key-value pairs that exposes named URL graph topology.'
' Use this list to programmatically generate named URLs for resources'
),
category=_('Named URL'),
category_slug='named-url',
)
super().__init__(get_response)
@staticmethod
def _hijack_for_old_jt_name(node, kwargs, named_url):
try:
@@ -198,9 +156,46 @@ class URLModificationMiddleware(MiddlewareMixin):
request.path_info = new_path
@memoize(ttl=20)
def is_migrating():
latest_number = 0
latest_name = ''
for migration_path in Path(migrations.__path__[0]).glob('[0-9]*.py'):
try:
migration_number = int(migration_path.name.split('_', 1)[0])
except ValueError:
continue
if migration_number > latest_number:
latest_number = migration_number
latest_name = migration_path.name[: -len('.py')]
return not MigrationRecorder(connection).migration_qs.filter(app='main', name=latest_name).exists()
class MigrationRanCheckMiddleware(MiddlewareMixin):
def process_request(self, request):
executor = MigrationExecutor(connection)
plan = executor.migration_plan(executor.loader.graph.leaf_nodes())
if bool(plan) and getattr(resolve(request.path), 'url_name', '') != 'migrations_notran':
if is_migrating() and getattr(resolve(request.path), 'url_name', '') != 'migrations_notran':
return redirect(reverse("ui:migrations_notran"))
class OptionalURLPrefixPath(MiddlewareMixin):
@functools.lru_cache
def _url_optional(self, prefix):
# Relevant Django code path https://github.com/django/django/blob/stable/4.2.x/django/core/handlers/base.py#L300
#
# resolve_request(request)
# get_resolver(request.urlconf)
# _get_cached_resolver(request.urlconf) <-- cached via @functools.cache
#
# Django will attempt to cache the value(s) of request.urlconf
# Being hashable is a prerequisite for being cacheable.
# tuple() is hashable; list() is not.
# Hence the tuple(list()) wrap.
return tuple(get_urlpatterns(prefix=prefix))
def process_request(self, request):
prefix = settings.OPTIONAL_API_URLPATTERN_PREFIX
if request.path.startswith(f"/api/{prefix}"):
request.urlconf = self._url_optional(prefix)
else:
request.urlconf = 'awx.urls'
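
An illustration of the effect, assuming `OPTIONAL_API_URLPATTERN_PREFIX = 'awx'`: both spellings of a URL resolve to the same API view, because requests under the prefixed path get the cached alternate urlconf.

```python
# Both paths hit the same view once the middleware above is installed
# (prefix value assumed):
#   /api/v2/ping/      -> default 'awx.urls' urlconf
#   /api/awx/v2/ping/  -> tuple(get_urlpatterns(prefix='awx')), cached per prefix
```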

View File

@@ -0,0 +1,150 @@
# Generated by Django 4.2.6 on 2024-01-19 19:24
import django.core.validators
from django.db import migrations, models
import django.db.models.deletion
def create_receptor_addresses(apps, schema_editor):
"""
If listener_port was defined on an instance, create a receptor address for it
"""
Instance = apps.get_model('main', 'Instance')
ReceptorAddress = apps.get_model('main', 'ReceptorAddress')
for instance in Instance.objects.exclude(listener_port=None):
ReceptorAddress.objects.create(
instance=instance,
address=instance.hostname,
port=instance.listener_port,
peers_from_control_nodes=instance.peers_from_control_nodes,
protocol='tcp',
is_internal=False,
canonical=True,
)
def link_to_receptor_addresses(apps, schema_editor):
"""
Modify each InstanceLink to point to the newly created
ReceptorAddresses, using the new target field
"""
InstanceLink = apps.get_model('main', 'InstanceLink')
for link in InstanceLink.objects.all():
link.target = link.target_old.receptor_addresses.get()
link.save()
class Migration(migrations.Migration):
dependencies = [
('main', '0188_add_bitbucket_dc_webhook'),
]
operations = [
migrations.CreateModel(
name='ReceptorAddress',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('address', models.CharField(help_text='Routable address for this instance.', max_length=255)),
(
'port',
models.IntegerField(
default=27199,
help_text='Port for the address.',
validators=[django.core.validators.MinValueValidator(0), django.core.validators.MaxValueValidator(65535)],
),
),
('websocket_path', models.CharField(blank=True, default='', help_text='Websocket path.', max_length=255)),
(
'protocol',
models.CharField(
choices=[('tcp', 'TCP'), ('ws', 'WS'), ('wss', 'WSS')],
default='tcp',
help_text="Protocol to use for the Receptor listener, 'tcp', 'wss', or 'ws'.",
max_length=10,
),
),
('is_internal', models.BooleanField(default=False, help_text='If True, only routable within the Kubernetes cluster.')),
('canonical', models.BooleanField(default=False, help_text='If True, this address is the canonical address for the instance.')),
(
'peers_from_control_nodes',
models.BooleanField(default=False, help_text='If True, control plane cluster nodes should automatically peer to it.'),
),
],
),
migrations.RemoveConstraint(
model_name='instancelink',
name='source_and_target_can_not_be_equal',
),
migrations.RenameField(
model_name='instancelink',
old_name='target',
new_name='target_old',
),
migrations.AlterUniqueTogether(
name='instancelink',
unique_together=set(),
),
migrations.AddField(
model_name='instance',
name='managed',
field=models.BooleanField(default=False, editable=False, help_text='If True, this instance is managed by the control plane.'),
),
migrations.AlterField(
model_name='instancelink',
name='source',
field=models.ForeignKey(help_text='The source instance of this peer link.', on_delete=django.db.models.deletion.CASCADE, to='main.instance'),
),
migrations.AddField(
model_name='receptoraddress',
name='instance',
field=models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='receptor_addresses', to='main.instance'),
),
migrations.AddField(
model_name='activitystream',
name='receptor_address',
field=models.ManyToManyField(blank=True, to='main.receptoraddress'),
),
migrations.AddConstraint(
model_name='receptoraddress',
constraint=models.UniqueConstraint(fields=('address',), name='unique_receptor_address', violation_error_message='Receptor address must be unique.'),
),
migrations.AddField(
model_name='instancelink',
name='target',
field=models.ForeignKey(
help_text='The target receptor address of this peer link.', null=True, on_delete=django.db.models.deletion.CASCADE, to='main.receptoraddress'
),
),
migrations.RunPython(create_receptor_addresses),
migrations.RunPython(link_to_receptor_addresses),
migrations.RemoveField(
model_name='instance',
name='peers_from_control_nodes',
),
migrations.RemoveField(
model_name='instance',
name='listener_port',
),
migrations.RemoveField(
model_name='instancelink',
name='target_old',
),
migrations.AlterField(
model_name='instance',
name='peers',
field=models.ManyToManyField(related_name='peers_from', through='main.InstanceLink', to='main.receptoraddress'),
),
migrations.AlterField(
model_name='instancelink',
name='target',
field=models.ForeignKey(
help_text='The target receptor address of this peer link.', on_delete=django.db.models.deletion.CASCADE, to='main.receptoraddress'
),
),
migrations.AddConstraint(
model_name='instancelink',
constraint=models.UniqueConstraint(
fields=('source', 'target'), name='unique_source_target', violation_error_message='Field source and target must be unique together.'
),
),
]

View File

@@ -0,0 +1,58 @@
# Generated by Django 4.2.6 on 2024-02-15 20:51
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('main', '0189_inbound_hop_nodes'),
]
operations = [
migrations.AlterField(
model_name='inventorysource',
name='source',
field=models.CharField(
choices=[
('file', 'File, Directory or Script'),
('constructed', 'Template additional groups and hostvars at runtime'),
('scm', 'Sourced from a Project'),
('ec2', 'Amazon EC2'),
('gce', 'Google Compute Engine'),
('azure_rm', 'Microsoft Azure Resource Manager'),
('vmware', 'VMware vCenter'),
('satellite6', 'Red Hat Satellite 6'),
('openstack', 'OpenStack'),
('rhv', 'Red Hat Virtualization'),
('controller', 'Red Hat Ansible Automation Platform'),
('insights', 'Red Hat Insights'),
('terraform', 'Terraform State'),
],
default=None,
max_length=32,
),
),
migrations.AlterField(
model_name='inventoryupdate',
name='source',
field=models.CharField(
choices=[
('file', 'File, Directory or Script'),
('constructed', 'Template additional groups and hostvars at runtime'),
('scm', 'Sourced from a Project'),
('ec2', 'Amazon EC2'),
('gce', 'Google Compute Engine'),
('azure_rm', 'Microsoft Azure Resource Manager'),
('vmware', 'VMware vCenter'),
('satellite6', 'Red Hat Satellite 6'),
('openstack', 'OpenStack'),
('rhv', 'Red Hat Virtualization'),
('controller', 'Red Hat Ansible Automation Platform'),
('insights', 'Red Hat Insights'),
('terraform', 'Terraform State'),
],
default=None,
max_length=32,
),
),
]

View File

@@ -0,0 +1,85 @@
# Generated by Django 4.2.6 on 2023-11-13 20:10
from django.db import migrations
class Migration(migrations.Migration):
dependencies = [
('main', '0190_alter_inventorysource_source_and_more'),
('dab_rbac', '__first__'),
]
operations = [
# Add custom permissions for all special actions, like update, use, adhoc, and so on
migrations.AlterModelOptions(
name='credential',
options={'ordering': ('name',), 'permissions': [('use_credential', 'Can use credential in a job or related resource')]},
),
migrations.AlterModelOptions(
name='instancegroup',
options={'permissions': [('use_instancegroup', 'Can use instance group in a preference list of a resource')]},
),
migrations.AlterModelOptions(
name='inventory',
options={
'ordering': ('name',),
'permissions': [
('use_inventory', 'Can use inventory in a job template'),
('adhoc_inventory', 'Can run ad hoc commands'),
('update_inventory', 'Can update inventory sources in inventory'),
],
'verbose_name_plural': 'inventories',
},
),
migrations.AlterModelOptions(
name='jobtemplate',
options={'ordering': ('name',), 'permissions': [('execute_jobtemplate', 'Can run this job template')]},
),
migrations.AlterModelOptions(
name='project',
options={
'ordering': ('id',),
'permissions': [('update_project', 'Can run a project update'), ('use_project', 'Can use project in a job template')],
},
),
migrations.AlterModelOptions(
name='workflowjobtemplate',
options={
'permissions': [
('execute_workflowjobtemplate', 'Can run this workflow job template'),
('approve_workflowjobtemplate', 'Can approve steps in this workflow job template'),
]
},
),
migrations.AlterModelOptions(
name='organization',
options={
'default_permissions': ('change', 'delete', 'view'),
'ordering': ('name',),
'permissions': [
('member_organization', 'Basic participation permissions for organization'),
('audit_organization', 'Audit everything inside the organization'),
],
},
),
migrations.AlterModelOptions(
name='team',
options={'ordering': ('organization__name', 'name'), 'permissions': [('member_team', 'Inherit all roles assigned to this team')]},
),
# Remove the default 'add' permission for a few models
migrations.AlterModelOptions(
name='jobtemplate',
options={
'default_permissions': ('change', 'delete', 'view'),
'ordering': ('name',),
'permissions': [('execute_jobtemplate', 'Can run this job template')],
},
),
migrations.AlterModelOptions(
name='instancegroup',
options={
'default_permissions': ('change', 'delete', 'view'),
'permissions': [('use_instancegroup', 'Can use instance group in a preference list of a resource')],
},
),
]

View File

@@ -0,0 +1,20 @@
# Generated by Django 4.2.6 on 2023-11-21 02:06
from django.db import migrations
from awx.main.migrations._dab_rbac import migrate_to_new_rbac, create_permissions_as_operation, setup_managed_role_definitions
class Migration(migrations.Migration):
dependencies = [
('main', '0191_add_django_permissions'),
('dab_rbac', '__first__'),
]
operations = [
# make sure permissions and content types have been created by now
# these normally run in a post_migrate signal but we need them for our logic
migrations.RunPython(create_permissions_as_operation, migrations.RunPython.noop),
migrations.RunPython(setup_managed_role_definitions, migrations.RunPython.noop),
migrations.RunPython(migrate_to_new_rbac, migrations.RunPython.noop),
]
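
The RunPython trio above depends on Permission rows existing mid-migration. A generic sketch of that recipe (not the AWX helper itself; `create_permissions` is Django's stock handler that normally runs in a post_migrate signal, invoked here directly under that assumption):

```
# Generic sketch: force creation of Permission rows inside a data migration
# so that later RunPython code can query them; normally post_migrate creates
# them only after the whole migration run completes.
from django.apps import apps as global_apps
from django.contrib.auth.management import create_permissions

def ensure_permissions(apps, schema_editor):
    for app_config in global_apps.get_app_configs():
        app_config.models_module = True  # convince create_permissions to process this app
        create_permissions(app_config, apps=apps, verbosity=0)
        app_config.models_module = None
```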

View File

@@ -0,0 +1,359 @@
import json
import logging
from django.apps import apps as global_apps
from django.db.models import ForeignKey
from django.conf import settings
from ansible_base.rbac.migrations._utils import give_permissions
from ansible_base.rbac.management import create_dab_permissions
from awx.main.fields import ImplicitRoleField
from awx.main.constants import role_name_to_perm_mapping
from ansible_base.rbac.permission_registry import permission_registry
logger = logging.getLogger('awx.main.migrations._dab_rbac')
def create_permissions_as_operation(apps, schema_editor):
create_dab_permissions(global_apps.get_app_config("main"), apps=apps)
"""
Data structures and methods for the migration of old Role model to ObjectRole
"""
system_admin = ImplicitRoleField(name='system_administrator')
system_auditor = ImplicitRoleField(name='system_auditor')
system_admin.model = None
system_auditor.model = None
def resolve_parent_role(f, role_path):
"""
Given a field and a path declared in parent_role from the field definition, like
execute_role = ImplicitRoleField(parent_role='admin_role')
This expects to be passed in (execute_role object, "admin_role")
It should return the admin_role from that object
"""
if role_path == 'singleton:system_administrator':
return system_admin
elif role_path == 'singleton:system_auditor':
return system_auditor
else:
related_field = f
current_model = f.model
for related_field_name in role_path.split('.'):
related_field = current_model._meta.get_field(related_field_name)
if isinstance(related_field, ForeignKey) and not isinstance(related_field, ImplicitRoleField):
current_model = related_field.related_model
return related_field
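
To make the docstring concrete, a usage sketch (the dotted path is illustrative, though JobTemplate does have a project foreign key and Project an organization foreign key; assumes the AWX models are importable):

```
from awx.main.models import JobTemplate

f = JobTemplate._meta.get_field('execute_role')
resolve_parent_role(f, 'admin_role')
# -> the admin_role ImplicitRoleField on JobTemplate itself

resolve_parent_role(f, 'project.organization.admin_role')
# -> walks project (FK) to Project, organization (FK) to Organization,
#    then returns Organization's admin_role ImplicitRoleField
```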
def build_role_map(apps):
"""
For the old Role model, this builds and returns dictionaries (children, parents)
which give a global mapping of the ImplicitRoleField instances according to the graph
"""
models = set(apps.get_app_config('main').get_models())
all_fields = set()
parents = {}
children = {}
all_fields.add(system_admin)
all_fields.add(system_auditor)
for cls in models:
for f in cls._meta.get_fields():
if isinstance(f, ImplicitRoleField):
all_fields.add(f)
for f in all_fields:
if f.parent_role is not None:
if isinstance(f.parent_role, str):
parent_roles = [f.parent_role]
else:
parent_roles = f.parent_role
# SPECIAL CASE: organization auditor_role is not a child of admin_role
# this makes no practical sense and conflicts with expected managed role
# so we put it in as a hack here
if f.name == 'auditor_role' and f.model._meta.model_name == 'organization':
parent_roles.append('admin_role')
parent_list = []
for rel_name in parent_roles:
parent_list.append(resolve_parent_role(f, rel_name))
parents[f] = parent_list
# build children lookup from parents lookup
for child_field, parent_list in parents.items():
for parent_field in parent_list:
children.setdefault(parent_field, [])
children[parent_field].append(child_field)
return (parents, children)
def get_descendents(f, children_map):
"""
Given ImplicitRoleField F and the children mapping, returns all descendents
of that field, as a set of other fields, including itself
"""
ret = {f}
if f in children_map:
for child_field in children_map[f]:
ret.update(get_descendents(child_field, children_map))
return ret
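
The recursion is easiest to see on toy data; a minimal stand-in that uses strings in place of fields:

```
children_map = {'admin_role': ['execute_role', 'read_role'], 'execute_role': ['read_role']}

def demo_descendents(name, cm):
    ret = {name}
    for child in cm.get(name, []):
        ret |= demo_descendents(child, cm)
    return ret

assert demo_descendents('admin_role', children_map) == {'admin_role', 'execute_role', 'read_role'}
```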
def get_permissions_for_role(role_field, children_map, apps):
Permission = apps.get_model('dab_rbac', 'DABPermission')
ContentType = apps.get_model('contenttypes', 'ContentType')
perm_list = []
for child_field in get_descendents(role_field, children_map):
if child_field.name in role_name_to_perm_mapping:
for perm_name in role_name_to_perm_mapping[child_field.name]:
if perm_name == 'add_' and role_field.model._meta.model_name != 'organization':
continue # only organizations can contain add permissions
perm = Permission.objects.filter(content_type=ContentType.objects.get_for_model(child_field.model), codename__startswith=perm_name).first()
if perm is not None and perm not in perm_list:
perm_list.append(perm)
# special case for two models that have object roles but no organization roles in the old system
if role_field.name == 'notification_admin_role' or (role_field.name == 'admin_role' and role_field.model._meta.model_name == 'organization'):
ct = ContentType.objects.get_for_model(apps.get_model('main', 'NotificationTemplate'))
perm_list.extend(list(Permission.objects.filter(content_type=ct)))
if role_field.name == 'execution_environment_admin_role' or (role_field.name == 'admin_role' and role_field.model._meta.model_name == 'organization'):
ct = ContentType.objects.get_for_model(apps.get_model('main', 'ExecutionEnvironment'))
perm_list.extend(list(Permission.objects.filter(content_type=ct)))
# more special cases for those same above special org-level roles
if role_field.name == 'auditor_role':
for codename in ('view_notificationtemplate', 'view_executionenvironment'):
perm_list.append(Permission.objects.get(codename=codename))
return perm_list
def model_class(ct, apps):
"""
You cannot use model methods in migrations, so this duplicates
what ContentType.model_class does, using current apps
"""
try:
return apps.get_model(ct.app_label, ct.model)
except LookupError:
return None
def migrate_to_new_rbac(apps, schema_editor):
"""
This method moves the assigned permissions from the old rbac.py models
to the new RoleDefinition and ObjectRole models
"""
Role = apps.get_model('main', 'Role')
RoleDefinition = apps.get_model('dab_rbac', 'RoleDefinition')
RoleUserAssignment = apps.get_model('dab_rbac', 'RoleUserAssignment')
Permission = apps.get_model('dab_rbac', 'DABPermission')
# remove 'add' permissions that are not valid for migrations from old versions
for perm_str in ('add_organization', 'add_jobtemplate'):
perm = Permission.objects.filter(codename=perm_str).first()
if perm:
perm.delete()
managed_definitions = dict()
for role_definition in RoleDefinition.objects.filter(managed=True):
permissions = frozenset(role_definition.permissions.values_list('id', flat=True))
managed_definitions[permissions] = role_definition
# Build map of old role model
parents, children = build_role_map(apps)
# NOTE: this import is expected to break at some point; when it does, just move the data here
from awx.main.models.rbac import role_descriptions
for role in Role.objects.prefetch_related('members', 'parents').iterator():
if role.singleton_name:
continue # only bothering to migrate object roles
team_roles = []
for parent in role.parents.all():
if parent.id not in json.loads(role.implicit_parents):
team_roles.append(parent)
# we will not create any roles that do not have any users or teams
if not (role.members.all() or team_roles):
logger.debug(f'Skipping role {role.role_field} for {role.content_type.model}-{role.object_id} due to no members')
continue
# get a list of permissions that the old role would grant
object_cls = apps.get_model(f'main.{role.content_type.model}')
object = object_cls.objects.get(pk=role.object_id) # WORKAROUND, role.content_object does not work in migrations
f = object._meta.get_field(role.role_field) # should be ImplicitRoleField
perm_list = get_permissions_for_role(f, children, apps)
permissions = frozenset(perm.id for perm in perm_list)
# With the needed permissions established, obtain the RoleDefinition this will need, priorities:
# 1. If it exists as a managed RoleDefinition then obviously use that
# 2. If we already created this for a prior role, use that
# 3. Create a new RoleDefinition that lists those permissions
if permissions in managed_definitions:
role_definition = managed_definitions[permissions]
else:
action = role.role_field.rsplit('_', 1)[0]  # strip the trailing '_role' from the field name
role_definition_name = f'{model_class(role.content_type, apps).__name__} {action.title()}'
description = role_descriptions[role.role_field]
if isinstance(description, dict):
if role.content_type.model in description:
description = description.get(role.content_type.model)
else:
description = description.get('default')
if '%s' in description:
description = description % role.content_type.model
role_definition, created = RoleDefinition.objects.get_or_create(
name=role_definition_name,
defaults={'description': description, 'content_type_id': role.content_type_id},
)
if created:
logger.info(f'Created custom Role Definition {role_definition_name}, pk={role_definition.pk}')
role_definition.permissions.set(perm_list)
# Create the object role and add users to it
give_permissions(
apps,
role_definition,
users=role.members.all(),
teams=[tr.object_id for tr in team_roles],
object_id=role.object_id,
content_type_id=role.content_type_id,
)
# Create new replacement system auditor role
new_system_auditor, created = RoleDefinition.objects.get_or_create(
name='System Auditor',
defaults={'description': 'Migrated singleton role giving read permission to everything', 'managed': True},
)
new_system_auditor.permissions.add(*list(Permission.objects.filter(codename__startswith='view')))
# migrate is_system_auditor flag, because it is no longer handled by a system role
old_system_auditor = Role.objects.filter(singleton_name='system_auditor').first()
if old_system_auditor:
# if the system auditor role is not present, this is a new install and no users should exist
ct = 0
for user in old_system_auditor.members.all():
RoleUserAssignment.objects.create(user=user, role_definition=new_system_auditor)
ct += 1
if ct:
logger.info(f'Migrated {ct} users to new system auditor flag')
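
The deduplication above keys managed definitions by the exact frozenset of permission ids, so an old role whose computed permissions match a managed definition reuses it instead of spawning a custom one. In isolation (toy ids):

```
# Sketch: reuse a managed RoleDefinition only on an exact permission-set match
managed_definitions = {frozenset({1, 2, 5}): 'JobTemplate Admin (managed)'}
needed = frozenset({1, 2, 5})
role_definition = managed_definitions.get(needed)  # hit -> reuse the managed role;
# miss -> fall through to RoleDefinition.objects.get_or_create(...)
```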
def get_or_create_managed(name, description, ct, permissions, RoleDefinition):
role_definition, created = RoleDefinition.objects.get_or_create(name=name, defaults={'managed': True, 'description': description, 'content_type': ct})
role_definition.permissions.set(list(permissions))
if not role_definition.managed:
role_definition.managed = True
role_definition.save(update_fields=['managed'])
if created:
logger.info(f'Created RoleDefinition {role_definition.name} pk={role_definition.pk} with {len(permissions)} permissions')
return role_definition
def setup_managed_role_definitions(apps, schema_editor):
"""
Idempotent method to create or sync the managed role definitions
"""
to_create = {
'object_admin': '{cls.__name__} Admin',
'org_admin': 'Organization Admin',
'org_children': 'Organization {cls.__name__} Admin',
'special': '{cls.__name__} {action}',
}
ContentType = apps.get_model('contenttypes', 'ContentType')
Permission = apps.get_model('dab_rbac', 'DABPermission')
RoleDefinition = apps.get_model('dab_rbac', 'RoleDefinition')
Organization = apps.get_model(settings.ANSIBLE_BASE_ORGANIZATION_MODEL)
org_ct = ContentType.objects.get_for_model(Organization)
managed_role_definitions = []
org_perms = set()
for cls in permission_registry._registry:
ct = ContentType.objects.get_for_model(cls)
object_perms = set(Permission.objects.filter(content_type=ct))
# Special case for InstanceGroup which has an organization field, but is not an organization child object
if cls._meta.model_name != 'instancegroup':
org_perms.update(object_perms)
if 'object_admin' in to_create and cls != Organization:
indiv_perms = object_perms.copy()
add_perms = [perm for perm in indiv_perms if perm.codename.startswith('add_')]
if add_perms:
for perm in add_perms:
indiv_perms.remove(perm)
managed_role_definitions.append(
get_or_create_managed(
to_create['object_admin'].format(cls=cls), f'Has all permissions to a single {cls._meta.verbose_name}', ct, indiv_perms, RoleDefinition
)
)
if 'org_children' in to_create and cls != Organization:
org_child_perms = object_perms.copy()
org_child_perms.add(Permission.objects.get(codename='view_organization'))
managed_role_definitions.append(
get_or_create_managed(
to_create['org_children'].format(cls=cls),
f'Has all permissions to {cls._meta.verbose_name_plural} within an organization',
org_ct,
org_child_perms,
RoleDefinition,
)
)
if 'special' in to_create:
special_perms = []
for perm in object_perms:
if perm.codename.split('_')[0] not in ('add', 'change', 'update', 'delete', 'view'):
special_perms.append(perm)
for perm in special_perms:
action = perm.codename.split('_')[0]
view_perm = Permission.objects.get(content_type=ct, codename__startswith='view_')
managed_role_definitions.append(
get_or_create_managed(
to_create['special'].format(cls=cls, action=action.title()),
f'Has {action} permissions to a single {cls._meta.verbose_name}',
ct,
[perm, view_perm],
RoleDefinition,
)
)
if 'org_admin' in to_create:
managed_role_definitions.append(
get_or_create_managed(
to_create['org_admin'].format(cls=Organization),
'Has all permissions to a single organization and all objects inside of it',
org_ct,
org_perms,
RoleDefinition,
)
)
unexpected_role_definitions = RoleDefinition.objects.filter(managed=True).exclude(pk__in=[rd.pk for rd in managed_role_definitions])
for role_definition in unexpected_role_definitions:
logger.info(f'Deleting old managed role definition {role_definition.name}, pk={role_definition.pk}')
role_definition.delete()
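
For reference, the name templates in to_create expand as follows (class names illustrative):

```
to_create['object_admin'].format(cls=JobTemplate)                       # 'JobTemplate Admin'
to_create['org_children'].format(cls=Project)                           # 'Organization Project Admin'
to_create['special'].format(cls=WorkflowJobTemplate, action='Approve')  # 'WorkflowJobTemplate Approve'
to_create['org_admin'].format(cls=Organization)                         # 'Organization Admin'
```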

View File

@@ -1,12 +1,19 @@
# Copyright (c) 2015 Ansible, Inc.
# All Rights Reserved.
import json
# Django
from django.conf import settings # noqa
from django.db import connection
from django.db.models.signals import pre_delete # noqa
# django-ansible-base
from ansible_base.resource_registry.fields import AnsibleResourceField
from ansible_base.rbac import permission_registry
from ansible_base.rbac.models import RoleDefinition, RoleUserAssignment
from ansible_base.lib.utils.models import prevent_search
from ansible_base.lib.utils.models import user_summary_fields
# AWX
from awx.main.models.base import BaseModel, PrimordialModel, accepts_json, CLOUD_INVENTORY_SOURCES, VERBOSITY_CHOICES # noqa
@@ -14,6 +21,7 @@ from awx.main.models.unified_jobs import UnifiedJob, UnifiedJobTemplate, StdoutM
from awx.main.models.organization import Organization, Profile, Team, UserSessionMembership # noqa
from awx.main.models.credential import Credential, CredentialType, CredentialInputSource, ManagedCredentialType, build_safe_env # noqa
from awx.main.models.projects import Project, ProjectUpdate # noqa
from awx.main.models.receptor_address import ReceptorAddress # noqa
from awx.main.models.inventory import ( # noqa
CustomInventoryScript,
Group,
@@ -98,6 +106,8 @@ from awx.main.access import get_user_queryset, check_user_access, check_user_acc
User.add_to_class('get_queryset', get_user_queryset)
User.add_to_class('can_access', check_user_access)
User.add_to_class('can_access_with_errors', check_user_access_with_errors)
User.add_to_class('resource', AnsibleResourceField(primary_key_field="id"))
User.add_to_class('summary_fields', user_summary_fields)
def convert_jsonfields():
@@ -190,11 +200,21 @@ User.add_to_class('auditor_of_organizations', user_get_auditor_of_organizations)
User.add_to_class('created', created)
def get_system_auditor_role():
rd, created = RoleDefinition.objects.get_or_create(
name='System Auditor', defaults={'description': 'Migrated singleton role giving read permission to everything'}
)
if created:
rd.permissions.add(*list(permission_registry.permission_qs.filter(codename__startswith='view')))
return rd
@property
def user_is_system_auditor(user):
if not hasattr(user, '_is_system_auditor'):
if user.pk:
user._is_system_auditor = user.roles.filter(singleton_name='system_auditor', role_field='system_auditor').exists()
rd = get_system_auditor_role()
user._is_system_auditor = RoleUserAssignment.objects.filter(user=user, role_definition=rd).exists()
else:
# Odd case where user is unsaved, this should never be relied on
return False
@@ -208,17 +228,17 @@ def user_is_system_auditor(user, tf):
# time they've logged in, and we've just created the new User in this
# request), we need one to set up the system auditor role
user.save()
if tf:
role = Role.singleton('system_auditor')
# must check if member to not duplicate activity stream
if user not in role.members.all():
role.members.add(user)
user._is_system_auditor = True
else:
role = Role.singleton('system_auditor')
if user in role.members.all():
role.members.remove(user)
user._is_system_auditor = False
rd = get_system_auditor_role()
assignment = RoleUserAssignment.objects.filter(user=user, role_definition=rd).first()
prior_value = bool(assignment)
if prior_value != bool(tf):
if assignment:
assignment.delete()
else:
rd.give_global_permission(user)
user._is_system_auditor = bool(tf)
entry = ActivityStream.objects.create(changes=json.dumps({"is_system_auditor": [prior_value, bool(tf)]}), object1='user', operation='update')
entry.user.add(user)
User.add_to_class('is_system_auditor', user_is_system_auditor)
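
A hedged usage sketch of the property/setter pair (user lookup assumed): the getter now answers through a RoleUserAssignment lookup against the migrated 'System Auditor' definition, and the setter toggles a global assignment while writing an activity stream entry on change.

```
user = User.objects.get(username='auditor')  # assumed user
user.is_system_auditor = True    # grants the 'System Auditor' RoleDefinition globally
assert user.is_system_auditor    # answered via RoleUserAssignment, cached on the user
user.is_system_auditor = False   # removes the assignment again
```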
@@ -286,6 +306,10 @@ activity_stream_registrar.connect(WorkflowApprovalTemplate)
activity_stream_registrar.connect(OAuth2Application)
activity_stream_registrar.connect(OAuth2AccessToken)
# Register models
permission_registry.register(Project, Team, WorkflowJobTemplate, JobTemplate, Inventory, Organization, Credential, NotificationTemplate, ExecutionEnvironment)
permission_registry.register(InstanceGroup, parent_field_name=None) # Not part of an organization
# prevent API filtering on certain Django-supplied sensitive fields
prevent_search(User._meta.get_field('password'))
prevent_search(OAuth2AccessToken._meta.get_field('token'))

View File

@@ -77,6 +77,7 @@ class ActivityStream(models.Model):
notification_template = models.ManyToManyField("NotificationTemplate", blank=True)
notification = models.ManyToManyField("Notification", blank=True)
label = models.ManyToManyField("Label", blank=True)
receptor_address = models.ManyToManyField("ReceptorAddress", blank=True)
role = models.ManyToManyField("Role", blank=True)
instance = models.ManyToManyField("Instance", blank=True)
instance_group = models.ManyToManyField("InstanceGroup", blank=True)

View File

@@ -7,6 +7,9 @@ from django.core.exceptions import ValidationError, ObjectDoesNotExist
from django.utils.translation import gettext_lazy as _
from django.utils.timezone import now
# django-ansible-base
from ansible_base.lib.utils.models import get_type_for_model
# Django-CRUM
from crum import get_current_user
@@ -139,6 +142,23 @@ class BaseModel(models.Model):
self.save(update_fields=update_fields)
return update_fields
def summary_fields(self):
"""
This exists for use by django-ansible-base, which has standard patterns that differ from AWX's.
We enable views from DAB, and for those views to list summary_fields for AWX models,
the models need to provide this method.
"""
from awx.api.serializers import SUMMARIZABLE_FK_FIELDS
model_name = get_type_for_model(self)
related_fields = SUMMARIZABLE_FK_FIELDS.get(model_name, {})
summary_data = {}
for field_name in related_fields:
fval = getattr(self, field_name, None)
if fval is not None:
summary_data[field_name] = fval
return summary_data
class CreatedModifiedModel(BaseModel):
"""

View File

@@ -83,6 +83,7 @@ class Credential(PasswordFieldsModel, CommonModelNameNotUnique, ResourceMixin):
app_label = 'main'
ordering = ('name',)
unique_together = ('organization', 'name', 'credential_type')
permissions = [('use_credential', 'Can use credential in a job or related resource')]
PASSWORD_FIELDS = ['inputs']
FIELDS_TO_PRESERVE_AT_COPY = ['input_sources']
@@ -1216,6 +1217,34 @@ ManagedCredentialType(
},
)
ManagedCredentialType(
namespace='terraform',
kind='cloud',
name=gettext_noop('Terraform backend configuration'),
managed=True,
inputs={
'fields': [
{
'id': 'configuration',
'label': gettext_noop('Backend configuration'),
'type': 'string',
'secret': True,
'multiline': True,
'help_text': gettext_noop('Terraform backend config as HashiCorp configuration language.'),
},
{
'id': 'gce_credentials',
'label': gettext_noop('Google Cloud Platform account credentials'),
'type': 'string',
'secret': True,
'multiline': True,
'help_text': gettext_noop('Google Cloud Platform account credentials in JSON format.'),
},
],
'required': ['configuration'],
},
)
class CredentialInputSource(PrimordialModel):
class Meta:

View File

@@ -122,3 +122,18 @@ def kubernetes_bearer_token(cred, env, private_data_dir):
env['K8S_AUTH_SSL_CA_CERT'] = to_container_path(path, private_data_dir)
else:
env['K8S_AUTH_VERIFY_SSL'] = 'False'
def terraform(cred, env, private_data_dir):
handle, path = tempfile.mkstemp(dir=os.path.join(private_data_dir, 'env'))
with os.fdopen(handle, 'w') as f:
os.chmod(path, stat.S_IRUSR | stat.S_IWUSR)
f.write(cred.get_input('configuration'))
env['TF_BACKEND_CONFIG_FILE'] = to_container_path(path, private_data_dir)
# Handle env variables for GCP account credentials
if 'gce_credentials' in cred.inputs:
handle, path = tempfile.mkstemp(dir=os.path.join(private_data_dir, 'env'))
with os.fdopen(handle, 'w') as f:
os.chmod(path, stat.S_IRUSR | stat.S_IWUSR)
f.write(cred.get_input('gce_credentials'))
env['GOOGLE_BACKEND_CREDENTIALS'] = to_container_path(path, private_data_dir)
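
The injector follows a reusable pattern: write the secret to a freshly created file readable only by its owner, then expose the in-container path via an environment variable. A standalone sketch of just that pattern:

```
import os
import stat
import tempfile

def write_private_file(content: str, directory: str) -> str:
    """Write content to a new file with owner-only read/write permissions."""
    handle, path = tempfile.mkstemp(dir=directory)
    with os.fdopen(handle, 'w') as f:
        os.chmod(path, stat.S_IRUSR | stat.S_IWUSR)
        f.write(content)
    return path
```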

View File

@@ -124,8 +124,6 @@ class BasePlaybookEvent(CreatedModifiedModel):
'parent_uuid',
'start_line',
'end_line',
'host_id',
'host_name',
'verbosity',
]
WRAPUP_EVENT = 'playbook_on_stats'
@@ -473,7 +471,7 @@ class JobEvent(BasePlaybookEvent):
An event/message logged from the callback when running a job.
"""
VALID_KEYS = BasePlaybookEvent.VALID_KEYS + ['job_id', 'workflow_job_id', 'job_created']
VALID_KEYS = BasePlaybookEvent.VALID_KEYS + ['job_id', 'workflow_job_id', 'job_created', 'host_id', 'host_name']
JOB_REFERENCE = 'job_id'
objects = DeferJobCreatedManager()

View File

@@ -5,7 +5,7 @@ from decimal import Decimal
import logging
import os
from django.core.validators import MinValueValidator, MaxValueValidator
from django.core.validators import MinValueValidator
from django.db import models, connection
from django.db.models.signals import post_save, post_delete
from django.dispatch import receiver
@@ -34,6 +34,7 @@ from awx.main.models.rbac import (
from awx.main.models.unified_jobs import UnifiedJob
from awx.main.utils.common import get_corrected_cpu, get_cpu_effective_capacity, get_corrected_memory, get_mem_effective_capacity
from awx.main.models.mixins import RelatedJobsMixin, ResourceMixin
from awx.main.models.receptor_address import ReceptorAddress
# ansible-runner
from ansible_runner.utils.capacity import get_cpu_count, get_mem_in_bytes
@@ -64,8 +65,19 @@ class HasPolicyEditsMixin(HasEditsMixin):
class InstanceLink(BaseModel):
source = models.ForeignKey('Instance', on_delete=models.CASCADE, related_name='+')
target = models.ForeignKey('Instance', on_delete=models.CASCADE, related_name='reverse_peers')
class Meta:
ordering = ("id",)
# add constraint for source and target to be unique together
constraints = [
models.UniqueConstraint(
fields=["source", "target"],
name="unique_source_target",
violation_error_message=_("Field source and target must be unique together."),
)
]
source = models.ForeignKey('Instance', on_delete=models.CASCADE, help_text=_("The source instance of this peer link."))
target = models.ForeignKey('ReceptorAddress', on_delete=models.CASCADE, help_text=_("The target receptor address of this peer link."))
class States(models.TextChoices):
ADDING = 'adding', _('Adding')
@@ -76,11 +88,6 @@ class InstanceLink(BaseModel):
choices=States.choices, default=States.ADDING, max_length=16, help_text=_("Indicates the current life cycle stage of this peer link.")
)
class Meta:
unique_together = ('source', 'target')
ordering = ("id",)
constraints = [models.CheckConstraint(check=~models.Q(source=models.F('target')), name='source_and_target_can_not_be_equal')]
class Instance(HasPolicyEditsMixin, BaseModel):
"""A model representing an AWX instance running against this database."""
@@ -110,6 +117,7 @@ class Instance(HasPolicyEditsMixin, BaseModel):
default="",
max_length=50,
)
# Auto-fields, implementation is different from BaseModel
created = models.DateTimeField(auto_now_add=True)
modified = models.DateTimeField(auto_now=True)
@@ -185,16 +193,9 @@ class Instance(HasPolicyEditsMixin, BaseModel):
node_state = models.CharField(
choices=States.choices, default=States.READY, max_length=16, help_text=_("Indicates the current life cycle stage of this instance.")
)
listener_port = models.PositiveIntegerField(
blank=True,
null=True,
default=None,
validators=[MinValueValidator(1024), MaxValueValidator(65535)],
help_text=_("Port that Receptor will listen for incoming connections on."),
)
peers = models.ManyToManyField('self', symmetrical=False, through=InstanceLink, through_fields=('source', 'target'), related_name='peers_from')
peers_from_control_nodes = models.BooleanField(default=False, help_text=_("If True, control plane cluster nodes should automatically peer to it."))
managed = models.BooleanField(help_text=_("If True, this instance is managed by the control plane."), default=False, editable=False)
peers = models.ManyToManyField('ReceptorAddress', through=InstanceLink, through_fields=('source', 'target'), related_name='peers_from')
POLICY_FIELDS = frozenset(('managed_by_policy', 'hostname', 'capacity_adjustment'))
@@ -241,6 +242,26 @@ class Instance(HasPolicyEditsMixin, BaseModel):
return True
return self.health_check_started > self.last_health_check
@property
def canonical_address(self):
return self.receptor_addresses.filter(canonical=True).first()
@property
def canonical_address_port(self):
# note: don't create a different query for receptor addresses, as this is prefetched on the View for optimization
for addr in self.receptor_addresses.all():
if addr.canonical:
return addr.port
return None
@property
def canonical_address_peers_from_control_nodes(self):
# note: don't create a different query for receptor addresses, as this is prefetched on the View for optimization
for addr in self.receptor_addresses.all():
if addr.canonical:
return addr.peers_from_control_nodes
return False
def get_cleanup_task_kwargs(self, **kwargs):
"""
Produce options to use for the command: ansible-runner worker cleanup
@@ -464,6 +485,9 @@ class InstanceGroup(HasPolicyEditsMixin, BaseModel, RelatedJobsMixin, ResourceMi
class Meta:
app_label = 'main'
permissions = [('use_instancegroup', 'Can use instance group in a preference list of a resource')]
# Since this has no direct organization field, only superusers can add, so remove the add permission
default_permissions = ('change', 'delete', 'view')
def set_default_policy_fields(self):
self.policy_instance_list = []
@@ -501,6 +525,35 @@ def schedule_write_receptor_config(broadcast=True):
write_receptor_config() # just run locally
@receiver(post_save, sender=ReceptorAddress)
def receptor_address_saved(sender, instance, **kwargs):
from awx.main.signals import disable_activity_stream
address = instance
control_instances = set(Instance.objects.filter(node_type__in=[Instance.Types.CONTROL, Instance.Types.HYBRID]))
if address.peers_from_control_nodes:
# if control_instances is not a subset of current peers of address, then
# that means we need to add some InstanceLinks
if not control_instances <= set(address.peers_from.all()):
with disable_activity_stream():
for control_instance in control_instances:
InstanceLink.objects.update_or_create(source=control_instance, target=address)
schedule_write_receptor_config()
else:
if address.peers_from.exists():
with disable_activity_stream():
address.peers_from.remove(*control_instances)
schedule_write_receptor_config()
@receiver(post_delete, sender=ReceptorAddress)
def receptor_address_deleted(sender, instance, **kwargs):
address = instance
if address.peers_from_control_nodes:
schedule_write_receptor_config()
@receiver(post_save, sender=Instance)
def on_instance_saved(sender, instance, created=False, raw=False, **kwargs):
'''
@@ -511,11 +564,14 @@ def on_instance_saved(sender, instance, created=False, raw=False, **kwargs):
2. a node changes its value of peers_from_control_nodes
3. a new control node comes online and has instances to peer to
'''
from awx.main.signals import disable_activity_stream
if created and settings.IS_K8S and instance.node_type in [Instance.Types.CONTROL, Instance.Types.HYBRID]:
inst = Instance.objects.filter(peers_from_control_nodes=True)
if set(instance.peers.all()) != set(inst):
instance.peers.set(inst)
schedule_write_receptor_config(broadcast=False)
peers_addresses = ReceptorAddress.objects.filter(peers_from_control_nodes=True)
if peers_addresses.exists():
with disable_activity_stream():
instance.peers.add(*peers_addresses)
schedule_write_receptor_config(broadcast=False)
if settings.IS_K8S and instance.node_type in [Instance.Types.HOP, Instance.Types.EXECUTION]:
if instance.node_state == Instance.States.DEPROVISIONING:
@@ -524,16 +580,6 @@ def on_instance_saved(sender, instance, created=False, raw=False, **kwargs):
# wait for jobs on the node to complete, then delete the
# node and kick off write_receptor_config
connection.on_commit(lambda: remove_deprovisioned_node.apply_async([instance.hostname]))
else:
control_instances = set(Instance.objects.filter(node_type__in=[Instance.Types.CONTROL, Instance.Types.HYBRID]))
if instance.peers_from_control_nodes:
if (control_instances & set(instance.peers_from.all())) != set(control_instances):
instance.peers_from.add(*control_instances)
schedule_write_receptor_config() # keep method separate to make pytest mocking easier
else:
if set(control_instances) & set(instance.peers_from.all()):
instance.peers_from.remove(*control_instances)
schedule_write_receptor_config()
if created or instance.has_policy_changes():
schedule_policy_task()
@@ -548,8 +594,6 @@ def on_instance_group_deleted(sender, instance, using, **kwargs):
@receiver(post_delete, sender=Instance)
def on_instance_deleted(sender, instance, using, **kwargs):
schedule_policy_task()
if settings.IS_K8S and instance.node_type in (Instance.Types.EXECUTION, Instance.Types.HOP) and instance.peers_from_control_nodes:
schedule_write_receptor_config()
class UnifiedJobTemplateInstanceGroupMembership(models.Model):

View File

@@ -11,6 +11,8 @@ import os.path
from urllib.parse import urljoin
import yaml
import tempfile
import stat
# Django
from django.conf import settings
@@ -89,6 +91,11 @@ class Inventory(CommonModelNameNotUnique, ResourceMixin, RelatedJobsMixin):
verbose_name_plural = _('inventories')
unique_together = [('name', 'organization')]
ordering = ('name',)
permissions = [
('use_inventory', 'Can use inventory in a job template'),
('adhoc_inventory', 'Can run ad hoc commands'),
('update_inventory', 'Can update inventory sources in inventory'),
]
organization = models.ForeignKey(
'Organization',
@@ -925,6 +932,7 @@ class InventorySourceOptions(BaseModel):
('rhv', _('Red Hat Virtualization')),
('controller', _('Red Hat Ansible Automation Platform')),
('insights', _('Red Hat Insights')),
('terraform', _('Terraform State')),
]
# From the options of the Django management base command
@@ -1399,7 +1407,7 @@ class InventoryUpdate(UnifiedJob, InventorySourceOptions, JobNotificationMixin,
return selected_groups
class CustomInventoryScript(CommonModelNameNotUnique, ResourceMixin):
class CustomInventoryScript(CommonModelNameNotUnique):
class Meta:
app_label = 'main'
ordering = ('name',)
@@ -1630,6 +1638,42 @@ class satellite6(PluginFileInjector):
return ret
class terraform(PluginFileInjector):
plugin_name = 'terraform_state'
namespace = 'cloud'
collection = 'terraform'
use_fqcn = True
def inventory_as_dict(self, inventory_update, private_data_dir):
ret = super().inventory_as_dict(inventory_update, private_data_dir)
credential = inventory_update.get_cloud_credential()
config_cred = credential.get_input('configuration')
if config_cred:
handle, path = tempfile.mkstemp(dir=os.path.join(private_data_dir, 'env'))
with os.fdopen(handle, 'w') as f:
os.chmod(path, stat.S_IRUSR | stat.S_IWUSR)
f.write(config_cred)
ret['backend_config_files'] = to_container_path(path, private_data_dir)
return ret
def build_plugin_private_data(self, inventory_update, private_data_dir):
credential = inventory_update.get_cloud_credential()
private_data = {'credentials': {}}
gce_cred = credential.get_input('gce_credentials', default=None)
if gce_cred:
private_data['credentials'][credential] = gce_cred
return private_data
def get_plugin_env(self, inventory_update, private_data_dir, private_data_files):
env = super(terraform, self).get_plugin_env(inventory_update, private_data_dir, private_data_files)
credential = inventory_update.get_cloud_credential()
cred_data = private_data_files['credentials']
if credential in cred_data:
env['GOOGLE_BACKEND_CREDENTIALS'] = to_container_path(cred_data[credential], private_data_dir)
return env
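
Putting the injector together, the generated inventory plugin config would look roughly like this (values illustrative; the FQCN follows from namespace='cloud', collection='terraform', plugin_name='terraform_state' with use_fqcn=True):

```
inventory_plugin_config = {
    'plugin': 'cloud.terraform.terraform_state',
    # present only when the credential carries a 'configuration' input
    'backend_config_files': '/runner/env/tmpXXXXXX',
}
```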
class controller(PluginFileInjector):
plugin_name = 'tower' # TODO: relying on routing for now, update after EEs pick up revised collection
base_injector = 'template'

View File

@@ -205,6 +205,9 @@ class JobTemplate(UnifiedJobTemplate, JobOptions, SurveyJobTemplateMixin, Resour
class Meta:
app_label = 'main'
ordering = ('name',)
permissions = [('execute_jobtemplate', 'Can run this job template')]
# Remove add permission, ability to add comes from use permission for inventory, project, credentials
default_permissions = ('change', 'delete', 'view')
job_type = models.CharField(
max_length=64,

View File

@@ -19,13 +19,14 @@ from django.utils.translation import gettext_lazy as _
from ansible_base.lib.utils.models import prevent_search
# AWX
from awx.main.models.rbac import Role, RoleAncestorEntry
from awx.main.models.rbac import Role, RoleAncestorEntry, to_permissions
from awx.main.utils import parse_yaml_or_json, get_custom_venv_choices, get_licenser, polymorphic
from awx.main.utils.execution_environments import get_default_execution_environment
from awx.main.utils.encryption import decrypt_value, get_encryption_key, is_encrypted
from awx.main.utils.polymorphic import build_polymorphic_ctypes_map
from awx.main.fields import AskForField
from awx.main.constants import ACTIVE_STATES
from awx.main.constants import ACTIVE_STATES, org_role_to_permission
logger = logging.getLogger('awx.main.models.mixins')
@@ -64,6 +65,18 @@ class ResourceMixin(models.Model):
@staticmethod
def _accessible_pk_qs(cls, accessor, role_field, content_types=None):
if settings.ANSIBLE_BASE_ROLE_SYSTEM_ACTIVATED:
if cls._meta.model_name == 'organization' and role_field in org_role_to_permission:
# Organization roles cannot use the DAB RBAC shortcuts;
# something like Organization.access_qs(user, 'change_jobtemplate') is needed,
# not just Organization.access_qs(user, 'change')
if accessor.is_superuser:
return cls.objects.values_list('id')
codename = org_role_to_permission[role_field]
return cls.access_ids_qs(accessor, codename, content_types=content_types)
return cls.access_ids_qs(accessor, to_permissions[role_field], content_types=content_types)
if accessor._meta.model_name == 'user':
ancestor_roles = accessor.roles.all()
elif type(accessor) == Role:

View File

@@ -5,6 +5,7 @@ from copy import deepcopy
import datetime
import logging
import json
import traceback
from django.db import models
from django.conf import settings
@@ -484,14 +485,29 @@ class JobNotificationMixin(object):
if msg_template:
try:
msg = env.from_string(msg_template).render(**context)
except (TemplateSyntaxError, UndefinedError, SecurityError):
msg = ''
except (TemplateSyntaxError, UndefinedError, SecurityError) as e:
msg = '\r\n'.join([e.message, ''.join(traceback.format_exception(None, e, e.__traceback__)).replace('\n', '\r\n')])
if body_template:
try:
body = env.from_string(body_template).render(**context)
except (TemplateSyntaxError, UndefinedError, SecurityError):
body = ''
except (TemplateSyntaxError, UndefinedError, SecurityError) as e:
body = '\r\n'.join([e.message, ''.join(traceback.format_exception(None, e, e.__traceback__)).replace('\n', '\r\n')])
# https://datatracker.ietf.org/doc/html/rfc2822#section-2.2
# Body should contain at least 2 CRLFs; some clients will interpret
# an email with a blank body incorrectly, so check for that
if len(body.strip().splitlines()) < 1:
# blank body
body = '\r\n'.join(
[
"The template rendering return a blank body.",
"Please check the template.",
"Refer to https://github.com/ansible/awx/issues/13983",
"for further information.",
]
)
return (msg, body)
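
The render-with-diagnostic-fallback above, reduced to a standalone sketch (uses Jinja2's sandbox; str(e) stands in for e.message to stay generic):

```
import traceback
from jinja2.sandbox import SandboxedEnvironment
from jinja2.exceptions import TemplateSyntaxError, UndefinedError, SecurityError

def render_or_explain(template_str, context):
    env = SandboxedEnvironment()
    try:
        return env.from_string(template_str).render(**context)
    except (TemplateSyntaxError, UndefinedError, SecurityError) as e:
        # surface the error plus a CRLF-normalized traceback instead of a blank string
        tb = ''.join(traceback.format_exception(None, e, e.__traceback__)).replace('\n', '\r\n')
        return '\r\n'.join([str(e), tb])
```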

View File

@@ -10,6 +10,8 @@ from django.contrib.sessions.models import Session
from django.utils.timezone import now as tz_now
from django.utils.translation import gettext_lazy as _
# django-ansible-base
from ansible_base.resource_registry.fields import AnsibleResourceField
# AWX
from awx.api.versioning import reverse
@@ -33,6 +35,12 @@ class Organization(CommonModel, NotificationFieldsModel, ResourceMixin, CustomVi
class Meta:
app_label = 'main'
ordering = ('name',)
permissions = [
('member_organization', 'Basic participation permissions for organization'),
('audit_organization', 'Audit everything inside the organization'),
]
# Remove add permission, only superuser can add
default_permissions = ('change', 'delete', 'view')
instance_groups = OrderedManyToManyField('InstanceGroup', blank=True, through='OrganizationInstanceGroupMembership')
galaxy_credentials = OrderedManyToManyField(
@@ -103,6 +111,7 @@ class Organization(CommonModel, NotificationFieldsModel, ResourceMixin, CustomVi
approval_role = ImplicitRoleField(
parent_role='admin_role',
)
resource = AnsibleResourceField(primary_key_field="id")
def get_absolute_url(self, request=None):
return reverse('api:organization_detail', kwargs={'pk': self.pk}, request=request)
@@ -134,6 +143,7 @@ class Team(CommonModelNameNotUnique, ResourceMixin):
app_label = 'main'
unique_together = [('organization', 'name')]
ordering = ('organization__name', 'name')
permissions = [('member_team', 'Inherit all roles assigned to this team')]
organization = models.ForeignKey(
'Organization',
@@ -151,6 +161,7 @@ class Team(CommonModelNameNotUnique, ResourceMixin):
read_role = ImplicitRoleField(
parent_role=['organization.auditor_role', 'member_role'],
)
resource = AnsibleResourceField(primary_key_field="id")
def get_absolute_url(self, request=None):
return reverse('api:team_detail', kwargs={'pk': self.pk}, request=request)

View File

@@ -259,6 +259,7 @@ class Project(UnifiedJobTemplate, ProjectOptions, ResourceMixin, CustomVirtualEn
class Meta:
app_label = 'main'
ordering = ('id',)
permissions = [('update_project', 'Can run a project update'), ('use_project', 'Can use project in a job template')]
default_environment = models.ForeignKey(
'ExecutionEnvironment',

View File

@@ -7,14 +7,30 @@ import threading
import contextlib
import re
# django-rest-framework
from rest_framework.serializers import ValidationError
# crum to impersonate users
from crum import impersonate
# Django
from django.db import models, transaction, connection
from django.db.models.signals import m2m_changed
from django.contrib.auth import get_user_model
from django.contrib.contenttypes.models import ContentType
from django.contrib.contenttypes.fields import GenericForeignKey
from django.utils.translation import gettext_lazy as _
from django.apps import apps
from django.conf import settings
# Ansible_base app
from ansible_base.rbac.models import RoleDefinition
from ansible_base.lib.utils.models import get_type_for_model
# AWX
from awx.api.versioning import reverse
from awx.main.migrations._dab_rbac import build_role_map, get_permissions_for_role
from awx.main.constants import role_name_to_perm_mapping, org_role_to_permission
__all__ = [
'Role',
@@ -75,6 +91,11 @@ role_descriptions = {
}
to_permissions = {}
for k, v in role_name_to_perm_mapping.items():
to_permissions[k] = v[0].strip('_')
tls = threading.local() # thread local storage
@@ -86,10 +107,8 @@ def check_singleton(func):
"""
def wrapper(*args, **kwargs):
sys_admin = Role.singleton(ROLE_SINGLETON_SYSTEM_ADMINISTRATOR)
sys_audit = Role.singleton(ROLE_SINGLETON_SYSTEM_AUDITOR)
user = args[0]
if user in sys_admin or user in sys_audit:
if user.is_superuser or user.is_system_auditor:
if len(args) == 2:
return args[1]
return Role.objects.all()
@@ -169,6 +188,24 @@ class Role(models.Model):
def __contains__(self, accessor):
if accessor._meta.model_name == 'user':
if accessor.is_superuser:
return True
if self.role_field == 'system_administrator':
return accessor.is_superuser
elif self.role_field == 'system_auditor':
return accessor.is_system_auditor
elif self.role_field in ('read_role', 'auditor_role') and accessor.is_system_auditor:
return True
if settings.ANSIBLE_BASE_ROLE_SYSTEM_ACTIVATED:
if self.content_object and self.content_object._meta.model_name == 'organization' and self.role_field in org_role_to_permission:
codename = org_role_to_permission[self.role_field]
return accessor.has_obj_perm(self.content_object, codename)
if self.role_field not in to_permissions:
raise Exception(f'{self.role_field} evaluated but not a translatable permission')
return accessor.has_obj_perm(self.content_object, to_permissions[self.role_field])
return self.ancestors.filter(members=accessor).exists()
else:
raise RuntimeError(f'Role evaluations only valid for users, received {accessor}')
@@ -280,6 +317,9 @@ class Role(models.Model):
#
#
if settings.ANSIBLE_BASE_ROLE_SYSTEM_ACTIVATED:
return
if len(additions) == 0 and len(removals) == 0:
return
@@ -412,6 +452,12 @@ class Role(models.Model):
in their organization, but some of those roles descend from
organization admin_role, but not auditor_role.
"""
if settings.ANSIBLE_BASE_ROLE_SYSTEM_ACTIVATED:
from ansible_base.rbac.models import RoleEvaluation
q = RoleEvaluation.objects.filter(role__in=user.has_roles.all()).values_list('object_id', 'content_type_id').query
return roles_qs.extra(where=[f'(object_id,content_type_id) in ({q})'])
return roles_qs.filter(
id__in=RoleAncestorEntry.objects.filter(
descendent__in=RoleAncestorEntry.objects.filter(ancestor_id__in=list(user.roles.values_list('id', flat=True))).values_list(
@@ -434,6 +480,13 @@ class Role(models.Model):
return self.singleton_name in [ROLE_SINGLETON_SYSTEM_ADMINISTRATOR, ROLE_SINGLETON_SYSTEM_AUDITOR]
class AncestorManager(models.Manager):
def get_queryset(self):
if settings.ANSIBLE_BASE_ROLE_SYSTEM_ACTIVATED:
raise RuntimeError('The old RBAC system has been disabled, this should never be called')
return super(AncestorManager, self).get_queryset()
class RoleAncestorEntry(models.Model):
class Meta:
app_label = 'main'
@@ -451,6 +504,8 @@ class RoleAncestorEntry(models.Model):
content_type_id = models.PositiveIntegerField(null=False)
object_id = models.PositiveIntegerField(null=False)
objects = AncestorManager()
def role_summary_fields_generator(content_object, role_field):
global role_descriptions
@@ -479,3 +534,173 @@ def role_summary_fields_generator(content_object, role_field):
summary['name'] = role_names[role_field]
summary['id'] = getattr(content_object, '{}_id'.format(role_field))
return summary
# ----------------- Custom Role Compatibility -------------------------
# The following are methods to connect this (old) RBAC system to the new
# system which allows custom roles
# this follows the ORM interface layer documented in docs/rbac.md
def get_role_codenames(role):
obj = role.content_object
if obj is None:
return
f = obj._meta.get_field(role.role_field)
parents, children = build_role_map(apps)
return [perm.codename for perm in get_permissions_for_role(f, children, apps)]
def get_role_definition(role):
"""Given a old-style role, this gives a role definition in the new RBAC system for it"""
obj = role.content_object
if obj is None:
return
f = obj._meta.get_field(role.role_field)
action_name = f.name.rsplit("_", 1)[0]
model_print = type(obj).__name__
rd_name = f'{model_print} {action_name.title()} Compat'
perm_list = get_role_codenames(role)
defaults = {
'content_type_id': role.content_type_id,
'description': f'Has {action_name.title()} permission to {model_print} for backwards API compatibility',
}
with impersonate(None):
try:
rd, created = RoleDefinition.objects.get_or_create(name=rd_name, permissions=perm_list, defaults=defaults)
except ValidationError:
# This is a tricky case - practically speaking, users should not be allowed to create team roles
# or roles that include the team member permission.
# If we need to create this for compatibility purposes then we will create it as a managed non-editable role
defaults['managed'] = True
rd, created = RoleDefinition.objects.get_or_create(name=rd_name, permissions=perm_list, defaults=defaults)
return rd
def get_role_from_object_role(object_role):
"""
Given an object role from the new system, return the corresponding role from the old system.
This reverses the naming conventions of get_role_definition and the ANSIBLE_BASE_ROLE_PRECREATE setting.
"""
rd = object_role.role_definition
if rd.name.endswith(' Compat'):
model_name, role_name, _ = rd.name.split()
role_name = role_name.lower()
role_name += '_role'
elif rd.name.endswith(' Admin') and rd.name.count(' ') == 2:
# cases like "Organization Project Admin"
model_name, target_model_name, role_name = rd.name.split()
role_name = role_name.lower()
model_cls = apps.get_model('main', target_model_name)
target_model_name = get_type_for_model(model_cls)
if target_model_name == 'notification_template':
target_model_name = 'notification' # total exception
role_name = f'{target_model_name}_admin_role'
elif rd.name.endswith(' Admin'):
# cases like "project-admin"
role_name = 'admin_role'
else:
model_name, role_name = rd.name.split()
role_name = role_name.lower()
role_name += '_role'
return getattr(object_role.content_object, role_name)
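
Worked examples of the reverse mapping (the RoleDefinition names on the left are illustrative):

```
# 'Project Admin Compat'        -> <project>.admin_role               (Compat branch)
# 'Organization Project Admin'  -> <organization>.project_admin_role  (org-children branch)
# 'Project Admin'               -> <project>.admin_role               (plain ' Admin' branch)
# 'JobTemplate Execute'         -> <job_template>.execute_role        (fallback split branch)
rd_name = 'Organization Project Admin'
model_name, target_model_name, role_name = rd_name.split()
# get_type_for_model(Project) == 'project', so the attribute becomes 'project_admin_role'
```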
def give_or_remove_permission(role, actor, giving=True):
obj = role.content_object
if obj is None:
return
rd = get_role_definition(role)
rd.give_or_remove_permission(actor, obj, giving=giving)
class SyncEnabled(threading.local):
def __init__(self):
self.enabled = True
rbac_sync_enabled = SyncEnabled()
@contextlib.contextmanager
def disable_rbac_sync():
try:
previous_value = rbac_sync_enabled.enabled
rbac_sync_enabled.enabled = False
yield
finally:
rbac_sync_enabled.enabled = previous_value
def give_creator_permissions(user, obj):
assignment = RoleDefinition.objects.give_creator_permissions(user, obj)
if assignment:
with disable_rbac_sync():
old_role = get_role_from_object_role(assignment.object_role)
old_role.members.add(user)
def sync_members_to_new_rbac(instance, action, model, pk_set, reverse, **kwargs):
if action.startswith('pre_'):
return
if not rbac_sync_enabled.enabled:
return
if action == 'post_add':
is_giving = True
elif action == 'post_remove':
is_giving = False
elif action == 'post_clear':
raise RuntimeError('Clearing of role members not supported')
if reverse:
user = instance
else:
role = instance
for user_or_role_id in pk_set:
if reverse:
role = Role.objects.get(pk=user_or_role_id)
else:
user = get_user_model().objects.get(pk=user_or_role_id)
give_or_remove_permission(role, user, giving=is_giving)
def sync_parents_to_new_rbac(instance, action, model, pk_set, reverse, **kwargs):
if action.startswith('pre_'):
return
if action == 'post_add':
is_giving = True
elif action == 'post_remove':
is_giving = False
elif action == 'post_clear':
raise RuntimeError('Clearing of role members not supported')
if reverse:
parent_role = instance
else:
child_role = instance
for role_id in pk_set:
if reverse:
child_role = Role.objects.get(id=role_id)
else:
parent_role = Role.objects.get(id=role_id)
# Out of an abundance of caution, we avoid running this if triggered from implicit_parents management
# we only want to do anything if we know for sure this is a non-implicit team role
if parent_role.role_field == 'member_role' and parent_role.content_type.model == 'team':
# Team internal parents are member_role->read_role and admin_role->member_role
# for the same object, this parenting will also be implicit_parents management
# do nothing for internal parents, but OTHER teams may still be assigned permissions to a team
if (child_role.content_type_id == parent_role.content_type_id) and (child_role.object_id == parent_role.object_id):
return
from awx.main.models.organization import Team
team = Team.objects.get(pk=parent_role.object_id)
give_or_remove_permission(child_role, team, giving=is_giving)
m2m_changed.connect(sync_members_to_new_rbac, Role.members.through)
m2m_changed.connect(sync_parents_to_new_rbac, Role.parents.through)

View File

@@ -0,0 +1,67 @@
from django.db import models
from django.core.validators import MinValueValidator, MaxValueValidator
from django.utils.translation import gettext_lazy as _
from awx.api.versioning import reverse
class Protocols(models.TextChoices):
TCP = 'tcp', 'TCP'
WS = 'ws', 'WS'
WSS = 'wss', 'WSS'
class ReceptorAddress(models.Model):
class Meta:
app_label = 'main'
constraints = [
models.UniqueConstraint(
fields=["address"],
name="unique_receptor_address",
violation_error_message=_("Receptor address must be unique."),
)
]
address = models.CharField(help_text=_("Routable address for this instance."), max_length=255)
port = models.IntegerField(help_text=_("Port for the address."), default=27199, validators=[MinValueValidator(0), MaxValueValidator(65535)])
websocket_path = models.CharField(help_text=_("Websocket path."), max_length=255, default="", blank=True)
protocol = models.CharField(
help_text=_("Protocol to use for the Receptor listener, 'tcp', 'wss', or 'ws'."), max_length=10, default=Protocols.TCP, choices=Protocols.choices
)
is_internal = models.BooleanField(help_text=_("If True, only routable within the Kubernetes cluster."), default=False)
canonical = models.BooleanField(help_text=_("If True, this address is the canonical address for the instance."), default=False)
peers_from_control_nodes = models.BooleanField(help_text=_("If True, control plane cluster nodes should automatically peer to it."), default=False)
instance = models.ForeignKey(
'Instance',
related_name='receptor_addresses',
on_delete=models.CASCADE,
null=False,
)
def __str__(self):
return self.get_full_address()
def get_full_address(self):
scheme = ""
path = ""
port = ""
if self.protocol == "ws":
scheme = "wss://"
if self.protocol == "ws" and self.websocket_path:
path = f"/{self.websocket_path}"
if self.port:
port = f":{self.port}"
return f"{scheme}{self.address}{port}{path}"
def get_peer_type(self):
if self.protocol == 'tcp':
return 'tcp-peer'
elif self.protocol in ['ws', 'wss']:
return 'ws-peer'
else:
return None
def get_absolute_url(self, request=None):
return reverse('api:receptor_address_detail', kwargs={'pk': self.pk}, request=request)
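
A worked example of get_full_address (instances unsaved, values assumed): a 'ws' address renders with a wss:// scheme and its websocket path appended, while a tcp address carries no scheme.

```
addr = ReceptorAddress(address='awx.example.org', port=443, protocol='ws', websocket_path='wsp')
addr.get_full_address()   # -> 'wss://awx.example.org:443/wsp'
addr.get_peer_type()      # -> 'ws-peer'

tcp = ReceptorAddress(address='hop1', port=27199, protocol='tcp')
tcp.get_full_address()    # -> 'hop1:27199'
tcp.get_peer_type()       # -> 'tcp-peer'
```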

View File

@@ -37,7 +37,8 @@ from awx.main.models.base import CommonModelNameNotUnique, PasswordFieldsModel,
from awx.main.dispatch import get_task_queuename
from awx.main.dispatch.control import Control as ControlDispatcher
from awx.main.registrar import activity_stream_registrar
from awx.main.models.mixins import ResourceMixin, TaskManagerUnifiedJobMixin, ExecutionEnvironmentMixin
from awx.main.models.mixins import TaskManagerUnifiedJobMixin, ExecutionEnvironmentMixin
from awx.main.models.rbac import to_permissions
from awx.main.utils.common import (
camelcase_to_underscore,
get_model_for_type,
@@ -210,7 +211,15 @@ class UnifiedJobTemplate(PolymorphicModel, CommonModelNameNotUnique, ExecutionEn
# do not use this if in a subclass
if cls != UnifiedJobTemplate:
return super(UnifiedJobTemplate, cls).accessible_pk_qs(accessor, role_field)
return ResourceMixin._accessible_pk_qs(cls, accessor, role_field, content_types=cls._submodels_with_roles())
from ansible_base.rbac.models import RoleEvaluation
action = to_permissions[role_field]
return (
RoleEvaluation.objects.filter(role__in=accessor.has_roles.all(), codename__startswith=action, content_type_id__in=cls._submodels_with_roles())
.values_list('object_id')
.distinct()
)
def _perform_unique_checks(self, unique_checks):
# Handle the list of unique fields returned above. Replace with an
@@ -814,7 +823,7 @@ class UnifiedJob(
update_fields.append(key)
if parent_instance:
if self.status in ('pending', 'waiting', 'running'):
if self.status in ('pending', 'running'):
if parent_instance.current_job != self:
parent_instance_set('current_job', self)
# Update parent with all the 'good' states of it's child
@@ -851,7 +860,7 @@ class UnifiedJob(
# If this job already exists in the database, retrieve a copy of
# the job in its prior state.
# If update_fields are given without status, then that indicates no change
if self.pk and ((not update_fields) or ('status' in update_fields)):
if self.status != 'waiting' and self.pk and ((not update_fields) or ('status' in update_fields)):
self_before = self.__class__.objects.get(pk=self.pk)
if self_before.status != self.status:
status_before = self_before.status
@@ -893,7 +902,8 @@ class UnifiedJob(
update_fields.append('elapsed')
# Ensure that the job template information is current.
if self.unified_job_template != self._get_parent_instance():
# unless status is 'waiting', because this happens in large batches at end of task manager runs and is blocking
if self.status != 'waiting' and self.unified_job_template != self._get_parent_instance():
self.unified_job_template = self._get_parent_instance()
if 'unified_job_template' not in update_fields:
update_fields.append('unified_job_template')
@@ -906,8 +916,9 @@ class UnifiedJob(
# Okay; we're done. Perform the actual save.
result = super(UnifiedJob, self).save(*args, **kwargs)
# If status changed, update the parent instance.
if self.status != status_before:
# If status changed, update the parent instance
# unless status is 'waiting', because this happens in large batches at end of task manager runs and is blocking
if self.status != status_before and self.status != 'waiting':
# Update parent outside of the transaction for Job w/ allow_simultaneous=True
# This dodges lock contention at the expense of the foreign key not being
# completely correct.
@@ -1599,7 +1610,8 @@ class UnifiedJob(
extra["controller_node"] = self.controller_node or "NOT_SET"
elif state == "execution_node_chosen":
extra["execution_node"] = self.execution_node or "NOT_SET"
logger_job_lifecycle.info(msg, extra=extra)
logger_job_lifecycle.info(f"{msg} {json.dumps(extra)}")
@property
def launched_by(self):

View File

@@ -467,6 +467,10 @@ class WorkflowJobTemplate(UnifiedJobTemplate, WorkflowJobOptions, SurveyJobTempl
class Meta:
app_label = 'main'
permissions = [
('execute_workflowjobtemplate', 'Can run this workflow job template'),
('approve_workflowjobtemplate', 'Can approve steps in this workflow job template'),
]
notification_templates_approvals = models.ManyToManyField(
"NotificationTemplate",

View File

@@ -1,5 +1,6 @@
# Copyright (c) 2019 Ansible, Inc.
# All Rights Reserved.
# -*-coding:utf-8-*-
class CustomNotificationBase(object):

View File

@@ -4,13 +4,15 @@ import logging
from django.conf import settings
from django.urls import re_path
from channels.auth import AuthMiddlewareStack
from channels.routing import ProtocolTypeRouter, URLRouter
from ansible_base.lib.channels.middleware import DrfAuthMiddlewareStack
from . import consumers
logger = logging.getLogger('awx.main.routing')
_application = None
class AWXProtocolTypeRouter(ProtocolTypeRouter):
@@ -26,13 +28,95 @@ class AWXProtocolTypeRouter(ProtocolTypeRouter):
super().__init__(*args, **kwargs)
class MultipleURLRouterAdapter:
"""
Django channels doesn't nicely support Auth_1(urls_1), Auth_2(urls_2), ..., Auth_n(urls_n)
This class allows associating a websocket url with an auth
Ordering matters. The first matching url will be used.
"""
def __init__(self, *auths):
self._auths = [a for a in auths]
async def __call__(self, scope, receive, send):
"""
Loop through the list of passed-in URLRouters (each may or may not be wrapped by auth).
We know we have exhausted a URLRouter's patterns when we get a
ValueError('No route found for path %s'). When that happens, move on to the next
URLRouter.
If the final URLRouter raises an error, re-raise it.
A match is found when no error is raised; end the loop there.
"""
last_index = len(self._auths) - 1
for i, auth in enumerate(self._auths):
try:
return await auth.__call__(scope, receive, send)
except ValueError as e:
if str(e).startswith('No route found for path'):
# Only surface the error if on the last URLRouter
if i == last_index:
raise
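The adapter's contract is easiest to see outside of channels: try each router in order, treat a ValueError starting with 'No route found for path' as "fall through", and re-raise anything else or a miss on the last router. A self-contained sketch under those assumptions (the router coroutines here are stand-ins, not channels APIs):

```python
# Framework-free sketch of the fall-through contract; relay_router and
# event_router stand in for (possibly auth-wrapped) URLRouter instances.
import asyncio

async def relay_router(path: str) -> str:
    if path != '/websocket/relay/':
        raise ValueError(f'No route found for path {path}')
    return 'relay'

async def event_router(path: str) -> str:
    if path != '/websocket/':
        raise ValueError(f'No route found for path {path}')
    return 'events'

async def multiplex(path: str, routers) -> str:
    last = len(routers) - 1
    for i, router in enumerate(routers):
        try:
            return await router(path)
        except ValueError as e:
            if str(e).startswith('No route found for path') and i != last:
                continue  # this router missed; fall through to the next one
            raise  # unrelated error, or every router missed

print(asyncio.run(multiplex('/websocket/', [relay_router, event_router])))  # events
```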
websocket_urlpatterns = [
re_path(r'api/websocket/$', consumers.EventConsumer.as_asgi()),
re_path(r'websocket/$', consumers.EventConsumer.as_asgi()),
]
if settings.OPTIONAL_API_URLPATTERN_PREFIX:
websocket_urlpatterns.append(re_path(r'api/{}/v2/websocket/$'.format(settings.OPTIONAL_API_URLPATTERN_PREFIX), consumers.EventConsumer.as_asgi()))
websocket_relay_urlpatterns = [
re_path(r'websocket/relay/$', consumers.RelayConsumer.as_asgi()),
]
application = AWXProtocolTypeRouter(
{
'websocket': AuthMiddlewareStack(URLRouter(websocket_urlpatterns)),
}
)
def application_func(cls=AWXProtocolTypeRouter) -> ProtocolTypeRouter:
return cls(
{
'websocket': MultipleURLRouterAdapter(
URLRouter(websocket_relay_urlpatterns),
DrfAuthMiddlewareStack(URLRouter(websocket_urlpatterns)),
)
}
)
def __getattr__(name: str) -> ProtocolTypeRouter:
"""
Defer instantiating application.
For testing, we just need it to NOT run on import.
https://peps.python.org/pep-0562/#specification
Normally, someone would get application from this module via:
from awx.main.routing import application
and do something with the application:
application.do_something()
What does the callstack look like when the import runs?
...
awx.main.routing.__getattribute__(...) # <-- we don't define this so NOOP as far as we are concerned
if '__getattr__' in awx.main.routing.__dict__: # <-- this triggers the function we are in
return awx.main.routing.__dict__.__getattr__("application")
Why isn't this function simply implemented as:
def __getattr__(name):
if not _application:
_application = application_func()
return _application
It could. I manually tested it and it passes test_routing.py.
But my understanding after reading the PEP-0562 specification link above is that
performance would be a bit worse due to the extra __getattribute__ calls when
we reference non-global variables.
"""
if name == "application":
globs = globals()
if not globs['_application']:
globs['_application'] = application_func()
return globs['_application']
raise AttributeError(f"module {__name__!r} has no attribute {name!r}")
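For readers unfamiliar with PEP 562: a module-level `__getattr__` runs only when normal attribute lookup fails, which is what makes this lazy-build pattern cheap. A minimal sketch of the same idiom in a hypothetical module (lazy_mod.py is illustrative, not part of AWX):

```python
# lazy_mod.py - an illustrative module (not part of AWX) showing the same
# PEP 562 idiom: __getattr__ runs only when normal lookup fails.
_application = None

def _build_application() -> object:
    print('building application...')  # runs once, on first attribute access
    return object()

def __getattr__(name: str) -> object:
    if name == 'application':
        globs = globals()
        if globs['_application'] is None:
            globs['_application'] = _build_application()
        return globs['_application']
    raise AttributeError(f"module {__name__!r} has no attribute {name!r}")
```

Importing lazy_mod by itself builds nothing; the first `lazy_mod.application` access calls `_build_application()`, and subsequent accesses return the cached object.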

View File

@@ -68,7 +68,7 @@ class TaskBase:
# initialize each metric to 0 and force metric_has_changed to true. This
# ensures each task manager metric will be overridden when pipe_execute
# is called later.
self.subsystem_metrics = s_metrics.Metrics(auto_pipe_execute=False)
self.subsystem_metrics = s_metrics.DispatcherMetrics(auto_pipe_execute=False)
self.start_time = time.time()
# We want to avoid calling settings in loops, so cache these settings at init time
@@ -105,7 +105,7 @@ class TaskBase:
try:
# increment task_manager_schedule_calls regardless if the other
# metrics are recorded
s_metrics.Metrics(auto_pipe_execute=True).inc(f"{self.prefix}__schedule_calls", 1)
s_metrics.DispatcherMetrics(auto_pipe_execute=True).inc(f"{self.prefix}__schedule_calls", 1)
# Only record metrics if the last time recording was more
# than SUBSYSTEM_METRICS_TASK_MANAGER_RECORD_INTERVAL ago.
# Prevents a short-duration task manager that runs directly after a

View File

@@ -126,6 +126,8 @@ def rebuild_role_ancestor_list(reverse, model, instance, pk_set, action, **kwarg
def sync_superuser_status_to_rbac(instance, **kwargs):
'When the is_superuser flag is changed on a user, reflect that in the membership of the System Administrator role'
if settings.ANSIBLE_BASE_ROLE_SYSTEM_ACTIVATED:
return
update_fields = kwargs.get('update_fields', None)
if update_fields and 'is_superuser' not in update_fields:
return
@@ -137,6 +139,8 @@ def sync_superuser_status_to_rbac(instance, **kwargs):
def sync_rbac_to_superuser_status(instance, sender, **kwargs):
'When the is_superuser flag is false but a user has the System Admin role, update the database to reflect that'
if settings.ANSIBLE_BASE_ROLE_SYSTEM_ACTIVATED:
return
if kwargs['action'] in ['post_add', 'post_remove', 'post_clear']:
new_status_value = bool(kwargs['action'] == 'post_add')
if hasattr(instance, 'singleton_name'): # duck typing, role.members.add() vs user.roles.add()

View File

@@ -29,7 +29,7 @@ class RunnerCallback:
self.safe_env = {}
self.event_ct = 0
self.model = model
self.update_attempts = int(settings.DISPATCHER_DB_DOWNTIME_TOLERANCE / 5)
self.update_attempts = int(getattr(settings, 'DISPATCHER_DB_DOWNTOWN_TOLLERANCE', settings.DISPATCHER_DB_DOWNTIME_TOLERANCE) / 5)
self.wrapup_event_dispatched = False
self.artifacts_processed = False
self.extra_update_fields = {}
@@ -95,17 +95,17 @@ class RunnerCallback:
if self.parent_workflow_job_id:
event_data['workflow_job_id'] = self.parent_workflow_job_id
event_data['job_created'] = self.job_created
if self.host_map:
host = event_data.get('event_data', {}).get('host', '').strip()
if host:
event_data['host_name'] = host
if host in self.host_map:
event_data['host_id'] = self.host_map[host]
else:
event_data['host_name'] = ''
event_data['host_id'] = ''
if event_data.get('event') == 'playbook_on_stats':
event_data['host_map'] = self.host_map
host = event_data.get('event_data', {}).get('host', '').strip()
if host:
event_data['host_name'] = host
if host in self.host_map:
event_data['host_id'] = self.host_map[host]
else:
event_data['host_name'] = ''
event_data['host_id'] = ''
if event_data.get('event') == 'playbook_on_stats':
event_data['host_map'] = self.host_map
if isinstance(self, RunnerCallbackForProjectUpdate):
# need a better way to have this check.

View File

@@ -114,7 +114,7 @@ class BaseTask(object):
def __init__(self):
self.cleanup_paths = []
self.update_attempts = int(settings.DISPATCHER_DB_DOWNTIME_TOLERANCE / 5)
self.update_attempts = int(getattr(settings, 'DISPATCHER_DB_DOWNTOWN_TOLLERANCE', settings.DISPATCHER_DB_DOWNTIME_TOLERANCE) / 5)
self.runner_callback = self.callback_class(model=self.model)
def update_model(self, pk, _attempt=0, **updates):

View File

@@ -27,7 +27,7 @@ from awx.main.utils.common import (
)
from awx.main.constants import MAX_ISOLATED_PATH_COLON_DELIMITER
from awx.main.tasks.signals import signal_state, signal_callback, SignalExit
from awx.main.models import Instance, InstanceLink, UnifiedJob
from awx.main.models import Instance, InstanceLink, UnifiedJob, ReceptorAddress
from awx.main.dispatch import get_task_queuename
from awx.main.dispatch.publish import task
from awx.main.utils.pglock import advisory_lock
@@ -49,6 +49,70 @@ class ReceptorConnectionType(Enum):
STREAMTLS = 2
"""
Translate receptorctl messages that come in over stdout into
structured messages. Currently, these are error messages.
"""
class ReceptorErrorBase:
_MESSAGE = 'Receptor Error'
def __init__(self, node: str = 'N/A', state_name: str = 'N/A'):
self.node = node
self.state_name = state_name
def __str__(self):
return f"{self.__class__.__name__} '{self._MESSAGE}' on node '{self.node}' with state '{self.state_name}'"
class WorkUnitError(ReceptorErrorBase):
_MESSAGE = 'unknown work unit '
def __init__(self, work_unit_id: str, *args, **kwargs):
super().__init__(*args, **kwargs)
self.work_unit_id = work_unit_id
def __str__(self):
return f"{super().__str__()} work unit id '{self.work_unit_id}'"
class WorkUnitCancelError(WorkUnitError):
_MESSAGE = 'error cancelling remote unit: unknown work unit '
class WorkUnitResultsError(WorkUnitError):
_MESSAGE = 'Failed to get results: unknown work unit '
class UnknownError(ReceptorErrorBase):
_MESSAGE = 'Unknown receptor ctl error'
def __init__(self, msg, *args, **kwargs):
super().__init__(*args, **kwargs)
self._MESSAGE = msg
class FuzzyError:
def __new__(self, e: RuntimeError, node: str, state_name: str):
"""
At the time of writing this comment, all of the sub-class detection
is centralized in this parent class; it acts like a router.
Someone may find it better to push the error detection logic down into
each sub-class.
"""
msg = e.args[0]
common_startswith = (WorkUnitCancelError, WorkUnitResultsError, WorkUnitError)
for klass in common_startswith:
if msg.startswith(klass._MESSAGE):
work_unit_id = msg[len(klass._MESSAGE) :]
return klass(work_unit_id, node=node, state_name=state_name)
return UnknownError(msg, node=node, state_name=state_name)
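For illustration, here is how FuzzyError routes a raw receptorctl message, assuming the class definitions above are in scope (the message, node name, and work unit id are invented):

```python
# Assuming the ReceptorErrorBase hierarchy above is importable.
e = RuntimeError('unknown work unit abc123')
err = FuzzyError(e, node='exec-1', state_name='Failed')
assert isinstance(err, WorkUnitError)
assert err.work_unit_id == 'abc123'

e = RuntimeError('some message receptorctl never documented')
err = FuzzyError(e, node='exec-1', state_name='Failed')
assert isinstance(err, UnknownError)
print(err)  # UnknownError 'some message ...' on node 'exec-1' with state 'Failed'
```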
def read_receptor_config():
# for K8S deployments, getting a lock is necessary as another process
# may be re-writing the config at this time
@@ -185,6 +249,7 @@ def run_until_complete(node, timing_data=None, **kwargs):
timing_data['transmit_timing'] = run_start - transmit_start
run_timing = 0.0
stdout = ''
state_name = 'local var never set'
try:
resultfile = receptor_ctl.get_work_results(unit_id)
@@ -205,13 +270,33 @@ def run_until_complete(node, timing_data=None, **kwargs):
stdout = resultfile.read()
stdout = str(stdout, encoding='utf-8')
except RuntimeError as e:
receptor_e = FuzzyError(e, node, state_name)
if type(receptor_e) in (
WorkUnitError,
WorkUnitResultsError,
):
logger.warning(f'While consuming job results: {receptor_e}')
else:
raise
finally:
if settings.RECEPTOR_RELEASE_WORK:
res = receptor_ctl.simple_command(f"work release {unit_id}")
if res != {'released': unit_id}:
logger.warning(f'Could not confirm release of receptor work unit id {unit_id} from {node}, data: {res}')
try:
res = receptor_ctl.simple_command(f"work release {unit_id}")
receptor_ctl.close()
if res != {'released': unit_id}:
logger.warning(f'Could not confirm release of receptor work unit id {unit_id} from {node}, data: {res}')
receptor_ctl.close()
except RuntimeError as e:
receptor_e = FuzzyError(e, node, state_name)
if type(receptor_e) in (
WorkUnitError,
WorkUnitCancelError,
):
logger.warning(f"While releasing work: {receptor_e}")
else:
logger.error(f"While releasing work: {receptor_e}")
if state_name.lower() == 'failed':
work_detail = status.get('Detail', '')
@@ -275,7 +360,7 @@ def _convert_args_to_cli(vargs):
args = ['cleanup']
for option in ('exclude_strings', 'remove_images'):
if vargs.get(option):
args.append('--{}={}'.format(option.replace('_', '-'), ' '.join(vargs.get(option))))
args.append('--{}="{}"'.format(option.replace('_', '-'), ' '.join(vargs.get(option))))
for option in ('file_pattern', 'image_prune', 'process_isolation_executable', 'grace_period'):
if vargs.get(option) is True:
args.append('--{}'.format(option.replace('_', '-')))
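The quoting change matters if the assembled string is later split shell-style; under that assumption (this is a sketch, not the exact AWX cleanup code path), the difference is visible with shlex:

```python
# Illustrative only: why the values need quoting when the assembled
# arguments are later split shell-style.
import shlex

values = ['old-image-a', 'old-image-b']
unquoted = '--remove-images={}'.format(' '.join(values))
quoted = '--remove-images="{}"'.format(' '.join(values))

print(shlex.split(unquoted))  # ['--remove-images=old-image-a', 'old-image-b']  (broken in two)
print(shlex.split(quoted))    # ['--remove-images=old-image-a old-image-b']     (kept together)
```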
@@ -676,36 +761,44 @@ RECEPTOR_CONFIG_STARTER = (
)
def should_update_config(instances):
def should_update_config(new_config):
'''
checks that the list of instances matches the list of
tcp-peers in the config
'''
current_config = read_receptor_config() # this gets receptor conf lock
current_peers = []
for config_entry in current_config:
for key, value in config_entry.items():
if key.endswith('-peer'):
current_peers.append(value['address'])
intended_peers = [f"{i.hostname}:{i.listener_port}" for i in instances]
logger.debug(f"Peers current {current_peers} intended {intended_peers}")
if set(current_peers) == set(intended_peers):
return False # config file is already up to date
return True
current_config = read_receptor_config() # this gets receptor conf lock
for config_entry in current_config:
if config_entry not in new_config:
logger.warning(f"{config_entry} should not be in receptor config. Updating.")
return True
for config_entry in new_config:
if config_entry not in current_config:
logger.warning(f"{config_entry} missing from receptor config. Updating.")
return True
return False
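The rewritten check is a plain two-way membership comparison, so an entry that is stale or missing on either side forces an update. A toy run (read_receptor_config is replaced by a literal here, and the peer addresses are invented):

```python
# Toy run of the comparison in should_update_config.
current_config = [
    {'tcp-peer': {'address': 'hop1:27199', 'tls': 'tlsclient'}},
]
new_config = [
    {'tcp-peer': {'address': 'hop1:27199', 'tls': 'tlsclient'}},
    {'ws-peer': {'address': 'wss://hop2:443', 'tls': 'tlsclient'}},
]

stale = [entry for entry in current_config if entry not in new_config]
missing = [entry for entry in new_config if entry not in current_config]
print(stale)    # [] - nothing extra on disk
print(missing)  # the ws-peer entry - so should_update_config returns True
```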
def generate_config_data():
# returns two values
# receptor config - based on current database peers
# should_update - If True, receptor_config differs from the receptor conf file on disk
instances = Instance.objects.filter(node_type__in=(Instance.Types.EXECUTION, Instance.Types.HOP), peers_from_control_nodes=True)
addresses = ReceptorAddress.objects.filter(peers_from_control_nodes=True)
receptor_config = list(RECEPTOR_CONFIG_STARTER)
for instance in instances:
peer = {'tcp-peer': {'address': f'{instance.hostname}:{instance.listener_port}', 'tls': 'tlsclient'}}
receptor_config.append(peer)
should_update = should_update_config(instances)
for address in addresses:
if address.get_peer_type():
peer = {
f'{address.get_peer_type()}': {
'address': f'{address.get_full_address()}',
'tls': 'tlsclient',
}
}
receptor_config.append(peer)
else:
logger.warning(f"Receptor address {address} has unsupported peer type, skipping.")
should_update = should_update_config(receptor_config)
return receptor_config, should_update
@@ -747,14 +840,13 @@ def write_receptor_config():
with lock:
with open(__RECEPTOR_CONF, 'w') as file:
yaml.dump(receptor_config, file, default_flow_style=False)
reload_receptor()
@task(queue=get_task_queuename)
def remove_deprovisioned_node(hostname):
InstanceLink.objects.filter(source__hostname=hostname).update(link_state=InstanceLink.States.REMOVING)
InstanceLink.objects.filter(target__hostname=hostname).update(link_state=InstanceLink.States.REMOVING)
InstanceLink.objects.filter(target__instance__hostname=hostname).update(link_state=InstanceLink.States.REMOVING)
node_jobs = UnifiedJob.objects.filter(
execution_node=hostname,

View File

@@ -6,6 +6,7 @@ import itertools
import json
import logging
import os
import psycopg
from io import StringIO
from contextlib import redirect_stdout
import shutil
@@ -62,7 +63,7 @@ from awx.main.tasks.receptor import get_receptor_ctl, worker_info, worker_cleanu
from awx.main.consumers import emit_channel_notification
from awx.main import analytics
from awx.conf import settings_registry
from awx.main.analytics.subsystem_metrics import Metrics
from awx.main.analytics.subsystem_metrics import DispatcherMetrics
from rest_framework.exceptions import PermissionDenied
@@ -113,7 +114,7 @@ def dispatch_startup():
cluster_node_heartbeat()
reaper.startup_reaping()
reaper.reap_waiting(grace_period=0)
m = Metrics()
m = DispatcherMetrics()
m.reset_values()
@@ -416,7 +417,7 @@ def handle_removed_image(remove_images=None):
@task(queue=get_task_queuename)
def cleanup_images_and_files():
_cleanup_images_and_files()
_cleanup_images_and_files(image_prune=True)
@task(queue=get_task_queuename)
@@ -495,7 +496,7 @@ def inspect_established_receptor_connections(mesh_status):
update_links = []
for link in all_links:
if link.link_state != InstanceLink.States.REMOVING:
if link.target.hostname in active_receptor_conns.get(link.source.hostname, {}):
if link.target.instance.hostname in active_receptor_conns.get(link.source.hostname, {}):
if link.link_state is not InstanceLink.States.ESTABLISHED:
link.link_state = InstanceLink.States.ESTABLISHED
update_links.append(link)
@@ -630,10 +631,18 @@ def cluster_node_heartbeat(dispatch_time=None, worker_tasks=None):
logger.error("Host {} last checked in at {}, marked as lost.".format(other_inst.hostname, other_inst.last_seen))
except DatabaseError as e:
if 'did not affect any rows' in str(e):
logger.debug('Another instance has marked {} as lost'.format(other_inst.hostname))
cause = e.__cause__
if cause and hasattr(cause, 'sqlstate'):
sqlstate = cause.sqlstate
sqlstate_str = psycopg.errors.lookup(sqlstate)
logger.debug('SQL Error state: {} - {}'.format(sqlstate, sqlstate_str))
if sqlstate_str is psycopg.errors.NoData:
logger.debug('Another instance has marked {} as lost'.format(other_inst.hostname))
else:
logger.exception("Error marking {} as lost.".format(other_inst.hostname))
else:
logger.exception('Error marking {} as lost'.format(other_inst.hostname))
logger.exception('No SQL state available. Error marking {} as lost'.format(other_inst.hostname))
# Run local reaper
if worker_tasks is not None:
@@ -788,10 +797,19 @@ def update_inventory_computed_fields(inventory_id):
try:
i.update_computed_fields()
except DatabaseError as e:
if 'did not affect any rows' in str(e):
logger.debug('Exiting duplicate update_inventory_computed_fields task.')
return
raise
# https://github.com/django/django/blob/eff21d8e7a1cb297aedf1c702668b590a1b618f3/django/db/models/base.py#L1105
# django raises DatabaseError("Forced update did not affect any rows.")
# if sqlstate is set then there was a real database error, and we re-raise it
cause = e.__cause__
if cause and hasattr(cause, 'sqlstate'):
sqlstate = cause.sqlstate
sqlstate_str = psycopg.errors.lookup(sqlstate)
logger.error('SQL Error state: {} - {}'.format(sqlstate, sqlstate_str))
raise
# otherwise
logger.debug('Exiting duplicate update_inventory_computed_fields task.')
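Both hunks above use the same psycopg 3 introspection idiom: Django chains the driver error in `__cause__`, psycopg exceptions carry a `.sqlstate` code, and `psycopg.errors.lookup()` maps that code back to an exception class. A small sketch, assuming psycopg 3 is installed (describe is a hypothetical helper, not AWX code):

```python
# Sketch assuming psycopg 3; describe() is a hypothetical helper.
import psycopg

def describe(e: Exception) -> str:
    cause = e.__cause__  # Django chains the driver error here
    if cause is not None and getattr(cause, 'sqlstate', None):
        klass = psycopg.errors.lookup(cause.sqlstate)
        return f'{cause.sqlstate} -> {klass.__name__}'
    # e.g. Django's own "Forced update did not affect any rows." has no
    # driver error underneath it
    return 'no SQL state available'

print(psycopg.errors.lookup('02000').__name__)  # NoData
```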
def update_smart_memberships_for_inventory(smart_inventory):

View File

@@ -3,5 +3,5 @@
hosts: all
tasks:
- name: Hello Message
debug:
ansible.builtin.debug:
msg: "Hello World!"

View File

@@ -0,0 +1,3 @@
{
"GOOGLE_BACKEND_CREDENTIALS": "{{ file_reference }}"
}

View File

@@ -1,13 +1,8 @@
from awx.main.tests.functional.conftest import * # noqa
import os
import pytest
def pytest_addoption(parser):
parser.addoption("--release", action="store", help="a release version number, e.g., 3.3.0")
def pytest_generate_tests(metafunc):
# This is called for every test. Only get/set command line arguments
# if the argument is specified in the list of test "fixturenames".
option_value = metafunc.config.option.release
if 'release' in metafunc.fixturenames and option_value is not None:
metafunc.parametrize("release", [option_value])
@pytest.fixture()
def release():
return os.environ.get('VERSION_TARGET', '')

View File

@@ -99,7 +99,7 @@ class TestSwaggerGeneration:
# The number of API endpoints changes over time, but let's just check
# for a reasonable number here; if this test starts failing, raise/lower the bounds
paths = JSON['paths']
assert 250 < len(paths) < 375
assert 250 < len(paths) < 400
assert set(list(paths['/api/'].keys())) == set(['get', 'parameters'])
assert set(list(paths['/api/v2/'].keys())) == set(['get', 'parameters'])
assert set(list(sorted(paths['/api/v2/credentials/'].keys()))) == set(['get', 'post', 'parameters'])

View File

@@ -4,7 +4,6 @@ from prometheus_client.parser import text_string_to_metric_families
from awx.main import models
from awx.main.analytics.metrics import metrics
from awx.api.versioning import reverse
from awx.main.models.rbac import Role
EXPECTED_VALUES = {
'awx_system_info': 1.0,
@@ -66,7 +65,6 @@ def test_metrics_permissions(get, admin, org_admin, alice, bob, organization):
organization.auditor_role.members.add(bob)
assert get(get_metrics_view_db_only(), user=bob).status_code == 403
Role.singleton('system_auditor').members.add(bob)
bob.is_system_auditor = True
assert get(get_metrics_view_db_only(), user=bob).status_code == 200

View File

@@ -6,7 +6,7 @@ from django.test import Client
from rest_framework.test import APIRequestFactory
from awx.api.generics import LoggedLoginView
from awx.api.versioning import drf_reverse
from rest_framework.reverse import reverse as drf_reverse
@pytest.mark.django_db

View File

@@ -9,8 +9,8 @@ def test_user_role_view_access(rando, inventory, mocker, post):
role_pk = inventory.admin_role.pk
data = {"id": role_pk}
mock_access = mocker.MagicMock(can_attach=mocker.MagicMock(return_value=False))
with mocker.patch('awx.main.access.RoleAccess', return_value=mock_access):
post(url=reverse('api:user_roles_list', kwargs={'pk': rando.pk}), data=data, user=rando, expect=403)
mocker.patch('awx.main.access.RoleAccess', return_value=mock_access)
post(url=reverse('api:user_roles_list', kwargs={'pk': rando.pk}), data=data, user=rando, expect=403)
mock_access.can_attach.assert_called_once_with(inventory.admin_role, rando, 'members', data, skip_sub_obj_read_check=False)
@@ -21,8 +21,8 @@ def test_team_role_view_access(rando, team, inventory, mocker, post):
role_pk = inventory.admin_role.pk
data = {"id": role_pk}
mock_access = mocker.MagicMock(can_attach=mocker.MagicMock(return_value=False))
with mocker.patch('awx.main.access.RoleAccess', return_value=mock_access):
post(url=reverse('api:team_roles_list', kwargs={'pk': team.pk}), data=data, user=rando, expect=403)
mocker.patch('awx.main.access.RoleAccess', return_value=mock_access)
post(url=reverse('api:team_roles_list', kwargs={'pk': team.pk}), data=data, user=rando, expect=403)
mock_access.can_attach.assert_called_once_with(inventory.admin_role, team, 'member_role.parents', data, skip_sub_obj_read_check=False)
@@ -33,8 +33,8 @@ def test_role_team_view_access(rando, team, inventory, mocker, post):
role_pk = inventory.admin_role.pk
data = {"id": team.pk}
mock_access = mocker.MagicMock(return_value=False, __name__='mocked')
with mocker.patch('awx.main.access.RoleAccess.can_attach', mock_access):
post(url=reverse('api:role_teams_list', kwargs={'pk': role_pk}), data=data, user=rando, expect=403)
mocker.patch('awx.main.access.RoleAccess.can_attach', mock_access)
post(url=reverse('api:role_teams_list', kwargs={'pk': role_pk}), data=data, user=rando, expect=403)
mock_access.assert_called_once_with(inventory.admin_role, team, 'member_role.parents', data, skip_sub_obj_read_check=False)
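The pattern change in these tests reflects how pytest-mock works: `mocker.patch` applies the patch immediately and reverses it at test teardown, and it is not a context manager (recent pytest-mock versions raise if it is used in a with block). A sketch of the corrected shape (requires pytest and pytest-mock; the target path and fixtures are the ones from the tests above):

```python
# Sketch only (needs pytest + pytest-mock): no `with` block is involved.
def test_role_access_denied(mocker, post, rando, inventory):
    mock_access = mocker.MagicMock(can_attach=mocker.MagicMock(return_value=False))
    mocker.patch('awx.main.access.RoleAccess', return_value=mock_access)
    # ...exercise the endpoint; the patch stays active for the whole test
    # and pytest-mock removes it automatically afterwards.
```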

View File

@@ -30,7 +30,7 @@ def test_idempotent_credential_type_setup():
@pytest.mark.django_db
def test_create_user_credential_via_credentials_list(post, get, alice, credentialtype_ssh):
def test_create_user_credential_via_credentials_list(post, get, alice, credentialtype_ssh, setup_managed_roles):
params = {
'credential_type': 1,
'inputs': {'username': 'someusername'},
@@ -81,7 +81,7 @@ def test_credential_validation_error_with_multiple_owner_fields(post, admin, ali
@pytest.mark.django_db
def test_create_user_credential_via_user_credentials_list(post, get, alice, credentialtype_ssh):
def test_create_user_credential_via_user_credentials_list(post, get, alice, credentialtype_ssh, setup_managed_roles):
params = {
'credential_type': 1,
'inputs': {'username': 'someusername'},
@@ -385,10 +385,9 @@ def test_list_created_org_credentials(post, get, organization, org_admin, org_me
@pytest.mark.django_db
def test_list_cannot_order_by_encrypted_field(post, get, organization, org_admin, credentialtype_ssh, order_by):
for i, password in enumerate(('abc', 'def', 'xyz')):
response = post(reverse('api:credential_list'), {'organization': organization.id, 'name': 'C%d' % i, 'password': password}, org_admin)
post(reverse('api:credential_list'), {'organization': organization.id, 'name': 'C%d' % i, 'password': password}, org_admin, expect=400)
response = get(reverse('api:credential_list'), org_admin, QUERY_STRING='order_by=%s' % order_by, status=400)
assert response.status_code == 400
get(reverse('api:credential_list'), org_admin, QUERY_STRING='order_by=%s' % order_by, expect=400)
@pytest.mark.django_db
@@ -399,8 +398,7 @@ def test_inputs_cannot_contain_extra_fields(get, post, organization, admin, cred
'credential_type': credentialtype_ssh.pk,
'inputs': {'invalid_field': 'foo'},
}
response = post(reverse('api:credential_list'), params, admin)
assert response.status_code == 400
response = post(reverse('api:credential_list'), params, admin, expect=400)
assert "'invalid_field' was unexpected" in response.data['inputs'][0]

View File

@@ -1,19 +1,16 @@
import pytest
import yaml
import itertools
from unittest import mock
from django.db.utils import IntegrityError
from awx.api.versioning import reverse
from awx.main.models import Instance
from awx.main.models import Instance, ReceptorAddress
from awx.api.views.instance_install_bundle import generate_group_vars_all_yml
def has_peer(group_vars, peer):
peers = group_vars.get('receptor_peers', [])
for p in peers:
if f"{p['host']}:{p['port']}" == peer:
if p['address'] == peer:
return True
return False
@@ -24,119 +21,314 @@ class TestPeers:
def configure_settings(self, settings):
settings.IS_K8S = True
@pytest.mark.parametrize('node_type', ['control', 'hybrid'])
def test_prevent_peering_to_self(self, node_type):
@pytest.mark.parametrize('node_type', ['hop', 'execution'])
def test_peering_to_self(self, node_type, admin_user, patch):
"""
cannot peer to self
"""
control_instance = Instance.objects.create(hostname='abc', node_type=node_type)
with pytest.raises(IntegrityError):
control_instance.peers.add(control_instance)
instance = Instance.objects.create(hostname='abc', node_type=node_type)
addr = ReceptorAddress.objects.create(instance=instance, address='abc', canonical=True)
resp = patch(
url=reverse('api:instance_detail', kwargs={'pk': instance.pk}),
data={"hostname": "abc", "node_type": node_type, "peers": [addr.id]},
user=admin_user,
expect=400,
)
assert 'Instance cannot peer to its own address.' in str(resp.data)
@pytest.mark.parametrize('node_type', ['control', 'hybrid', 'hop', 'execution'])
def test_creating_node(self, node_type, admin_user, post):
"""
can only add hop and execution nodes via API
"""
post(
resp = post(
url=reverse('api:instance_list'),
data={"hostname": "abc", "node_type": node_type},
user=admin_user,
expect=400 if node_type in ['control', 'hybrid'] else 201,
)
if resp.status_code == 400:
assert 'Can only create execution or hop nodes.' in str(resp.data)
def test_changing_node_type(self, admin_user, patch):
"""
cannot change node type
"""
hop = Instance.objects.create(hostname='abc', node_type="hop")
patch(
resp = patch(
url=reverse('api:instance_detail', kwargs={'pk': hop.pk}),
data={"node_type": "execution"},
user=admin_user,
expect=400,
)
assert 'Cannot change node type.' in str(resp.data)
@pytest.mark.parametrize('node_type', ['hop', 'execution'])
def test_listener_port_null(self, node_type, admin_user, post):
"""
listener_port can be None
"""
post(
url=reverse('api:instance_list'),
data={"hostname": "abc", "node_type": node_type, "listener_port": None},
@pytest.mark.parametrize(
'payload_port, payload_peers_from, initial_port, initial_peers_from',
[
(-1, -1, None, None),
(-1, -1, 27199, False),
(-1, -1, 27199, True),
(None, -1, None, None),
(None, False, None, None),
(-1, False, None, None),
(27199, True, 27199, True),
(27199, False, 27199, False),
(27199, -1, 27199, True),
(27199, -1, 27199, False),
(-1, True, 27199, True),
(-1, False, 27199, False),
],
)
def test_no_op(self, payload_port, payload_peers_from, initial_port, initial_peers_from, admin_user, patch):
node = Instance.objects.create(hostname='abc', node_type='hop')
if initial_port is not None:
ReceptorAddress.objects.create(address=node.hostname, port=initial_port, canonical=True, peers_from_control_nodes=initial_peers_from, instance=node)
assert ReceptorAddress.objects.filter(instance=node).count() == 1
else:
assert ReceptorAddress.objects.filter(instance=node).count() == 0
data = {'enabled': True} # Just to have something to post.
if payload_port != -1:
data['listener_port'] = payload_port
if payload_peers_from != -1:
data['peers_from_control_nodes'] = payload_peers_from
patch(
url=reverse('api:instance_detail', kwargs={'pk': node.pk}),
data=data,
user=admin_user,
expect=201,
expect=200,
)
@pytest.mark.parametrize('node_type, allowed', [('control', False), ('hybrid', False), ('hop', True), ('execution', True)])
def test_peers_from_control_nodes_allowed(self, node_type, allowed, post, admin_user):
"""
only hop and execution nodes can have peers_from_control_nodes set to True
"""
post(
url=reverse('api:instance_list'),
data={"hostname": "abc", "peers_from_control_nodes": True, "node_type": node_type, "listener_port": 6789},
assert ReceptorAddress.objects.filter(instance=node).count() == (0 if initial_port is None else 1)
if initial_port is not None:
ra = ReceptorAddress.objects.get(instance=node, canonical=True)
assert ra.port == initial_port
assert ra.peers_from_control_nodes == initial_peers_from
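Throughout these parametrizations, -1 is a sentinel meaning "omit the field from the PATCH payload entirely", which is distinct from sending an explicit None. The tests inline that logic when assembling `data`; a hypothetical helper (build_payload) makes the convention explicit:

```python
# Hypothetical helper making the sentinel convention explicit; the tests
# inline this same logic when assembling `data`.
OMIT = -1

def build_payload(listener_port, peers_from_control_nodes):
    data = {'enabled': True}  # just to have something to send
    if listener_port != OMIT:
        data['listener_port'] = listener_port
    if peers_from_control_nodes != OMIT:
        data['peers_from_control_nodes'] = peers_from_control_nodes
    return data

assert build_payload(OMIT, OMIT) == {'enabled': True}
assert build_payload(None, OMIT) == {'enabled': True, 'listener_port': None}
assert build_payload(27199, True) == {'enabled': True, 'listener_port': 27199, 'peers_from_control_nodes': True}
```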
@pytest.mark.parametrize(
'payload_port, payload_peers_from',
[
(27199, True),
(27199, False),
(27199, -1),
],
)
def test_creates_canonical_address(self, payload_port, payload_peers_from, admin_user, patch):
node = Instance.objects.create(hostname='abc', node_type='hop')
assert ReceptorAddress.objects.filter(instance=node).count() == 0
data = {'enabled': True} # Just to have something to post.
if payload_port != -1:
data['listener_port'] = payload_port
if payload_peers_from != -1:
data['peers_from_control_nodes'] = payload_peers_from
patch(
url=reverse('api:instance_detail', kwargs={'pk': node.pk}),
data=data,
user=admin_user,
expect=201 if allowed else 400,
expect=200,
)
def test_listener_port_is_required(self, admin_user, post):
"""
if adding instance to peers list, that instance must have listener_port set
"""
Instance.objects.create(hostname='abc', node_type="hop", listener_port=None)
post(
url=reverse('api:instance_list'),
data={"hostname": "ex", "peers_from_control_nodes": False, "node_type": "execution", "listener_port": None, "peers": ["abc"]},
assert ReceptorAddress.objects.filter(instance=node).count() == 1
ra = ReceptorAddress.objects.get(instance=node, canonical=True)
assert ra.port == payload_port
assert ra.peers_from_control_nodes == (payload_peers_from if payload_peers_from != -1 else False)
@pytest.mark.parametrize(
'payload_port, payload_peers_from, initial_port, initial_peers_from',
[
(None, False, 27199, True),
(None, -1, 27199, True),
(None, False, 27199, False),
(None, -1, 27199, False),
],
)
def test_deletes_canonical_address(self, payload_port, payload_peers_from, initial_port, initial_peers_from, admin_user, patch):
node = Instance.objects.create(hostname='abc', node_type='hop')
ReceptorAddress.objects.create(address=node.hostname, port=initial_port, canonical=True, peers_from_control_nodes=initial_peers_from, instance=node)
assert ReceptorAddress.objects.filter(instance=node).count() == 1
data = {'enabled': True} # Just to have something to post.
if payload_port != -1:
data['listener_port'] = payload_port
if payload_peers_from != -1:
data['peers_from_control_nodes'] = payload_peers_from
patch(
url=reverse('api:instance_detail', kwargs={'pk': node.pk}),
data=data,
user=admin_user,
expect=200,
)
assert ReceptorAddress.objects.filter(instance=node).count() == 0
@pytest.mark.parametrize(
'payload_port, payload_peers_from, initial_port, initial_peers_from',
[
(27199, True, 27199, False),
(27199, False, 27199, True),
(-1, True, 27199, False),
(-1, False, 27199, True),
],
)
def test_updates_canonical_address(self, payload_port, payload_peers_from, initial_port, initial_peers_from, admin_user, patch):
node = Instance.objects.create(hostname='abc', node_type='hop')
ReceptorAddress.objects.create(address=node.hostname, port=initial_port, canonical=True, peers_from_control_nodes=initial_peers_from, instance=node)
assert ReceptorAddress.objects.filter(instance=node).count() == 1
data = {'enabled': True} # Just to have something to post.
if payload_port != -1:
data['listener_port'] = payload_port
if payload_peers_from != -1:
data['peers_from_control_nodes'] = payload_peers_from
patch(
url=reverse('api:instance_detail', kwargs={'pk': node.pk}),
data=data,
user=admin_user,
expect=200,
)
assert ReceptorAddress.objects.filter(instance=node).count() == 1
ra = ReceptorAddress.objects.get(instance=node, canonical=True)
assert ra.port == initial_port # At the present time, changing ports is not allowed
assert ra.peers_from_control_nodes == payload_peers_from
@pytest.mark.parametrize(
'payload_port, payload_peers_from, initial_port, initial_peers_from, error_msg',
[
(-1, True, None, None, "Cannot enable peers_from_control_nodes"),
(None, True, None, None, "Cannot enable peers_from_control_nodes"),
(None, True, 21799, True, "Cannot enable peers_from_control_nodes"),
(None, True, 21799, False, "Cannot enable peers_from_control_nodes"),
(21800, -1, 21799, True, "Cannot change listener port"),
(21800, True, 21799, True, "Cannot change listener port"),
(21800, False, 21799, True, "Cannot change listener port"),
(21800, -1, 21799, False, "Cannot change listener port"),
(21800, True, 21799, False, "Cannot change listener port"),
(21800, False, 21799, False, "Cannot change listener port"),
],
)
def test_canonical_address_validation_error(self, payload_port, payload_peers_from, initial_port, initial_peers_from, error_msg, admin_user, patch):
node = Instance.objects.create(hostname='abc', node_type='hop')
if initial_port is not None:
ReceptorAddress.objects.create(address=node.hostname, port=initial_port, canonical=True, peers_from_control_nodes=initial_peers_from, instance=node)
assert ReceptorAddress.objects.filter(instance=node).count() == 1
else:
assert ReceptorAddress.objects.filter(instance=node).count() == 0
data = {'enabled': True} # Just to have something to post.
if payload_port != -1:
data['listener_port'] = payload_port
if payload_peers_from != -1:
data['peers_from_control_nodes'] = payload_peers_from
resp = patch(
url=reverse('api:instance_detail', kwargs={'pk': node.pk}),
data=data,
user=admin_user,
expect=400,
)
def test_peers_from_control_nodes_listener_port_enabled(self, admin_user, post):
assert error_msg in str(resp.data)
def test_changing_managed_listener_port(self, admin_user, patch):
"""
if peers_from_control_nodes is True, listener_port must be an integer
Assert that all other combinations are allowed
if instance is managed, cannot change listener port at all
"""
for index, item in enumerate(itertools.product(['hop', 'execution'], [True, False], [None, 6789])):
node_type, peers_from, listener_port = item
# only disallowed case is when peers_from is True and listener port is None
disallowed = peers_from and not listener_port
post(
url=reverse('api:instance_list'),
data={"hostname": f"abc{index}", "peers_from_control_nodes": peers_from, "node_type": node_type, "listener_port": listener_port},
user=admin_user,
expect=400 if disallowed else 201,
)
hop = Instance.objects.create(hostname='abc', node_type="hop", managed=True)
resp = patch(
url=reverse('api:instance_detail', kwargs={'pk': hop.pk}),
data={"listener_port": 5678},
user=admin_user,
expect=400, # cannot set port
)
assert 'Cannot change listener port for managed nodes.' in str(resp.data)
ReceptorAddress.objects.create(instance=hop, address='hop', port=27199, canonical=True)
resp = patch(
url=reverse('api:instance_detail', kwargs={'pk': hop.pk}),
data={"listener_port": None},
user=admin_user,
expect=400, # cannot unset port
)
assert 'Cannot change listener port for managed nodes.' in str(resp.data)
def test_bidirectional_peering(self, admin_user, patch):
"""
cannot peer to a node that is already peered to it
if A -> B, then disallow B -> A
"""
hop1 = Instance.objects.create(hostname='hop1', node_type='hop')
hop1addr = ReceptorAddress.objects.create(instance=hop1, address='hop1', canonical=True)
hop2 = Instance.objects.create(hostname='hop2', node_type='hop')
hop2addr = ReceptorAddress.objects.create(instance=hop2, address='hop2', canonical=True)
hop1.peers.add(hop2addr)
resp = patch(
url=reverse('api:instance_detail', kwargs={'pk': hop2.pk}),
data={"peers": [hop1addr.id]},
user=admin_user,
expect=400,
)
assert 'Instance hop1 is already peered to this instance.' in str(resp.data)
def test_multiple_peers_same_instance(self, admin_user, patch):
"""
cannot peer to more than one address of the same instance
"""
hop1 = Instance.objects.create(hostname='hop1', node_type='hop')
hop1addr1 = ReceptorAddress.objects.create(instance=hop1, address='hop1', canonical=True)
hop1addr2 = ReceptorAddress.objects.create(instance=hop1, address='hop1alternate')
hop2 = Instance.objects.create(hostname='hop2', node_type='hop')
resp = patch(
url=reverse('api:instance_detail', kwargs={'pk': hop2.pk}),
data={"peers": [hop1addr1.id, hop1addr2.id]},
user=admin_user,
expect=400,
)
assert 'Cannot peer to the same instance more than once.' in str(resp.data)
@pytest.mark.parametrize('node_type', ['control', 'hybrid'])
def test_disallow_modifying_peers_control_nodes(self, node_type, admin_user, patch):
def test_changing_peers_control_nodes(self, node_type, admin_user, patch):
"""
for control nodes, peers field should not be
modified directly via patch.
"""
control = Instance.objects.create(hostname='abc', node_type=node_type)
hop1 = Instance.objects.create(hostname='hop1', node_type='hop', peers_from_control_nodes=True, listener_port=6789)
hop2 = Instance.objects.create(hostname='hop2', node_type='hop', peers_from_control_nodes=False, listener_port=6789)
assert [hop1] == list(control.peers.all()) # only hop1 should be peered
patch(
control = Instance.objects.create(hostname='abc', node_type=node_type, managed=True)
hop1 = Instance.objects.create(hostname='hop1', node_type='hop')
hop1addr = ReceptorAddress.objects.create(instance=hop1, address='hop1', peers_from_control_nodes=True, canonical=True)
hop2 = Instance.objects.create(hostname='hop2', node_type='hop')
hop2addr = ReceptorAddress.objects.create(instance=hop2, address='hop2', canonical=True)
assert [hop1addr] == list(control.peers.all()) # only hop1addr should be peered
resp = patch(
url=reverse('api:instance_detail', kwargs={'pk': control.pk}),
data={"peers": ["hop2"]},
data={"peers": [hop2addr.id]},
user=admin_user,
expect=400, # cannot add peers directly
expect=400, # cannot add peers manually
)
assert 'Setting peers manually for managed nodes is not allowed.' in str(resp.data)
patch(
url=reverse('api:instance_detail', kwargs={'pk': control.pk}),
data={"peers": ["hop1"]},
data={"peers": [hop1addr.id]},
user=admin_user,
expect=200, # patching with current peers list should be okay
)
patch(
resp = patch(
url=reverse('api:instance_detail', kwargs={'pk': control.pk}),
data={"peers": []},
user=admin_user,
expect=400, # cannot remove peers directly
)
assert 'Setting peers manually for managed nodes is not allowed.' in str(resp.data)
patch(
url=reverse('api:instance_detail', kwargs={'pk': control.pk}),
data={},
@@ -148,23 +340,25 @@ class TestPeers:
url=reverse('api:instance_detail', kwargs={'pk': hop2.pk}),
data={"peers_from_control_nodes": True},
user=admin_user,
expect=200, # patching without data should be fine too
expect=200,
)
assert {hop1, hop2} == set(control.peers.all()) # hop1 and hop2 should now be peered from control node
assert {hop1addr, hop2addr} == set(control.peers.all()) # hop1 and hop2 should now be peered from control node
def test_disallow_changing_hostname(self, admin_user, patch):
def test_changing_hostname(self, admin_user, patch):
"""
cannot change hostname
"""
hop = Instance.objects.create(hostname='hop', node_type='hop')
patch(
resp = patch(
url=reverse('api:instance_detail', kwargs={'pk': hop.pk}),
data={"hostname": "hop2"},
user=admin_user,
expect=400,
)
def test_disallow_changing_node_state(self, admin_user, patch):
assert 'Cannot change hostname.' in str(resp.data)
def test_changing_node_state(self, admin_user, patch):
"""
only allow setting to deprovisioning
"""
@@ -175,12 +369,54 @@ class TestPeers:
user=admin_user,
expect=200,
)
patch(
resp = patch(
url=reverse('api:instance_detail', kwargs={'pk': hop.pk}),
data={"node_state": "ready"},
user=admin_user,
expect=400,
)
assert "Can only change instances to the 'deprovisioning' state." in str(resp.data)
def test_changing_managed_node_state(self, admin_user, patch):
"""
cannot change node state of managed node
"""
hop = Instance.objects.create(hostname='hop', node_type='hop', managed=True)
resp = patch(
url=reverse('api:instance_detail', kwargs={'pk': hop.pk}),
data={"node_state": "deprovisioning"},
user=admin_user,
expect=400,
)
assert 'Cannot deprovision managed nodes.' in str(resp.data)
def test_changing_managed_peers_from_control_nodes(self, admin_user, patch):
"""
cannot change peers_from_control_nodes of managed node
"""
hop = Instance.objects.create(hostname='hop', node_type='hop', managed=True)
ReceptorAddress.objects.create(instance=hop, address='hop', peers_from_control_nodes=True, canonical=True)
resp = patch(
url=reverse('api:instance_detail', kwargs={'pk': hop.pk}),
data={"peers_from_control_nodes": False},
user=admin_user,
expect=400,
)
assert 'Cannot change peers_from_control_nodes for managed nodes.' in str(resp.data)
hop.peers_from_control_nodes = False
hop.save()
resp = patch(
url=reverse('api:instance_detail', kwargs={'pk': hop.pk}),
data={"peers_from_control_nodes": False},
user=admin_user,
expect=400,
)
assert 'Cannot change peers_from_control_nodes for managed nodes.' in str(resp.data)
@pytest.mark.parametrize('node_type', ['control', 'hybrid'])
def test_control_node_automatically_peers(self, node_type):
@@ -191,9 +427,10 @@ class TestPeers:
peer to hop should be removed if hop is deleted
"""
hop = Instance.objects.create(hostname='hop', node_type='hop', peers_from_control_nodes=True, listener_port=6789)
hop = Instance.objects.create(hostname='hop', node_type='hop')
hopaddr = ReceptorAddress.objects.create(instance=hop, address='hop', peers_from_control_nodes=True, canonical=True)
control = Instance.objects.create(hostname='abc', node_type=node_type)
assert hop in control.peers.all()
assert hopaddr in control.peers.all()
hop.delete()
assert not control.peers.exists()
@@ -203,26 +440,50 @@ class TestPeers:
if a new node comes online, other peer relationships should
remain intact
"""
hop1 = Instance.objects.create(hostname='hop1', node_type='hop', listener_port=6789, peers_from_control_nodes=True)
hop2 = Instance.objects.create(hostname='hop2', node_type='hop', listener_port=6789, peers_from_control_nodes=False)
hop1.peers.add(hop2)
hop1 = Instance.objects.create(hostname='hop1', node_type='hop')
hop2 = Instance.objects.create(hostname='hop2', node_type='hop')
hop2addr = ReceptorAddress.objects.create(instance=hop2, address='hop2', canonical=True)
hop1.peers.add(hop2addr)
# a control node is added
Instance.objects.create(hostname='control', node_type=node_type, listener_port=None)
Instance.objects.create(hostname='control', node_type=node_type)
assert hop1.peers.exists()
def test_group_vars(self, get, admin_user):
def test_reverse_peers(self, admin_user, get):
"""
if hop1 peers to hop2, hop1 should
be in hop2's reverse_peers list
"""
hop1 = Instance.objects.create(hostname='hop1', node_type='hop')
hop2 = Instance.objects.create(hostname='hop2', node_type='hop')
hop2addr = ReceptorAddress.objects.create(instance=hop2, address='hop2', canonical=True)
hop1.peers.add(hop2addr)
resp = get(
url=reverse('api:instance_detail', kwargs={'pk': hop2.pk}),
user=admin_user,
expect=200,
)
assert hop1.pk in resp.data['reverse_peers']
def test_group_vars(self):
"""
control > hop1 > hop2 < execution
"""
control = Instance.objects.create(hostname='control', node_type='control', listener_port=None)
hop1 = Instance.objects.create(hostname='hop1', node_type='hop', listener_port=6789, peers_from_control_nodes=True)
hop2 = Instance.objects.create(hostname='hop2', node_type='hop', listener_port=6789, peers_from_control_nodes=False)
execution = Instance.objects.create(hostname='execution', node_type='execution', listener_port=6789)
control = Instance.objects.create(hostname='control', node_type='control')
hop1 = Instance.objects.create(hostname='hop1', node_type='hop')
ReceptorAddress.objects.create(instance=hop1, address='hop1', peers_from_control_nodes=True, port=6789, canonical=True)
execution.peers.add(hop2)
hop1.peers.add(hop2)
hop2 = Instance.objects.create(hostname='hop2', node_type='hop')
hop2addr = ReceptorAddress.objects.create(instance=hop2, address='hop2', peers_from_control_nodes=False, port=6789, canonical=True)
execution = Instance.objects.create(hostname='execution', node_type='execution')
ReceptorAddress.objects.create(instance=execution, address='execution', peers_from_control_nodes=False, port=6789, canonical=True)
execution.peers.add(hop2addr)
hop1.peers.add(hop2addr)
control_vars = yaml.safe_load(generate_group_vars_all_yml(control))
hop1_vars = yaml.safe_load(generate_group_vars_all_yml(hop1))
@@ -265,13 +526,15 @@ class TestPeers:
control = Instance.objects.create(hostname='control1', node_type='control')
write_method.assert_not_called()
# new hop node with peers_from_control_nodes False (no)
hop1 = Instance.objects.create(hostname='hop1', node_type='hop', listener_port=6789, peers_from_control_nodes=False)
# new address with peers_from_control_nodes False (no)
hop1 = Instance.objects.create(hostname='hop1', node_type='hop')
hop1addr = ReceptorAddress.objects.create(instance=hop1, address='hop1', peers_from_control_nodes=False, canonical=True)
hop1.delete()
write_method.assert_not_called()
# new hop node with peers_from_control_nodes True (yes)
hop1 = Instance.objects.create(hostname='hop1', node_type='hop', listener_port=6789, peers_from_control_nodes=True)
# new address with peers_from_control_nodes True (yes)
hop1 = Instance.objects.create(hostname='hop1', node_type='hop')
hop1addr = ReceptorAddress.objects.create(instance=hop1, address='hop1', peers_from_control_nodes=True, canonical=True)
write_method.assert_called()
write_method.reset_mock()
@@ -280,20 +543,21 @@ class TestPeers:
write_method.assert_called()
write_method.reset_mock()
# new hop node with peers_from_control_nodes False and peered to another hop node (no)
hop2 = Instance.objects.create(hostname='hop2', node_type='hop', listener_port=6789, peers_from_control_nodes=False)
hop2.peers.add(hop1)
# new address with peers_from_control_nodes False and peered to another hop node (no)
hop2 = Instance.objects.create(hostname='hop2', node_type='hop')
ReceptorAddress.objects.create(instance=hop2, address='hop2', peers_from_control_nodes=False, canonical=True)
hop2.peers.add(hop1addr)
hop2.delete()
write_method.assert_not_called()
# changing peers_from_control_nodes to False (yes)
hop1.peers_from_control_nodes = False
hop1.save()
hop1addr.peers_from_control_nodes = False
hop1addr.save()
write_method.assert_called()
write_method.reset_mock()
# deleting hop node that has peers_from_control_nodes to False (no)
hop1.delete()
# deleting address that has peers_from_control_nodes to False (no)
hop1.delete() # cascade deletes to hop1addr
write_method.assert_not_called()
# deleting control nodes (no)
@@ -315,8 +579,8 @@ class TestPeers:
# not peered, so config file should not be updated
for i in range(3):
Instance.objects.create(hostname=f"exNo-{i}", node_type='execution', listener_port=6789, peers_from_control_nodes=False)
inst = Instance.objects.create(hostname=f"exNo-{i}", node_type='execution')
ReceptorAddress.objects.create(instance=inst, address=f"exNo-{i}", port=6789, peers_from_control_nodes=False, canonical=True)
_, should_update = generate_config_data()
assert not should_update
@@ -324,11 +588,13 @@ class TestPeers:
expected_peers = []
for i in range(3):
expected_peers.append(f"hop-{i}:6789")
Instance.objects.create(hostname=f"hop-{i}", node_type='hop', listener_port=6789, peers_from_control_nodes=True)
inst = Instance.objects.create(hostname=f"hop-{i}", node_type='hop')
ReceptorAddress.objects.create(instance=inst, address=f"hop-{i}", port=6789, peers_from_control_nodes=True, canonical=True)
for i in range(3):
expected_peers.append(f"exYes-{i}:6789")
Instance.objects.create(hostname=f"exYes-{i}", node_type='execution', listener_port=6789, peers_from_control_nodes=True)
inst = Instance.objects.create(hostname=f"exYes-{i}", node_type='execution')
ReceptorAddress.objects.create(instance=inst, address=f"exYes-{i}", port=6789, peers_from_control_nodes=True, canonical=True)
new_config, should_update = generate_config_data()
assert should_update

Some files were not shown because too many files have changed in this diff.