Compare commits


685 Commits

Author SHA1 Message Date
Alan Rominger
918d5b3565 Do some aesthetic adjustments to role presentation fields (#15153)
* Do some aesthetic adjustments to role presentation fields

* Correctly test managed setup

* Minor migration adjustments
2024-04-29 17:11:10 -04:00
Hao Liu
158314af50 Delete deprecated Cypress UI e2e_test.yml (#15155)
Delete e2e_test.yml

Remove because it's no longer being maintained
2024-04-29 12:58:10 -04:00
Seth Foster
4754819a09 awx modules wait on event processing finished (#15152)
This change makes "wait: true" for jobs and syncs
look at the event_processing_finished field instead of
the finished field.

Right now there is a race condition where
a module might try to delete an inventory, but the events
for an inventory sync have not yet finished. We have a
RelatedJobsPreventDeleteMixin that checks for this condition.

Bulk jobs don't have event_processing_finished, so we just
use the finished field in that case.
2024-04-26 17:33:34 -04:00
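A minimal sketch of the wait condition described above, assuming a job record as returned by the API; the helper below is hypothetical, not the actual awx.awx module code.

```python
# Hypothetical helper, not the actual awx.awx module code.
def job_is_done(job: dict) -> bool:
    # Bulk jobs have no event_processing_finished, so fall back to the finished field.
    if 'event_processing_finished' in job:
        return bool(job['event_processing_finished'])
    return bool(job['finished'])
```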
Seth Foster
78fc23138a Pin openssl 3.0.7 (#15147)
followup to PR #15142

This commit pins openssl in the awx image,
not just the builder image.
2024-04-26 12:29:22 -04:00
Alan Rominger
014534bfa5 Upgrade DRF (#15144)
* Upgrade DRF

* Fix failures caused by DRF upgrade
2024-04-25 15:37:08 -04:00
Seth Foster
2502e7c7d8 Temporarily downgrade openssl (#15142)
openssl 3.2.0 has incompatibility issues with
the libpq version we are using, and causes
some C runtime errors:
"double free or corruption (out)"

see awx issue #15136

also this issue

github.com/conan-io/conan-center-index/pull/22615

once the libpq libraries on centos stream9 are
updated with the patch, we can unpin openssl

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2024-04-25 14:01:03 -04:00
Jeff Bradberry
fb237e3834 Stop pre-caching every resource in the system upon import
If we don't have something in the cache when we call
get_by_natural_key, do an actual filtered query for it and cache the
results.  We'll get more overall API calls this way, but they'll be
smaller and will happen while we are importing, not upfront.
2024-04-24 17:04:40 -04:00
irozet12
e4646ae611 Add help message for expiration tokens (#15076) (#15077)
Co-authored-by: Ирина Розет <irozet@astralinux.ru>
2024-04-24 19:58:09 +00:00
Bruno Sanchez
7dc77546f4 Adding CSRF Validation for schemas (#15027)
* Adding CSRF Validation for schemas

* Changing retrieve of scheme to avoid importing new library

* check if CSRF_TRUSTED_ORIGINS exists before accessing it

---------

Signed-off-by: Bruno Sanchez <brsanche@redhat.com>
2024-04-24 15:47:03 -04:00
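A minimal sketch of the guarded settings access mentioned in the last bullet above, assuming a configured Django settings module; illustrative only.

```python
# Illustrative: read CSRF_TRUSTED_ORIGINS only if it is defined in settings.
from django.conf import settings

trusted_origins = getattr(settings, "CSRF_TRUSTED_ORIGINS", [])
```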
Michael Tipton
f5f85666c8 Add ability to set SameSite policy for userLoggedIn cookie (#15100)
* Add ability to set SameSite policy for userLoggedIn cookie

* reformat line for linter
2024-04-24 15:44:31 -04:00
Alan Rominger
47a061eb39 Fix and test data migration error from DAB RBAC (#15138)
* Fix and test data migration error from DAB RBAC

* Fix up migration test

* Fix custom method bug

* Fix another fat fingered bug
2024-04-24 15:14:03 -04:00
Alan Rominger
c760577855 Adjust test for stricter DAB user view permission enforcement (#15130) 2024-04-23 15:21:06 -04:00
TVo
814ceb0d06 Backports previously approved corrections. (#15121)
* Backports previously approved corrections.

* Deleted a blank line in inventories line 100
2024-04-22 09:55:19 -06:00
Seth Foster
f178c84728 Fix instance peering pagination (#15108)
Currently the association box displays a
list of available instances/addresses that can
be peered to.

The pagination issue arises as follows:

- fetch 5 instances (based on page_size)
- filter these instances down based on some
criteria (like is_internal: false)
- show results

Filtering down the results inside of the fetch
method results in pagination errors (it may return fewer than
5, for example).

Instead, do the filtering via API queries. That way the
pagination count will be correct.

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2024-04-18 15:17:10 -04:00
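A hedged illustration of the approach above: filter server-side via query parameters so the page count stays accurate. The host and filter name below are assumptions, not the actual UI code.

```python
# Illustrative only: let the API do the filtering so page_size and count line up.
import requests

resp = requests.get(
    "https://awx.example.com/api/v2/instances/",            # hypothetical host
    params={"page_size": 5, "not__node_type": "control"},   # example filter; real criteria may differ
)
data = resp.json()
# data["results"] holds at most 5 already-filtered items and data["count"] matches,
# rather than fetching 5 rows and discarding some client-side.
```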
Jeff Bradberry
c0f71801f6 Use $(shell ...) to filter the redis docker volumes
Makefiles use $() for variable templating, so trying to use it
directly as a shell subcommand doesn't work.
2024-04-17 16:45:23 -04:00
Jan-Piet Mens
4e8e1398d7 Omit using -X when not needed, and don't default to demonstrating -k
It's the year 2024: using -k by default with https URLs should be deprecated. (I have left one mention of it possibly being required if no CA is available.) Furthermore, neither -XGET nor -XPOST is required, as curl(1) knows when to use which method.
2024-04-17 15:18:57 -04:00
STEVEN ADAMS
3d6a8fd4ef chore: remove repetitive words (#15101)
Signed-off-by: hugehope <cmm7@sina.cn>
2024-04-17 19:18:25 +00:00
Hao Liu
e873bb1304 Fix wsrelay connection leak (#15113)
- when re-establishing the connection to the db, close the old connection
- re-initialize WebSocketRelayManager when restarting asyncio.run
- log and ignore errors in cleanup_offline_host (this might come back to bite us)
- clean up the connection when WebSocketRelayManager crashes
2024-04-16 14:54:36 -04:00
lucas-benedito
672f1eb745 Fixed missing fstring from wsrelay logging (#15094)
Fixed missing fstring from wsrelay logging
2024-04-16 13:32:34 -04:00
abikouo
199507c6f1 Support Google credentials on Terraform credentials type 2024-04-16 10:34:38 -04:00
jessicamack
a176c04c14 Update LDAP/SAML config dump command (#15106)
* update LDAP config dump

* return missing fields if any

* update test, remove unused import

* return bool and fields. check for missing_fields
2024-04-15 12:26:57 -04:00
Alan Rominger
e3af658f82 Use released version of django-radius (#15103) 2024-04-12 16:34:23 -04:00
Hao Liu
e8a3b96482 Use latest awx-ee in devel CI (#15098) 2024-04-12 14:30:39 -04:00
Hao Liu
c015e8413e Store molecule debug output to github artifacts (#15107)
Related to https://github.com/ansible/awx-operator/pull/1823
2024-04-12 13:56:25 -04:00
Alan Rominger
390c2d8907 [RBAC] Update related name to reflect upstream DAB change (#15093)
Update related name to reflect upstream DAB change
2024-04-11 14:59:09 -04:00
Alan Rominger
97605c5f19 Make custom urls work with RBAC 2024-04-11 14:59:09 -04:00
Alan Rominger
818c326160 [RBAC] Rename managed role definitions, and move migration logic here (#15087)
* Rename managed role definitions, and move migration logic here

* Fix naming capitalization
2024-04-11 14:59:09 -04:00
Alan Rominger
c98727d83e [RBAC] Fix bug where team could not be given read_role to other team (#15067)
* Fix bug where team could not be given read_role to other team

* Avoid unwanted triggers of parentage granting

* Restructure signal structure

* Fix another bug unmasked by team member permission fix

* Changes to live with test writing

* Use equality as opposed to string "in"

from Seth in review comment

Co-authored-by: Seth Foster <fosterseth@users.noreply.github.com>

---------

Co-authored-by: Seth Foster <fosterseth@users.noreply.github.com>
2024-04-11 14:59:09 -04:00
Alan Rominger
a138a92e67 [RBAC] Tweaks to reflect what endpoints are deprecated (#15068)
Tweaks to reflect what endpoints are deprecated
2024-04-11 14:59:09 -04:00
Alan Rominger
7aed19ffda Fix missing role membership when giving creator permissions (#15058) 2024-04-11 14:59:09 -04:00
Seth Foster
3bb559dd09 AWX Collections for DAB RBAC
Adds new modules for CRUD operations on the
following endpoints:

- api/v2/role_definitions
- api/v2/role_user_assignments
- api/v2/role_team_assignments

Note: assignment is Create or Delete only

Additional changes:
- Currently DAB endpoints do not have "type"
field on the resource list items. So this modifies
the create_or_update_if_needed to allow manually
specifying item type.

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2024-04-11 14:59:09 -04:00
Alan Rominger
389a729b75 [RBAC] Fix known issues with backward compatible access_list (#15052)
* Remove duplicate access_list entries for direct team access

* Revert test changes for superuser in access_list
2024-04-11 14:59:09 -04:00
Alan Rominger
2f3c9122fd Generalize can_delete solution, use devel DAB (#15009)
* Generalize can_delete solution, use devel DAB

* Fix bug where model was used instead of model_name

* Linter fixes
2024-04-11 14:59:09 -04:00
Alan Rominger
733478ee19 [RBAC] Fix server error from delete capability of approvals (#15002)
Fix server error from delete capability of approvals
2024-04-11 14:59:09 -04:00
Alan Rominger
41c6337fc1 [RBAC] Fix migration for created and modified field changes (#14999)
Fix migration for created and modified field changes
2024-04-11 14:59:09 -04:00
Alan Rominger
7446da1c2f Bump migration number for RBAC branch 2024-04-11 14:59:09 -04:00
Alan Rominger
c79fca5ceb Adopt internal DAB RBAC Permission model (#14994) 2024-04-11 14:59:09 -04:00
Alan Rominger
dc5f43927a Minor RBAC test fix (#14982) 2024-04-11 14:59:09 -04:00
Alan Rominger
35a5a81e19 Use AWX base view to make unauth requests 401 (#14981) 2024-04-11 14:59:09 -04:00
Alan Rominger
9dcc11d54c [DAB RBAC] Re-implement system auditor as a singleton role in new system (#14963)
* Add new enablement settings from DAB RBAC

* Initial implementation of system auditor as role without testing

* Fix system auditor role, remove duplicate assignments

* Make the system auditor role managed

* Flake8 fix

* Remove another thing from old solution

* Fix a few test failures

* Add extra setting to disable custom system roles via API

* Add test for custom role prohibition
2024-04-11 14:59:09 -04:00
Alan Rominger
74ce21fa54 Bump number of allowed endpoints (#14956) 2024-04-11 14:59:09 -04:00
Alan Rominger
eb93660b36 Cache organization child evaluations and remove hacks 2024-04-11 14:59:09 -04:00
Alan Rominger
f50e597548 Cast ObjectRole object_id to int, very wrong, tmp fix 2024-04-11 14:59:09 -04:00
Alan Rominger
817c3b36b9 Replace role system with permissions-based DB roles
Develop ability to list permissions for existing roles

Create a model registry for RBAC-tracked models

Write the data migration logic for creating
  the preloaded role definitions

Write migration to migrate old Role into ObjectRole model

This loops over the old Role model, knowing it is unique
  on object and role_field

Most of the logic is concerned with identifying the
  needed permissions, and then corresponding role definition

As needed, object roles are created and users then teams
  are assigned

Write re-computation of cache logic for teams
  and then for object role permissions

Migrate new RBAC internals to ansible_base

Migrate tests to ansible_base

Implement solution for visible_roles

Expose URLs for DAB RBAC
2024-04-11 14:59:09 -04:00
Alan Rominger
1859a6ae69 Fix failure from DAB (#15102)
@AlanCoding said to do this 🚌
2024-04-11 17:10:11 +00:00
Chris Meyers
0645d342dd Implement optional url prefix the Django way
* Before, the optional url prefix feature required calling our
  versioned reverse(). This worked _ok_ until we added more
  and more urls from 3rd party apps. Those 3rd party apps do not call
  our reverse(), rightfully so.
* This implementation looks at the incoming request path. If it includes
  the special optional url prefix, then we register ALL the urls WITH
  the optional url prefix.
  If the incoming request path does NOT contain the optional url prefix,
  then we register ALL the urls WITHOUT the optional url prefix.
* Before this, we were registering BOTH sets of urls and then used reverse()
  plus the request as context to decide which url to return.
2024-04-10 16:03:09 -04:00
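A rough sketch of the mechanism described above; the names and structure are illustrative, not the actual AWX implementation.

```python
# Illustrative sketch: register ALL urls either WITH or WITHOUT the optional prefix,
# depending on the incoming request path.
from django.urls import include, path

OPTIONAL_PREFIX = "controller"  # assumed example value

def urlpatterns_for(request_path: str):
    if request_path.startswith(f"/api/{OPTIONAL_PREFIX}/"):
        return [path(f"api/{OPTIONAL_PREFIX}/", include("awx.api.urls"))]
    return [path("api/", include("awx.api.urls"))]
```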
Chris Meyers
61ec03e540 Move named url init out of Middleware init
* Middleware classes can be instantiated multiple times in testing. To
  make this a non-issue, move the init code for named urls out of the
  middleware init and into the app init.
* This makes it easier to use other testing facilities, like
  LiveServerTestCase, without having to mock the named url middleware
  init.
2024-04-10 15:46:30 -04:00
Shane McDonald
09f0a366bf Revert accidental line deletion
I made a mistake in https://github.com/ansible/awx/pull/15096. I realized afterwards that the deleted line was being consumed by the make target.
2024-04-10 12:19:07 -04:00
Shane McDonald
778961d31e Fix awxkit uploads when re-running promote workflow 2024-04-10 12:12:35 -04:00
Shane McDonald
f962c88df3 Allow for manually restarting promote workflow
The promote workflow recently failed. Since this was just a problem with our automation, it would be nice if we didn't have to do another release just to fix our tooling.
2024-04-10 12:00:30 -04:00
Hao Liu
8db3ffe719 Check galaxy collection with and without redirect 2024-04-10 11:26:42 -04:00
Alan Rominger
cc5d4dd119 Clean the postgres 15 volume (#15083) 2024-04-09 16:06:42 -04:00
Hao Liu
86204cf23b Publish amd64 and arm64 awx image on release (#15053)
* Stage multi-arch awx image

- change CI to use `make awx-kube-build` instead of build playbook
- update staging CI to build and push multiarch awx image
- update doc to use `make awx-kube-build` to build awx image
- remove build playbook (no longer used)
2024-04-09 09:50:09 -04:00
Chris Meyers
468949b899 Remove unneeded drf_reverse overwrite
* `drf_reverse()` was introduced here 1a75b1836e
* There is a comment about monkey patching. I can't find the monkey patch it is referencing.
* AWX `drf_reverse()` is a copy paste of this https://github.com/encode/django-rest-framework/blob/master/rest_framework/reverse.py#L32
  * The only difference is DRF's version calls `preserve_builtin_query_params()`
    * `preserve_builtin_query_params()` only does something if `api_settings.URL_FORMAT_OVERRIDE` is defined.
      * We don't use `REST_FRAMEWORK.URL_FORMAT_OVERRIDE`
2024-04-08 16:14:11 -04:00
TVo
f1d9966224 Added docs for terraform credential/inventory source (#15004)
* Added docs for terraform credential/inventory source

* Updated screen captures for inventories and source to match wfjt example

* Added docs for terraform credential/inventory source

* Updated screen captures for inventories and source to match wfjt example

* Update docs/docsite/rst/userguide/inventories.rst

Co-authored-by: Aoki <lucasaoki@users.noreply.github.com>

* Revised per review feedback.

* Update docs/docsite/rst/userguide/inventories.rst

Co-authored-by: Helen Bailey <hakbailey@gmail.com>

---------

Co-authored-by: Aoki <lucasaoki@users.noreply.github.com>
Co-authored-by: Helen Bailey <hakbailey@gmail.com>
2024-04-05 09:57:27 -06:00
César Francisco San Nicolás Martínez
b022b50966 fix service-index url calling reverse method 2024-04-04 07:48:04 -04:00
Elijah DeLee
e2f4213839 Round out options url prefix edge cases 2024-04-04 07:48:04 -04:00
Chris Meyers
ae1235b223 Rename container hostname from awx_1 to awx-1
* Django and other webservers that care about proper hostnames don't
  like underscores in them.
2024-04-03 15:58:17 -04:00
Tom Page
c061f59f1c Add tags and skip_tags option to awx.awx.workflow_launch (#15011)
Signed-off-by: Tom Page <tpage@redhat.com>
2024-04-03 15:29:43 -04:00
Jeff Bradberry
3edaaebba2 Adjust the awx-manage script to make use of importlib (#15015)
* Adjust the awx-manage script to make use of importlib

removing the deprecation warning.

* Symlink awx-manage in docker-compose

No longer need to rebuild the docker-compose devel image to load changes to `tools/docker-compose/awx-manage` in the development environment

---------

Co-authored-by: Hao Liu <44379968+TheRealHaoLiu@users.noreply.github.com>
2024-04-02 17:20:05 -04:00
Hao Liu
7cdf1c7f96 Update DOCKER_COMPOSE command to docker compose (#15056)
* Update DOCKER_COMPOSE command

docker-compose will stop being supported soon and is causing CI flakes; set the DOCKER_COMPOSE default to `docker compose`

* Give AWX network a static name
2024-04-02 15:13:14 -04:00
Hao Liu
d558204192 Make db password optional for wsrelay (#15046)
* Make db password optional for wsrelay

* Change DB setting copy to deepcopy

safer than copy()

Co-Authored-By: Jeff Bradberry <685957+jbradberry@users.noreply.github.com>

---------

Co-authored-by: Jeff Bradberry <685957+jbradberry@users.noreply.github.com>
2024-04-02 11:47:24 -04:00
Chris Meyers
d06ce8f911 Remove json formatter for job lifecycle
* We didn't really make use of json formatting across the app. Remove
  the special-case json formatter. Instead, output all of the meta-data
  associated with a job lifecycle event every time. Before, we tried to
  only output this extra meta-data when in DEBUG mode. It turns out this
  information is smaller and more useful than we thought,
  so always output it.
2024-04-02 11:39:34 -04:00
Alan Rominger
4b6f7e0ebe Add link to service-index URL 2024-03-29 10:07:15 +00:00
John Barker
370c567be1 Remove /17 magic number from Forum URL 2024-03-29 10:03:45 +00:00
John Barker
9be64f3de5 Improve social documentation release_process.md 2024-03-29 10:03:45 +00:00
Alan Rominger
30500e5a95 Re-parent DAB views from AWX base 2024-03-29 10:03:12 +00:00
David O Neill
bb323c5710 Loosen up body check on template
https://github.com/ansible/awx/issues/14985
https://github.com/ansible/awx/issues/13983
2024-03-29 10:02:18 +00:00
Chris Meyers
7571df49d5 Pass --exclude="list of exclude dirs like this"
* Previously, the params were passed without quotes and each directory
  was being interpreted as a separate command line flag.
* Added some structure around the error messages returned from
  receptorctl so we can more easily decide how to handle each case. For
  example, releasing the cleanup job from receptor doesn't absolutely
  need to succeed because we have a periodic job that does that. In
  fact, that is the thing that is making it fail .. but I digress.
2024-03-28 14:42:08 -04:00
Jeff Bradberry
1559c21033 Change awx.awx.application to output the OAuth2 client secret
if one was generated.
2024-03-28 10:17:46 -04:00
PabloHiro
d9b81731e9 Fix: broken reference to API url 2024-03-27 20:37:53 +01:00
Adam Miller
2034cca3a9 update playbooks to use fqcn
Signed-off-by: Adam Miller <admiller@redhat.com>
2024-03-27 15:13:43 -04:00
Chris Meyers
0b5e59d9cb Fix websocket relay. Set autocommit so conn.notifies() does not block forever (#15043)
Without autocommit conn.notifies() blocks forever
2024-03-27 15:11:17 -04:00
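A minimal sketch of why autocommit matters here, assuming psycopg 3 and its notifies() generator; the DSN and channel name are placeholders.

```python
# Minimal sketch, assuming psycopg 3; DSN and channel are placeholders.
import psycopg

conn = psycopg.connect("dbname=awx", autocommit=True)  # without autocommit, notifies() can block forever
conn.execute("LISTEN my_channel")
for notification in conn.notifies():  # yields NOTIFY messages as they arrive
    print(notification.channel, notification.payload)
```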
Alan Rominger
f48b2d1ae5 Add resource and ansible_id to serializers (#15020) 2024-03-26 22:37:15 -04:00
Dimitri Savineau
b44bb98c7e Dockerfile: Fix collectstatic command (#15035)
Recent changes in awx and/or django ansible base cause the django
collectstatic command to fail when using an empty settings file.
Instead, use the defaults settings file from controller via
DJANGO_SETTINGS_MODULE=awx.settings.defaults

[linux/amd64 builder 13/13] RUN AWX_SETTINGS_FILE=/dev/null
SKIP_SECRET_KEY_CHECK=yes SKIP_PG_VERSION_CHECK=yes
/var/lib/awx/venv/awx/bin/awx-manage collectstatic --noinput --clear
Traceback (most recent call last):
(...)
django.core.exceptions.ImproperlyConfigured: settings.DATABASES is improperly
configured. Please supply the ENGINE value. Check settings documentation for
more details.

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
2024-03-26 14:19:51 -04:00
Hao Liu
8cafdf0400 Fix wsrelay KeyboardInterrupt not respected (#15036)
- stop wsrelay on keyboard interruption
- restart wsrelay for any other failure reason
2024-03-26 17:29:15 +00:00
Hao Liu
3f566c8737 Fix wsrelay not retry to establish db connection (#15031)
- run_wsrelay retries running wsrelay forever with a 10 second sleep
- wsrelay restarts via the `on_ws_heartbeat` task if the db connection goes away
2024-03-26 11:56:16 -04:00
Hao Liu
c8021a25bf Fix keycloak doc (#15024) 2024-03-25 15:01:49 -04:00
Matt Martz
934646a0f6 Address first_found skip bug (#15017)
* Address first_found skip bug

* Don't attempt installing project root requirements.yml as v2 collection format
2024-03-22 12:06:43 +01:00
Michael Abashian
9bb97dd658 Fix bug where extra variables were reset on schedule edit
Fix survey prompt presentation inconsistencies

Remove unnecessary conditional

This conditional always returned true.  See the following warning: This condition will always return 'true' since JavaScript compares objects by reference, not value.

Fix schedule edit tests
2024-03-20 10:30:10 -04:00
Hao Liu
7150f5edc6 Editable dependencies in docker compose development environment (#14979)
* Editable dependencies in docker compose development environment
2024-03-19 15:09:15 -04:00
Hao Liu
93da15c0ee Setting modification to address requests from UI_NEXT devs (#14996)
Modification to settings

- Add hidden to indicate to UI_NEXT to hide the field
- Add warning_text to indicate to UI_NEXT to display a warning when a specific setting is modified
- Address some non-required fields being marked as required
2024-03-19 15:08:41 -04:00
Hao Liu
ab593bda45 Add setting for configuring optional URL prefix for /api (#14939)
* Add setting for configuring optional URL prefix for /api

Add OPTIONAL_API_URLPATTERN_PREFIX setting

examples:
- if set to `''` (empty string) API pattern will be `/api`
- if set to 'controller' API pattern will be `/api` AND `/api/controller`
2024-03-19 15:56:33 +00:00
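A short settings sketch of the behavior described above; the value is only an example.

```python
# Example settings snippet (value is illustrative):
OPTIONAL_API_URLPATTERN_PREFIX = 'controller'
# ''            -> API served at /api only
# 'controller'  -> API served at both /api and /api/controller
```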
TVo
065bd3ae2a Backported from product-docs PR #2001 (misc doc cleanup) (#14980)
* Backported from product-docs PR #2001 (misc doc cleanup)

* Update docs/docsite/rst/administration/awx-manage.rst
2024-03-15 12:20:02 -06:00
Hao Liu
8ff7260bc6 Add dump_auth_config management cmd (for SAML and LDAP) (#14947)
* Add dump_auth_config management cmd

- Dump SAML config from AWX to DAB authenticator config in json format

* Add dumping of LDAP settings

* add test for command

* Fix is_enabled

* fix command name typo

Co-authored-by: Hao Liu <44379968+TheRealHaoLiu@users.noreply.github.com>

* add fields to config, add name to data

* break out IDP values

* change test fields and value comparison

* edit help text, reformat settings

---------

Co-authored-by: jessicamack <jmack@redhat.com>
2024-03-15 13:47:30 -04:00
Hao Liu
a635445082 Fix failing bulk launch job due to create partition race
https://github.com/ansible/awx/pull/14910/files

introduced a bug where we no longer accept the right exceptions

when 2 jobs launch at the same time and try to create the job event table partition, 1 of the jobs will fail
2024-03-15 10:10:38 -04:00
Hao Liu
949e7efab1 Fix wsrelay hanging after db outage
TCP keepalive settings were moved out of settings.DATABASE to settings.LISTENER_DATABASES and are no longer being respected by wsrelay
2024-03-14 15:51:30 -04:00
Hao Liu
615f09226f Fix awx-manage run_wsrelay --status (#14997)
by not starting the metrics server if --status is passed in
2024-03-14 18:55:05 +00:00
Dave
d903c524f5 Fix for 14924 - Unformatted help text toast message (#14990)
Fix for 14924 - Unformatted help text is popped out when peers for instances are changed

Co-authored-by: David O Neill <daoneill@redhat.com>
2024-03-14 13:24:53 -04:00
Cesar Francisco San Nicolas Martinez
393d9c39c6 Mismatch dependencies version (#14986)
* Fixed mismatch between setuptools version in the makefile and requirements file

* Fix mismatch of versions in makefile and requirements

* Added maturin license
2024-03-14 13:32:56 +01:00
Hao Liu
dfab342bb4 Skip replicas test for awx-operator (#14987)
Speeds up CI; also, AWX code changes won't affect that test
2024-03-13 18:01:02 +00:00
Dave
12843eccf7 AAP-13369 Python 3.9 -> 3.11 upgrade (#14771)
* Python 3.9 -> 3.11 upgrade

* Test: updating azure-keyvault to 4.2.0

* Revert "Test: updating azure-keyvault to 4.2.0"

This reverts commit cf0b83699442e0c0de4a1152d4af8543a5e05b88.

* Test: updating azure-keyvault to latest and adding azure-identity

* Fix licenses

* Adding new licenses

* Revert "Fix licenses"

This reverts commit da3876911ef5ebbe7a8adbddd336ced3039b6228.

* Fixing dependencies

* Test: updating azure-keyvault to 4.2.0

* Fix licenses

* Revert "Fix licenses"

This reverts commit da3876911ef5ebbe7a8adbddd336ced3039b6228.

* Fixing dependencies

---------

Co-authored-by: César Francisco San Nicolás Martínez <csannico@redhat.com>
2024-03-13 14:41:40 +01:00
Hao Liu
dd9160135d Prune dangling images periodically (#14957)
Prune dangling images periodically

pairs with https://github.com/ansible/ansible-runner/pull/1342

this fixes the problem where we forcefully removed images when a setting change switched the EE image being used by a job, causing the job to fail
2024-03-12 10:57:57 -04:00
Chris Meyers
ad96a92fa7 Align Origin and Host header (#14970)
* Align Origin and Host header

* Before this change the Host: header was runserver. Seems to be set by
  nginx upstream flow.
* After this change we explicitly set the Host: header
* More about CSRF checks ...
  CSRF checks that Origin == Host. Think about how the browser works.

  <browser goes to awx.com>
  "I'm executing javascript that I downloaded from awx.com (ORIGIN) and
  I'm making an XHR POST request to awx.com (HOST)"
  Server verifies; Host: header == Origin: header; OK!

  vs. the malicious case.

  <hacker injects javascript code into google.com>
  <browser goes to google.com>
  "I'm executing javascript that I downloaded from google.com (ORIGIN)
  and I'm making an XHR POST request to awx.com (HOST)"
  Server verifies; Host: header != Origin: header; NOT OK!

* Update awx/settings/development.py

---------

Co-authored-by: Hao Liu <44379968+TheRealHaoLiu@users.noreply.github.com>
2024-03-11 17:06:09 -04:00
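A conceptual sketch of the Origin-vs-Host comparison walked through above; this is not Django's actual CSRF implementation.

```python
# Conceptual only: compare the host part of the Origin header with the Host header.
from urllib.parse import urlparse

def origin_matches_host(headers: dict) -> bool:
    origin_host = urlparse(headers.get("Origin", "")).netloc
    return origin_host == headers.get("Host", "")

print(origin_matches_host({"Origin": "https://awx.com", "Host": "awx.com"}))     # True
print(origin_matches_host({"Origin": "https://google.com", "Host": "awx.com"}))  # False
```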
David O Neill
ca8085fe7e English string validation to error code validation 2024-03-11 20:07:16 +00:00
Hao Liu
b076cb00a9 Revert "Implement project pulling from Azure DevOps using Service Pri… (#14977)
Revert "Implement project pulling from Azure DevOps using Service Principals (#14628)"

This reverts commit 2e2cd7f2de.
2024-03-11 14:05:24 +00:00
John Westcott IV
ee9eac15dc Upgrade to postgres:15 (#14230)
* Upgrade to postgres:15
* Changed postgres:15 to quay.io/sclorg/postgresql-15-c9s
2024-03-07 16:27:03 -05:00
Hao Liu
3f2f7b75a6 [developer productivity improvement] Running awx components in vscode debugger (#14942)
Enable VSCode debugger integration when attaching VSCode to the AWX docker-compose development environment container

- add debugpy launch target in `.vscode/launch.json` to enable launching awx processes with debugpy
- add vscode tasks in `.vscode/tasks.json` to facilitate shutting down corresponding supervisord managed processes while launching process with debugpy
- modify nginx conf to add django runserver as fallback to uwsgi (enable launching API server via debugpy)
2024-03-07 19:31:50 +00:00
Dave
b71645f3b1 AAP-12273 remove incorrect sentence conjugation (#14946)
AAP-12273 remove incorrect sentence conjugation

Co-authored-by: David O Neill <daoneill@redhat.com>
2024-03-07 14:04:11 -05:00
Hao Liu
eb300252b8 Fix awx-autoreload in dev environment (#14968)
Fix awx-autoreload; a recent change made autoreload no longer take the command parameter
2024-03-07 16:33:23 +00:00
Patrick Uiterwijk
2e2cd7f2de Implement project pulling from Azure DevOps using Service Principals (#14628)
* Credential Lookup with multiple types
Allow looking up a credential with one of multiple type IDs.

* Allow Azure cred for SCM
Allow selecting an Azure Resource Manager credential for Git-based SCMs.
This is in order to enable using Azure Service Principals for project updates.

* Implement Azure Service Principal Git
This adds support for using an Azure Service Principal for project updates.

---------

Signed-off-by: Patrick Uiterwijk <patrick@puiterwijk.org>
2024-03-07 10:07:03 -05:00
Hao Liu
727278aaa3 Add pip>=21.3 to dev requirement to install django-ansible-base in editable mode (#14961)
Add pip>=21.3 to dev requirements, required for installing django-ansible-base in editable mode

https://peps.python.org/pep-0660/

PEP 660 – Editable installs for pyproject.toml based builds (wheel based)
2024-03-06 21:28:41 -05:00
Michael Abashian
81825ab755 Bump axios UI dep to 1.6.z (#14954)
* Bump axios UI dep to 1.6.z

* Add proxy-from-env license
2024-03-06 16:03:49 -05:00
Helen Bailey
7f2a1b6b03 Add terraform state inventory source (#14840)
* Add terraform state inventory source
* Update inventory source plugin test
Signed-off-by: Helen Bailey <hebailey@redhat.com>
2024-03-06 20:27:52 +00:00
Hao Liu
1b56d94d30 In development environment not auto-reload explicitly STOPPED processes (#14958)
Not auto-reload explicitly STOPPED processes

In the development/debug workflow we sometimes explicitly STOP processes; this makes sure auto-reload does not start them back up
2024-03-06 20:22:44 +00:00
Shane McDonald
e1e32c971c Allow for manually starting workflow to build devel images (#14955) 2024-03-06 17:53:05 +00:00
Alan Rominger
a4a2fabc01 Add test for utils method is_testing 2024-03-05 11:38:14 +00:00
Hao Liu
b7b7bfa520 Fix test that fail on rerun due to expecting exact IDs (#14943)
Fix test that fail on rerun

due to expecting exact IDs
2024-03-01 12:37:17 -05:00
jessicamack
887604317e Integrate resources API in Controller (#14896)
* add resources api to controller

* update setting

models are not the source of truth in AWX

* Force creation of ServiceID object in tests

* fix typo

* settings fix for CI

---------

Co-authored-by: Alan Rominger <arominge@redhat.com>
2024-03-01 11:18:35 -05:00
Chris Meyers
d35d8b6ed7 fails when run with vscode debugger
Tried to dig into why we ever needed this and could not find the answer. We removed it, ran all the tests, and they passed, so we assume it's no longer needed.
2024-02-29 15:22:41 -05:00
Hao Liu
ec28eff7f7 Convert swagger release fixture to env var (#14940)
`pytest awx/main/tests/docs --release=$(VERSION_TARGET)`
where --release is required breaks test discovery and running in vscode (from within the container)
2024-02-29 14:55:02 -05:00
Cesar Francisco San Nicolas Martinez
a5d17539c6 Removing Podman to use Docker again in the collection ci (#14938)
Removing Podman var to use Docker again in the collection ci
2024-02-28 14:56:49 +01:00
Thanhnguyet Vo
a49d894cf1 Added missing AWS secret management lookup creds. 2024-02-28 04:16:59 +00:00
Chris Meyers
b3466d4449 Make JWT the first auth class and default
* No harm in adding it to the list. If a JWT auth header is provided,
  then process it (valid or not). If a JWT is not provided, move on to
  the next auth.
2024-02-27 15:09:16 -05:00
Hao Liu
237adc6150 Publish multi-arch versioned awx-ee
dependent on https://github.com/ansible/awx-ee/pull/235
2024-02-27 08:21:51 +00:00
Hao Liu
09b028ee3c Publish multi-arch manifest of awx (#14929)
Promote multi-arch awx manifest
2024-02-26 17:05:57 -05:00
Hao Liu
fb83bfbc31 Fix ui_next banner (#14928) 2024-02-26 14:20:42 -05:00
Hao Liu
88e406e121 Fix CVEs and bump receptorctl (#14925)
CVE-2023-47627
CVE-2023-49083
CVE-2023-41040
CVE-2024-22195
CVE-2023-46137
2024-02-26 15:48:38 +00:00
Thanhnguyet Vo
59d0bcc63f Fixed some misc errors in illustrations and header formatting 2024-02-22 13:04:37 +00:00
Hao Liu
3fb3125bc3 Send QUIT to worker before dying (#14913)
Fix deadlock scenario where the dispatcher child process is stuck in a read-from-queue loop after the dispatcher parent process has decided to quit

Co-authored-by: Alan Rominger <arominge@redhat.com>
2024-02-21 16:08:43 -05:00
Alan Rominger
d70c6b9474 Reset another to test-playbooks 2024-02-21 15:11:12 +00:00
Alan Rominger
5549516a37 Reset these tests back to test-playbooks 2024-02-21 15:11:12 +00:00
Alan Rominger
14ac91a8a2 Swap repos test fix 2024-02-21 15:11:12 +00:00
Alan Rominger
d5753818a0 Stop using the bulky test-playbooks in tests where possible 2024-02-21 15:11:12 +00:00
Chad Ferman
33010a2e02 added # -*-coding:utf-8-*- to test if this fixes issues with users being unable to have Japanese, Chinese and Korean Characters in email messages 2024-02-21 14:50:46 +00:00
Alan Rominger
14454cc670 Fix playbook errors found by debugging 2024-02-21 14:36:41 +00:00
Alan Rominger
7ab2bca16e Fix missing var name change 2024-02-21 14:36:41 +00:00
Alan Rominger
f0f655f2c3 Style consistency for task 'when' 2024-02-21 14:36:41 +00:00
Brian Coca
4286d411a7 fix project_update role/collection install
- avoid looping
  - avoid using multiple files, only one should be provided and processed per type
  - use first_found and variables to locate existing file
  - skip if no file exists
2024-02-21 14:36:41 +00:00
James Talton
06ad32ed8e Enhance the dashboard job summary endpoint to contain canceled and error job counts
Signed-off-by: James Talton <jtalton@redhat.com>
2024-02-21 14:12:59 +00:00
Alan Rominger
1ebff23232 Do not rely on unreliable dir output and use exists query 2024-02-21 13:43:54 +00:00
Alan Rominger
700de14c76 Make the migration middleware faster, second attempt 2024-02-21 13:43:54 +00:00
Thanhnguyet Vo
8605e339df Deleted duplicate graphics that were converted to drawio. 2024-02-21 13:10:41 +00:00
Thanhnguyet Vo
e50954ce40 Fixed graphics, illustrations, tables, examples, sizing 2024-02-21 13:10:41 +00:00
Hao Liu
7caca60308 Multi-arch build for AWX images in ghcr.io (#14899)
build amd64 and ARM image for
- awx
- awx_devel
- awx_kube_devel
2024-02-20 17:17:31 -05:00
Cesar Francisco San Nicolas Martinez
f4e13af056 Add support for terraform credentials in awxkit (#14902) 2024-02-20 13:42:25 +00:00
Jérémy Lal
decdb56288 Fix mistake in french message 2024-02-20 12:33:59 +00:00
Fdubois97
bcd4c2e8ef Fix: Ajout de traduction sur la langue FR 2024-02-20 12:32:18 +00:00
Thaïs DOUCET
d663066ac5 Fix typo in french message
Typo in "Launch Template" french message

Signed-off-by: Thaïs DOUCET <tdoucet@amiltone.com>
2024-02-20 12:21:31 +00:00
Sasa Jovicic
1ceebb275c Fix: login rerouting only works on the user's current tab (PR:#14399)
Signed-off-by: Sasa Jovicic <sjovicic@anexia-it.com>
2024-02-20 12:16:13 +00:00
César Francisco San Nicolás Martínez
f78ba282a6 Ability to use awxkit with websocket custom urls 2024-02-20 11:58:18 +00:00
Mikhail Yohman
81d88df757 Add YAML tab for Job Output event modal. 2024-02-19 16:45:46 +00:00
Hao Liu
0bdb01a9e9 Allow dev image to build on fork (#14898)
* Allow dev image to build on fork

Fix uppercase repo owner error in CI
example
```
Run docker pull ghcr.io/TheRealHaoLiu/awx_devel:devel
  docker pull ghcr.io/TheRealHaoLiu/awx_devel:devel
  shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0}
  env:
    LC_ALL: C.UTF-8
    CI_GITHUB_TOKEN: ***
    DEV_DOCKER_OWNER: TheRealHaoLiu
    COMPOSE_TAG: devel
    py_version: 3
invalid reference format: repository name must be lowercase
```

---------

Co-authored-by: Rick Elrod <rick@elrod.me>
2024-02-19 16:19:59 +00:00
Alan Rominger
cd91fbf59f Label any changes to requirements folder with dependencies label 2024-02-19 15:56:53 +00:00
Michael Abashian
f240e640e5 Run prettier 2024-02-19 14:48:36 +00:00
ivarmu
46f489185e fixes problems with workflow nodes information section 2024-02-19 14:48:36 +00:00
Thanhnguyet Vo
dbb80fb7e3 Updated release notes so they don't need to be revised so often. 2024-02-19 12:18:57 +00:00
Seth Foster
cb3d357ce1 Disable install_bundle endpoint for ingress node
As we do for control nodes, disable the
install_bundle endpoint for ingress nodes.

This can be done by checking whether the instance's
managed field is True.

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2024-02-19 11:00:06 +00:00
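A tiny hedged sketch of the gating described above; the function name is hypothetical and the real check lives in the AWX API layer.

```python
# Hypothetical check, not the actual AWX view code.
def install_bundle_available(instance) -> bool:
    # Managed instances (control nodes, ingress hop nodes) do not get an install bundle.
    return not instance.managed
```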
Chris Meyers
dfa4db9266 Add tests for websocket endpoints
* authorized/not authorized tests for wsrelay endpoint
* not authorized test for web browser websockets
* skeleton of a test for authorized web browser websockets
2024-02-17 18:37:53 -05:00
Chris Meyers
6906a88dc9 Add pytest-asyncio to test channels websockets 2024-02-17 18:37:53 -05:00
Alan Rominger
1f7be9258c Remove tower_legacy module_utils that appears unused (#14421)
* Remove tower_legacy module that appears unused

* Update license details
2024-02-16 16:02:09 -05:00
Jake Jackson
dcce024424 Update release doc to check for awxkit tar files. (#14892)
add a step to release process to confirm tar file is published
2024-02-16 18:51:58 +00:00
Jeff Bradberry
79d7179c72 Fix the persistent breakage when cleaning up branches
The github workflow that we have set up for branch deletion doesn't work:

- the `on: delete` event does not support the `branches:` filter
- the `mode` flag for the aws_s3 module does not have `delete` as one
  of the options.  The proper option appears to be `delobj`.
2024-02-15 15:16:18 -05:00
Alan Rominger
4d80f886e0 Revert "Drop cython dep" (#14884)
* Revert "Remove cython lib"

This reverts commit 46f816e7a4.

* Revert "WIP consider droping cython dep"

This reverts commit 54b32c10f0.

* Update Cython comment
2024-02-15 11:58:17 -05:00
TVo
5179333185 Added mesh ingress content to instances chapter. (#14854)
* Added mesh ingress content to instances chapter.

* Changes to incorp feedback from @TheRealHaoLiu.

* Added mesh ingress content to instances chapter.

* Line 75

Co-authored-by: Seth Foster <fosterseth@users.noreply.github.com>

* Line 117

Co-authored-by: Seth Foster <fosterseth@users.noreply.github.com>

* Line 126

Co-authored-by: Seth Foster <fosterseth@users.noreply.github.com>

* Wording changes and update graphics

---------

Co-authored-by: Seth Foster <fosterseth@users.noreply.github.com>
2024-02-15 07:19:52 -07:00
Alan Rominger
362e11aaf2 Respect old downtime setting name if user has already set it 2024-02-15 12:34:24 +00:00
Hao Liu
decff01fa4 Update command for sos-report
wsbroadcast has been renamed to wsrelay
2024-02-15 12:32:05 +00:00
Daniel Gonçalves
a14cc8199d Add python 3.12 dependency (#14869)
Signed-off-by: Daniel Gonçalves <daniel.gonc@lves.fr>
2024-02-14 15:12:51 -05:00
Chris Meyers
b6436826f6 Support websocket per-endpoint auth
* Channels doesn't really give you an interface to support per-endpoint
  auth ... so this adds one.
* The web browser and node <--> node communication have different auth
  needs.
2024-02-14 15:11:52 -05:00
Chris Meyers
2109b5039e Fixes "Task was destroyed but it is pending"
* asyncio.gather expects *tasks .. gotta unpack the list
2024-02-14 15:11:52 -05:00
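A minimal runnable illustration of the fix noted above; the tasks are placeholders, not the wsrelay code.

```python
# Illustrative only: gather each task, not the list itself.
import asyncio

async def main():
    tasks = [asyncio.create_task(asyncio.sleep(0.1)) for _ in range(3)]
    # await asyncio.gather(tasks)   # wrong: the bare list is not awaited, tasks stay pending
    await asyncio.gather(*tasks)    # right: unpack so every task is awaited

asyncio.run(main())
```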
root
b6f9b73418 adding iputils 2024-02-14 17:01:15 +00:00
Christian M. Adams
40a8a3cb2f Add dockerx make target for building awx for arm64
Signed-off-by: Christian M. Adams <chadams@redhat.com>
2024-02-14 16:01:36 +00:00
David O Neill
19f80c0a26 GH13983 - Add additional check for bad templates 2024-02-14 15:49:26 +00:00
Christian Adams
5d1bb2125e Update .github/workflows/stage.yml
Co-authored-by: Hao Liu <44379968+TheRealHaoLiu@users.noreply.github.com>
2024-02-14 15:44:21 +00:00
Christian Adams
99c512bcef Update .github/workflows/stage.yml
Co-authored-by: Hao Liu <44379968+TheRealHaoLiu@users.noreply.github.com>
2024-02-14 15:44:21 +00:00
Christian M. Adams
ed0329f5db Build multi-arch awx-operator images when releasing 2024-02-14 15:44:21 +00:00
Chris Meyers
dd53345397 shhhhhhhhhhhhhhh
Hopefully silence some setuptools
2024-02-14 15:38:31 +00:00
Chris Meyers
f66cde51d7 More locked down websocket path
* Previously, the nginx location would match on /foo/websocket... or
  /foo/api/websocket... Now, we require these two paths to start at the
  root i.e. <host>/websocket/... /api/websocket/...
* Note: We now also require an ending / and do NOT support
  <host>/websocket_foobar but DO support <host>/websocket/foobar. This
  was always the intended behavior. We want to keep
  <host>/api/websocket/... "open" and routing to daphne in case we want
  to add more websocket urls in the future.
2024-02-14 13:50:51 +00:00
thedoubl3j
d1c31687fc remove the ldap volume when cleaning all volumes 2024-02-14 12:33:42 +00:00
Jeff Bradberry
38424487f1 Unbreak the pip-compile command when multiple files are passed in (#14875) 2024-02-13 15:59:04 -05:00
Hao Liu
b0565e9937 Switch to docker_compose_v2 in tools playbook (#14872)
Switch to docker_compose_v2

Fix
```
"Configuration error - kwargs_from_env() got an unexpected keyword argument 'ssl_version'"}
```
2024-02-13 13:05:33 -05:00
Hao Liu
44d85b589c Retries on vault on seal (#14873)
Sometimes we tried to unseal when vault is not ready yet
2024-02-13 13:05:23 -05:00
Alan Rominger
46f816e7a4 Remove cython lib 2024-02-13 14:45:28 +00:00
Alan Rominger
54b32c10f0 WIP consider droping cython dep 2024-02-13 14:45:28 +00:00
Alan Rominger
20202054cc Use lowercase password 2024-02-13 14:36:39 +00:00
Alan Rominger
e84e2962d0 Support DB configs where PASSWORD is not used 2024-02-13 14:36:39 +00:00
Chris Meyers
2259047527 Websockets now use rest_framework configed auth
* Always support cookies, session, and also allow rest_framework
  configured auth methods over the browser websocket.
* The node -> node websocket auth remains locked down and unchanged
2024-02-13 12:05:25 +00:00
Chris Meyers
f429ef6ca7 Allow connecting to websockets via api/websocket/
* Before, we just allowed websockets on <host>/websocket/. With this
  change, they can now come from <host>/api/websocket/
2024-02-13 12:02:44 +00:00
Hao Liu
4b637c1319 gitignore pyenv python-version file
pyenv local can be used to set a directory-specific python version in the `.python-version` file in the directory

Signed-off-by: Hao Liu <44379968+TheRealHaoLiu@users.noreply.github.com>
2024-02-13 12:00:57 +00:00
Alan Rominger
4c41f6b018 Ability to use updater script to pin dev requirements (#14644)
* Add a dev option for updater script to pin CI reqs

* Avoid removing git links for dev requirements

* Add dev to primary options

* Fix up sanitize git switch
2024-02-12 11:57:59 -05:00
Jesse Wattenbarger
3ae72219b4 Change parsing of docker info in dev build
This is a non-functional change. The way os_info is populated with docker info
and grep 'Operating System' breaks on podman and likely in other places. This
makes it work on both podman and docker, and it will continue to return the
exact same strings everywhere else.
2024-02-12 16:40:48 +00:00
jessicamack
402c29dc52 Switch mailing list to forum (#14600)
* Switch mailing list to forum

* add link to community

Co-authored-by: Hao Liu <44379968+TheRealHaoLiu@users.noreply.github.com>

* use correct channel name

Co-authored-by: Hao Liu <44379968+TheRealHaoLiu@users.noreply.github.com>

---------

Co-authored-by: Hao Liu <44379968+TheRealHaoLiu@users.noreply.github.com>
2024-02-12 15:51:31 +00:00
Alan Rominger
8eb4a9a2a0 Update location of logstash build context (#14676) 2024-02-12 15:49:29 +00:00
Jeff Bradberry
36f3b46726 Avoid using SmartFilter during migrations (#14786)
Our migrations that touch roles tend to bring in our real models via
migration_utils.set_current_apps_for_migrations, and that can have
some undesirable side-effects.
2024-02-12 15:48:58 +00:00
Bikouo Aubin
55c6a319dc Add new credential type to support Terraform backend configuration (#14828)
* Add new credential type to support configuration of Terraform Backend

* Fix unit tests
2024-02-12 15:47:24 +00:00
Dave
56b6a07f6e Remove json serialization for notify validation (#14847)
* Remove json serialization for notify validation

* Update serializers.py
2024-02-12 15:43:43 +00:00
Jake Jackson
519fd22bec Add ldap support to vault container in docker dev environment (#14777)
* add ldap_auth mount and configure it

* added in key engines, userpass auth method, still needs testing

* add policies and fix ldap_user

* start awx automation for vault demo and move ldap

* update docs with new flags/new credentials
2024-02-09 15:19:17 -05:00
TVo
2e5306ae8e Added LDAP support for HashiCorp Vault lookup credential (#14833)
* Added LDAP support for HashiCorp Vault lookup credential

* Added LDAP support for HashiCorp Vault lookup credential

* Replaced graphics and updated missing fields.

* Added LDAP support for HashiCorp Vault lookup credential

* Replaced graphics and updated missing fields.

* Incorporated review feedback from @thedoubl3j and @djyasin.
2024-02-09 10:17:14 -07:00
Cesar Francisco San Nicolas Martinez
068e6acbd5 Fix the way we are passing the awxkit base path to resources (#14862) 2024-02-09 15:31:52 +01:00
Seth Foster
f9a23a5645 Remove enable button for hop nodes (#14861)
Enabled/disabled does not apply to hop nodes,
since hop nodes don't run jobs.

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2024-02-08 22:10:10 -05:00
Seth Foster
40150a2be8 Fix UI peers_from_control_nodes (#14858)
* Fix UI peers_from_control_nodes

Fixes bug where peers_from_control_node was
greyed out in UI.

Additional changes:
- Make edit instance button only show for instances
with managed = False
- Make remove instance button only show for instances
with managed = False
- InstanceList selectable only for instances with
managed = False

---------

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2024-02-08 14:13:11 -05:00
TVo
b79aa5b1ed Removed erroneous line for basic auth 2024-02-08 12:45:47 -05:00
Jeff Bradberry
b3aeb962ce Fix the test_export_system_auditor collection test 2024-02-07 15:55:19 -05:00
Jeff Bradberry
2300b8fddf Fix linting problem 2024-02-07 15:55:19 -05:00
Lorenzo Tanganelli
3a3284b5df rework with check if POST exists 2024-02-07 15:55:19 -05:00
Lorenzo Tanganelli
2359004cc1 fix decription extraction on export 2024-02-07 15:55:19 -05:00
Lorenzo Tanganelli
694d7e98e7 leave $encrypted$ on export
add encrypted removal from import when object not exists
2024-02-07 14:29:30 -05:00
Julen Landa Alustiza
8c9c02c975 awxkit: allow to modify api base url (#14835)
Signed-off-by: Julen Landa Alustiza <jlanda@redhat.com>
2024-02-07 12:26:42 +01:00
Chris Meyers
8a902debd5 Per-service metrics http server
* Organize metrics into their respective service
* Server per-service metrics on a per-service http server
* Increase prometheus client usage over our custom metrics fields
2024-02-05 15:17:24 -05:00
Seth Foster
6dcaa09dfb UI rename Endpoints to Listener Addresses
Listener Addresses is a better name to
emphasize these are routable addresses to
reach a listener service on the node.

Also removed expand toggle on the listener
addresses list items, as the expanded mode
had no additional information.

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2024-02-02 10:37:41 -05:00
Seth Foster
21fd6af0f9 InstanceLink unique constraint source and target
Prevent creating InstanceLinks with duplicate
source and target pairings.

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2024-02-02 10:37:41 -05:00
Seth Foster
eeae1d59d4 Disable health check button if managed
Also, update ui screen tests to expect
injecting "listener_port: null" if
listener_port is empty

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2024-02-02 10:37:41 -05:00
Seth Foster
a252d0ae33 Fix UI lint by running npm prettier
Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2024-02-02 10:37:41 -05:00
Seth Foster
48971411cc Protocol blank if no canonical address
Make protocol be blank on instance if there
is no canonical address for this instance.

It was defaulting to "tcp" before.

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2024-02-02 10:37:41 -05:00
Seth Foster
083c05f12a Prevent duplicating instance links
In receptor address post-save method:
- Fixed detecting if address was missing
a link from control nodes
- Use InstanceLink create_or_update to prevent
adding duplicate InstanceLink source and target
peers

In instance serializer create_or_update,
delete receptor addresses first before doing
instance create or update. This ensures that we don't
trigger unnecessary post-save methods that might
attempt to manipulate receptor addresses that
will just be removed later.

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2024-02-02 10:37:41 -05:00
Seth Foster
b558397b67 Remove redundant tests
test_listener_port
test_peers_from_control_nodes
test_peers_from_control_nodes_without_listener_port

are covered in the following tests:

test_no_op
test_creates_canonical_address
test_deletes_canonical_address
test_updates_canonical_address
test_canonical_address_validation_error

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2024-02-02 10:37:41 -05:00
Seth Foster
904c6001e9 If managed, cannot modify peers_from_control_nodes
Adds validation to prevent changing
peers_from_control_nodes if instance managed=True

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2024-02-02 10:37:41 -05:00
Seth Foster
818e11dfdc Test inspect_established_receptor_connections
Add functional test case for inspecting
established receptor connections.

InstanceLink starts in ADDING state, and should
move to ESTABLISHED state if the connection
is detected in the receptor status output.

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2024-02-02 10:37:41 -05:00
Jeff Bradberry
7fc13a0569 Write tests around the two special instance serializer fields
and all of the cases that they might be in.
2024-02-02 10:37:41 -05:00
Jeff Bradberry
92c693f14e Break out peer validation into its own method 2024-02-02 10:37:41 -05:00
Jeff Bradberry
f2417f0ed2 Make the peer validation more compact
and reuse information.
2024-02-02 10:37:41 -05:00
Jeff Bradberry
8f22188116 Use a select_related to build the peers queryset in the install bundle
Since the relationship is ReceptorAddress -> Instance,
prefetch_related isn't necessary.
2024-02-02 10:37:41 -05:00
Jeff Bradberry
05502c0af8 Placeholder FIXMEs for things of concern 2024-02-02 10:37:41 -05:00
Jeff Bradberry
957ce59bf7 Use Counter to find duplicate peer relationships
Gives a bit more readability.
2024-02-02 10:37:41 -05:00
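A small runnable sketch of the Counter-based duplicate detection described above; the peer pairs are made-up values.

```python
# Illustrative data: count how many times each (source, target) pair appears.
from collections import Counter

peer_pairs = [("exec-1", "hop-1"), ("exec-2", "hop-1"), ("exec-1", "hop-1")]
duplicates = [pair for pair, count in Counter(peer_pairs).items() if count > 1]
print(duplicates)  # [('exec-1', 'hop-1')]
```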
Jeff Bradberry
cc4cc37d46 'managed' is a read-only field on InstanceSerializer
so we don't need complex logic to compare an incoming to existing value.
2024-02-02 10:37:41 -05:00
Jeff Bradberry
1e254c804c The listener port cannot be disabled when setting peers_from_control_nodes
If the port is explicitly set to null (causing any ReceptorAddress to
be deleted), then that's a validation error.

If the port is left off but a ReceptorAddress doesn't already exist,
we should not infer a port number and that is also a validation error.
2024-02-02 10:37:41 -05:00
Seth Foster
1b44bebed3 Peers_from_control_nodes requires listener port
Adds validation and a unit test to ensure:

- peers_from_control_nodes=True should fail if
listener_port is not set
- peers_from_control_nodes=False should be NOOP if
listener_port is not set

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2024-02-02 10:37:41 -05:00
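A hedged sketch combining the validation rules from the two commits above; the function and error handling are illustrative, not the actual serializer code.

```python
# Illustrative validation only; the real checks live in the AWX instance serializer.
def validate_peering(peers_from_control_nodes: bool, listener_port, has_existing_address: bool = False) -> str:
    if listener_port is None and not has_existing_address:
        if peers_from_control_nodes:
            raise ValueError("peers_from_control_nodes requires a listener_port")
        return "noop"  # peers_from_control_nodes=False with no port is a no-op
    return "ok"
```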
Seth Foster
a4cf55bdba Require receptor collection 2.0.3
This release supports ingress hop nodes, and
is backwards compatible with previous AWX
versions.

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2024-02-02 10:37:41 -05:00
Seth Foster
c333d0e82f Prevent modifying peers on managed node
Add validation to prevent any managed node
from modifying "peers" through the API

Peering from these nodes should be handled
by setting peers_from_control_nodes only.

Managed nodes are control nodes and
ingress hop nodes.

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2024-02-02 10:37:41 -05:00
David O Neill
b093c89a84 Form hardening and node type exclusion
Disabled add/edit/remove for managed nodes
Tightened validation on Peers from control nodes
2024-02-02 10:37:41 -05:00
Jeff Bradberry
f98493aa61 Support wss as ws-listener in the Receptor config 2024-02-02 10:37:41 -05:00
Hao Liu
c36d2b0485 Update requirements.yml 2024-02-02 10:37:41 -05:00
Jeff Bradberry
8ddb604bf1 Template the listener protocol into the receptor install bundle (#14792)
Previously we were hard-coding tcp, now we need to also support ws/wss.
2024-02-02 10:37:41 -05:00
Seth Foster
cd9dd43be7 Make InstanceLink target non-nullable
InstanceLink target should not be null.

Should be safe to set to null=False, because we have
a custom RunPython method to explicitly set
target to a proper key.

Also, add new test to test_migrations
which ensures data integrity after migrating
the receptor address model changes.

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2024-02-02 10:37:41 -05:00
Jeff Bradberry
82323390a7 Reconstitute migration file
Do the things as a proper 3-step process -- add new stuff, copy the data, drop the old.
2024-02-02 10:37:41 -05:00
Seth Foster
4c5ac1d3da Add management command to remove address
Adds remove_receptor_address to delete a
receptor address from the database

Also, enforce that only 1 canonical address
can be added to an instance via
the add_receptor_address command.

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2024-02-02 10:37:41 -05:00
David O Neill
9c06370e33 InstanceAdd sends null for port_listener 2024-02-02 10:37:41 -05:00
David O Neill
449b95d1eb Fix remaining tests, removed unused code 2024-02-02 10:37:41 -05:00
David O Neill
1712540c8e Updates for receptor release to ui for protocol and is_managed 2024-02-02 10:37:41 -05:00
Seth Foster
7cf639d8eb Add migration to support InstanceLink changes
- Add forwards method to create a receptor address
for any existing Instance that has listener_port defined

- Add forwards method to modify each InstanceLink object
that changes target to the newly created receptor addresses

This migration was implemented as follows:

1. Add a target_new to InstanceLink which is a foreign key
to ReceptorAddress
2. create receptor addresses
3. link to these receptor addresses using the target_new field
4. rename target_new to target
5. drop listener_port and peers_from_control_nodes from Instance

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2024-02-02 10:37:41 -05:00
Seth Foster
dbfcc40d7c Only create receptor address if port is defined
If an Instance endpoint is patched with
{"peers_from_control_nodes": True}

but a listener_port is not defined on the instance,
nor is it part of the patch payload, do not create a
receptor address.

Only update or create a receptor address if listener_port
is set, either in the payload or already on the instance.

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2024-02-02 10:37:41 -05:00
Hao Liu
73d2c92ae3 Fix condition for creating receptor_address
If listener_port is not explicitly defined don't create a receptor_address
2024-02-02 10:37:41 -05:00
Seth Foster
24a4242147 Add protocol to receptor address serializer
Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2024-02-02 10:37:41 -05:00
Seth Foster
92ce85b688 Remove unused warnings import
Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2024-02-02 10:37:41 -05:00
Seth Foster
9531f8377a Ensure register_peers target is ReceptorAddress
command has 3 "targets", all of which should be a
ReceptorAddress object

- peers, disconnect, exact

Add logic to make sure each entry in those lists
are receptor addresses.

When creating InstanceLink objects, make sure
target is ReceptorAddress, not an Instance.

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2024-02-02 10:37:41 -05:00
Hao Liu
15a16b3dd1 Update bootstrap_development.sh 2024-02-02 10:37:41 -05:00
Seth Foster
a37e7bf147 Add canonical=True when creating ReceptorAddress in tests
Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2024-02-02 10:37:41 -05:00
Hao Liu
a2fcd2f97a Fix ui-lint error 2024-02-02 10:37:41 -05:00
Hao Liu
c394ffdd19 Fix lint error, remove unused import 2024-02-02 10:37:41 -05:00
Seth Foster
69102cf265 Remove receptor_address module from collection
After removing CRUD from receptor addresses, we need
to remove the module.

- remove receptor_address module
- Add listener_port to instance module
- Add peers_from_control_nodes to instance module

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2024-02-02 10:37:41 -05:00
Seth Foster
a188798543 Make canonical field default to False
Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2024-02-02 10:37:41 -05:00
Seth Foster
60108ebd10 Rename migration dependency
Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2024-02-02 10:37:41 -05:00
Jeff Bradberry
8c7c00451a Join across the InstanceLink.target to the underlying Instance
to avoid having to do a lookup for every instance in the list for the
mesh visualizer endpoint.
2024-02-02 10:37:41 -05:00
Seth Foster
7a1ed406da Remove CRUD for Receptor Addresses
Removes ability to directly create and delete
receptor addresses for a given node.

Instead, receptor addresses are created automatically
if listener_port is set on the Instance.

For example patching "hop" instance

with {"listener_port": 6667}

will create a canonical receptor address with port
6667.

Likewise, peers_from_control_nodes on the instance
sets the peers_from_control_nodes on the canonical
address (if listener port is also set).

protocol is a read-only field that simply reflects
the canonical address protocol.

Other Changes:

- rename k8s_routable to is_internal
- add protocol to ReceptorAddress
- remove peers_from_control_nodes and listener_port
from Instance model

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2024-02-02 10:37:41 -05:00
César Francisco San Nicolás Martínez
f916ffe1e9 Adjust migration names and dependencies 2024-02-02 10:37:41 -05:00
César Francisco San Nicolás Martínez
901dbd697e Comment unused dependency 2024-02-02 10:37:41 -05:00
David O Neill
d8b4a9825e Cleanup
- Remove peer selection on add and edit instance
- Added canonical name and ordered column names the same on endpoints and
  peers
- Other cleanup items
- rename name to instance name
2024-02-02 10:37:41 -05:00
Hao Liu
6db66c5f81 Fix provision instance not respecting protocol 2024-02-02 10:37:41 -05:00
David O Neill
82ad7dcf40 Mesh UI support
- add endpoint
- delete endpoint (wip)
- associate
- disassociate
2024-02-02 10:37:41 -05:00
David O Neill
93500f9fea Fix lint trailing whitespace 2024-02-02 10:37:41 -05:00
Seth Foster
9ba70c151d Add canonical receptor address
Creates a non-deletable address that acts as
the "main" address for this instance.

All other addresses for that instance must
be non-canonical.

When listener_port on an instance is set, automatically
create a canonical receptor address where:
  - address is hostname of instance
  - port is listener_port
  - canonical is True

Additionally, protocol field is added to instance to
denote the receptor listener protocol to use (ws, tcp).

The receptor config listener information is derived from
the listener_port and protocol information. Having a
canonical address that mirrors the listener_port ensures that
an address exists that matches the receptor config information.

Other changes:
- Add managed field to receptor address.
If managed is True, no fields on this address can be edited
via the API.
If canonical is True, only the address field cannot be edited.

- Add managed field to instance. If managed is True, users
cannot set node_state to deprovisioning (i.e. cannot delete node)

This changes our mechanism for preventing users from deleting
the mesh ingress hop node.

- Field is_internal is now renamed to k8s_routable

- Add reverse_peers on instance which is a list of instance IDs
that peer to this instance (via an address)

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2024-02-02 10:37:41 -05:00
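A rough sketch of the automatic creation described above, assuming a Django-style ReceptorAddress model with the fields named in this commit (not the exact AWX code):

    def ensure_canonical_address(instance):
        # Only create or update the canonical address when listener_port is set.
        if not instance.listener_port:
            return
        ReceptorAddress.objects.update_or_create(
            instance=instance,
            canonical=True,
            defaults={
                "address": instance.hostname,    # address is the instance hostname
                "port": instance.listener_port,  # port mirrors listener_port
                "protocol": instance.protocol,   # ws or tcp, per this commit
            },
        )
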
David O Neill
46dc61253f UI Updates for receptor peering 2024-02-02 10:37:41 -05:00
Seth Foster
6cb2cd18b0 Fix proper indent to instance module
Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2024-02-02 10:37:41 -05:00
Seth Foster
5d1dd8ec41 Fix inconsistent tab width
Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2024-02-02 10:37:41 -05:00
Seth Foster
9f69daf787 Add choices to module protocol field
Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2024-02-02 10:37:41 -05:00
Seth Foster
16ece5de7e Remove unused variables and imports
Addresses flake8 and linting failures

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2024-02-02 10:37:41 -05:00
Seth Foster
ab0e9265c5 Add search fields to views
Make receptoraddress list views
searchable by "address"

Other changes:
- Add help text to source and target of the
InstanceLink model

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2024-02-02 10:37:41 -05:00
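A minimal DRF-style sketch of an address-searchable list view; AWX uses its own filter backends, so the model, serializer, and backend here are assumptions for illustration:

    from rest_framework import filters, generics

    class ReceptorAddressList(generics.ListAPIView):
        queryset = ReceptorAddress.objects.all()      # model assumed from the commits above
        serializer_class = ReceptorAddressSerializer  # serializer assumed
        filter_backends = [filters.SearchFilter]
        search_fields = ["address"]  # enables ?search=<address fragment>
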
Seth Foster
04cbbbccfa Update awx_collection to support ReceptorAddress
- Add receptor_address module which allows
users to create addresses for instances

- Update awx_collection functional and integration
tests to support new peering design

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2024-02-02 10:37:41 -05:00
Seth Foster
d1cacf64de Add functional and unit tests
Updated existing tests to support the
ReceptorAddress model

- cannot peer to self
- cannot peer to node that is already peered to me
- cannot peer to node more than once (via 2+ addresses)
- cannot set is_internal True

Other changes:
Change post save signal to only call
schedule_write_receptor_config() when an actual change is detected.

Make functional tests more robust by
checking for specific validation error in the
response.

I.e. instead of just checking for 400, check for 400
and that the error message corresponds to the
validation we are testing for.

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2024-02-02 10:37:41 -05:00
Seth Foster
5385eb0fb3 Add validation when setting peers
- cannot peer to self
- cannot peer to instance that is already peered to self

Other changes:
- ReceptorAddress protocol field restricted to choices: tcp, ws, wss
- fix awx-manage list_instances when instance.last_seen is None
- InstanceLink make source and target unique together
- Add help text to the ReceptorAddress fields

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2024-02-02 10:37:41 -05:00
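A minimal sketch of the two peer checks described above, written as a standalone helper with model and related names assumed (not the exact serializer code):

    from rest_framework import serializers

    def validate_peers(instance, peer_addresses):
        for addr in peer_addresses:
            # cannot peer to self
            if addr.instance_id == instance.pk:
                raise serializers.ValidationError("Instance cannot peer to itself.")
            # cannot peer to an instance that is already peered to this instance
            if addr.instance.peers.filter(instance=instance).exists():
                raise serializers.ValidationError(
                    f"{addr.instance.hostname} is already peered to this instance."
                )
        return peer_addresses
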
Seth Foster
7d7503279d Add API validation when creating ReceptorAddress
- websocket_path can only be set if protocol is ws
- is_internal must be False
- only 1 address per instance can have
peers_from_control_nodes set to True

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2024-02-02 10:37:41 -05:00
Hao Liu
d860d1d91b Temp change to aid dev
Temporarily switch to feature branch for receptor-collection in instance_install_bundle to make dev easier
2024-02-02 10:37:41 -05:00
Seth Foster
3a17c45b64 Register_peers support for receptor_addresses
register_peers has inputs:

source: source instance
peers: list of instances the source should peer to

InstanceLink "target" is now expected to be a ReceptorAddress

For each peer, we can just use the first receptor address. If
multiple receptor addresses exist, throw a command error.

Currently this command is only used on VM-deployments, where
there is only a single receptor address per instance, so this
should work fine.

Other changes:
drop listener_port field from Instance. Listener port is now just
"port" on ReceptorAddress

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2024-02-02 10:37:41 -05:00
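A minimal sketch of the address-selection rule described above (use the single receptor address of each peer, error otherwise); the related name receptor_addresses is an assumption:

    from django.core.management.base import CommandError

    def resolve_peer_targets(peer_instances):
        targets = []
        for instance in peer_instances:
            addresses = list(instance.receptor_addresses.all())
            if len(addresses) > 1:
                raise CommandError(f"{instance.hostname} has more than one receptor address")
            if not addresses:
                raise CommandError(f"{instance.hostname} has no receptor address to peer to")
            targets.append(addresses[0])  # the first (and only) address
        return targets
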
Seth Foster
bca68bcdf1 Add install bundle support
group_vars all.yaml changes:
- peer entry has two fields, address and port
- receptor_port is inferred from the first
receptor_address entry that uses protocol tcp

other changes:
ActivityStream now records when receptor_addresses
are peered to

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2024-02-02 10:37:41 -05:00
Seth Foster
c32f234ebb Add peers_from_control_nodes to ReceptorAddress
- write_receptor_config peers to ReceptorAddress entries
that have peers_from_control_nodes enabled

- peers_from_control_nodes and listener_port removed from Instance model

- peers_from_control_nodes added to ReceptorAddress model

- ReceptorAddress is now unique by address and protocol combination

- Write receptor config task is dispatched upon ReceptorAddress creation
or deletion, and when control node is first created

- InstanceLinkSerializer adds a target_address field and has logic
to grab the instance hostname associated with the peered ReceptorAddress

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2024-02-02 10:37:41 -05:00
Seth Foster
5cb3d3b078 Update receptor conf when address changes
Add post save and post delete hooks to
call write_receptor_config when
a receptor address is added / removed.

Add peers_from_control_nodes to
provision_instance

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2024-02-02 10:37:41 -05:00
Seth Foster
5199cc5246 Add ReceptorAddress to root urls
- Add database constraints to make sure addresses
are unique
If port is defined:
address, port, protocol, websocket_path are unique together

if port is not defined:
address, protocol, websocket_path are unique together

- Allow deleting address via API
- Add ReceptorAddressAccess to determine permissions
- awx-manage add_receptor_address returns changed: True
if successful
2024-02-02 10:37:41 -05:00
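The conditional uniqueness described above maps onto Django's conditional UniqueConstraint; a rough sketch with simplified field types, not the actual migration:

    from django.db import models
    from django.db.models import Q

    class ReceptorAddress(models.Model):
        address = models.CharField(max_length=255)
        port = models.IntegerField(null=True, blank=True)
        protocol = models.CharField(max_length=10)
        websocket_path = models.CharField(max_length=255, blank=True)

        class Meta:
            constraints = [
                # If port is defined: address, port, protocol, websocket_path unique together.
                models.UniqueConstraint(
                    fields=["address", "port", "protocol", "websocket_path"],
                    condition=Q(port__isnull=False),
                    name="receptor_address_unique_with_port",
                ),
                # If port is not defined: address, protocol, websocket_path unique together.
                models.UniqueConstraint(
                    fields=["address", "protocol", "websocket_path"],
                    condition=Q(port__isnull=True),
                    name="receptor_address_unique_without_port",
                ),
            ]
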
Hao Liu
387e877485 Connect from controlplane node to mesh ingress 2024-02-02 10:37:41 -05:00
Seth Foster
d54c5934ff Add support for inbound hop nodes 2024-02-02 10:37:41 -05:00
Chris Meyers
2fa5116197 Project updates do not run against hosts
* The project update event has no host_id or host_name because they
  only run on localhost.
2024-02-01 14:45:16 -05:00
Chris Meyers
527755d986 Always get host from event data
* Regardless of whether there is a host map, get the host that Ansible
  reports and assign it to the Event's host_name
2024-01-31 09:42:01 -05:00
Chris Meyers
f9c0b97c53 Avoid EDA dev env port conflict
* Not many, if any, folks use the notebook feature. It kind of goes in
  and out of popularity. We've used it in the past when we work on
  features that require visualization (i.e. network graphs, workflows).
  Might as well keep it around in case we use it again.
2024-01-30 11:17:30 -05:00
TVo
65655f84de Added section for using private image for default EE. (#14815)
* Added hop node information and beefed up exec nodes section.

* Made updates to the Topology viewer & dis/associate peers

* Added hop node information and beefed up exec nodes section.

* Made updates to the Topology viewer & dis/associate peers

* Fixed build and image rendering issues.

* Replaced graphic with many many nodes!

* Updated images with latest

* Incorporated review inputs from @fosterseth.

* Incorporated more review feedback from @fosterseth

* Resized graphic

* Resized graphic again

* Reduced image sizes to scale better on different browsers.

* Added section for using private image for default EE.

* Deleted .md file for execution nodes due to migration to RTD.
2024-01-29 17:48:53 -07:00
Elijah DeLee
9aa3d5584a fix nginx append slash to respect proxy
This is already fixed in awx-operator.
See a534c856db/roles/installer/templates/configmaps/config.yaml.j2 (L215)
This just makes it so a development environment can also work correctly
behind a proxy

Fixes the problem of a
GET to https://$PROXY/something/awx/v2/me
being rewritten to https://$AWX/something/awx/v2/me/ (which doesn't exist);

instead the path is correctly rewritten as https://$PROXY/something/awx/v2/me/
2024-01-29 15:30:16 -05:00
TVo
266e31d71a Added hop node information and beefed up exec nodes section. (#14787)
* Added hop node information and beefed up exec nodes section.

* Made updates to the Topology viewer & dis/associate peers

* Added hop node information and beefed up exec nodes section.

* Made updates to the Topology viewer & dis/associate peers

* Fixed build and image rendering issues.

* Replaced graphic with many many nodes!

* Updated images with latest

* Incorporated review inputs from @fosterseth.

* Incorporated more review feedback from @fosterseth

* Resized graphic

* Resized graphic again

* Reduced image sizes to scale better on different browsers.
2024-01-27 14:23:32 -07:00
Alan Rominger
a1bbe75aed Adopt new rules from black upgrade (#14809) 2024-01-26 12:54:44 -05:00
Sandra McCann
695f1cf892 Replaced old tower docs link with new AWX docs link (#14801) 2024-01-25 17:23:48 -07:00
Chris Meyers
0ab103d8c4 Get that new AWX DAB hotness 2024-01-25 15:45:18 -05:00
Chris Meyers
9ac1c0f6c2 Specify docker network when multiple networks
* We introduced multiple networks to our docker env. The code this replaces
  would return both networks' ip addresses concatenated, i.e.
  '192.168.2.1192.168.2.3'.
2024-01-25 14:47:02 -05:00
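A minimal sketch of reading a single named network's address from docker inspect output instead of joining every network's address; the container and network names are placeholders:

    import json
    import subprocess

    def container_ip(container, network):
        # docker inspect returns a JSON list with one entry per container
        data = json.loads(subprocess.check_output(["docker", "inspect", container]))
        return data[0]["NetworkSettings"]["Networks"][network]["IPAddress"]

    # e.g. container_ip("tools_awx_1", "awx") -> "192.168.2.1"
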
Lila Yasin
2e168d8177 Add userpass and LDAP support for HashiCorp vault credential_plugin (#14654)
* Add username and password to handle_auth and update exception message

Revise naming of ldap username and password

* Add url for LDAP and userpass to method_auth

* Add information regarding LDAP and username and password to credential plugins documentation

Revise ldap_auth to userpass_auth and revised exception to better reflect functionality

* Revise method_auth to ensure certs can be used with username and ensure namespace functionality is not hindered
2024-01-25 09:50:13 -05:00
Kristof Wevers
d4f7bfef18 feat: Add retries to requests sessions
Every so often we get connection timed out errors towards our HCP Vault
endpoint. This is usually when a larger number of jobs is running
simultaneously. Considering requests for other jobs do still succeed this
is probably load related and adding a retry should help in making this a
bit more robust.
2024-01-24 15:45:54 -05:00
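A minimal sketch of a requests Session with urllib3 retries, the usual way to add this kind of robustness; the retry counts and status codes here are placeholders, not the plugin's actual values:

    import requests
    from requests.adapters import HTTPAdapter
    from urllib3.util.retry import Retry

    def session_with_retries(total=3, backoff=0.5):
        retry = Retry(
            total=total,
            backoff_factor=backoff,                 # exponential backoff between attempts
            status_forcelist=(500, 502, 503, 504),  # also retry these server errors
        )
        session = requests.Session()
        adapter = HTTPAdapter(max_retries=retry)
        session.mount("https://", adapter)
        session.mount("http://", adapter)
        return session
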
dependabot[bot]
985a8d499d Bump jinja2 from 3.1.2 to 3.1.3 in /docs/docsite
Bumps [jinja2](https://github.com/pallets/jinja) from 3.1.2 to 3.1.3.
- [Release notes](https://github.com/pallets/jinja/releases)
- [Changelog](https://github.com/pallets/jinja/blob/main/CHANGES.rst)
- [Commits](https://github.com/pallets/jinja/compare/3.1.2...3.1.3)

---
updated-dependencies:
- dependency-name: jinja2
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-01-24 15:36:19 -05:00
Chris Meyers
e3b52f0169 Join the service-mesh docker network
* Put the awx node(s) on a service-mesh docker network so they can be
  proxied to. Also put all the other containers on an explicit awx
  network, otherwise they cannot talk to each other. We could be
  more surgical about which containers we put on awx, but I just added all
  of them.
2024-01-24 10:34:44 -05:00
jessicamack
f69f600cff Refer to the ansible repo for django-ansible-base requirement (#14793)
* update to the proper repo

* refer to devel
2024-01-22 10:29:47 -05:00
TVo
74cd23be5c Fixed/updated URL for “Passing Variables on the Command Line" link. (#14763) 2024-01-19 11:41:17 -07:00
jessicamack
209747d88e Update for django-ansible-base split (#14783)
* update paths and names

* temp to get tests passing

* fix typo
2024-01-19 12:30:32 -05:00
Alan Rominger
d91da39f81 New setting for pg_notify listener DB settings, add keepalive (#14755) 2024-01-17 13:44:04 -05:00
Michael Tipton
5cd029df96 Add secure flag option for userLoggedIn cookie if SESSION_COOKIE_SECU… (#14762)
Add secure flag option for userLoggedIn cookie if SESSION_COOKIE_SECURE set to True
2024-01-17 09:36:06 -05:00
Michael Abashian
5a93a519f6 Fix linting error in SubscriptionUsageChart 2024-01-16 14:12:55 -05:00
jessicamack
5f5cd960d5 Add django-ansible-base settings (#14768)
add ansible base settings
2024-01-16 15:55:59 +00:00
Jeff Bradberry
42701f32fe Build the source distribution bundle to also upload to PyPI
This has been broken since 20.0.1.
2024-01-11 15:49:07 -05:00
Hao Liu
30d4df788f Update dependency django-ansible-base (#14752) 2024-01-10 11:05:57 -05:00
Cameron McLaughlin
1bcd71a8ac Update execution environment documentation link (fixes #14690) (#14741)
Update execution environment documentation link (fixes #14690)

Signed-off-by: Cameron McLaughlin <auatr7@protonmail.com>
2024-01-05 15:47:55 -07:00
Patrick Uiterwijk
43be90f051 Add support for Bitbucket Data Center webhooks (#14674)
Add support for receiving webhooks from Bitbucket Data Center, and add support for posting build statuses back

Note that this is very explicitly only for Bitbucket Data Center.
The entire webhook format and API is entirely different for Bitbucket Cloud.
2024-01-05 09:34:29 -05:00
TVo
bb1922cdbb Update conf.py to 2024 (#14743)
* Update conf.py to 2023

* Update awxkit/awxkit/cli/docs/source/conf.py

Co-authored-by: Hao Liu <44379968+TheRealHaoLiu@users.noreply.github.com>

---------

Co-authored-by: Hao Liu <44379968+TheRealHaoLiu@users.noreply.github.com>
2024-01-04 12:38:21 -07:00
Martin Slemr
403f545071 Fix port conflicts when running other Ansible dev environments (#14701)
AAP: Docker port conflicts
2024-01-04 09:10:55 -05:00
Nenodema
a06a2a883c Adding "address" property 2024-01-03 15:08:18 -05:00
Keith Grant
2529fdcfd7 Persist schedule prompt on launch fields when editing (#14736)
* persist schedule prompt on launch fields when editing

* Merge job template default credentials with schedule overrides in schedule prompt

* rename vars for clarity

* handle undefined defaultCredentials

---------

Co-authored-by: Michael Abashian <mabashia@redhat.com>
2023-12-21 14:46:49 -05:00
loh
19dff9c2d1 Fix twilio_backend.py to send SMS to multiple destinations. (#14656)
AWX only sends Twilio notifications to one destination with the current version of code, but this is a bug. Fixed this bug for sending SMS to multiple destinations.
2023-12-20 15:31:47 -05:00
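A minimal sketch of sending to every configured destination with the standard twilio.rest client; this illustrates the fix, not the backend's actual code:

    from twilio.rest import Client

    def send_sms(account_sid, auth_token, from_number, destinations, body):
        client = Client(account_sid, auth_token)
        for destination in destinations:
            # one message per destination, instead of stopping after the first
            client.messages.create(to=destination, from_=from_number, body=body)
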
Chris Meyers
2a6cf032f8 refactor awxkit import code
* Move awxkit import code into a pytest fixture to better control when
  the import happens
* Ensure /awx_devel/awxkit is added to sys path before awxkit import
  runs
2023-12-18 12:00:50 -05:00
Chris Meyers
6119b33a50 add awx collection export tests
* Basic export tests
* Added test that highlights a problem with running Schedule exports as
  non-root user. We rely on the POST key in the OPTIONS response to
  determine the fields to export for a resource. The POST key is not
  present if a user does NOT have create privileges.
* Fixed up forwarding all headers from the API server back to the test
  code. This was causing a problem in awxkit code that checks for
  allowed HTTP Verbs in the headers.
2023-12-18 12:00:50 -05:00
John Westcott IV
aacf9653c5 Use filtering/sorting from django-ansible-base (#14726)
* Move filtering to DAB

* add comment to trigger building a new image

Signed-off-by: jessicamack <jmack@redhat.com>

* remove unneeded comment

Signed-off-by: jessicamack <jmack@redhat.com>

* remove unused imports

Signed-off-by: jessicamack <jmack@redhat.com>

* change mock import

Signed-off-by: jessicamack <jmack@redhat.com>

---------

Signed-off-by: jessicamack <jmack@redhat.com>
Co-authored-by: jessicamack <jmack@redhat.com>
2023-12-18 10:05:02 -05:00
Alan Rominger
325f5250db Narrow the actor types accepted for RBAC evaluations (#14709)
* Narrow the scope of RBAC evaluations

* Update tests for RBAC method changes

* Simplify querset for credentials in org

* Fix call pattern to pass in team role obj
2023-12-14 21:30:47 -05:00
Alan Rominger
b14518c1e5 Simplify RBAC get_roles_on_resource method (#14710)
* Simplify RBAC get_roles_on_resource method

* Fix bug

* Fix query type bug
2023-12-14 10:42:26 -05:00
Hao Liu
6440e3cb55 Send SIGKILL to rsyslog if hard cancellation is needed 2023-12-14 10:41:48 -05:00
Hao Liu
b5f6aac3aa Correct misuse of stdxxx_event_enabled
Not every log message needs to be emitted as an event!
2023-12-14 10:41:48 -05:00
Hao Liu
6e5e1c8fff Recover rsyslog from 4xx error
Due to https://github.com/ansible/awx/issues/7560

'omhttp' module for rsyslog will completely stop forwarding messages to the external log aggregator after receiving a 4xx error from the external log aggregator

This PR is a "workaround" for this problem: restart rsyslogd after detecting that rsyslog received a 4xx error
2023-12-14 10:41:48 -05:00
Hao Liu
bf42c63c12 Remove superwatcher from docker-compose dev (#14708)
When making changes to the application you can sometimes accidentally cause a FATAL state and crash the dev container, which removes any ephemeral changes that you have made and is ANNOYING!
2023-12-13 14:26:53 -05:00
Avi Layani
df24cb692b Adding hosts bulk deletion feature (#14462)
* Adding hosts bulk deletion feature

Signed-off-by: Avi Layani <alayani@redhat.com>

* fix the type of the argument

Signed-off-by: Avi Layani <alayani@redhat.com>

* fixing activity_entry tracking

Signed-off-by: Avi Layani <alayani@redhat.com>

* Revert "fixing activity_entry tracking"

This reverts commit c8eab52c2ccc5abe215d56d1704ba1157e5fbbd0.
Since bulk_delete is not related to an inventory, only to hosts, which
can be from different inventories.

* get only needed vars to reduce memory consumption

Signed-off-by: Avi Layani <alayani@redhat.com>

* filtering the data to reduce memory increases the number of queries

Signed-off-by: Avi Layani <alayani@redhat.com>

* update the activity stream for inventories

Signed-off-by: Avi Layani <alayani@redhat.com>

* fix the changes dict initialization

Signed-off-by: Avi Layani <alayani@redhat.com>

---------

Signed-off-by: Avi Layani <alayani@redhat.com>
2023-12-13 10:28:31 -06:00
jessicamack
0d825a744b Update setuptools-scm (#14716)
* properly format requirement

* upgrade setuptools_scm

* Revert "properly format requirement"

This reverts commit 4c8792950f.

* test ansible-runner package upgrade

* Revert "test ansible-runner package upgrade"

This reverts commit ba4b74f2bb.
2023-12-11 17:11:04 -05:00
Marliana Lara
5e48bf091b Fix undefined error in settings/logging/edit form (#14715)
Fix undefined error in logging settings edit form
2023-12-11 10:58:30 -05:00
Alan Rominger
1294cec92c Fix updater bug due to missing newline at EOF (#14713) 2023-12-08 16:51:17 +00:00
jessicamack
dae12ee1b8 Remove incorrectly formatted line from requirements.txt (#14714)
remove git+ line
2023-12-08 11:06:17 -05:00
jessicamack
b091f6cf79 Add django-ansible-base (#14705)
* add django-ansible-base

Signed-off-by: jessicamack <jmack@redhat.com>

* add licenses

* add django-ansible-base

Signed-off-by: jessicamack <jmack@redhat.com>

* add licenses

* apply patch to fix permissions issue

---------

Signed-off-by: jessicamack <jmack@redhat.com>
2023-12-07 11:45:44 -05:00
Don Naro
fe564c5fad Dependabot for docsite requirements (#14670) 2023-12-06 15:11:44 -05:00
Tyler Muir
eb3bc84461 remove unnecessary required flags for saml backend (#14666)
Signed-off-by: Tyler Muir <tylergmuir@gmail.com>
2023-12-06 15:08:54 -05:00
Andrew Austin
6aa2997dce Add TLS certificate auth for HashiCorp Vault (#14534)
* Add TLS certificate auth for HashiCorp Vault

Add support for AWX to authenticate with HashiCorp Vault using
TLS client certificates.

Also updates the documentation for the HashiCorp Vault secret management
plugins to include both the new TLS options and the missing Kubernetes
auth method options.

Signed-off-by: Andrew Austin <aaustin@redhat.com>

* Refactor docker-compose vault for TLS cert auth

Add TLS configuration to the docker-compose Vault configuration and
use that method by default in vault plumbing.

This ensures that the result of bringing up the docker-compose stack
with vault enabled and running the plumb-vault playbook is a fully
working credential retrieval setup using TLS client cert authentication.

Signed-off-by: Andrew Austin <aaustin@redhat.com>

* Remove incorrect trailing space

Co-authored-by: Hao Liu <44379968+TheRealHaoLiu@users.noreply.github.com>

* Make vault init idempotent

- improve error handling for vault_initialization
- ignore error if vault cert auth is already configured
- removed unused register

* Add VAULT_TLS option

Make TLS for HashiCorp Vault optional and configurable via VAULT_TLS env var

* Add retries for vault init

Sometimes it takes longer for vault to fully come up and init will fail

---------

Signed-off-by: Andrew Austin <aaustin@redhat.com>
Co-authored-by: Hao Liu <44379968+TheRealHaoLiu@users.noreply.github.com>
Co-authored-by: Hao Liu <haoli@redhat.com>
2023-12-06 19:12:15 +00:00
Don Naro
dd00bbba42 separate tox calls in readthedocs config (#14673) 2023-12-06 17:17:12 +00:00
Rick Elrod
fe6bac6d9e [CI] Reduce GHA timeouts from 6h default (#14704)
* [CI] Reduce GHA timeouts from 6h default

The goal here is to never interfere with a real run (so most of the
timeout-minutes values seem rather high) but to avoid having 6h long
runs if something goes crazy and never ends.

Signed-off-by: Rick Elrod <rick@elrod.me>

* Do bash hackery instead

Signed-off-by: Rick Elrod <rick@elrod.me>

---------

Signed-off-by: Rick Elrod <rick@elrod.me>
2023-12-06 14:43:45 +00:00
jainnikhil30
87abbd4b10 Fix the bulk Job Launch Integration test in awx collection (#14702)
* fix the integration tests
2023-12-06 18:50:33 +05:30
lucas-benedito
fb04e5d9f6 Fixing wsrelay connection loop (#14692)
* Fixing wsrelay connection loop

* The loop was being interrupted when reaching the return statements, causing a race condition that would make nodes remain disconnected from their websockets
* Added log messages for the previous return state to improve the logging from this state.

* Added logging for malformed payload

* Update awx/main/wsrelay.py

Co-authored-by: Rick Elrod <rick@elrod.me>

* Moved logmsg outside condition

---------

Co-authored-by: Lucas Benedito <lbenedit@redhat.com>
Co-authored-by: Rick Elrod <rick@elrod.me>
2023-12-04 09:33:05 -05:00
Hao Liu
478e2cb28d Fix awx collection publishing on galaxy (#14642)
The --location (-L) parameter prompts curl to submit a new request if the URL is a redirect.

After moving to galaxy-NG, without -L curl falsely returns 302 for any version

Co-authored-by: John Barker <john@johnrbarker.com>
2023-11-29 20:28:22 +00:00
Chris Meyers
2ac304d289 allow pytest --migrations to succeed (#14663)
* allow pytest --migrations to succeed

* We actually subvert migrations from running in test via pytest.ini
  --no-migrations option. This has led to bit rot for the sqlite
  migrations happy path. This changeset pays off that tech debt and
  allows for an sqlite migration happy path.
* This paves the way for programmatic invocation of individual migrations
  and weaving of the creation of resources (i.e. Instance, Job Template,
  etc). With this, a developer can instantiate various database states,
  trigger a migration, assert the state of the db, and then have pytest
  rollback all of that.
* I will note that in practice, running these migrations is dog shit
  slow BUT this work also opens up the possibility of saving and
  re-using sqlite3 database files. Normally, caching is not THE answer
  and causes more harm than good. But in this case, our migrations are
  mostly write-once (I say mostly because this change set violates
  that :) so cache invalidation isn't a major issue.

* functional test for migrations on sqlite

* We commonly subvert running migrations in test land. Test land uses
  sqlite. By not constantly exercising this code path it atrophies. The
  smoke test here is to continuously exercise that code path.
* Add ci test to run migration tests separately, they take =~ 2-3
  minutes each on my laptop.
* The smoke tests also serves as an example of how to write migration
  tests.

* run migration tests in ci
2023-11-17 13:33:08 -05:00
Don Naro
3e5851f3af Upgrade doc requirements (#14669)
* upgrade when pip compiling doc reqs

* upgrade doc requirements
2023-11-16 13:04:21 -07:00
Alan Rominger
adb1b12074 Update RBAC docs, remove unused get_permissions (#14492)
* Update RBAC docs, remove unused get_permissions

* Add back in section for get_roles_on_resource
2023-11-16 11:29:33 -05:00
Alan Rominger
8fae20c48a Remove unused methods we attach to user model (#14668) 2023-11-16 11:21:21 -05:00
Hao Liu
ec364cc60e Make vault init more idempotent (#14664)
Currently, if you clean up the docker volume for vault and bring docker-compose development back up with vault enabled, we will not initialize vault because the secret files still exist.

This change will attempt to initialize vault regardless and update the secret file if vault is initialized
2023-11-16 09:43:45 -06:00
TVo
1cfd51764e Added missing pointers to release notes (#14659)
* Replaced with larger graphic.

* Revert "Replaced with larger graphic."

This reverts commit 1214b00052.

* Added missing pointers to release notes.
2023-11-15 14:24:11 -07:00
Steffen Scheib
0b8fedfd04 Adding the possibility to decode base64 decoded strings to Delinea's Devops Secret Vault (DSV) (#14646)
Adding the possibility to decode base64 encoded strings from Delinea's DevOps Secret Vault (DSV).
This is necessary as uploading files to DSV is not possible (and not meant to be), so files should be added base64 encoded.
The commit is making sure to remain backward compatible (no secret decoding), as a default is supplied.

This has been tested with DSV and works for secrets that are base64 encoded and secrets that are not base64 encoded (which is the default).

Signed-off-by: Steffen Scheib <sscheib@redhat.com>
2023-11-15 15:28:34 -05:00
Don Naro
72a8173462 issue #14653 heading does not render correctly (#14665) 2023-11-15 15:05:52 -05:00
Tong He
873b1fbe07 Set subscription type as developer for developer subscriptions. (#14584)
* Set subscription type as developer for developer subscriptions.

Signed-off-by: Tong He <the@redhat.com>

* Set subscription type as developer for developer subscription manifests.

Signed-off-by: Tong He <the@redhat.com>

* Remedy the wrong character to assign value.

Signed-off-by: Tong He <the@redhat.com>

* Reformat licensing.py by black.

Signed-off-by: Tong He <the@redhat.com>

---------

Signed-off-by: Tong He <the@redhat.com>
2023-11-15 10:33:57 +00:00
Alan Rominger
1f36e84b45 Correctly handle case where unpartitioned table does not exist (#14648) 2023-11-14 08:38:48 -05:00
TVo
8c4bff2b86 Replaced with larger graphic. (#14647) 2023-11-13 09:55:04 -06:00
lucas-benedito
14f636af84 Setting credential_type as required (#14651)
* Setting credential_type as required

* Added test for missing credential_type in credential module

* Corrected test assertion

---------

Co-authored-by: Lucas Benedito <lbenedit@redhat.com>
2023-11-13 09:54:32 -06:00
Don Naro
0057c8daf6 Docs: Include REST API reference content from swagger.json (#14607) 2023-11-11 08:33:41 -05:00
TVo
d8a28b3c06 Added alt text for settings-menu.rst (#14639)
* Re-do for PR #14595 to fix CI issues.

* Added alt text to settings-menu.rst

* Update docs/docsite/rst/common/settings-menu.rst

Co-authored-by: Don Naro <dnaro@redhat.com>

---------

Co-authored-by: Hao Liu <44379968+TheRealHaoLiu@users.noreply.github.com>
Co-authored-by: Don Naro <dnaro@redhat.com>
2023-11-08 15:17:57 -07:00
Ratan Gulati
40c2b700fe Fix: #14523 Add alt-text codeblock to Images for workflow_template.rst (#14604)
* add alt to images in workflow_templates.rst

Signed-off-by: Ratan Gulati <ratangulati.dev@gmail.com>

* add alt to images in workflow_templates.rst

Signed-off-by: Ratan Gulati <ratangulati.dev@gmail.com>

* Update workflow_templates.rst

* Revised proposed alt text for workflow_templates.rst

---------

Signed-off-by: Ratan Gulati <ratangulati.dev@gmail.com>
Co-authored-by: TVo <thavo@redhat.com>
2023-11-07 10:51:11 -07:00
Thanhnguyet Vo
71d548f9e5 Removed references to images that were deleted. 2023-11-07 08:55:27 -07:00
Thanhnguyet Vo
dd98963f86 Updated images - Workflow Templates chapter of Userguide. 2023-11-07 08:55:27 -07:00
TVo
4b467dfd8d Revised proposed alt text for insights.rst 2023-11-07 08:14:34 -07:00
BHANUTEJA
456b56778e Update insights.rst 2023-11-07 08:14:34 -07:00
BHANUTEJA
5b3cb20f92 Update insights.rst 2023-11-07 08:14:34 -07:00
TVo
d7086a3c88 Revised the proposed Alt text for main_menu.rst 2023-11-06 13:09:26 -07:00
Ratan Gulati
21e7ab078c Fix: #14511 Add alt-text codeblock to Images for Userguide: main_menu.rst
Signed-off-by: Ratan Gulati <ratangulati.dev@gmail.com>
2023-11-06 13:09:26 -07:00
Elijah DeLee
946ca0b3b8 fix wsrelay connection in ipv6 environments 2023-11-06 13:58:41 -05:00
TVo
b831dbd608 Removed mailing list from triage_replies.md 2023-11-03 14:30:30 -06:00
Thanhnguyet Vo
943e455f9d Re-do for PR #14595 to fix CI issues. 2023-11-03 08:35:22 -06:00
Seth Foster
53bc88abe2 Fix python_paths error in CI(#14622)
Remove outdated lines from pytest.ini

Was causing KeyError 'python_paths' in CI

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2023-11-03 09:36:21 -04:00
Rick Elrod
3b4d95633e [rsyslog] remove main_queue, add more action queue params (#14532)
* [rsyslog] remove main_queue, add more action queue params

Signed-off-by: Rick Elrod <rick@elrod.me>

* Remove now-unused LOG_AGGREGATOR_MAX_DISK_USAGE_GB, add LOG_AGGREGATOR_ACTION_QUEUE_SIZE

Signed-off-by: Rick Elrod <rick@elrod.me>

---------

Signed-off-by: Rick Elrod <rick@elrod.me>
2023-10-31 14:49:17 -04:00
Alan Rominger
93c329d9d5 Fix cancel bug - WorkflowManager cancel in transaction (#14608)
This fixes a bug where jobs within a workflow job were not canceled
  when the workflow job was canceled by the user

The fix is to submit the cancel request as a part of the
  transaction that WorkflowManager commits its work in
  this requires that we send the message without expecting a reply
  so this changes the control-with-reply cancel to just a control function
2023-10-30 15:30:18 -04:00
Hao Liu
f4c53aaf22 Update receptor-collection version to 2.0.2 (#14613) 2023-10-30 17:24:02 +00:00
Alan Rominger
333ef76cbd Send notifications for dependency failures (#14603)
* Send notifications for dependency failures

* Delete tests for deleted method

* Remove another test for removed method
2023-10-30 10:42:37 -04:00
Alan Rominger
fc0b58fd04 Fix bug that prevented dispatcher exit with downed DB (#14469)
* Separate handling of original sigTERM and sigINT
2023-10-26 14:34:25 -04:00
Andrii Zakurenyi
bef0a8b23a Fix DevOps Secrets Vault credential plugin to work with python-dsv-sdk>=1.0.4
Signed-off-by: Andrii Zakurenyi <andrii.zakurenyi@c.delinea.com>
2023-10-25 15:48:24 -04:00
lmo5
a5f33456b6 Fix missing service account secret in docker-compose-minikube role (#14596)
* Fix missing service account secret

Co-authored-by: Hao Liu <44379968+TheRealHaoLiu@users.noreply.github.com>
2023-10-25 19:27:21 +00:00
Surav Shrestha
21fb395912 fix typos in docs/development/minikube.md 2023-10-25 15:23:23 -04:00
jessicamack
44255f378d Fix extra_vars bug in ansible.controller.ad_hoc_command (#14585)
* convert to valid type for serializer

* check that extra_vars are in request

* remove doubled line

* add integration test for change

* move change to the ad_hoc_command module

Signed-off-by: jessicamack <jmack@redhat.com>

* fix imports

Signed-off-by: jessicamack <jmack@redhat.com>

---------

Signed-off-by: jessicamack <jmack@redhat.com>
2023-10-25 10:38:45 -04:00
Parikshit Adhikari
71a6d48612 Fix: typos inside /docs directory (#14594)
fix typos inside docs
2023-10-24 19:01:21 +00:00
nmiah1
b7e5f5d1e1 Typo in export.py example (#14598) 2023-10-24 18:33:38 +00:00
Alan Rominger
b6b167627c Fix Boolean values defaulting to False in collection (#14493)
* Fix Boolean values defaulting to False in collection

* Remove null values in other cases, fix null handling for WFJT nodes

* Only remove null values if it is a boolean field

* Reset changes to WFJT node field processing

* Use test content from sean-m-sullivan to fix lookups in assert
2023-10-24 14:29:16 -04:00
Hao Liu
20f5b255c9 Fix "upgrade in progress" status page not showing up while migration is in progress (#14579)
Web container does not need to wait for migration

if the database is running and responsive, but migrations have not finished, it will start serving, and users will get the upgrading page

wait-for-migration prevented nginx and uwsgi from starting up to serve the "upgrade in progress" status page
2023-10-24 14:27:09 -04:00
Oleksii Baranov
3bcf46555d Fix swagger generation on rhel (#14317) (#14589) 2023-10-24 14:19:02 -04:00
Don Naro
94703ccf84 Pip compile docsite requirements (#14449)
Co-authored-by: Sviatoslav Sydorenko <578543+webknjaz@users.noreply.github.com>
Co-authored-by: Sviatoslav Sydorenko <wk.cvs.github@sydorenko.org.ua>
2023-10-24 12:53:41 -04:00
BHANUTEJA
6cdea1909d Alt text for Execution Env section of Userguide (#14576)
Co-authored-by: TVo <thavo@redhat.com>
2023-10-20 18:48:07 +00:00
Mike Mwanje
f133580172 Adds alt text to instance_groups.rst images (#14571)
Co-authored-by: TVo <thavo@redhat.com>
2023-10-20 16:11:17 +00:00
Kishan Mehta
4b90a7fcd1 Add alt text for image directives in credential_types.rst (#14551)
Co-authored-by: Kishan Mehta <kishan@scrapinghub.com>
Co-authored-by: TVo <thavo@redhat.com>
2023-10-20 09:36:05 -06:00
Marliana Lara
95bfedad5b Format constructed inventory hint example as valid YAML (#14568) 2023-10-20 10:24:47 -04:00
Kishan Mehta
1081f2d8e9 Add alt text for image directives in credentials.rst (#14550)
Co-authored-by: Kishan Mehta <kishan@scrapinghub.com>
Co-authored-by: TVo <thavo@redhat.com>
2023-10-20 14:13:49 +00:00
Kishan Mehta
c4ab54d7f3 Add alt text for image directives in job_capacity.rst & job_slices.rst (#14549)
Co-authored-by: Kishan Mehta <kishan@scrapinghub.com>
Co-authored-by: TVo <thavo@redhat.com>
2023-10-20 13:34:04 +00:00
Hao Liu
bcefcd8cf8 Remove specific version for receptorctl (#14593) 2023-10-19 22:49:42 -04:00
Kishan Mehta
0bd057529d Add alt text for image directives in job_templates.rst (#14548)
Co-authored-by: Kishan Mehta <kishan@scrapinghub.com>
2023-10-19 20:24:32 +00:00
Sayyed Faisal Ali
a82c03e2e2 added alt-text in projects.rst (#14544)
Signed-off-by: c0de-slayer <fsali315@gmail.com>
Co-authored-by: TVo <thavo@redhat.com>
2023-10-19 12:39:58 -06:00
TVo
447ac77535 Corrected missing text replacement directives (#14592) 2023-10-19 16:36:41 +00:00
Andrew Klychkov
72d0928f1b [DOCS] EE guide: fix a ref to Get started with EE (#14587) 2023-10-19 03:30:21 -04:00
Deepshri M
6d727d4bc4 Adding alt text for image (#14541)
Signed-off-by: Deepshri M <deepshrim613@gmail.com>
2023-10-17 14:53:18 -06:00
Rohit Raj
6040e44d9d docs: Update teams.rst (#14539)
Co-authored-by: TVo <thavo@redhat.com>
2023-10-17 20:16:09 +00:00
Rohit Raj
b99ce5cd62 docs: Update users.rst (#14538)
Co-authored-by: TVo <thavo@redhat.com>
2023-10-17 14:58:40 +00:00
Rohit Raj
ba8a90c55f docs: Update security.rst (#14540)
Co-authored-by: Don Naro <dnaro@redhat.com>
2023-10-16 17:56:46 -06:00
Sayyed Faisal Ali
7ee2172517 added alt-text in project-sign.rst (#14545)
Signed-off-by: c0de-slayer <fsali315@gmail.com>
Co-authored-by: TVo <thavo@redhat.com>
2023-10-16 09:25:34 -06:00
Alan Rominger
07f49f5925 AAP-16926 Delete unpartitioned tables in a separate transaction (#14572) 2023-10-13 15:50:51 -04:00
Hao Liu
376993077a Removing mailing list from get involved (#14580) 2023-10-13 17:49:34 +00:00
Hao Liu
48f586bac4 Make wait-for-migrations wait forever (#14566) 2023-10-13 13:48:12 +00:00
Surendran
16dab57c63 Added alt-text for images in notifications.rst (#14555)
Signed-off-by: Surendran Gokul <surendrangokul55@gmail.com>
Co-authored-by: Don Naro <dnaro@redhat.com>
2023-10-12 15:22:37 -06:00
Surendran
75a71492fd Added alt-text for images in organizations.rst (#14556)
Signed-off-by: Surendran Gokul <surendrangokul55@gmail.com>
Co-authored-by: Don Naro <dnaro@redhat.com>
2023-10-12 15:15:45 -06:00
Hao Liu
e9bd99c1ff Fix CVE-2023-43665 (#14561) 2023-10-12 14:00:32 -04:00
Daniel Gonçalves
56878b4910 Add customizable batch_size for cleanup_activitystream and cleanup_jobs (#14412)
Signed-off-by: Daniel Gonçalves <daniel.gonc@lves.fr>
2023-10-11 20:09:16 +00:00
Alan Rominger
19ca480078 Upgrade client library for dsv since tss already landed (#14362) 2023-10-11 16:01:22 -04:00
Steffen Scheib
64eb963025 Cleaning SOS report passwords (#14557) 2023-10-11 19:54:28 +00:00
Will Thames
dc34d0887a Execution environment image should not be required (#14488) 2023-10-11 15:39:51 -04:00
Andrew Klychkov
160634fb6f ee_reference.rst: refert to Builder's definition docs instead of duplicating its content (#14562) 2023-10-11 13:54:12 +01:00
Alan Rominger
9745058546 Only block commits if black fails for certain paths (#14531) 2023-10-10 10:12:57 -04:00
Aviral Katiyar
c97a48b165 Fix: #14510 Add alt-text codeblock to Images for Userguide: jobs.rst (#14530)
Signed-off-by: maskboyAvi <aviralofficial1729@gmail.com>
Co-authored-by: Don Naro <dnaro@redhat.com>
2023-10-09 16:40:56 -06:00
Rohit Raj
259bca0113 docs: Update workflows.rst (#14537) 2023-10-06 15:30:47 -06:00
Aviral Katiyar
92c2b4e983 Fix: #14500 Added alt text to images for Userguide: credential_plugins.rst (#14527)
Signed-off-by: maskboyAvi <aviralofficial1729@gmail.com>
Co-authored-by: Don Naro <dnaro@redhat.com>
2023-10-06 14:53:23 -06:00
Seth Foster
127a0cff23 Set ip_address to empty string
ip_address cannot be null, so set to
empty instead of None

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2023-10-05 22:53:16 -04:00
Aviral Katiyar
a0ef25006a Fix: #14499 Added alt text to images for Userguide: applications_auth.rst (#14526)
Signed-off-by: maskboyAvi <aviralofficial1729@gmail.com>
Co-authored-by: Don Naro <dnaro@redhat.com>
2023-10-05 14:22:10 -06:00
Chris Meyers
50c98a52f7 Update setting_up.rst (#14542) 2023-10-05 15:06:40 -04:00
Michelle McCausland
4008d72af6 issue-14522: Add alt-text codeblock to Images for Userguide: webhooks.rst (#14529)
Signed-off-by: Michelle McCausland <mmccausl@redhat.com>
2023-10-05 17:40:07 +01:00
Alan Rominger
e72e9f94b9 Fix collection test flake due to successful canceled command (#14519) 2023-10-04 09:09:29 -04:00
Sasa Jovicic
9d60b0b9c6 Fix #12815 Direct links to AWX do not reroute the user after authentication (#14399)
Signed-off-by: Sasa993 <jovicic.sasa@hotmail.com>
Co-authored-by: Sasa Jovicic <sjovicic@anexia-it.com>
2023-10-03 16:55:22 -04:00
Aviral Katiyar
05b58c4df6 Fix : #14490 Fixed the required spelling errors (#14507)
Signed-off-by: maskboyAvi <aviralofficial1729@gmail.com>
2023-10-03 14:15:13 -06:00
TVo
b1b960fd17 Updated Forum terminology and removed mailing list (#14491) 2023-10-03 19:24:19 +01:00
Jakub Laskowski
3c8f71e559 Fixed wrong arguments order in DomainPasswordGrantAuthorizer (#14441)
Signed-off-by: Jakub Laskowski <jakub.laskowski9@gmail.com>
Co-authored-by: Seth Foster <fosterseth@users.noreply.github.com>
2023-10-03 11:54:57 -04:00
Alan Rominger
f5922f76fa DROP unnecessary unpartioned event tables (#14055) 2023-10-03 11:49:23 -04:00
kurokobo
05582702c6 fix: make type conversions work correctly (related #14487) (#14489)
Signed-off-by: kurokobo <2920259+kurokobo@users.noreply.github.com>
Co-authored-by: Alan Rominger <arominge@redhat.com>
2023-09-30 04:02:10 +00:00
Alan Rominger
1d340c5b4e Add a section for postgres max_connections value (#14482) 2023-09-28 10:28:52 -04:00
TVo
15925f1416 Simplified release notes for AWX (#14485) 2023-09-27 14:50:57 -06:00
Salma Kochay
6e06a20cca add subscription usage page 2023-09-27 10:57:04 -04:00
Hao Liu
bb3acbb8ad Debug log for scheduler commit duration (#14035)
Co-authored-by: Alan Rominger <arominge@redhat.com>
2023-09-27 09:46:55 -04:00
Hao Liu
a88e47930c Update django version to address CVE-2023-41164 (#14460) 2023-09-27 09:36:02 -04:00
Hao Liu
a0d4515ba4 Explicitly set collection version during promotion (#14484) 2023-09-26 14:19:22 -04:00
Alan Rominger
770cc10a78 Get rid of names_digest hack no longer needed (#14459) 2023-09-26 12:09:30 -04:00
Alan Rominger
159dd62d84 Add null value handling in create_partition (#14480) 2023-09-25 18:28:44 -04:00
TVo
640e5db9c6 Removed references of IRC and fixed formatting in "Work Items" section. (#14478)
Co-authored-by: Don Naro <dnaro@redhat.com>
2023-09-25 11:24:39 -06:00
Alan Rominger
9ed527eb26 Consolidate image and server setup in several checks (#14477) 2023-09-25 09:02:20 -04:00
Alan Rominger
29ad6e1eaa Fix bug, None was used instead of empty for DB outage (#14463) 2023-09-21 14:30:25 -04:00
Alan Rominger
3e607f8964 AAP-15927 Use ATTACH PARTITION to avoid exclusive table lock for events (#14433) 2023-09-21 14:27:04 -04:00
TVo
c9d1a4d063 Added release notes for version 23.1.0 (#14471) 2023-09-21 11:02:38 -06:00
Hao Liu
a290b082db Use ldap container hostname for LDAP config (#14473) 2023-09-21 11:31:51 -04:00
Hao Liu
6d3c22e801 Update how to get involved with matrix and forum (#14472) 2023-09-20 18:33:04 +00:00
Michael Abashian
1f91773a3c Simplify docs string base generation 2023-09-20 13:16:54 -04:00
Hao Liu
7b846e1e49 Add makefile target to load dev image into Kind (#13775)
Signed-off-by: Rick Elrod <rick@elrod.me>
Co-authored-by: Rick Elrod <rick@elrod.me>
2023-09-19 13:34:10 -04:00
Don Naro
f7a2de8a07 Contributor guide and adjusted titles (#14447)
Co-authored-by: Thanhnguyet Vo <tvo@ansible.com>
2023-09-18 10:40:47 -06:00
Andrew Klychkov
194c214f03 userguide/execution_environments.rst: replace building paragraphs with ref to Get started EE guide (#14429) 2023-09-15 10:20:46 -04:00
Christian Adams
77e30dd4b2 Add link to script for publishing operator on OperatorHub (#14442) 2023-09-15 09:32:19 -04:00
jessicamack
9d7421b9bc Update README (#14452)
Signed-off-by: jessicamack <jmack@redhat.com>
2023-09-14 20:20:06 +00:00
Alan Rominger
3b8e662916 Remove conditional paths due to conflict with required checks (#14450) 2023-09-14 16:19:42 -04:00
Alan Rominger
aa3228eec9 Fix continue-on-error GH actions bug, always run archive step instead 2023-09-14 19:45:07 +00:00
Alan Rominger
7b0598c7d8 Continue workflow steps to save logs from failed tests (#14448) 2023-09-14 18:23:22 +00:00
Ivan Aragonés Muniesa
49832d6379 don't pass the 'organization' or other fields to the search of the instance group or execution environments (#14223) 2023-09-14 09:31:05 -04:00
Alan Rominger
8feeb5f1fa Allow saving github creds in user folder (#14435) 2023-09-12 15:47:12 -04:00
Michael Abashian
56230ba5d1 Show a toast when the job is already in the process of launching 2023-09-06 16:56:34 -04:00
Michael Abashian
480aaeace5 Prevent the user from launching multiple jobs by rapidly clicking on buttons 2023-09-06 16:56:34 -04:00
Joe Garcia
3eaea396be Add base64 check on JWT from authn 2023-09-06 15:58:36 -04:00
Keith Grant
deef8669c9 rebuild package-lock (#14423) 2023-09-06 12:36:50 -07:00
Don Naro
63223a2cc7 allow list for example secrets in docs 2023-09-06 15:15:58 -04:00
Keith Grant
a28bc2eb3f bump babel dependencies (#14370) 2023-09-06 09:14:04 -07:00
Alan Rominger
09168e5832 Edit docker-compose instructions for correctness (#14418) 2023-09-06 11:55:25 -04:00
Alan Rominger
6df1de4262 Avoid activity stream entries for instance going offline (#14385) 2023-09-06 11:18:52 -04:00
Alan Rominger
e072bb7668 Declare license for unique module that uses BSD-2
Co-authored-by: Maxwell G <maxwell@gtmx.me>
2023-09-06 10:43:25 -04:00
Alan Rominger
ec579fd637 Fix collection metadata license to match intent 2023-09-06 10:43:25 -04:00
Marliana Lara
b95d521162 Update missing inventory error message (#14416) 2023-09-06 10:24:25 -04:00
Rick Elrod
d03a6a809d Enable collection integration tests on GHA
There are a number of changes here:

- Abstract out a GHA composite action for running the dev environment
- Update the e2e tests to use that new abstracted action
- Introduce a new (matrixed) job for running collection integration
  tests. This splits the jobs up based on filename.
- Collect coverage info and generate an html report that people can
  download easily to see collection coverage info.
- Do some hacks to delete the intermediary coverage file artifacts
  which aren't needed after the job finishes.

Signed-off-by: Rick Elrod <rick@elrod.me>
2023-09-05 16:10:48 -05:00
TVo
4466976e10 Added relnotes for 23.0.0 (#14409) 2023-09-05 15:07:53 -06:00
Don Naro
5733f78fd8 Add readthedocs configuration (#14413) 2023-09-05 15:07:32 -06:00
Alan Rominger
20fc7c702a Add check for building docsite (#14406) 2023-09-05 16:07:48 -04:00
Lila Yasin
6ce5799689 Incorrect capacity for remote execution nodes 14051 (#14315) 2023-09-05 11:20:36 -04:00
Don Naro
dc81aa46d0 Create AWX docsite with RST content (#14328)
Co-authored-by: Thanhnguyet Vo <tvo@ansible.com>
Co-authored-by: TVo <thavo@redhat.com>
2023-09-01 09:24:03 -06:00
Alan Rominger
ab3ceaecad Remove extra scheduler state save that does nothing (#14396) 2023-08-31 10:35:07 -04:00
John Westcott IV
1bb4240a6b Allow saml_admin_attr to work in conjunction with SAML Org Map (#14285)
Co-authored-by: Alan Rominger <arominge@redhat.com>
2023-08-31 09:41:30 -03:00
Rick Elrod
5e105c2cbd [CI] Update GHA actions to sate some warnings emitted by test infrastructure (#14398)
Signed-off-by: Rick Elrod <rick@elrod.me>
2023-08-30 23:58:57 -05:00
Alan Rominger
cdb4f0b7fd Consume job_explanation from runner, fix error reporting error (#13482) 2023-08-30 16:45:50 -04:00
Ivanilson Junior
cf1e448577 Fix undefined property error when task is of type yum/debug and was s… (#14372)
Signed-off-by: Ivanilson Junior <ivanilsonaraujojr@gmail.com>
2023-08-30 15:37:28 -04:00
Andrew Klychkov
224e9e0324 [DOCS] tools/docker-compose/README.md: add way to solve postgresql issue (#14225) 2023-08-30 10:45:50 -04:00
Martin Slemr
660dab439b HostMetrics: Hard auto-cleanup (#14255)
Fix host metric settings

Cleanup_host_metric command with default params

Fix order of host metric cleanups
2023-08-30 09:18:59 -04:00
sean-m-sullivan
5ce2055431 update collection workflow example and tests 2023-08-30 09:15:54 -04:00
Alan Rominger
951bd1cc87 Re-run the updater script after upstream removal of future (#14265) 2023-08-29 15:36:42 -04:00
kurokobo
c9190ebd8f docs: update execution_nodes.md to follow changes for receptor_collection (#14247) 2023-08-29 13:06:54 -04:00
Seth Foster
eb33973fa3 Use receptor collection 2.0.0 2023-08-29 13:06:54 -04:00
Seth Foster
40be2e7b6e Use receptor-collection devel 2023-08-29 13:06:54 -04:00
kialam
485813211a Add toast and delete modal messaging when removing/adding peers. (#14373) 2023-08-29 13:06:54 -04:00
Seth Foster
0a87bf1b5e Apply JS formatting from npm prettier 2023-08-29 13:06:54 -04:00
Seth Foster
fa0e0b2576 Removed unused variable in test_instance_peers 2023-08-29 13:06:54 -04:00
Seth Foster
1d3b2f57ce No longer assert on receptor_host_identifier
receptor_host_identifier can be left out
of group_vars and will default to the
'ansible_host' variable
2023-08-29 13:06:54 -04:00
Seth Foster
0577e1ee79 Setup receptor after podman
Might help to install receptor last,
that way when nodes are first connected to the mesh
they already have podman installed and can potentially
run jobs. Otherwise it might be possible for controller
to launch jobs against nodes that aren't fully set up.
2023-08-29 13:06:54 -04:00
Seth Foster
470ecc4a4f Use itertools product instead of nested loop
Make test case cleaner by using itertools product
instead of the triple nested loop

Replace triple single quotes with triple
double quotes
2023-08-29 13:06:54 -04:00
Seth Foster
965127637b Make ip_address read only
Setting a different value for ip_address
and hostname does not work with the current
way we create receptor certs.
2023-08-29 13:06:54 -04:00
Seth Foster
eba130cf41 Change username to <username> in inventory 2023-08-29 13:06:54 -04:00
Seth Foster
441336301e Ensure ip_address is empty string 2023-08-29 13:06:54 -04:00
Seth Foster
2a0be898e6 Fix detecting if peers changed in serializer
Add a check_peers_changed() utility method
to determine if peers in attrs matches
the current instance peers.

Other changes:
- Set ip_address default to "", and do not
allow null.
2023-08-29 13:06:54 -04:00
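A minimal sketch of such a peers-changed check, with field names assumed from the surrounding commits:

    def check_peers_changed(instance, attrs):
        # The 'peers' key being absent means the field was not submitted at all.
        if "peers" not in attrs:
            return False
        incoming = {peer.pk for peer in attrs["peers"]}
        current = set(instance.peers.values_list("pk", flat=True))
        return incoming != current
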
Seth Foster
c47acc5988 Change PeersSerializer to SlugRelatedField
Get rid of PeersSerializer and just use SlugRelatedField,
which should be a more straightforward approach.

Other changes:
- cleanup code related to the already-removed api/v2/peers
endpoint
- add "hybrid" node type into more instance_peers test cases
2023-08-29 13:06:54 -04:00
Seth Foster
70ba32b5b2 Do not install ansible-runner or podman on hop nodes 2023-08-29 13:06:54 -04:00
Seth Foster
81e06dace2 Add listener_port to provision_instance
API changes
- cannot change peers or enable
peers_from_control_nodes on VM deployments
- allow setting ip_address
- use ip_address over hostname in the generated
group_vars/all.yml
- Drop api/v2/peers endpoint

DB changes
- add ip_address unique constraint, but ignore "" entries

Other changes
- provision_instance should take listener_port option

Tests
- test that new controls doesn't disturb other peers
relationships
- test ip_address over hostname
2023-08-29 13:06:54 -04:00
Seth Foster
3e8202590c Remove Disconnected link state
Dynamically flipping from Established
to Disconnected is not the intended
usage of InstanceLink State.

- Link state starts in Adding and becomes
Established once any control node first sees the link
is in the status KnownConnectionCosts
2023-08-29 13:06:54 -04:00
Seth Foster
ad96a72ebe Remove duplicate install bundle on InstanceDetail 2023-08-29 13:06:54 -04:00
Seth Foster
eb0058268b Revert "Remove duplicate install bundle on InstanceDetail"
This reverts commit cf5ccf53f4322b49b1009ca13e4f025c30529b30.
2023-08-29 13:06:54 -04:00
Seth Foster
2bf6512a8e Do not change link state if Removing
inspect_established_receptor_connections should
not change link state if the current state is Removing.

Other changes:
- rename inspect_execution_nodes to inspect_execution_and_hop_nodes
- Default link state is Adding
- Set min listener_port value to 1024
- inspect_established_receptor_connections now
runs as part of cluster_node_heartbeat task
2023-08-29 13:06:54 -04:00
Seth Foster
855f61a04e Bump migration number 186 to 187 2023-08-29 13:06:54 -04:00
Seth Foster
532e71ff45 Remove extra newlines in install bundle all.yml 2023-08-29 13:06:54 -04:00
Seth Foster
b9ea114cac Remove duplicate install bundle on InstanceDetail 2023-08-29 13:06:54 -04:00
Seth Foster
e41ad82687 optional listener port UI (#14300) 2023-08-29 13:06:54 -04:00
Seth Foster
3bd25c682e Allow setting ip_address for execution nodes 2023-08-29 13:06:54 -04:00
Seth Foster
7169c75b1a receptor_python_packages renamed 2023-08-29 13:06:54 -04:00
kialam
fdb359a67b feature hop node topology updates (#14142) 2023-08-29 13:06:54 -04:00
Seth Foster
ed2a59c1a3 receptor python packages 2023-08-29 13:06:54 -04:00
Jake Jackson
906f8a1dce [hop node] documentation update in execution_nodes for hop nodes (#14215)
Co-authored-by: Seth Foster <fosterseth@users.noreply.github.com>
2023-08-29 13:06:54 -04:00
Lila Yasin
6833976c54 [hop node] fix failing ci checks on feature_hop-node branch (#14226) 2023-08-29 13:06:54 -04:00
Seth Foster
d15405eafe Add peers_from for reverse peers M2M
use devel receptor-collection
2023-08-29 13:06:54 -04:00
Lila
6c3bbfc3be Looking to see if revising the path in the static dir resolves failing ci check. 2023-08-29 13:06:54 -04:00
Lila Yasin
2e3e6cbde5 hop node migration file updates(#14196)
rename migration function set_peers_from_control_nodes_true to automatically_peer_from_control_plane
import settings and only run function if settings.IS_K8S is true
set listener_port for control nodes to None
2023-08-29 13:06:54 -04:00
Lila Yasin
54894c14dc Hop node AWX Collection Updates (#14153)
Add hop node support to awx collections
- add peers and peers_from_control_nodes fields
- show new node_type "hop"
- add tests for adding hop nodes via collections

Co-authored-by: Seth Foster <fosterseth@users.noreply.github.com>
2023-08-29 13:06:54 -04:00
Seth Foster
2a51f23b7d Add functional API tests
add tests for calling write_receptor_config

add write_receptor_config test

Do not set default listener_port on control node
2023-08-29 13:06:54 -04:00
Jake Jackson
80df31fc4e [hop node] update peer validation logic (#14132) 2023-08-29 13:06:54 -04:00
Lila Yasin
8f8462b38e Marked hop node validation errors for translation (#14116) 2023-08-29 13:06:54 -04:00
Seth Foster
0c41abea0e Make peers field optional 2023-08-29 13:06:54 -04:00
Lila Yasin
3eda1ede8d Migration file to set peers_from_control_ nodes to true for existing execution nodes (#14061) 2023-08-29 13:06:54 -04:00
Jake Jackson
40fca6db57 [hop_node] Validate listener_port is defined for peers (#14056)
add peer listener_port validation and update install bundle if listener_port is defined or not defined.
2023-08-29 13:06:54 -04:00
Seth Foster
148111a072 Remove task that enables COPR receptor repo (#14088)
do not pip install receptorctl
2023-08-29 13:06:54 -04:00
Lila Yasin
9cad45feac Prevent manual peering of control plane nodes to hop node (#13966) 2023-08-29 13:06:54 -04:00
Seth Foster
6834568c5d Add receptor host identifier to group_vars
Add disconnected link state topology
2023-08-29 13:06:54 -04:00
Lorenzo Tanganelli
f7fdb7fe8d Add peers readonly api and instancelink constraint (#13916)
Add Disconnected link state

introspect_receptor_connections is a periodic
task that examines active receptor connections
and cross-checks them with the InstanceLink info.

Any links that should be active but are not
will be put into a Disconnected state. If
active, it will be in an Established state.

UI - Add hop creation and peers mgmt (#13922)

* add UI for mgmt peers, instance edit and add

* add peer info on detail and bug fix on detail

* remove unused chip and change peer label

* rename lookup, put Instance type disable on edit

---------

Co-authored-by: tanganellilore <lorenzo.tanagnelli@hotmail.it>
2023-08-29 13:06:54 -04:00
Seth Foster
d8abd4912b Add support in hop nodes in API 2023-08-29 13:06:54 -04:00
Alan Rominger
4fbdc412ad Restrict PR body check to just AWX repo 2023-08-29 09:29:30 -04:00
Alan Rominger
db1af57daa Revert "Adding PR check to ensure JIRA links are present"
This reverts commit 3ae6174050.
2023-08-29 09:29:30 -04:00
Hao Liu
ffa59864ee Fix CVE-2023-40267 (#14388)
CVE-2023-40267 GitPython: Insecure non-multi options in clone and clone_from is not blocked https://bugzilla.redhat.com/show_bug.cgi?id=2231474

GitPython before 3.1.32 does not block insecure non-multi options in clone and clone_from. NOTE: this issue exists because of an incomplete fix for CVE-2022-24439.

References:
gitpython-developers/GitPython@ca965ec gitpython-developers/GitPython#1609
2023-08-28 15:35:32 -04:00
bxbrenden
b209bc67b4 Fix typo in description of scm_update_on_launch (#14382) 2023-08-28 16:52:44 +00:00
Chandler Swift
1faea020af Fix default redis url to pass check in redis-py>4.4 (#14344)
Signed-off-by: Chandler Swift <chandler+pearson@chandlerswift.com>
Co-authored-by: Rebeccah Hunter <rhunter@redhat.com>
2023-08-25 09:48:36 -04:00
Pablo Hess
b55a099620 Clarify that the license module requires fetching subs prior (#14351)
Co-authored-by: Pablo N. Hess <phess@redhat.com>
2023-08-23 15:20:47 -04:00
David Danielsson
f6dd3cb988 Enforce mutually exclusive options in credential module of the collection (#14363) 2023-08-23 15:16:06 -04:00
Alan Rominger
c448b87c85 AAP-10891 Apply AWX_TASK_ENV when performing credential plugin lookups (#14271) 2023-08-23 13:26:12 -04:00
Rick Elrod
4dd823121a Update cryptography for CVE-2023-38325 (#14358)
Signed-off-by: Rick Elrod <rick@elrod.me>
2023-08-23 10:54:20 -05:00
Michael Abashian
ec4f10d868 Add location for locales in nginx config 2023-08-22 16:33:00 -04:00
Marliana Lara
2a1dffd363 Fix edit constructed inventory hanging loading state (#14343) 2023-08-21 12:36:36 -04:00
digitalbadger-uk
8c7ab8fcf2 Added required epoc time field for Splunk HEC Event Receiver (#14246)
Signed-off-by: Iain <iain@digitalbadger.com>
2023-08-21 09:44:52 -03:00
Hao Liu
3de8455960 Fix missing trailing / in PUBLIC_PATH for UI
Missing trailing `/` causes the UI to load files from an incorrect static dir location.
2023-08-17 15:16:59 -04:00
Hao Liu
d832e75e99 Fix ui-next build step file path issue
Add full path for the mv command so that the command can be run from ui_next and from project root.

Additionally move the rename of file to src build step.
2023-08-17 15:16:59 -04:00
abwalczyk
a89e266feb Fixed task and web docs (#14350) 2023-08-17 12:22:51 -04:00
Hao Liu
8e1516eeb7 Update UI_NEXT build to set PRODUCT and PUBLIC_PATH
https://github.com/ansible/ansible-ui/pull/792 added configurable public path (which was changed to '/' in https://github.com/ansible/ansible-ui/pull/766/files#diff-2606df06d89b38ff979770f810c3c269083e7c0fbafb27aba7f9ea0297179828L128-R157)

This PR added the variable when building ui-next
2023-08-16 18:35:12 -04:00
Hao Liu
c7f2fdbe57 Rename ui_next index.html to index_awx.html during build process
Due to a change made in https://github.com/ansible/ansible-ui/pull/766/files#diff-7ae45ad102eab3b6d7e7896acd08c427a9b25b346470d7bc6507b6481575d519R18 awx/ui_next/build/awx/index_awx.html was renamed to awx/ui_next/build/awx/index.html

This PR fixes the problem by renaming the file back
2023-08-16 18:35:12 -04:00
delinea-sagar
c75757bf22 Update python-tss-sdk dependency (#14207)
Signed-off-by: delinea-sagar <sagar.wani@c.delinea.com>
2023-08-16 20:07:35 +00:00
Kevin Pavon
b8ec7c4072 Schedule rruleset fix related #13446 (#13611)
Signed-off-by: Kevin Pavon <7450065+KaraokeKev@users.noreply.github.com>
Co-authored-by: Jessica Steurer <70719005+jay-steurer@users.noreply.github.com>
2023-08-16 16:10:31 -03:00
jbreitwe-rh
bb1c155bc9 Fixed typos (#14347) 2023-08-16 15:05:23 -04:00
Jeff Bradberry
4822dd79fc Revert "Improve performance for awx cli export (#13182)" 2023-08-15 15:55:10 -04:00
jainnikhil30
4cd90163fc make the default JOB_EVENT_BUFFER_SECONDS 1 seconds (#14335) 2023-08-12 07:49:34 +05:30
Alan Rominger
8dc6ceffee Fix programming error in facts retry merge (#14336) 2023-08-11 13:54:18 -04:00
Alan Rominger
2c7184f9d2 Add a retry to update host facts on deadlocks (#14325) 2023-08-11 11:13:56 -04:00
Martin Slemr
5cf93febaa HostMetricSummaryMonthly: Analytics export 2023-08-11 09:38:23 -04:00
Alan Rominger
284bd8377a Integrate scheduler into dispatcher main loop (#14067)
Dispatcher refactoring to get pg_notify publish payload
  as separate method

Refactor periodic module under dispatcher entirely
  Use real numbers for schedule reference time
  Run based on due_to_run method

Review comments about naming and code comments
2023-08-10 14:43:07 -04:00
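The refactor above mentions schedules tracked with real-number reference times and a due_to_run check; the sketch below shows that idea in isolation. The Schedule class and its attributes are illustrative, not the dispatcher's actual code.

    import time

    class Schedule:
        def __init__(self, period_seconds):
            self.period = period_seconds
            # Reference time is a plain float timestamp, not a datetime.
            self.last_run = time.time()

        def due_to_run(self, now=None):
            now = time.time() if now is None else now
            return (now - self.last_run) >= self.period

        def mark_run(self, now=None):
            self.last_run = time.time() if now is None else now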
Jeff Bradberry
14992cee17 Add in an async task to migrate the data over 2023-08-10 13:48:58 -04:00
Jeff Bradberry
6db663eacb Modify main/0185 to set aside the json fields that might be a problem
Rename them, then create a new clean field of the new jsonb type.
We'll use a task to do the data conversion.
2023-08-10 13:48:58 -04:00
Ivanilson Junior
87bb70bcc0 Remove extra quote from Skipped task status string (#14318)
Signed-off-by: Ivanilson Junior <ivanilsonaraujojr@gmail.com>
Co-authored-by: kialam <digitalanime@gmail.com>
2023-08-09 15:58:46 -07:00
Pablo Hess
c2d02841e8 Allow importing licenses with a missing "usage" attribute (#14326) 2023-08-09 16:41:14 -04:00
onefourfive
e5a6007bf1 fix broken link to upgrade docs. related #11313 (#14296)
Signed-off-by: onefourfive <>
Co-authored-by: onefourfive <unknown>
2023-08-09 15:06:44 -04:00
Alan Rominger
6f9ea1892b AAP-14538 Only process ansible_facts for successful jobs (#14313) 2023-08-04 17:10:14 -04:00
Sean Sullivan
abc56305cc Add Request time out option for collection (#14157)
Co-authored-by: Jessica Steurer <70719005+jay-steurer@users.noreply.github.com>
2023-08-03 15:06:04 -03:00
kialam
9bb6786a58 Wait for new label IDs before setting label prompt values. (#14283) 2023-08-03 09:46:46 -04:00
Michael Abashian
aec9a9ca56 Fix rbac around credential access add button (#14290) 2023-08-03 09:18:21 -04:00
John Westcott IV
7e4cf859f5 Added PR check to ensure JIRA links are present (#13839) 2023-08-02 15:28:13 -04:00
mcen1
90c3d8a275 Update example service-account.yml for container group in documentation (#13479)
Co-authored-by: Hao Liu <44379968+TheRealHaoLiu@users.noreply.github.com>
Co-authored-by: Nana <35573203+masbahnana@users.noreply.github.com>
2023-08-02 15:27:18 -04:00
lucas-benedito
6d1c8de4ed Fix trial status and host limit with sub (#14237)
Co-authored-by: Lucas Benedito <lbenedit@redhat.com>
2023-08-02 10:27:20 -04:00
Seth Foster
601b62deef bump python-daemon package (#14301) 2023-08-01 01:39:17 +00:00
Seth Foster
131dd088cd fix linting (#14302) 2023-07-31 20:37:37 -04:00
Rick Elrod
445d892050 Drop unused django-taggit dependency (#14241)
This drops the django-taggit dependency and drops the relevant fields
from old migrations.

Signed-off-by: Rick Elrod <rick@elrod.me>
2023-07-31 10:05:27 -05:00
Michael Abashian
35a576f2dd Adds autoComplete attribute to forms that were missing it (#14080) 2023-07-28 09:49:36 -04:00
John Westcott IV
7838641215 Fixed dependencies tag in PR labeler (#14286) 2023-07-28 08:30:30 -04:00
Alan Rominger
ab5cc2e69c Simplifications for DependencyManager (#13533) 2023-07-27 15:42:29 -04:00
John Westcott IV
5a63533967 Added support to collection for named urls (#14205) 2023-07-27 10:22:41 -03:00
Christian Adams
b549ae1efa Only show the product version header when the requester is authenticated (#14135) 2023-07-26 18:38:05 -04:00
Alex Corey
bd0089fd35 fixes docs link for controller versions >= 4.3 (#14287) 2023-07-26 21:54:39 +00:00
Christian Adams
40d18e95c2 Explicitly turn off autocomplete for API login form (#14232) 2023-07-26 15:33:26 -04:00
Andrew Klychkov
191a0f7f2a docs/execution_environments.md: add a link to EE getting started guide (#14263) 2023-07-26 15:05:36 -04:00
eric-zadara
852bb0717c Return back chdir to project sync to support project-local roles/collections
Signed-off-by: eric-zadara <eric@zadarastorage.com>
2023-07-25 09:58:43 -05:00
Alan Rominger
98bfe3f43f Add missing trigger for failed-to-start nodes (#13802) 2023-07-24 12:17:46 -04:00
John Westcott IV
53a7b7818e Updating release process doc for operator hub instructions (#13564) 2023-07-24 15:29:26 +01:00
Gabriel Muniz
e7c7454a3a Remove host update code which can be non performant (#14233) 2023-07-24 09:56:40 -04:00
Homero Pawlowski
63e82aa4a3 Fix collection module docs for names, IDs, and named URLs (#14269) 2023-07-24 08:57:46 -04:00
ZitaNemeckova
fc1b74aa68 Remove extra data for AoC (#14254) 2023-07-19 11:16:53 -04:00
Alan Rominger
ea455df9f4 Only push the production images for main repo (#14261) 2023-07-19 09:51:33 -04:00
Satoe Imaishi
8e2a5ed8ae Require pyyaml >= 6.0.1 (#14262) 2023-07-18 16:25:14 -05:00
Rick Elrod
1d7e54bd39 Wrap Django RedisCache to mute exceptions (#14243)
We introduce a thin wrapper over Django's RedisCache so that the functionality of DJANGO_REDIS_IGNORE_EXCEPTIONS is retained while still being able to drop the django-redis dependency.

Credit to django-redis's implementation for the idea of using a decorator for this and abstracting out the exception handling logic.

Signed-off-by: Rick Elrod <rick@elrod.me>
2023-07-18 15:31:09 -05:00
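A minimal sketch of the wrapper idea described above, assuming a DJANGO_REDIS_IGNORE_EXCEPTIONS-style setting; the class and function names are illustrative, and only a few cache methods are wrapped here.

    from functools import wraps

    from django.conf import settings
    from django.core.cache.backends.redis import RedisCache
    from redis.exceptions import RedisError

    def optionally_ignore_exceptions(func):
        # Swallow Redis errors when the setting asks for it, mirroring
        # what django-redis used to do with DJANGO_REDIS_IGNORE_EXCEPTIONS.
        @wraps(func)
        def wrapper(*args, **kwargs):
            try:
                return func(*args, **kwargs)
            except RedisError:
                if getattr(settings, 'DJANGO_REDIS_IGNORE_EXCEPTIONS', False):
                    return None
                raise
        return wrapper

    class ForgivingRedisCache(RedisCache):
        get = optionally_ignore_exceptions(RedisCache.get)
        set = optionally_ignore_exceptions(RedisCache.set)
        delete = optionally_ignore_exceptions(RedisCache.delete)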
Cristiano Nicolai
83df056f71 Small doc fixes for workflow and task manager (#14242) 2023-07-18 19:23:48 +00:00
Rick Elrod
48edb15a03 Prevent Dispatcher deadlock when Redis disappears (#14249)
This fixes https://github.com/ansible/awx/issues/14245 which has
more information about this issue.

This change addresses both:
- A clashing signal handler (registering a callback to fire when
  the task manager times out, and hitting that callback in cases
  where we didn't expect to). Make dispatcher timeout use
  SIGUSR1, not SIGTERM.
- Metrics not being reported should not make us crash, so that is
  now fixed as well.

Signed-off-by: Rick Elrod <rick@elrod.me>
Co-authored-by: Alan Rominger <arominge@redhat.com>
2023-07-18 10:43:46 -05:00
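A hedged sketch of moving a timeout callback onto SIGUSR1 so it cannot collide with SIGTERM shutdown handling, as the commit above describes; the exception name and the timer-based signal delivery are illustrative only.

    import os
    import signal
    import threading

    class TaskManagerTimedOut(Exception):
        pass

    def _on_timeout(signum, frame):
        raise TaskManagerTimedOut()

    # Register the timeout callback on SIGUSR1 rather than SIGTERM so a
    # normal shutdown signal never fires the timeout path by accident.
    signal.signal(signal.SIGUSR1, _on_timeout)

    def fire_timeout_after(seconds):
        # Deliver SIGUSR1 to the current process once the time budget expires.
        timer = threading.Timer(seconds, os.kill, args=(os.getpid(), signal.SIGUSR1))
        timer.daemon = True
        timer.start()
        return timer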
John Westcott IV
8ddc19a927 Changing how associations work in awx collection (#13626)
Co-authored-by: Alan Rominger <arominge@redhat.com>
Co-authored-by: Jessica Steurer <70719005+jay-steurer@users.noreply.github.com>
2023-07-17 14:16:55 -03:00
Sean Sullivan
b021ad7b28 Allow job_template collection module to set verbosity to 5 (#14244) 2023-07-17 09:48:14 -05:00
Rick Elrod
b8ba2feecd Tell Makefile and pre-commit.sh that they are bash
On some systems, /bin/sh is a bash symlink and running it will launch
bash in sh compatibility mode. However, bash-specific syntax will still
work in this mode (for example using == or pipefail).

However, on systems where /bin/sh is a symlink to another shell (think:
Debian-based) they might not have those bashisms.

Set the shell in the Makefile, so that it uses bash (since it is already
depending on bash, even though it is calling it as /bin/sh by default),
and add a shebang to pre-commit.sh for the same reason.

Signed-off-by: Rick Elrod <rick@elrod.me>
2023-07-14 12:06:55 -05:00
Rick Elrod
8cfb704f86 Migrate from django-redis to Django's built-in Redis caching support (#14210)
Signed-off-by: Rick Elrod <rick@elrod.me>
2023-07-13 12:16:16 -05:00
John Westcott IV
efcac860de Upgrade django to 4.2.3 (#14228) 2023-07-13 08:52:50 -04:00
Martin Slemr
6c5590e0e6 HostMetricSummaryMonthly command + views + scheduled task (#13999)
Co-authored-by: Alan Rominger <arominge@redhat.com>
2023-07-12 16:40:09 -04:00
Erez Tamam
0edcd688a2 add organization column notification template list (#13998) 2023-07-12 15:11:47 -04:00
Alan Rominger
b8c48f7d50 Restore pre-upgrade pg_notify notifcation behavior (#14222) 2023-07-11 16:23:53 -04:00
John Westcott IV
07e30a3d5f Refined release documentation (#14221) 2023-07-10 19:45:34 +00:00
John Westcott IV
cb5a8aa194 Fix black pre-commit hook (#14212) 2023-07-06 16:36:50 -04:00
Seth Foster
8b49f910c7 Add settings.RECEPTOR_LOG_LEVEL, update work signing key path (#14098) 2023-07-06 11:39:30 -04:00
kialam
a4f808df34 Schedules form - pass time prop as string. (#14206) 2023-07-06 07:57:55 -07:00
Alan Rominger
82abd18927 Fix DELETE 500 KeyError due to eventless model events (#14172) 2023-07-05 15:37:52 -04:00
John Westcott IV
5e9d514e5e Added CSRF Origin in settings (#14062) 2023-07-05 15:18:23 -04:00
Rick Elrod
4a34ee1f1e Add optional pgbouncer to dev environment (#14083)
Signed-off-by: Rick Elrod <rick@elrod.me>
2023-07-05 13:41:47 -05:00
John Westcott IV
3624fe2cac Add combined roles/collection requirements on project sync (#14081) 2023-07-05 13:25:44 -03:00
Cesar Francisco San Nicolas Martinez
0f96d9aca2 Rename/relocate receptor crt in install bundle (#14201) 2023-07-05 14:50:55 +02:00
Shane McDonald
989b80e771 Fix selinux errors with Redis mount in dev env 2023-07-03 09:57:01 -04:00
John Westcott IV
cc64be937d Fix spelling errors in readme of awx_collection/tools
Signed-off-by: John Westcott <john.westcott.iv@redhat.com>
2023-06-30 15:41:47 -04:00
John Westcott IV
94183d602c Enhancing vault integration
Added persistent storage

Auto-create vault and awx via playbooks

Create a new pattern for custom containers where we can do initialization

Auto-install roles needed for plumbing via the Makefile
2023-06-30 10:05:15 -04:00
Vidya Nambiar
ac4ef141bf Fix filter experience when assigning access to teams (#14175) 2023-06-29 15:15:32 -04:00
jainnikhil30
86f6b54eec add the bulk api swagger topic for API reference docs (#14181) 2023-06-28 21:55:38 +05:30
Michael Abashian
bd8108b27c Fixed bug where a weekly rrule string without a BYDAY would result in the UI throwing a TypeError (#14182) 2023-06-28 11:10:49 -04:00
Alan Rominger
aed96fb365 Use the proper queryset to filter project update events (#14166) 2023-06-26 21:41:08 -04:00
Alan Rominger
fe2da52eec Upgrade Github actions issue labeler to fix 404 errors (#14163) 2023-06-26 17:14:53 -04:00
Alan Rominger
974465e46a Add hashivault option as docker-compose optional container (#14161)
Co-authored-by: Sarabraj Singh <singh.sarabraj@gmail.com>
2023-06-26 15:48:58 -04:00
Alan Rominger
c736986023 Try to fix CI by adding dropped coreapi lib (#14165) 2023-06-26 15:11:12 -04:00
Akira Yokochi
6b381aa79e Add example for ad_hoc_command module (#14106) 2023-06-23 11:59:16 -04:00
Alan Rominger
755e55ec70 Remove reference to unmaintained runner image (#14143) 2023-06-23 10:15:11 -04:00
Rick Elrod
255c2e4172 [wsrelay] Give connection tasks time to clean up
When we close/cancel a connection to a web node, give the task time to
clean up after itself and cleanly exit. Otherwise, the Python GC might
clean up the task too early and this leads to ugly log messages like
this: "Task was destroyed but it is pending!"

Signed-off-by: Rick Elrod <rick@elrod.me>
2023-06-23 00:56:24 -05:00
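A small sketch of the cleanup pattern the message above describes: cancel the per-connection task and then await it, so its cleanup runs before the last reference is dropped. The function name is illustrative.

    import asyncio
    import contextlib

    async def close_web_node_connection(task: asyncio.Task) -> None:
        task.cancel()
        # Awaiting the cancelled task lets it run its finally/cleanup blocks,
        # avoiding "Task was destroyed but it is pending!" warnings.
        with contextlib.suppress(asyncio.CancelledError):
            await task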
Alan Rominger
aa8437fd77 Tooling for running collection tests locally ad hoc (#14160) 2023-06-22 13:32:09 -04:00
Akira Yokochi
66f14bfe8f Using execution_environment option in ad_hoc_command module (#14105) 2023-06-22 13:10:01 -04:00
Gabriel Muniz
721a2002dc Add --interval to launch monitor command (#14068)
Co-authored-by: Jessica Steurer <70719005+jay-steurer@users.noreply.github.com>
2023-06-22 11:07:26 -03:00
Seth Foster
af39b2cd3f Rename work signing private key filename (#14156) 2023-06-21 19:50:04 -04:00
Lorenzo Tanganelli
cdd48dd7cd Add instance_groups on resource_list_param_keys in awx_collection (#14146) 2023-06-21 19:29:14 +00:00
Sean Sullivan
d3de884baf In collection, give changed status in workflow_job_template when destroying nodes (#13928) 2023-06-21 15:17:53 -04:00
Benjamin Dudas
fa8968b95b Fix for Save on the Jobs settings page not responding (#14103)
Co-authored-by: Michael Abashian <mabashia@redhat.com>
2023-06-21 15:14:31 -04:00
Jesse Wattenbarger
897a19e127 Add None check back to get_post_fields (#14155) 2023-06-21 12:37:59 -04:00
Artsiom Musin
4bae961b5f Improve performance for awx cli export (#13182)
Co-authored-by: Jesse Wattenbarger <jwattenb@redhat.com>
2023-06-21 10:49:22 -04:00
Seth Foster
900c4fd8f1 Rename work signing private key filename (#14151) 2023-06-21 09:52:58 -04:00
Akira Yokochi
4d5bbd7065 Fixed typo in integration test for group module (#14140) 2023-06-21 09:28:01 -04:00
Gabriel Muniz
fb8fadc7f9 Add new ANSIBLE_COLLECTIONS_PATH in preparation for deprecation of plural version (#14079) 2023-06-20 10:32:18 -03:00
John Westcott IV
ba99ddfd82 Fix PR and issue labeler job permissions (#14134) 2023-06-15 18:56:40 +00:00
Gabriel Muniz
9676a95e05 Add AWS Secretsmanager plugin (#13778)
Co-authored-by: Jessica Steurer <70719005+jay-steurer@users.noreply.github.com>
2023-06-15 10:12:02 -04:00
Gabriel Muniz
36d6ed9cac Removed automatic failure of job template launch when the last project update failed and update on launch is enabled (#13796)
Co-authored-by: Jessica Steurer <70719005+jay-steurer@users.noreply.github.com>
2023-06-15 10:11:13 -04:00
Gabriel Muniz
875f1a82e4 Add dynamically configurable debug settings (#14008)
Co-authored-by: Michael Abashian <mabashia@redhat.com>
2023-06-15 09:31:54 -04:00
Rick Elrod
db71b63829 Address comments from @jjwatt
Signed-off-by: Rick Elrod <rick@elrod.me>
2023-06-14 17:40:15 -04:00
John Westcott IV
cd4d83acb7 Compensating for NUL unicode characters
NUL characters are not allowed in text fields in the database

We used to strip them out of stdout but the exception changed

And we want to be sure to strip them out of JSONBlob fields
2023-06-14 17:40:15 -04:00
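A minimal sketch of stripping NUL characters before values reach text or JSONB columns, as the commit above describes; the recursive helper is illustrative and not the actual AWX code.

    def strip_nul(value):
        # PostgreSQL text and jsonb columns reject the NUL (\x00) character.
        if isinstance(value, str):
            return value.replace('\x00', '')
        if isinstance(value, dict):
            return {key: strip_nul(val) for key, val in value.items()}
        if isinstance(value, list):
            return [strip_nul(item) for item in value]
        return value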
John Westcott IV
7e25a694f3 Making all non-complicated JSONBlobs JSONFields 2023-06-14 17:40:15 -04:00
John Westcott IV
baca43ee62 Performing test maintenance 2023-06-14 17:40:15 -04:00
John Westcott IV
3b69552260 Forcing our JSONField to use text instead of Jsonb data 2023-06-14 17:40:15 -04:00
Rick Elrod
f9bd780d62 [wsrelay] Port back to psycopg3
Signed-off-by: Rick Elrod <rick@elrod.me>
2023-06-14 17:40:15 -04:00
John Westcott IV
a665d96026 Replacing psycopg2.copy_expert with psycopg3.copy 2023-06-14 17:40:15 -04:00
John Westcott IV
e47d30974c Removing psycopg2 references 2023-06-14 17:40:15 -04:00
John Westcott IV
2b8ed66f3e Updating old migration for psycopg3 2023-06-14 17:40:15 -04:00
John Westcott IV
dfe8b3b16b Removes psycopg2 in favor of psycopg3 2023-06-14 17:40:15 -04:00
Artsiom Musin
c738d0788e Check for a list of all option instead of string (#14046) 2023-06-14 15:41:06 -04:00
Jesse Wattenbarger
0c2d589109 Lazy init VERSION vars in Makefile (#14093) 2023-06-14 15:00:38 -04:00
Sean Sullivan
a47bbb5479 bugfix collection role module target_teams and instance_groups options (#14119) 2023-06-14 17:53:24 +00:00
Shane McDonald
4b4b73c02a Fix ARM builds (#14125) 2023-06-14 16:40:59 +00:00
John Westcott IV
d1d08fe499 Changed pin of rsyslog version (#14117) 2023-06-13 16:33:25 -04:00
Hao Liu
7e7a9f541c Remove install bundle download restriction (#14092) 2023-06-12 16:08:44 -04:00
kialam
98d67e2133 Update Patternfly and related deps. (#14086) 2023-06-12 12:35:26 -07:00
Alan Rominger
7a36041bf2 Remove whitespace artifacts from black with f-strings (#14112) 2023-06-12 11:52:22 -04:00
Hao Liu
b96564da55 Rename/relocate receptor cert and keys (#14091) 2023-06-09 12:57:04 -04:00
Seth Foster
044d6bf97c Fix task_system logs twice (#14096) 2023-06-07 16:50:56 -04:00
delinea-sagar
d357c1162f Awx.credential plugin.tss (#13985) 2023-06-07 19:36:15 +00:00
Darshan
3c22fc9242 Fix : awx.awx.group preserve hosts fails when there are no hosts (#13913)
Co-authored-by: Sean Sullivan <ssulliva@redhat.com>
2023-06-07 15:24:59 -04:00
Seth Foster
8c86092bf5 Remove random UUIDs from swagger json (#14089) 2023-06-06 10:44:15 -04:00
Cesar Francisco San Nicolas Martinez
081206965c Generate random UUID by default for added remote nodes (#14074) 2023-06-06 12:36:28 +02:00
Rick Elrod
036f85cd80 Two silly internal cleanups
- Nix an unused function from run_dispatcher. This stopped being used
  in 558e92806b but was never removed.

- Fix a typo in run_ws_heartbeat: hearbeat -> heartbeat that has existed
  since the beginning of this daemon.

Signed-off-by: Rick Elrod <rick@elrod.me>
2023-06-05 14:46:25 -05:00
Gabriel Muniz
6976ac9273 Add management command to precreate partitioned tables (#14076) 2023-06-05 18:20:53 +00:00
rakesh561
9009a21a32 Update Mesh.js to allow for running AWX at non-root path (URL prefixing) (#14020)
Co-authored-by: Michael Abashian <mabashia@redhat.com>
2023-06-05 11:46:12 -04:00
Shane McDonald
aafd4df288 Fix /api/swagger endpoint (available only in development mode) (#13197)
Co-authored-by: John Westcott IV <john.westcott.iv@redhat.com>
2023-06-02 12:58:21 -04:00
John Westcott IV
844666df4c Send real client remote address in TACACS+ authentication packet (#14077)
Co-authored-by: ekougs <ekougs@gmail.com>
2023-06-02 10:03:56 -04:00
Rick Elrod
0ae720244c [rsyslog] Enable disk-assisted queuing on output (#14005)
Right now we only enable queuing on the rsyslog main_queue. This adds a
parameter to also enable it on the omhttp output action. As omhttp can
take time to process messages (e.g. blocking on the result of its HTTP
requests), this change allows for queuing messages up and hopefully
preventing some messages from getting lost when the log server is slow
to respond.

Signed-off-by: Rick Elrod <rick@elrod.me>
2023-06-01 22:37:45 -05:00
Alex Corey
b70fa88b78 Adds RTL tests to new component, and to Instances List (#12927) 2023-06-01 19:19:24 +00:00
Alan Rominger
fbaeb90268 Apply conservative database connection reduction changes (#14066)
This is expected to free up 4 additional database connections per traditional node
  compared to roughly 12 in total before this change

Out of these, 3 are saved by reusing existing connections for recently added services,
  and 1 more by closing the connection of the idle callback receiver main process

Signed-off-by: jessicamack <jmack@redhat.com>
Co-authored-by: jessicamack <jmack@redhat.com>
2023-06-01 14:59:18 -04:00
Michael Abashian
2a549c0b23 Removes dependabot for opening ui dependency pr's (#14075) 2023-06-01 14:30:02 -04:00
Alan Rominger
2c320cb16d Manually run subquery for parent event updates (#14044)
Fixes a long query when processing playbook_on_stats events
2023-06-01 07:55:56 -04:00
lucas-benedito
434595481c AAP-8038 - enable/disable services on reboot (#13415)
Co-authored-by: Lucas Benedito <lbenedit@redhat.com>
2023-05-31 19:24:30 +00:00
sll552
444d05447e Fix ovirt source (#12882) 2023-05-31 15:22:58 -04:00
Michael Abashian
fbe202bdbf Adds missing rel="noopener noreferrer" to each link element with target="_blank" (#13959) 2023-05-31 13:49:39 -04:00
Michael Abashian
d89cad0d9e Adds managed_by_policy checkbox to instances form. Adds warnings when associating or disassociating instances from instance groups. (#13994) 2023-05-31 12:31:55 -04:00
Marliana Lara
bdfd6f47ff Use PATCH request when updating wf nodes (#14063) 2023-05-31 12:30:58 -04:00
Gabriel Muniz
ae7be2eea1 Add instance_group to bulk api (#13982)
Co-authored-by: Elijah DeLee <kdelee@redhat.com>
Co-authored-by: Jessica Steurer <70719005+jay-steurer@users.noreply.github.com>
2023-05-26 09:09:44 -03:00
Baptiste Agasse
8957a84738 Related #13336 - DNS resolution prevents awx_collection from working with http[s]_proxy (#13524)
Co-authored-by: Seth Foster <fosterseth@users.noreply.github.com>
2023-05-24 20:00:07 +00:00
Rick Elrod
bac124004f Rename heartbeet daemon to ws_heartbeat (#14041)
Signed-off-by: Rick Elrod <rick@elrod.me>
2023-05-24 13:27:55 -05:00
Joel Tenta
f46c7452d1 Spelling and codespelling corrections from community PR
- Made the choice not to pull in the CI tools due to the possibility of it blocking PRs.

Co-Authored By: Lila Yasin <89486372+djyasin@users.noreply.github.com>
2023-05-24 10:06:42 -04:00
John Westcott IV
098861d906 Updated sqlparse library (#13962)
Co-authored-by: Jessica Steurer <70719005+jay-steurer@users.noreply.github.com>
2023-05-24 08:09:29 -03:00
John Westcott IV
daf39dc77e Adding capability of pretty error pages (#13852)
Co-authored-by: Jessica Steurer <70719005+jay-steurer@users.noreply.github.com>
2023-05-23 14:05:38 -03:00
Hao Liu
00d8291d40 Change logging setting for task analytic scheduler (#14031) 2023-05-23 13:01:12 -04:00
Rick Elrod
88d1a484fa [dev docs] Re-document websockets infrastructure (#13992)
Re-add documentation for how AWX websockets and channels work, in the post-web/task split world.

Signed-off-by: Rick Elrod <rick@elrod.me>
2023-05-22 16:41:23 -05:00
Michael Abashian
5afdfb1135 Escape parenthesis in labeler for tech preview ui label 2023-05-18 15:00:19 -04:00
Michael Abashian
2f15cc5170 Updates issue_labeler.yml to handle tech preview ui auto-labeling 2023-05-18 14:46:36 -04:00
Michael Abashian
f15d40286c Adds a component label for the tech preview ui in bug_report.yml 2023-05-18 14:45:27 -04:00
Alan Rominger
f58c44590d Remove unused settings and associated code (#13898) 2023-05-18 10:05:59 -04:00
Alan Rominger
ef99770383 Add subsystem metrics for the dispatcher (#13989)
This adds a handful of metrics to /api/v2/metrics/ recorded from the dispatcher main process

Adds logic in the dispatcher periodic tasks to calculate these for the last collection interval
Reports worker count, task count, scale up events, and availability

Add data to demo grafana dashboard
2023-05-17 14:29:31 -04:00
John Westcott IV
84f67c7f82 Merge pull request #13961 from ansible/feature_django_upgrade_psycopg2
Upgrade to Django 4.2 LTS
2023-05-17 11:45:53 -04:00
Alan Rominger
433c28caa8 Materialize label page after getting 204 code (#14010) 2023-05-16 16:12:18 -04:00
Rick Elrod
fa05f55512 [collection] Fix sanity tests on ansible-core 2.15 (#14007)
Signed-off-by: Rick Elrod <rick@elrod.me>
2023-05-15 14:39:14 -05:00
Alan Rominger
0d5c0bcb91 Skip constructed_inventory in a more correct loop (#14004) 2023-05-15 13:48:59 -04:00
Rick Elrod
f3fa75d832 [wsrelay] Handle heartbeet shutdown and redis drop (#13991)
This fixes two different exceptions in wsrelay.

* One resulted from heartbeet getting ability in #13858 to gracefully
  shut down. When we saw the message come through, we didn't fully
  clean up the connection to the web node.

* The second resulted when Redis disappeared. We still want to exit in
  that case, but it's better to log a message and exit gracefully
  instead of crashing out.

Signed-off-by: Rick Elrod <rick@elrod.me>
2023-05-15 10:46:23 -05:00
John Westcott IV
285b7b0e5f Fixing deprecated usage of QuerySet.iterator() after prefetch_related() without specifying chunk_size 2023-05-11 11:45:47 -04:00
John Westcott IV
08e8147374 Removing deprecated django.utils.timezone.utc alias in favor of datetime.timezone.utc 2023-05-11 11:45:47 -04:00
John Westcott IV
09bd398a9e Replacing deprecated index_together with new indexes 2023-05-11 11:45:47 -04:00
John Westcott IV
8d6f50fae8 Upgrading django to 4.2 LTS 2023-05-11 11:45:15 -04:00
John Westcott IV
ecfbcb641e Adding upgrade to django-oauth-toolkit pre-migration 2023-05-11 11:43:33 -04:00
Shane McDonald
e434b1e0f3 Merge pull request #13987 from fosterseth/fix_ui_csp
Fix content security policy
2023-05-11 11:03:09 -04:00
Seth Foster
66c3acf777 Fix content security policy 2023-05-11 10:42:23 -04:00
John Westcott IV
ed1983bd8c Merge pull request #13977 from john-westcott-iv/awxkit_import_fix
Skip constructed_inventory endpoint in awxkit import
2023-05-11 09:04:32 -04:00
John Westcott IV
5c4277958c Merge pull request #13976 from john-westcott-iv/collection_job_wait_remove_depreciated_field_check
Change the job_wait integration test
2023-05-11 08:29:50 -04:00
John Westcott IV
7e4da7efa2 Updated pycryptography (#13964)
Co-authored-by: Jessica Steurer <70719005+jay-steurer@users.noreply.github.com>
2023-05-11 09:25:56 -03:00
Christian Adams
7b1cb281c2 Merge pull request #13980 from rooftopcellist/extract-ui-next-strings
Update make target for extracting strings to do so for ui_next too
2023-05-10 23:18:44 -04:00
Christian M. Adams
dee39f3f1c Update make target for extracting strings to do so for ui_next too 2023-05-10 19:20:21 -04:00
John Westcott IV
ba7f97f84b Skip constructed_inventory endpoint in awxkit import 2023-05-10 14:24:27 -04:00
Alan Rominger
85e7189ee3 Add error handling to scm_version.py script (#13521)
raise Exception in the case that return code is non-zero

this approach has shown itself to be the most consistently reliable across multiple ecosystems
2023-05-10 14:20:56 -04:00
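Roughly, the error handling described above amounts to checking the subprocess return code and raising instead of continuing with empty output; the exact git invocation in the sketch below is an assumption.

    import subprocess

    def get_scm_version(cmd=('git', 'describe', '--tags')):
        proc = subprocess.run(cmd, capture_output=True, text=True)
        if proc.returncode != 0:
            # Fail loudly rather than silently producing an empty version.
            raise Exception(f'{" ".join(cmd)} failed (rc={proc.returncode}): {proc.stderr.strip()}')
        return proc.stdout.strip()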
Alan Rominger
06430741ab Fix 400 error from job labels sublist (#13972)
This was caused by an incorrect parent_key ref from label to job
  also applies to workflow_job labels

This fixes a regression introduced by a recent merge (#13957)
2023-05-10 11:37:59 -04:00
John Westcott IV
cf091d7836 Change job_wait collection test to always try and delete created objects 2023-05-10 11:13:20 -04:00
John Westcott IV
a66acd87e6 Removes test of deprecated fields that have been removed from job_wait collection 2023-05-10 11:10:07 -04:00
Shane McDonald
595b4e3876 Merge pull request #13956 from shanemcd/get-your-strings-together
Clean up string formatting issues from black migration
2023-05-10 10:14:09 -04:00
Rick Elrod
74c46568c1 [wsrelay] switch from psycopg 3 to asyncpg (#13965)
Due to dependency issues specifically around upgrading to Django 4.2, we
cannot feasibly depend on both psycopg2 and psycopg3. The only
place currently using psycopg3 was wsrelay.

Change wsrelay to use the asyncpg library and psycopg2 instead.

Tested locally on kind with a dev build of awx.

Signed-off-by: Rick Elrod <rick@elrod.me>
2023-05-10 09:10:35 -05:00
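For context, listening for pg_notify events with asyncpg looks roughly like the sketch below; the DSN and channel name are placeholders, and this is not wsrelay's actual code.

    import asyncio
    import asyncpg

    async def relay_notifications(dsn, channel):
        conn = await asyncpg.connect(dsn)

        def on_notify(connection, pid, chan, payload):
            # asyncpg invokes this callback for every NOTIFY on the channel.
            print(f'{chan}: {payload}')

        await conn.add_listener(channel, on_notify)
        try:
            while True:
                await asyncio.sleep(1)
        finally:
            await conn.close()

    # asyncio.run(relay_notifications('postgresql://awx@localhost/awx', 'example_channel'))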
Shane McDonald
f1196fc019 Clean up string formatting issues from black migration 2023-05-10 08:19:23 -04:00
1476 changed files with 48632 additions and 13922 deletions

View File

@@ -44,6 +44,7 @@ body:
label: Select the relevant components
options:
- label: UI
- label: UI (tech preview)
- label: API
- label: Docs
- label: Collection

View File

@@ -0,0 +1,34 @@
name: Setup images for AWX
description: Builds new awx_devel image
inputs:
github-token:
description: GitHub Token for registry access
required: true
runs:
using: composite
steps:
- name: Get python version from Makefile
shell: bash
run: echo py_version=`make PYTHON_VERSION` >> $GITHUB_ENV
- name: Set lower case owner name
shell: bash
run: echo "OWNER_LC=${OWNER,,}" >> $GITHUB_ENV
env:
OWNER: '${{ github.repository_owner }}'
- name: Log in to registry
shell: bash
run: |
echo "${{ inputs.github-token }}" | docker login ghcr.io -u ${{ github.actor }} --password-stdin
- name: Pre-pull latest devel image to warm cache
shell: bash
run: docker pull ghcr.io/${OWNER_LC}/awx_devel:${{ github.base_ref }}
- name: Build image for current source checkout
shell: bash
run: |
DEV_DOCKER_TAG_BASE=ghcr.io/${OWNER_LC} \
COMPOSE_TAG=${{ github.base_ref }} \
make docker-compose-build

View File

@@ -0,0 +1,77 @@
name: Run AWX docker-compose
description: Runs AWX with `make docker-compose`
inputs:
github-token:
description: GitHub Token to pass to awx_devel_image
required: true
build-ui:
description: Should the UI be built?
required: false
default: false
type: boolean
outputs:
ip:
description: The IP of the tools_awx_1 container
value: ${{ steps.data.outputs.ip }}
admin-token:
description: OAuth token for admin user
value: ${{ steps.data.outputs.admin_token }}
runs:
using: composite
steps:
- name: Build awx_devel image for running checks
uses: ./.github/actions/awx_devel_image
with:
github-token: ${{ inputs.github-token }}
- name: Upgrade ansible-core
shell: bash
run: python3 -m pip install --upgrade ansible-core
- name: Install system deps
shell: bash
run: sudo apt-get install -y gettext
- name: Start AWX
shell: bash
run: |
DEV_DOCKER_OWNER=${{ github.repository_owner }} \
COMPOSE_TAG=${{ github.base_ref }} \
COMPOSE_UP_OPTS="-d" \
make docker-compose
- name: Update default AWX password
shell: bash
run: |
SECONDS=0
while [[ "$(curl -s -o /dev/null -w ''%{http_code}'' -k https://localhost:8043/api/v2/ping/)" != "200" ]]; do
if [[ $SECONDS -gt 600 ]]; then
echo "Timing out, AWX never came up"
exit 1
fi
echo "Waiting for AWX..."
sleep 5
done
echo "AWX is up, updating the password..."
docker exec -i tools_awx_1 sh <<-EOSH
awx-manage update_password --username=admin --password=password
EOSH
- name: Build UI
# This must be a string comparison in composite actions:
# https://github.com/actions/runner/issues/2238
if: ${{ inputs.build-ui == 'true' }}
shell: bash
run: |
docker exec -i tools_awx_1 sh <<-EOSH
make ui-devel
EOSH
- name: Get instance data
id: data
shell: bash
run: |
AWX_IP=$(docker inspect -f '{{.NetworkSettings.Networks.awx.IPAddress}}' tools_awx_1)
ADMIN_TOKEN=$(docker exec -i tools_awx_1 awx-manage create_oauth2_token --user admin)
echo "ip=$AWX_IP" >> $GITHUB_OUTPUT
echo "admin_token=$ADMIN_TOKEN" >> $GITHUB_OUTPUT

View File

@@ -0,0 +1,19 @@
name: Upload logs
description: Upload logs from `make docker-compose` devel environment to GitHub as an artifact
inputs:
log-filename:
description: "*Unique* name of the log file"
required: true
runs:
using: composite
steps:
- name: Get AWX logs
shell: bash
run: |
docker logs tools_awx_1 > ${{ inputs.log-filename }}
- name: Upload AWX logs as artifact
uses: actions/upload-artifact@v3
with:
name: docker-compose-logs
path: ${{ inputs.log-filename }}

View File

@@ -1,19 +1,10 @@
version: 2
updates:
- package-ecosystem: "npm"
directory: "/awx/ui"
- package-ecosystem: "pip"
directory: "docs/docsite/"
schedule:
interval: "monthly"
open-pull-requests-limit: 5
allow:
- dependency-type: "production"
reviewers:
- "AlexSCorey"
- "keithjgrant"
- "kialam"
- "mabashian"
- "marshmalien"
interval: "weekly"
open-pull-requests-limit: 2
labels:
- "component:ui"
- "docs"
- "dependencies"
target-branch: "devel"

View File

@@ -6,6 +6,8 @@ needs_triage:
- "Feature Summary"
"component:ui":
- "\\[X\\] UI"
"component:ui_next":
- "\\[X\\] UI \\(tech preview\\)"
"component:api":
- "\\[X\\] API"
"component:docs":

View File

@@ -15,5 +15,4 @@
"dependencies":
- any: ["awx/ui/package.json"]
- any: ["awx/requirements/*.txt"]
- any: ["awx/requirements/requirements.in"]
- any: ["requirements/*"]

View File

@@ -7,8 +7,8 @@
## PRs/Issues
### Visit our mailing list
- Hello, this appears to be less of a bug report or feature request and more of a question. Could you please ask this on our mailing list? See https://github.com/ansible/awx/#get-involved for information for ways to connect with us.
### Visit the Forum or Matrix
- Hello, this appears to be less of a bug report or feature request and more of a question. Could you please ask this on either the [Ansible AWX channel on Matrix](https://matrix.to/#/#awx:ansible.com) or the [Ansible Community Forum](https://forum.ansible.com/tag/awx)?
### Denied Submission

View File

@@ -11,6 +11,7 @@ jobs:
common-tests:
name: ${{ matrix.tests.name }}
runs-on: ubuntu-latest
timeout-minutes: 60
permissions:
packages: write
contents: read
@@ -20,6 +21,8 @@ jobs:
tests:
- name: api-test
command: /start_tests.sh
- name: api-migrations
command: /start_tests.sh test_migrations
- name: api-lint
command: /var/lib/awx/venv/awx/bin/tox -e linters
- name: api-swagger
@@ -35,29 +38,44 @@ jobs:
- name: ui-test-general
command: make ui-test-general
steps:
- uses: actions/checkout@v2
- uses: actions/checkout@v3
- name: Build awx_devel image for running checks
uses: ./.github/actions/awx_devel_image
with:
github-token: ${{ secrets.GITHUB_TOKEN }}
- name: Run check ${{ matrix.tests.name }}
run: AWX_DOCKER_CMD='${{ matrix.tests.command }}' make github_ci_runner
run: AWX_DOCKER_CMD='${{ matrix.tests.command }}' make docker-runner
dev-env:
runs-on: ubuntu-latest
timeout-minutes: 60
steps:
- uses: actions/checkout@v2
- uses: actions/checkout@v3
- uses: ./.github/actions/run_awx_devel
id: awx
with:
build-ui: false
github-token: ${{ secrets.GITHUB_TOKEN }}
- name: Run smoke test
run: make github_ci_setup && ansible-playbook tools/docker-compose/ansible/smoke-test.yml -v
run: ansible-playbook tools/docker-compose/ansible/smoke-test.yml -v
awx-operator:
runs-on: ubuntu-latest
timeout-minutes: 60
env:
DEBUG_OUTPUT_DIR: /tmp/awx_operator_molecule_test
steps:
- name: Checkout awx
uses: actions/checkout@v2
uses: actions/checkout@v3
with:
path: awx
- name: Checkout awx-operator
uses: actions/checkout@v2
uses: actions/checkout@v3
with:
repository: ansible/awx-operator
path: awx-operator
@@ -67,7 +85,7 @@ jobs:
run: echo py_version=`make PYTHON_VERSION` >> $GITHUB_ENV
- name: Install python ${{ env.py_version }}
uses: actions/setup-python@v2
uses: actions/setup-python@v4
with:
python-version: ${{ env.py_version }}
@@ -78,11 +96,11 @@ jobs:
- name: Build AWX image
working-directory: awx
run: |
ansible-playbook -v tools/ansible/build.yml \
-e headless=yes \
-e awx_image=awx \
-e awx_image_tag=ci \
-e ansible_python_interpreter=$(which python3)
VERSION=`make version-for-buildyml` make awx-kube-build
env:
COMPOSE_TAG: ci
DEV_DOCKER_TAG_BASE: local
HEADLESS: yes
- name: Run test deployment with awx-operator
working-directory: awx-operator
@@ -91,18 +109,28 @@ jobs:
ansible-galaxy collection install -r molecule/requirements.yml
sudo rm -f $(which kustomize)
make kustomize
KUSTOMIZE_PATH=$(readlink -f bin/kustomize) molecule -v test -s kind
KUSTOMIZE_PATH=$(readlink -f bin/kustomize) molecule -v test -s kind -- --skip-tags=replicas
env:
AWX_TEST_IMAGE: awx
AWX_TEST_IMAGE: local/awx
AWX_TEST_VERSION: ci
AWX_EE_TEST_IMAGE: quay.io/ansible/awx-ee:latest
STORE_DEBUG_OUTPUT: true
- name: Upload debug output
if: failure()
uses: actions/upload-artifact@v3
with:
name: awx-operator-debug-output
path: ${{ env.DEBUG_OUTPUT_DIR }}
collection-sanity:
name: awx_collection sanity
runs-on: ubuntu-latest
timeout-minutes: 30
strategy:
fail-fast: false
steps:
- uses: actions/checkout@v2
- uses: actions/checkout@v3
# The containers that GitHub Actions use have Ansible installed, so upgrade to make sure we have the latest version.
- name: Upgrade ansible-core
@@ -110,7 +138,139 @@ jobs:
- name: Run sanity tests
run: make test_collection_sanity
collection-integration:
name: awx_collection integration
runs-on: ubuntu-latest
timeout-minutes: 60
strategy:
fail-fast: false
matrix:
target-regex:
- name: a-h
regex: ^[a-h]
- name: i-p
regex: ^[i-p]
- name: r-z0-9
regex: ^[r-z0-9]
steps:
- uses: actions/checkout@v3
- uses: ./.github/actions/run_awx_devel
id: awx
with:
build-ui: false
github-token: ${{ secrets.GITHUB_TOKEN }}
- name: Install dependencies for running tests
run: |
python3 -m pip install -e ./awxkit/
python3 -m pip install -r awx_collection/requirements.txt
- name: Run integration tests
run: |
echo "::remove-matcher owner=python::" # Disable annoying annotations from setup-python
echo '[general]' > ~/.tower_cli.cfg
echo 'host = https://${{ steps.awx.outputs.ip }}:8043' >> ~/.tower_cli.cfg
echo 'oauth_token = ${{ steps.awx.outputs.admin-token }}' >> ~/.tower_cli.cfg
echo 'verify_ssl = false' >> ~/.tower_cli.cfg
TARGETS="$(ls awx_collection/tests/integration/targets | grep '${{ matrix.target-regex.regex }}' | tr '\n' ' ')"
make COLLECTION_VERSION=100.100.100-git COLLECTION_TEST_TARGET="--coverage --requirements $TARGETS" test_collection_integration
env:
# needed due to cgroupsv2. This is fixed, but a stable release
# with the fix has not been made yet.
ANSIBLE_TEST_PREFER_PODMAN: 1
# Upload coverage report as artifact
- uses: actions/upload-artifact@v3
if: always()
with:
name: coverage-${{ matrix.target-regex.name }}
path: ~/.ansible/collections/ansible_collections/awx/awx/tests/output/coverage/
- uses: ./.github/actions/upload_awx_devel_logs
if: always()
with:
log-filename: collection-integration-${{ matrix.target-regex.name }}.log
collection-integration-coverage-combine:
name: combine awx_collection integration coverage
runs-on: ubuntu-latest
timeout-minutes: 10
needs:
- collection-integration
strategy:
fail-fast: false
steps:
- uses: actions/checkout@v3
- name: Upgrade ansible-core
run: python3 -m pip install --upgrade ansible-core
- name: Download coverage artifacts
uses: actions/download-artifact@v3
with:
path: coverage
- name: Combine coverage
run: |
make COLLECTION_VERSION=100.100.100-git install_collection
mkdir -p ~/.ansible/collections/ansible_collections/awx/awx/tests/output/coverage
cd coverage
for i in coverage-*; do
cp -rv $i/* ~/.ansible/collections/ansible_collections/awx/awx/tests/output/coverage/
done
cd ~/.ansible/collections/ansible_collections/awx/awx
ansible-test coverage combine --requirements
ansible-test coverage html
echo '## AWX Collection Integration Coverage' >> $GITHUB_STEP_SUMMARY
echo '```' >> $GITHUB_STEP_SUMMARY
ansible-test coverage report >> $GITHUB_STEP_SUMMARY
echo '```' >> $GITHUB_STEP_SUMMARY
echo >> $GITHUB_STEP_SUMMARY
echo '## AWX Collection Integration Coverage HTML' >> $GITHUB_STEP_SUMMARY
echo 'Download the HTML artifacts to view the coverage report.' >> $GITHUB_STEP_SUMMARY
# This is a huge hack, there's no official action for removing artifacts currently.
# Also ACTIONS_RUNTIME_URL and ACTIONS_RUNTIME_TOKEN aren't available in normal run
# steps, so we have to use github-script to get them.
#
# The advantage of doing this, though, is that we save on artifact storage space.
- name: Get secret artifact runtime URL
uses: actions/github-script@v6
id: get-runtime-url
with:
result-encoding: string
script: |
const { ACTIONS_RUNTIME_URL } = process.env;
return ACTIONS_RUNTIME_URL;
- name: Get secret artifact runtime token
uses: actions/github-script@v6
id: get-runtime-token
with:
result-encoding: string
script: |
const { ACTIONS_RUNTIME_TOKEN } = process.env;
return ACTIONS_RUNTIME_TOKEN;
- name: Remove intermediary artifacts
env:
ACTIONS_RUNTIME_URL: ${{ steps.get-runtime-url.outputs.result }}
ACTIONS_RUNTIME_TOKEN: ${{ steps.get-runtime-token.outputs.result }}
run: |
echo "::add-mask::${ACTIONS_RUNTIME_TOKEN}"
artifacts=$(
curl -H "Authorization: Bearer $ACTIONS_RUNTIME_TOKEN" \
${ACTIONS_RUNTIME_URL}_apis/pipelines/workflows/${{ github.run_id }}/artifacts?api-version=6.0-preview \
| jq -r '.value | .[] | select(.name | startswith("coverage-")) | .url'
)
for artifact in $artifacts; do
curl -i -X DELETE -H "Accept: application/json;api-version=6.0-preview" -H "Authorization: Bearer $ACTIONS_RUNTIME_TOKEN" "$artifact"
done
- name: Upload coverage report as artifact
uses: actions/upload-artifact@v3
with:
name: awx-collection-integration-coverage-html
path: ~/.ansible/collections/ansible_collections/awx/awx/tests/output/reports/coverage

View File

@@ -3,32 +3,55 @@ name: Build/Push Development Images
env:
LC_ALL: "C.UTF-8" # prevent ERROR: Ansible could not initialize the preferred locale: unsupported locale setting
on:
workflow_dispatch:
push:
branches:
- devel
- release_*
- feature_*
jobs:
push:
if: endsWith(github.repository, '/awx') || startsWith(github.ref, 'refs/heads/release_')
push-development-images:
runs-on: ubuntu-latest
timeout-minutes: 120
permissions:
packages: write
contents: read
strategy:
fail-fast: false
matrix:
build-targets:
- image-name: awx_devel
make-target: docker-compose-buildx
- image-name: awx_kube_devel
make-target: awx-kube-dev-buildx
- image-name: awx
make-target: awx-kube-buildx
steps:
- uses: actions/checkout@v2
- name: Get python version from Makefile
run: echo py_version=`make PYTHON_VERSION` >> $GITHUB_ENV
- name: Set lower case owner name
- name: Skipping build of awx image for non-awx repository
run: |
echo "OWNER_LC=${OWNER,,}" >>${GITHUB_ENV}
echo "Skipping build of awx image for non-awx repository"
exit 0
if: matrix.build-targets.image-name == 'awx' && !endsWith(github.repository, '/awx')
- uses: actions/checkout@v3
- name: Set up QEMU
uses: docker/setup-qemu-action@v3
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
- name: Set GITHUB_ENV variables
run: |
echo "DEV_DOCKER_TAG_BASE=ghcr.io/${OWNER,,}" >> $GITHUB_ENV
echo "COMPOSE_TAG=${GITHUB_REF##*/}" >> $GITHUB_ENV
echo py_version=`make PYTHON_VERSION` >> $GITHUB_ENV
env:
OWNER: '${{ github.repository_owner }}'
- name: Install python ${{ env.py_version }}
uses: actions/setup-python@v2
uses: actions/setup-python@v4
with:
python-version: ${{ env.py_version }}
@@ -36,20 +59,19 @@ jobs:
run: |
echo "${{ secrets.GITHUB_TOKEN }}" | docker login ghcr.io -u ${{ github.actor }} --password-stdin
- name: Pre-pull image to warm build cache
run: |
docker pull ghcr.io/${OWNER_LC}/awx_devel:${GITHUB_REF##*/} || :
docker pull ghcr.io/${OWNER_LC}/awx_kube_devel:${GITHUB_REF##*/} || :
docker pull ghcr.io/${OWNER_LC}/awx:${GITHUB_REF##*/} || :
- name: Setup node and npm
uses: actions/setup-node@v2
with:
node-version: '16.13.1'
if: matrix.build-targets.image-name == 'awx'
- name: Build images
- name: Prebuild UI for awx image (to speed up build process)
run: |
DEV_DOCKER_TAG_BASE=ghcr.io/${OWNER_LC} COMPOSE_TAG=${GITHUB_REF##*/} make docker-compose-build
DEV_DOCKER_TAG_BASE=ghcr.io/${OWNER_LC} COMPOSE_TAG=${GITHUB_REF##*/} make awx-kube-dev-build
DEV_DOCKER_TAG_BASE=ghcr.io/${OWNER_LC} COMPOSE_TAG=${GITHUB_REF##*/} make awx-kube-build
sudo apt-get install gettext
make ui-release
make ui-next
if: matrix.build-targets.image-name == 'awx'
- name: Push image
- name: Build and push AWX devel images
run: |
docker push ghcr.io/${OWNER_LC}/awx_devel:${GITHUB_REF##*/}
docker push ghcr.io/${OWNER_LC}/awx_kube_devel:${GITHUB_REF##*/}
docker push ghcr.io/${OWNER_LC}/awx:${GITHUB_REF##*/}
make ${{ matrix.build-targets.make-target }}

17
.github/workflows/docs.yml vendored Normal file
View File

@@ -0,0 +1,17 @@
---
name: Docsite CI
on:
pull_request:
jobs:
docsite-build:
name: docsite test build
runs-on: ubuntu-latest
timeout-minutes: 30
steps:
- uses: actions/checkout@v3
- name: install tox
run: pip install tox
- name: Assure docs can be built
run: tox -e docs

View File

@@ -1,109 +0,0 @@
---
name: E2E Tests
env:
LC_ALL: "C.UTF-8" # prevent ERROR: Ansible could not initialize the preferred locale: unsupported locale setting
on:
pull_request_target:
types: [labeled]
jobs:
e2e-test:
if: contains(github.event.pull_request.labels.*.name, 'qe:e2e')
runs-on: ubuntu-latest
timeout-minutes: 40
permissions:
packages: write
contents: read
strategy:
matrix:
job: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24]
steps:
- uses: actions/checkout@v2
- name: Get python version from Makefile
run: echo py_version=`make PYTHON_VERSION` >> $GITHUB_ENV
- name: Install python ${{ env.py_version }}
uses: actions/setup-python@v2
with:
python-version: ${{ env.py_version }}
- name: Install system deps
run: sudo apt-get install -y gettext
- name: Log in to registry
run: |
echo "${{ secrets.GITHUB_TOKEN }}" | docker login ghcr.io -u ${{ github.actor }} --password-stdin
- name: Pre-pull image to warm build cache
run: |
docker pull ghcr.io/${{ github.repository_owner }}/awx_devel:${{ github.base_ref }}
- name: Build UI
run: |
DEV_DOCKER_TAG_BASE=ghcr.io/${{ github.repository_owner }} COMPOSE_TAG=${{ github.base_ref }} make ui-devel
- name: Start AWX
run: |
DEV_DOCKER_TAG_BASE=ghcr.io/${{ github.repository_owner }} COMPOSE_TAG=${{ github.base_ref }} make docker-compose &> make-docker-compose-output.log &
- name: Pull awx_cypress_base image
run: |
docker pull quay.io/awx/awx_cypress_base:latest
- name: Checkout test project
uses: actions/checkout@v2
with:
repository: ${{ github.repository_owner }}/tower-qa
ssh-key: ${{ secrets.QA_REPO_KEY }}
path: tower-qa
ref: devel
- name: Build cypress
run: |
cd ${{ secrets.E2E_PROJECT }}/ui-tests/awx-pf-tests
docker build -t awx-pf-tests .
- name: Update default AWX password
run: |
while [[ "$(curl -s -o /dev/null -w ''%{http_code}'' -k https://localhost:8043/api/v2/ping/)" != "200" ]]
do
echo "Waiting for AWX..."
sleep 5;
done
echo "AWX is up, updating the password..."
docker exec -i tools_awx_1 sh <<-EOSH
awx-manage update_password --username=admin --password=password
EOSH
- name: Run E2E tests
env:
CYPRESS_RECORD_KEY: ${{ secrets.CYPRESS_RECORD_KEY }}
run: |
export COMMIT_INFO_BRANCH=$GITHUB_HEAD_REF
export COMMIT_INFO_AUTHOR=$GITHUB_ACTOR
export COMMIT_INFO_SHA=$GITHUB_SHA
export COMMIT_INFO_REMOTE=$GITHUB_REPOSITORY_OWNER
cd ${{ secrets.E2E_PROJECT }}/ui-tests/awx-pf-tests
AWX_IP=$(docker inspect -f '{{range.NetworkSettings.Networks}}{{.IPAddress}}{{end}}' tools_awx_1)
printenv > .env
echo "Executing tests:"
docker run \
--network '_sources_default' \
--ipc=host \
--env-file=.env \
-e CYPRESS_baseUrl="https://$AWX_IP:8043" \
-e CYPRESS_AWX_E2E_USERNAME=admin \
-e CYPRESS_AWX_E2E_PASSWORD='password' \
-e COMMAND="npm run cypress-concurrently-gha" \
-v /dev/shm:/dev/shm \
-v $PWD:/e2e \
-w /e2e \
awx-pf-tests run --project .
- name: Save AWX logs
uses: actions/upload-artifact@v2
with:
name: AWX-logs-${{ matrix.job }}
path: make-docker-compose-output.log

View File

@@ -2,13 +2,12 @@
name: Feature branch deletion cleanup
env:
LC_ALL: "C.UTF-8" # prevent ERROR: Ansible could not initialize the preferred locale: unsupported locale setting
on:
delete:
branches:
- feature_**
on: delete
jobs:
push:
branch_delete:
if: ${{ github.event.ref_type == 'branch' && startsWith(github.event.ref, 'feature_') }}
runs-on: ubuntu-latest
timeout-minutes: 20
permissions:
packages: write
contents: read
@@ -21,6 +20,4 @@ jobs:
run: |
ansible localhost -c local, -m command -a "{{ ansible_python_interpreter + ' -m pip install boto3'}}"
ansible localhost -c local -m aws_s3 \
-a "bucket=awx-public-ci-files object=${GITHUB_REF##*/}/schema.json mode=delete permission=public-read"
-a "bucket=awx-public-ci-files object=${GITHUB_REF##*/}/schema.json mode=delobj permission=public-read"

View File

@@ -6,18 +6,19 @@ on:
- opened
- reopened
permissions:
contents: read # to fetch code
issues: write # to label issues
permissions:
contents: write # to fetch code
issues: write # to label issues
jobs:
triage:
runs-on: ubuntu-latest
timeout-minutes: 20
name: Label Issue
steps:
- name: Label Issue
uses: github/issue-labeler@v2.4.1
uses: github/issue-labeler@v3.1
with:
repo-token: "${{ secrets.GITHUB_TOKEN }}"
not-before: 2021-12-07T07:00:00Z
@@ -26,9 +27,10 @@ jobs:
community:
runs-on: ubuntu-latest
timeout-minutes: 20
name: Label Issue - Community
steps:
- uses: actions/checkout@v2
- uses: actions/checkout@v3
- uses: actions/setup-python@v4
- name: Install python requests
run: pip install requests

View File

@@ -8,12 +8,13 @@ on:
- synchronize
permissions:
contents: read # to determine modified files (actions/labeler)
contents: write # to determine modified files (actions/labeler)
pull-requests: write # to add labels to PRs (actions/labeler)
jobs:
triage:
runs-on: ubuntu-latest
timeout-minutes: 20
name: Label PR
steps:
@@ -25,9 +26,10 @@ jobs:
community:
runs-on: ubuntu-latest
timeout-minutes: 20
name: Label PR - Community
steps:
- uses: actions/checkout@v2
- uses: actions/checkout@v3
- uses: actions/setup-python@v4
- name: Install python requests
run: pip install requests

View File

@@ -7,8 +7,10 @@ on:
types: [opened, edited, reopened, synchronize]
jobs:
pr-check:
if: github.repository_owner == 'ansible' && endsWith(github.repository, 'awx')
name: Scan PR description for semantic versioning keywords
runs-on: ubuntu-latest
timeout-minutes: 20
permissions:
packages: write
contents: read

View File

@@ -7,23 +7,38 @@ env:
on:
release:
types: [published]
workflow_dispatch:
inputs:
tag_name:
description: 'Name for the tag of the release.'
required: true
permissions:
contents: read # to fetch code (actions/checkout)
jobs:
promote:
if: endsWith(github.repository, '/awx')
if: endsWith(github.repository, '/awx')
runs-on: ubuntu-latest
timeout-minutes: 90
steps:
- name: Set GitHub Env vars for workflow_dispatch event
if: ${{ github.event_name == 'workflow_dispatch' }}
run: |
echo "TAG_NAME=${{ github.event.inputs.tag_name }}" >> $GITHUB_ENV
- name: Set GitHub Env vars if release event
if: ${{ github.event_name == 'release' }}
run: |
echo "TAG_NAME=${{ env.TAG_NAME }}" >> $GITHUB_ENV
- name: Checkout awx
uses: actions/checkout@v2
uses: actions/checkout@v3
- name: Get python version from Makefile
run: echo py_version=`make PYTHON_VERSION` >> $GITHUB_ENV
- name: Install python ${{ env.py_version }}
uses: actions/setup-python@v2
uses: actions/setup-python@v4
with:
python-version: ${{ env.py_version }}
@@ -40,14 +55,20 @@ jobs:
if: ${{ github.repository_owner != 'ansible' }}
- name: Build collection and publish to galaxy
env:
COLLECTION_NAMESPACE: ${{ env.collection_namespace }}
COLLECTION_VERSION: ${{ env.TAG_NAME }}
COLLECTION_TEMPLATE_VERSION: true
run: |
COLLECTION_TEMPLATE_VERSION=true COLLECTION_NAMESPACE=${{ env.collection_namespace }} make build_collection
if [ "$(curl --head -sw '%{http_code}' https://galaxy.ansible.com/download/${{ env.collection_namespace }}-awx-${{ github.event.release.tag_name }}.tar.gz | tail -1)" == "302" ] ; then \
echo "Galaxy release already done"; \
else \
make build_collection
curl_with_redirects=$(curl --head -sLw '%{http_code}' https://galaxy.ansible.com/download/${{ env.collection_namespace }}-awx-${{ env.TAG_NAME }}.tar.gz | tail -1)
curl_without_redirects=$(curl --head -sw '%{http_code}' https://galaxy.ansible.com/download/${{ env.collection_namespace }}-awx-${{ env.TAG_NAME }}.tar.gz | tail -1)
if [[ "$curl_with_redirects" == "302" ]] || [[ "$curl_without_redirects" == "302" ]]; then
echo "Galaxy release already done";
else
ansible-galaxy collection publish \
--token=${{ secrets.GALAXY_TOKEN }} \
awx_collection_build/${{ env.collection_namespace }}-awx-${{ github.event.release.tag_name }}.tar.gz; \
awx_collection_build/${{ env.collection_namespace }}-awx-${{ env.TAG_NAME }}.tar.gz;
fi
- name: Set official pypi info
@@ -59,9 +80,11 @@ jobs:
if: ${{ github.repository_owner != 'ansible' }}
- name: Build awxkit and upload to pypi
env:
SETUPTOOLS_SCM_PRETEND_VERSION: ${{ env.TAG_NAME }}
run: |
git reset --hard
cd awxkit && python3 setup.py bdist_wheel
cd awxkit && python3 setup.py sdist bdist_wheel
twine upload \
-r ${{ env.pypi_repo }} \
-u ${{ secrets.PYPI_USERNAME }} \
@@ -78,11 +101,15 @@ jobs:
- name: Re-tag and promote awx image
run: |
docker pull ghcr.io/${{ github.repository }}:${{ github.event.release.tag_name }}
docker tag ghcr.io/${{ github.repository }}:${{ github.event.release.tag_name }} quay.io/${{ github.repository }}:${{ github.event.release.tag_name }}
docker tag ghcr.io/${{ github.repository }}:${{ github.event.release.tag_name }} quay.io/${{ github.repository }}:latest
docker push quay.io/${{ github.repository }}:${{ github.event.release.tag_name }}
docker push quay.io/${{ github.repository }}:latest
docker pull ghcr.io/${{ github.repository_owner }}/awx-ee:${{ github.event.release.tag_name }}
docker tag ghcr.io/${{ github.repository_owner }}/awx-ee:${{ github.event.release.tag_name }} quay.io/${{ github.repository_owner }}/awx-ee:${{ github.event.release.tag_name }}
docker push quay.io/${{ github.repository_owner }}/awx-ee:${{ github.event.release.tag_name }}
docker buildx imagetools create \
ghcr.io/${{ github.repository }}:${{ env.TAG_NAME }} \
--tag quay.io/${{ github.repository }}:${{ env.TAG_NAME }}
docker buildx imagetools create \
ghcr.io/${{ github.repository }}:${{ env.TAG_NAME }} \
--tag quay.io/${{ github.repository }}:latest
- name: Re-tag and promote awx-ee image
run: |
docker buildx imagetools create \
ghcr.io/${{ github.repository_owner }}/awx-ee:${{ env.TAG_NAME }} \
--tag quay.io/${{ github.repository_owner }}/awx-ee:${{ env.TAG_NAME }}

View File

@@ -23,6 +23,7 @@ jobs:
stage:
if: endsWith(github.repository, '/awx')
runs-on: ubuntu-latest
timeout-minutes: 90
permissions:
packages: write
contents: write
@@ -44,68 +45,100 @@ jobs:
exit 0
- name: Checkout awx
uses: actions/checkout@v2
uses: actions/checkout@v3
with:
path: awx
- name: Get python version from Makefile
run: echo py_version=`make PYTHON_VERSION` >> $GITHUB_ENV
- name: Install python ${{ env.py_version }}
uses: actions/setup-python@v2
- name: Checkout awx-operator
uses: actions/checkout@v3
with:
python-version: ${{ env.py_version }}
repository: ${{ github.repository_owner }}/awx-operator
path: awx-operator
- name: Checkout awx-logos
uses: actions/checkout@v2
uses: actions/checkout@v3
with:
repository: ansible/awx-logos
path: awx-logos
- name: Checkout awx-operator
uses: actions/checkout@v2
- name: Get python version from Makefile
working-directory: awx
run: echo py_version=`make PYTHON_VERSION` >> $GITHUB_ENV
- name: Install python ${{ env.py_version }}
uses: actions/setup-python@v4
with:
repository: ${{ github.repository_owner }}/awx-operator
path: awx-operator
python-version: ${{ env.py_version }}
- name: Install playbook dependencies
run: |
python3 -m pip install docker
- name: Build and stage AWX
- name: Log into registry ghcr.io
uses: docker/login-action@v3
with:
registry: ghcr.io
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Copy logos for inclusion in sdist for official build
working-directory: awx
run: |
ansible-playbook -v tools/ansible/build.yml \
-e registry=ghcr.io \
-e registry_username=${{ github.actor }} \
-e registry_password=${{ secrets.GITHUB_TOKEN }} \
-e awx_image=${{ github.repository }} \
-e awx_version=${{ github.event.inputs.version }} \
-e ansible_python_interpreter=$(which python3) \
-e push=yes \
-e awx_official=yes
cp ../awx-logos/awx/ui/client/assets/* awx/ui/public/static/media/
- name: Log in to GHCR
run: |
echo ${{ secrets.GITHUB_TOKEN }} | docker login ghcr.io -u ${{ github.actor }} --password-stdin
- name: Setup node and npm
uses: actions/setup-node@v2
with:
node-version: '16.13.1'
- name: Log in to Quay
- name: Prebuild UI for awx image (to speed up build process)
working-directory: awx
run: |
echo ${{ secrets.QUAY_TOKEN }} | docker login quay.io -u ${{ secrets.QUAY_USER }} --password-stdin
sudo apt-get install gettext
make ui-release
make ui-next
- name: Set build env variables
run: |
echo "DEV_DOCKER_TAG_BASE=ghcr.io/${OWNER,,}" >> $GITHUB_ENV
echo "COMPOSE_TAG=${{ github.event.inputs.version }}" >> $GITHUB_ENV
echo "VERSION=${{ github.event.inputs.version }}" >> $GITHUB_ENV
echo "AWX_TEST_VERSION=${{ github.event.inputs.version }}" >> $GITHUB_ENV
echo "AWX_TEST_IMAGE=ghcr.io/${OWNER,,}/awx" >> $GITHUB_ENV
echo "AWX_EE_TEST_IMAGE=ghcr.io/${OWNER,,}/awx-ee:${{ github.event.inputs.version }}" >> $GITHUB_ENV
echo "AWX_OPERATOR_TEST_IMAGE=ghcr.io/${OWNER,,}/awx-operator:${{ github.event.inputs.operator_version }}" >> $GITHUB_ENV
env:
OWNER: ${{ github.repository_owner }}
- name: Build and stage AWX
working-directory: awx
env:
DOCKER_BUILDX_PUSH: true
HEADLESS: false
PLATFORMS: linux/amd64,linux/arm64
run: |
make awx-kube-buildx
- name: tag awx-ee:latest with version input
run: |
docker pull quay.io/ansible/awx-ee:latest
docker tag quay.io/ansible/awx-ee:latest ghcr.io/${{ github.repository_owner }}/awx-ee:${{ github.event.inputs.version }}
docker push ghcr.io/${{ github.repository_owner }}/awx-ee:${{ github.event.inputs.version }}
docker buildx imagetools create \
quay.io/ansible/awx-ee:latest \
--tag ${AWX_EE_TEST_IMAGE}
- name: Build and stage awx-operator
- name: Stage awx-operator image
working-directory: awx-operator
run: |
BUILD_ARGS="--build-arg DEFAULT_AWX_VERSION=${{ github.event.inputs.version }} \
--build-arg OPERATOR_VERSION=${{ github.event.inputs.operator_version }}" \
IMAGE_TAG_BASE=ghcr.io/${{ github.repository_owner }}/awx-operator \
VERSION=${{ github.event.inputs.operator_version }} make docker-build docker-push
BUILD_ARGS="--build-arg DEFAULT_AWX_VERSION=${{ github.event.inputs.version}} \
--build-arg OPERATOR_VERSION=${{ github.event.inputs.operator_version }}" \
IMG=${AWX_OPERATOR_TEST_IMAGE} \
make docker-buildx
- name: Pulling images for test deployment with awx-operator
# awx operator molecule tests expect to `kind load` the image, but buildx exports the image to the registry and not to the local daemon
run: |
docker pull ${AWX_OPERATOR_TEST_IMAGE}
docker pull ${AWX_EE_TEST_IMAGE}
docker pull ${AWX_TEST_IMAGE}:${AWX_TEST_VERSION}
- name: Run test deployment with awx-operator
working-directory: awx-operator
@@ -115,10 +148,6 @@ jobs:
sudo rm -f $(which kustomize)
make kustomize
KUSTOMIZE_PATH=$(readlink -f bin/kustomize) molecule test -s kind
env:
AWX_TEST_IMAGE: ${{ github.repository }}
AWX_TEST_VERSION: ${{ github.event.inputs.version }}
AWX_EE_TEST_IMAGE: ghcr.io/${{ github.repository_owner }}/awx-ee:${{ github.event.inputs.version }}
- name: Create draft release for AWX
working-directory: awx

View File

@@ -9,6 +9,7 @@ jobs:
name: Update Dependabot Prs
if: contains(github.event.pull_request.labels.*.name, 'dependencies') && contains(github.event.pull_request.labels.*.name, 'component:ui')
runs-on: ubuntu-latest
timeout-minutes: 20
steps:
- name: Checkout branch

View File

@@ -13,17 +13,18 @@ on:
jobs:
push:
runs-on: ubuntu-latest
timeout-minutes: 60
permissions:
packages: write
contents: read
steps:
- uses: actions/checkout@v2
- uses: actions/checkout@v3
- name: Get python version from Makefile
run: echo py_version=`make PYTHON_VERSION` >> $GITHUB_ENV
- name: Install python ${{ env.py_version }}
uses: actions/setup-python@v2
uses: actions/setup-python@v4
with:
python-version: ${{ env.py_version }}

12
.gitignore vendored
View File

@@ -46,6 +46,11 @@ tools/docker-compose/overrides/
tools/docker-compose-minikube/_sources
tools/docker-compose/keycloak.awx.realm.json
!tools/docker-compose/editable_dependencies
tools/docker-compose/editable_dependencies/*
!tools/docker-compose/editable_dependencies/README.md
!tools/docker-compose/editable_dependencies/install.sh
# Tower setup playbook testing
setup/test/roles/postgresql
**/provision_docker
@@ -165,3 +170,10 @@ use_dev_supervisor.txt
awx/ui_next/src
awx/ui_next/build
# Docs build stuff
docs/docsite/build/
_readthedocs/
# Pyenv
.python-version

5
.gitleaks.toml Normal file
View File

@@ -0,0 +1,5 @@
[allowlist]
description = "Documentation contains example secrets and passwords"
paths = [
"docs/docsite/rst/administration/oauth2_token_auth.rst",
]

5
.pip-tools.toml Normal file
View File

@@ -0,0 +1,5 @@
[tool.pip-tools]
resolver = "backtracking"
allow-unsafe = true
strip-extras = true
quiet = true

16
.readthedocs.yaml Normal file
View File

@@ -0,0 +1,16 @@
# Read the Docs configuration file
# See https://docs.readthedocs.io/en/stable/config-file/v2.html for details
version: 2
build:
os: ubuntu-22.04
tools:
python: >-
3.11
commands:
- pip install --user tox
- python3 -m tox -e docs --notest -v
- python3 -m tox -e docs --skip-pkg-install -q
- mkdir -p _readthedocs/html/
- mv docs/docsite/build/html/* _readthedocs/html/

113
.vscode/launch.json vendored Normal file
View File

@@ -0,0 +1,113 @@
{
"version": "0.2.0",
"configurations": [
{
"name": "run_ws_heartbeat",
"type": "debugpy",
"request": "launch",
"program": "manage.py",
"args": ["run_ws_heartbeat"],
"django": true,
"preLaunchTask": "stop awx-ws-heartbeat",
"postDebugTask": "start awx-ws-heartbeat"
},
{
"name": "run_cache_clear",
"type": "debugpy",
"request": "launch",
"program": "manage.py",
"args": ["run_cache_clear"],
"django": true,
"preLaunchTask": "stop awx-cache-clear",
"postDebugTask": "start awx-cache-clear"
},
{
"name": "run_callback_receiver",
"type": "debugpy",
"request": "launch",
"program": "manage.py",
"args": ["run_callback_receiver"],
"django": true,
"preLaunchTask": "stop awx-receiver",
"postDebugTask": "start awx-receiver"
},
{
"name": "run_dispatcher",
"type": "debugpy",
"request": "launch",
"program": "manage.py",
"args": ["run_dispatcher"],
"django": true,
"preLaunchTask": "stop awx-dispatcher",
"postDebugTask": "start awx-dispatcher"
},
{
"name": "run_rsyslog_configurer",
"type": "debugpy",
"request": "launch",
"program": "manage.py",
"args": ["run_rsyslog_configurer"],
"django": true,
"preLaunchTask": "stop awx-rsyslog-configurer",
"postDebugTask": "start awx-rsyslog-configurer"
},
{
"name": "run_cache_clear",
"type": "debugpy",
"request": "launch",
"program": "manage.py",
"args": ["run_cache_clear"],
"django": true,
"preLaunchTask": "stop awx-cache-clear",
"postDebugTask": "start awx-cache-clear"
},
{
"name": "run_wsrelay",
"type": "debugpy",
"request": "launch",
"program": "manage.py",
"args": ["run_wsrelay"],
"django": true,
"preLaunchTask": "stop awx-wsrelay",
"postDebugTask": "start awx-wsrelay"
},
{
"name": "daphne",
"type": "debugpy",
"request": "launch",
"program": "/var/lib/awx/venv/awx/bin/daphne",
"args": ["-b", "127.0.0.1", "-p", "8051", "awx.asgi:channel_layer"],
"django": true,
"preLaunchTask": "stop awx-daphne",
"postDebugTask": "start awx-daphne"
},
{
"name": "runserver(uwsgi alternative)",
"type": "debugpy",
"request": "launch",
"program": "manage.py",
"args": ["runserver", "127.0.0.1:8052"],
"django": true,
"preLaunchTask": "stop awx-uwsgi",
"postDebugTask": "start awx-uwsgi"
},
{
"name": "runserver_plus(uwsgi alternative)",
"type": "debugpy",
"request": "launch",
"program": "manage.py",
"args": ["runserver_plus", "127.0.0.1:8052"],
"django": true,
"preLaunchTask": "stop awx-uwsgi and install Werkzeug",
"postDebugTask": "start awx-uwsgi"
},
{
"name": "shell_plus",
"type": "debugpy",
"request": "launch",
"program": "manage.py",
"args": ["shell_plus"],
"django": true,
},
]
}

100
.vscode/tasks.json vendored Normal file
View File

@@ -0,0 +1,100 @@
{
"version": "2.0.0",
"tasks": [
{
"label": "start awx-cache-clear",
"type": "shell",
"command": "supervisorctl start tower-processes:awx-cache-clear"
},
{
"label": "stop awx-cache-clear",
"type": "shell",
"command": "supervisorctl stop tower-processes:awx-cache-clear"
},
{
"label": "start awx-daphne",
"type": "shell",
"command": "supervisorctl start tower-processes:awx-daphne"
},
{
"label": "stop awx-daphne",
"type": "shell",
"command": "supervisorctl stop tower-processes:awx-daphne"
},
{
"label": "start awx-dispatcher",
"type": "shell",
"command": "supervisorctl start tower-processes:awx-dispatcher"
},
{
"label": "stop awx-dispatcher",
"type": "shell",
"command": "supervisorctl stop tower-processes:awx-dispatcher"
},
{
"label": "start awx-receiver",
"type": "shell",
"command": "supervisorctl start tower-processes:awx-receiver"
},
{
"label": "stop awx-receiver",
"type": "shell",
"command": "supervisorctl stop tower-processes:awx-receiver"
},
{
"label": "start awx-rsyslog-configurer",
"type": "shell",
"command": "supervisorctl start tower-processes:awx-rsyslog-configurer"
},
{
"label": "stop awx-rsyslog-configurer",
"type": "shell",
"command": "supervisorctl stop tower-processes:awx-rsyslog-configurer"
},
{
"label": "start awx-rsyslogd",
"type": "shell",
"command": "supervisorctl start tower-processes:awx-rsyslogd"
},
{
"label": "stop awx-rsyslogd",
"type": "shell",
"command": "supervisorctl stop tower-processes:awx-rsyslogd"
},
{
"label": "start awx-uwsgi",
"type": "shell",
"command": "supervisorctl start tower-processes:awx-uwsgi"
},
{
"label": "stop awx-uwsgi",
"type": "shell",
"command": "supervisorctl stop tower-processes:awx-uwsgi"
},
{
"label": "stop awx-uwsgi and install Werkzeug",
"type": "shell",
"command": "pip install Werkzeug; supervisorctl stop tower-processes:awx-uwsgi"
},
{
"label": "start awx-ws-heartbeat",
"type": "shell",
"command": "supervisorctl start tower-processes:awx-ws-heartbeat"
},
{
"label": "stop awx-ws-heartbeat",
"type": "shell",
"command": "supervisorctl stop tower-processes:awx-ws-heartbeat"
},
{
"label": "start awx-wsrelay",
"type": "shell",
"command": "supervisorctl start tower-processes:awx-wsrelay"
},
{
"label": "stop awx-wsrelay",
"type": "shell",
"command": "supervisorctl stop tower-processes:awx-wsrelay"
}
]
}

View File

@@ -10,6 +10,7 @@ ignore: |
tools/docker-compose/_sources
# django template files
awx/api/templates/instance_install_bundle/**
.readthedocs.yaml
extends: default

View File

@@ -4,6 +4,6 @@
Early versions of AWX did not support seamless upgrades between major versions and required the use of a backup and restore tool to perform upgrades.
Users who wish to upgrade modern AWX installations should follow the instructions at:
As of version 18.0, `awx-operator` is the preferred install/upgrade method. Users who wish to upgrade modern AWX installations should follow the instructions at:
https://github.com/ansible/awx/blob/devel/INSTALL.md#upgrading-from-previous-versions
https://github.com/ansible/awx-operator/blob/devel/docs/upgrade/upgrading.md

View File

@@ -31,7 +31,7 @@ If your issue isn't considered high priority, then please be patient as it may t
`state:needs_info` The issue needs more information. This could be more debug output, or more specifics about the system such as version information. Any detail that is currently preventing this issue from moving forward. This should be considered a blocked state.
`state:needs_review` The issue/pull request needs to be reviewed by other maintainers and contributors. This is usually used when there is a question out to another maintainer or when a person is less familar with an area of the code base the issue is for.
`state:needs_review` The issue/pull request needs to be reviewed by other maintainers and contributors. This is usually used when there is a question out to another maintainer or when a person is less familiar with an area of the code base the issue is for.
`state:needs_revision` More commonly used on pull requests, this state represents that there are changes that are being waited on.
@@ -80,7 +80,7 @@ If any of those items are missing your pull request will still get the `needs_tr
Currently you can expect awxbot to add common labels such as `state:needs_triage`, `type:bug`, `component:docs`, etc...
These labels are determined by the template data. Please use the template and fill it out as accurately as possible.
The `state:needs_triage` label will will remain on your pull request until a person has looked at it.
The `state:needs_triage` label will remain on your pull request until a person has looked at it.
You can also expect the bot to CC maintainers of specific areas of the code, this will notify them that there is a pull request by placing a comment on the pull request.
The comment will look something like `CC @matburt @wwitzel3 ...`.

View File

@@ -22,7 +22,7 @@ recursive-exclude awx/settings local_settings.py*
include tools/scripts/request_tower_configuration.sh
include tools/scripts/request_tower_configuration.ps1
include tools/scripts/automation-controller-service
include tools/scripts/failure-event-handler
include tools/scripts/rsyslog-4xx-recovery
include tools/scripts/awx-python
include awx/playbooks/library/mkfifo.py
include tools/sosreport/*

121
Makefile
View File

@@ -1,14 +1,16 @@
-include awx/ui_next/Makefile
PYTHON := $(notdir $(shell for i in python3.9 python3; do command -v $$i; done|sed 1q))
DOCKER_COMPOSE ?= docker-compose
PYTHON := $(notdir $(shell for i in python3.11 python3; do command -v $$i; done|sed 1q))
SHELL := bash
DOCKER_COMPOSE ?= docker compose
OFFICIAL ?= no
NODE ?= node
NPM_BIN ?= npm
KIND_BIN ?= $(shell which kind)
CHROMIUM_BIN=/tmp/chrome-linux/chrome
GIT_BRANCH ?= $(shell git rev-parse --abbrev-ref HEAD)
MANAGEMENT_COMMAND ?= awx-manage
VERSION := $(shell $(PYTHON) tools/scripts/scm_version.py)
VERSION ?= $(shell $(PYTHON) tools/scripts/scm_version.py 2> /dev/null)
# ansible-test requires a semver-compatible version, so we allow overrides to hack it
COLLECTION_VERSION ?= $(shell $(PYTHON) tools/scripts/scm_version.py | cut -d . -f 1-3)
@@ -27,6 +29,8 @@ COLLECTION_TEMPLATE_VERSION ?= false
# NOTE: This defaults the container image version to the branch that's active
COMPOSE_TAG ?= $(GIT_BRANCH)
MAIN_NODE_TYPE ?= hybrid
# If set to true docker-compose will also start a pgbouncer instance and use it
PGBOUNCER ?= false
# If set to true docker-compose will also start a keycloak instance
KEYCLOAK ?= false
# If set to true docker-compose will also start an ldap instance
@@ -37,8 +41,14 @@ SPLUNK ?= false
PROMETHEUS ?= false
# If set to true docker-compose will also start a grafana instance
GRAFANA ?= false
# If set to true docker-compose will also start a hashicorp vault instance
VAULT ?= false
# If set to true docker-compose will also start a hashicorp vault instance with TLS enabled
VAULT_TLS ?= false
# If set to true docker-compose will also start a tacacs+ instance
TACACS ?= false
# If set to true docker-compose will install editable dependencies
EDITABLE_DEPENDENCIES ?= false
VENV_BASE ?= /var/lib/awx/venv
@@ -52,10 +62,10 @@ RECEPTOR_IMAGE ?= quay.io/ansible/receptor:devel
# Python packages to install only from source (not from binary wheels)
# Comma separated list
SRC_ONLY_PKGS ?= cffi,pycparser,psycopg2,twilio
SRC_ONLY_PKGS ?= cffi,pycparser,psycopg,twilio
# These should be upgraded in the AWX and Ansible venv before attempting
# to install the actual requirements
VENV_BOOTSTRAP ?= pip==21.2.4 setuptools==65.6.3 setuptools_scm[toml]==7.0.5 wheel==0.38.4
VENV_BOOTSTRAP ?= pip==21.2.4 setuptools==69.0.2 setuptools_scm[toml]==8.0.4 wheel==0.42.0
NAME ?= awx
@@ -67,13 +77,16 @@ SDIST_TAR_FILE ?= $(SDIST_TAR_NAME).tar.gz
I18N_FLAG_FILE = .i18n_built
## PLATFORMS defines the target platforms the manager image is built for, to provide support for multiple architectures
PLATFORMS ?= linux/amd64,linux/arm64 # linux/ppc64le,linux/s390x
.PHONY: awx-link clean clean-tmp clean-venv requirements requirements_dev \
develop refresh adduser migrate dbchange \
receiver test test_unit test_coverage coverage_html \
sdist \
ui-release ui-devel \
VERSION PYTHON_VERSION docker-compose-sources \
.git/hooks/pre-commit github_ci_setup github_ci_runner
.git/hooks/pre-commit
clean-tmp:
rm -rf tmp/
@@ -205,8 +218,6 @@ collectstatic:
fi; \
$(PYTHON) manage.py collectstatic --clear --noinput > /dev/null 2>&1
DEV_RELOAD_COMMAND ?= supervisorctl restart tower-processes:*
uwsgi: collectstatic
@if [ "$(VENV_BASE)" ]; then \
. $(VENV_BASE)/awx/bin/activate; \
@@ -214,7 +225,7 @@ uwsgi: collectstatic
uwsgi /etc/tower/uwsgi.ini
awx-autoreload:
@/awx_devel/tools/docker-compose/awx-autoreload /awx_devel/awx "$(DEV_RELOAD_COMMAND)"
@/awx_devel/tools/docker-compose/awx-autoreload /awx_devel/awx
daphne:
@if [ "$(VENV_BASE)" ]; then \
@@ -267,11 +278,11 @@ run-wsrelay:
$(PYTHON) manage.py run_wsrelay
## Start the heartbeat process in background in development environment.
run-heartbeet:
run-ws-heartbeat:
@if [ "$(VENV_BASE)" ]; then \
. $(VENV_BASE)/awx/bin/activate; \
fi; \
$(PYTHON) manage.py run_heartbeet
$(PYTHON) manage.py run_ws_heartbeat
reports:
mkdir -p $@
@@ -294,7 +305,7 @@ swagger: reports
@if [ "$(VENV_BASE)" ]; then \
. $(VENV_BASE)/awx/bin/activate; \
fi; \
(set -o pipefail && py.test $(PYTEST_ARGS) awx/conf/tests/functional awx/main/tests/functional/api awx/main/tests/docs --release=$(VERSION_TARGET) | tee reports/$@.report)
(set -o pipefail && py.test $(PYTEST_ARGS) awx/conf/tests/functional awx/main/tests/functional/api awx/main/tests/docs | tee reports/$@.report)
check: black
@@ -318,21 +329,16 @@ test:
cd awxkit && $(VENV_BASE)/awx/bin/tox -re py3
awx-manage check_migrations --dry-run --check -n 'missing_migration_file'
## Login to Github container image registry, pull image, then build image.
github_ci_setup:
# GITHUB_ACTOR is automatic github actions env var
# CI_GITHUB_TOKEN is defined in .github files
echo $(CI_GITHUB_TOKEN) | docker login ghcr.io -u $(GITHUB_ACTOR) --password-stdin
docker pull $(DEVEL_IMAGE_NAME) || : # Pre-pull image to warm build cache
$(MAKE) docker-compose-build
test_migrations:
if [ "$(VENV_BASE)" ]; then \
. $(VENV_BASE)/awx/bin/activate; \
fi; \
PYTHONDONTWRITEBYTECODE=1 py.test -p no:cacheprovider --migrations -m migration_test $(PYTEST_ARGS) $(TEST_DIRS)
## Runs AWX_DOCKER_CMD inside a new docker container.
docker-runner:
docker run -u $(shell id -u) --rm -v $(shell pwd):/awx_devel/:Z --workdir=/awx_devel $(DEVEL_IMAGE_NAME) $(AWX_DOCKER_CMD)
## Builds image and runs AWX_DOCKER_CMD in it, mainly for .github checks.
github_ci_runner: github_ci_setup docker-runner
test_collection:
rm -f $(shell ls -d $(VENV_BASE)/awx/lib/python* | head -n 1)/no-global-site-packages.txt
if [ "$(VENV_BASE)" ]; then \
@@ -378,7 +384,7 @@ test_collection_sanity:
cd $(COLLECTION_INSTALL) && ansible-test sanity $(COLLECTION_SANITY_ARGS)
test_collection_integration: install_collection
cd $(COLLECTION_INSTALL) && ansible-test integration $(COLLECTION_TEST_TARGET)
cd $(COLLECTION_INSTALL) && ansible-test integration -vvv $(COLLECTION_TEST_TARGET)
test_unit:
@if [ "$(VENV_BASE)" ]; then \
@@ -520,17 +526,32 @@ docker-compose-sources: .git/hooks/pre-commit
-e control_plane_node_count=$(CONTROL_PLANE_NODE_COUNT) \
-e execution_node_count=$(EXECUTION_NODE_COUNT) \
-e minikube_container_group=$(MINIKUBE_CONTAINER_GROUP) \
-e enable_pgbouncer=$(PGBOUNCER) \
-e enable_keycloak=$(KEYCLOAK) \
-e enable_ldap=$(LDAP) \
-e enable_splunk=$(SPLUNK) \
-e enable_prometheus=$(PROMETHEUS) \
-e enable_grafana=$(GRAFANA) \
-e enable_vault=$(VAULT) \
-e vault_tls=$(VAULT_TLS) \
-e enable_tacacs=$(TACACS) \
$(EXTRA_SOURCES_ANSIBLE_OPTS)
-e install_editable_dependencies=$(EDITABLE_DEPENDENCIES) \
$(EXTRA_SOURCES_ANSIBLE_OPTS)
docker-compose: awx/projects docker-compose-sources
ansible-galaxy install --ignore-certs -r tools/docker-compose/ansible/requirements.yml;
ansible-playbook -i tools/docker-compose/inventory tools/docker-compose/ansible/initialize_containers.yml \
-e enable_vault=$(VAULT) \
-e vault_tls=$(VAULT_TLS) \
-e enable_ldap=$(LDAP); \
$(MAKE) docker-compose-up
docker-compose-up:
$(DOCKER_COMPOSE) -f tools/docker-compose/_sources/docker-compose.yml $(COMPOSE_OPTS) up $(COMPOSE_UP_OPTS) --remove-orphans
docker-compose-down:
$(DOCKER_COMPOSE) -f tools/docker-compose/_sources/docker-compose.yml $(COMPOSE_OPTS) down --remove-orphans
docker-compose-credential-plugins: awx/projects docker-compose-sources
echo -e "\033[0;31mTo generate a CyberArk Conjur API key: docker exec -it tools_conjur_1 conjurctl account create quick-start\033[0m"
$(DOCKER_COMPOSE) -f tools/docker-compose/_sources/docker-compose.yml -f tools/docker-credential-plugins-override.yml up --no-recreate awx_1 --remove-orphans
@@ -575,12 +596,27 @@ docker-compose-build: Dockerfile.dev
--build-arg BUILDKIT_INLINE_CACHE=1 \
--cache-from=$(DEV_DOCKER_TAG_BASE)/awx_devel:$(COMPOSE_TAG) .
.PHONY: docker-compose-buildx
## Build awx_devel image for docker compose development environment for multiple architectures
docker-compose-buildx: Dockerfile.dev
- docker buildx create --name docker-compose-buildx
docker buildx use docker-compose-buildx
- docker buildx build \
--push \
--build-arg BUILDKIT_INLINE_CACHE=1 \
--cache-from=$(DEV_DOCKER_TAG_BASE)/awx_devel:$(COMPOSE_TAG) \
--platform=$(PLATFORMS) \
--tag $(DEVEL_IMAGE_NAME) \
-f Dockerfile.dev .
- docker buildx rm docker-compose-buildx
docker-clean:
-$(foreach container_id,$(shell docker ps -f name=tools_awx -aq && docker ps -f name=tools_receptor -aq),docker stop $(container_id); docker rm -f $(container_id);)
-$(foreach image_id,$(shell docker images --filter=reference='*/*/*awx_devel*' --filter=reference='*/*awx_devel*' --filter=reference='*awx_devel*' -aq),docker rmi --force $(image_id);)
docker-clean-volumes: docker-compose-clean docker-compose-container-group-clean
docker volume rm -f tools_awx_db tools_grafana_storage tools_prometheus_storage $(docker volume ls --filter name=tools_redis_socket_ -q)
docker volume rm -f tools_var_lib_awx tools_awx_db tools_awx_db_15 tools_vault_1 tools_ldap_1 tools_grafana_storage tools_prometheus_storage $(shell docker volume ls --filter name=tools_redis_socket_ -q)
docker-refresh: docker-clean docker-compose
@@ -602,9 +638,6 @@ clean-elk:
docker rm tools_elasticsearch_1
docker rm tools_kibana_1
psql-container:
docker run -it --net tools_default --rm postgres:12 sh -c 'exec psql -h "postgres" -p "5432" -U postgres'
VERSION:
@echo "awx: $(VERSION)"
@@ -637,6 +670,21 @@ awx-kube-build: Dockerfile
--build-arg HEADLESS=$(HEADLESS) \
-t $(DEV_DOCKER_TAG_BASE)/awx:$(COMPOSE_TAG) .
## Build multi-arch awx image for deployment on Kubernetes environment.
awx-kube-buildx: Dockerfile
- docker buildx create --name awx-kube-buildx
docker buildx use awx-kube-buildx
- docker buildx build \
--push \
--build-arg VERSION=$(VERSION) \
--build-arg SETUPTOOLS_SCM_PRETEND_VERSION=$(VERSION) \
--build-arg HEADLESS=$(HEADLESS) \
--platform=$(PLATFORMS) \
--tag $(DEV_DOCKER_TAG_BASE)/awx:$(COMPOSE_TAG) \
-f Dockerfile .
- docker buildx rm awx-kube-buildx
.PHONY: Dockerfile.kube-dev
## Generate Docker.kube-dev for awx_kube_devel image
Dockerfile.kube-dev: tools/ansible/roles/dockerfile/templates/Dockerfile.j2
@@ -653,6 +701,21 @@ awx-kube-dev-build: Dockerfile.kube-dev
--cache-from=$(DEV_DOCKER_TAG_BASE)/awx_kube_devel:$(COMPOSE_TAG) \
-t $(DEV_DOCKER_TAG_BASE)/awx_kube_devel:$(COMPOSE_TAG) .
## Build and push multi-arch awx_kube_devel image for development on local Kubernetes environment.
awx-kube-dev-buildx: Dockerfile.kube-dev
- docker buildx create --name awx-kube-dev-buildx
docker buildx use awx-kube-dev-buildx
- docker buildx build \
--push \
--build-arg BUILDKIT_INLINE_CACHE=1 \
--cache-from=$(DEV_DOCKER_TAG_BASE)/awx_kube_devel:$(COMPOSE_TAG) \
--platform=$(PLATFORMS) \
--tag $(DEV_DOCKER_TAG_BASE)/awx_kube_devel:$(COMPOSE_TAG) \
-f Dockerfile.kube-dev .
- docker buildx rm awx-kube-dev-buildx
kind-dev-load: awx-kube-dev-build
$(KIND_BIN) load docker-image $(DEV_DOCKER_TAG_BASE)/awx_kube_devel:$(COMPOSE_TAG)
# Translation TASKS
# --------------------------------------
@@ -660,10 +723,12 @@ awx-kube-dev-build: Dockerfile.kube-dev
## generate UI .pot file, an empty template of strings yet to be translated
pot: $(UI_BUILD_FLAG_FILE)
$(NPM_BIN) --prefix awx/ui --loglevel warn run extract-template --clean
$(NPM_BIN) --prefix awx/ui_next --loglevel warn run extract-template --clean
## generate UI .po files for each locale (will update translated strings for `en`)
po: $(UI_BUILD_FLAG_FILE)
$(NPM_BIN) --prefix awx/ui --loglevel warn run extract-strings -- --clean
$(NPM_BIN) --prefix awx/ui_next --loglevel warn run extract-strings -- --clean
## generate API django .pot .po
messages:

View File

@@ -1,5 +1,5 @@
[![CI](https://github.com/ansible/awx/actions/workflows/ci.yml/badge.svg?branch=devel)](https://github.com/ansible/awx/actions/workflows/ci.yml) [![Code of Conduct](https://img.shields.io/badge/code%20of%20conduct-Ansible-yellow.svg)](https://docs.ansible.com/ansible/latest/community/code_of_conduct.html) [![Apache v2 License](https://img.shields.io/badge/license-Apache%202.0-brightgreen.svg)](https://github.com/ansible/awx/blob/devel/LICENSE.md) [![AWX Mailing List](https://img.shields.io/badge/mailing%20list-AWX-orange.svg)](https://groups.google.com/g/awx-project)
[![IRC Chat - #ansible-awx](https://img.shields.io/badge/IRC-%23ansible--awx-blueviolet.svg)](https://libera.chat)
[![Ansible Matrix](https://img.shields.io/badge/matrix-Ansible%20Community-blueviolet.svg?logo=matrix)](https://chat.ansible.im/#/welcome) [![Ansible Discourse](https://img.shields.io/badge/discourse-Ansible%20Community-yellowgreen.svg?logo=discourse)](https://forum.ansible.com)
<img src="https://raw.githubusercontent.com/ansible/awx-logos/master/awx/ui/client/assets/logo-login.svg?sanitize=true" width=200 alt="AWX" />
@@ -7,7 +7,7 @@ AWX provides a web-based user interface, REST API, and task engine built on top
To install AWX, please view the [Install guide](./INSTALL.md).
To learn more about using AWX, and Tower, view the [Tower docs site](http://docs.ansible.com/ansible-tower/index.html).
To learn more about using AWX, view the [AWX docs site](https://ansible.readthedocs.io/projects/awx/en/latest/).
The AWX Project Frequently Asked Questions can be found [here](https://www.ansible.com/awx-project-faq).
@@ -30,12 +30,12 @@ If you're experiencing a problem that you feel is a bug in AWX or have ideas for
Code of Conduct
---------------
We ask all of our community members and contributors to adhere to the [Ansible code of conduct](http://docs.ansible.com/ansible/latest/community/code_of_conduct.html). If you have questions or need assistance, please reach out to our community team at [codeofconduct@ansible.com](mailto:codeofconduct@ansible.com)
We ask all of our community members and contributors to adhere to the [Ansible code of conduct](http://docs.ansible.com/ansible/latest/community/code_of_conduct.html). If you have questions or need assistance, please reach out to our community team at [codeofconduct@ansible.com](mailto:codeofconduct@ansible.com)
Get Involved
------------
We welcome your feedback and ideas. Here's how to reach us with feedback and questions:
- Join the `#ansible-awx` channel on irc.libera.chat
- Join the [mailing list](https://groups.google.com/forum/#!forum/awx-project)
- Join the [Ansible AWX channel on Matrix](https://matrix.to/#/#awx:ansible.com)
- Join the [Ansible Community Forum](https://forum.ansible.com)

View File

@@ -52,39 +52,14 @@ try:
except ImportError: # pragma: no cover
MODE = 'production'
import hashlib
try:
import django # noqa: F401
HAS_DJANGO = True
except ImportError:
HAS_DJANGO = False
pass
else:
from django.db.backends.base import schema
from django.db.models import indexes
from django.db.backends.utils import names_digest
from django.db import connection
if HAS_DJANGO is True:
# See upgrade blocker note in requirements/README.md
try:
names_digest('foo', 'bar', 'baz', length=8)
except ValueError:
def names_digest(*args, length):
"""
Generate a 32-bit digest of a set of arguments that can be used to shorten
identifying names. Support for use in FIPS environments.
"""
h = hashlib.md5(usedforsecurity=False)
for arg in args:
h.update(arg.encode())
return h.hexdigest()[:length]
schema.names_digest = names_digest
indexes.names_digest = names_digest
def find_commands(management_dir):
# Modified version of function from django/core/management/__init__.py.
@@ -179,10 +154,12 @@ def manage():
from django.conf import settings
from django.core.management import execute_from_command_line
# enforce the postgres version is equal to 12. if not, then terminate program with exit code of 1
# enforce the postgres version is a minimum of 12 (we need this for partitioning); if not, then terminate program with exit code of 1
# In the future if we require a feature of a version of postgres > 12 this should be updated to reflect that.
# The return of connection.pg_version is something like 120013 (major * 10000 + minor)
if not os.getenv('SKIP_PG_VERSION_CHECK', False) and not MODE == 'development':
if (connection.pg_version // 10000) < 12:
sys.stderr.write("Postgres version 12 is required\n")
sys.stderr.write("At a minimum, postgres version 12 is required\n")
sys.exit(1)
if len(sys.argv) >= 2 and sys.argv[1] in ('version', '--version'): # pragma: no cover
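The hunk above relaxes the gate from requiring exactly PostgreSQL 12 to requiring at least 12. A minimal sketch of the version arithmetic it relies on, assuming Django packs the server version as major * 10000 + minor (so PostgreSQL 12.13 reports 120013):

def postgres_major(pg_version: int) -> int:
    # Assumption: pg_version is Django's packed integer, major * 10000 + minor.
    return pg_version // 10000

assert postgres_major(120013) >= 12  # PostgreSQL 12.13 passes the gate
assert postgres_major(110005) < 12   # PostgreSQL 11.5 would hit the sys.exit(1) branch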

View File

@@ -93,6 +93,7 @@ register(
default='',
label=_('Login redirect override URL'),
help_text=_('URL to which unauthorized users will be redirected to log in. If blank, users will be sent to the login page.'),
warning_text=_('Changing the redirect URL could impact the ability to login if local authentication is also disabled.'),
category=_('Authentication'),
category_slug='authentication',
)

View File

@@ -1,450 +0,0 @@
# Copyright (c) 2015 Ansible, Inc.
# All Rights Reserved.
# Python
import re
import json
from functools import reduce
# Django
from django.core.exceptions import FieldError, ValidationError, FieldDoesNotExist
from django.db import models
from django.db.models import Q, CharField, IntegerField, BooleanField, TextField, JSONField
from django.db.models.fields.related import ForeignObjectRel, ManyToManyField, ForeignKey
from django.db.models.functions import Cast
from django.contrib.contenttypes.models import ContentType
from django.contrib.contenttypes.fields import GenericForeignKey
from django.utils.encoding import force_str
from django.utils.translation import gettext_lazy as _
# Django REST Framework
from rest_framework.exceptions import ParseError, PermissionDenied
from rest_framework.filters import BaseFilterBackend
# AWX
from awx.main.utils import get_type_for_model, to_python_boolean
from awx.main.utils.db import get_all_field_names
class TypeFilterBackend(BaseFilterBackend):
"""
Filter on type field now returned with all objects.
"""
def filter_queryset(self, request, queryset, view):
try:
types = None
for key, value in request.query_params.items():
if key == 'type':
if ',' in value:
types = value.split(',')
else:
types = (value,)
if types:
types_map = {}
for ct in ContentType.objects.filter(Q(app_label='main') | Q(app_label='auth', model='user')):
ct_model = ct.model_class()
if not ct_model:
continue
ct_type = get_type_for_model(ct_model)
types_map[ct_type] = ct.pk
model = queryset.model
model_type = get_type_for_model(model)
if 'polymorphic_ctype' in get_all_field_names(model):
types_pks = set([v for k, v in types_map.items() if k in types])
queryset = queryset.filter(polymorphic_ctype_id__in=types_pks)
elif model_type in types:
queryset = queryset
else:
queryset = queryset.none()
return queryset
except FieldError as e:
# Return a 400 for invalid field names.
raise ParseError(*e.args)
def get_fields_from_path(model, path):
"""
Given a Django ORM lookup path (possibly over multiple models)
Returns the fields in the line, and also the revised lookup path
ex., given
model=Organization
path='project__timeout'
returns tuple of fields traversed as well and a corrected path,
for special cases we do substitutions
([<IntegerField for timeout>], 'project__timeout')
"""
# Store of all the fields used to detect repeats
field_list = []
new_parts = []
for name in path.split('__'):
if model is None:
raise ParseError(_('No related model for field {}.').format(name))
# HACK: Make project and inventory source filtering by old field names work for backwards compatibility.
if model._meta.object_name in ('Project', 'InventorySource'):
name = {'current_update': 'current_job', 'last_update': 'last_job', 'last_update_failed': 'last_job_failed', 'last_updated': 'last_job_run'}.get(
name, name
)
if name == 'type' and 'polymorphic_ctype' in get_all_field_names(model):
name = 'polymorphic_ctype'
new_parts.append('polymorphic_ctype__model')
else:
new_parts.append(name)
if name in getattr(model, 'PASSWORD_FIELDS', ()):
raise PermissionDenied(_('Filtering on password fields is not allowed.'))
elif name == 'pk':
field = model._meta.pk
else:
name_alt = name.replace("_", "")
if name_alt in model._meta.fields_map.keys():
field = model._meta.fields_map[name_alt]
new_parts.pop()
new_parts.append(name_alt)
else:
field = model._meta.get_field(name)
if isinstance(field, ForeignObjectRel) and getattr(field.field, '__prevent_search__', False):
raise PermissionDenied(_('Filtering on %s is not allowed.' % name))
elif getattr(field, '__prevent_search__', False):
raise PermissionDenied(_('Filtering on %s is not allowed.' % name))
if field in field_list:
# Field traversed twice, could create infinite JOINs, DoSing Tower
raise ParseError(_('Loops not allowed in filters, detected on field {}.').format(field.name))
field_list.append(field)
model = getattr(field, 'related_model', None)
return field_list, '__'.join(new_parts)
def get_field_from_path(model, path):
"""
Given a Django ORM lookup path (possibly over multiple models)
Returns the last field in the line, and the revised lookup path
ex.
(<IntegerField for timeout>, 'project__timeout')
"""
field_list, new_path = get_fields_from_path(model, path)
return (field_list[-1], new_path)
class FieldLookupBackend(BaseFilterBackend):
"""
Filter using field lookups provided via query string parameters.
"""
RESERVED_NAMES = ('page', 'page_size', 'format', 'order', 'order_by', 'search', 'type', 'host_filter', 'count_disabled', 'no_truncate', 'limit')
SUPPORTED_LOOKUPS = (
'exact',
'iexact',
'contains',
'icontains',
'startswith',
'istartswith',
'endswith',
'iendswith',
'regex',
'iregex',
'gt',
'gte',
'lt',
'lte',
'in',
'isnull',
'search',
)
# A list of fields that we know can be filtered on without the possibility
# of introducing duplicates
NO_DUPLICATES_ALLOW_LIST = (CharField, IntegerField, BooleanField, TextField)
def get_fields_from_lookup(self, model, lookup):
if '__' in lookup and lookup.rsplit('__', 1)[-1] in self.SUPPORTED_LOOKUPS:
path, suffix = lookup.rsplit('__', 1)
else:
path = lookup
suffix = 'exact'
if not path:
raise ParseError(_('Query string field name not provided.'))
# FIXME: Could build up a list of models used across relationships, use
# those lookups combined with request.user.get_queryset(Model) to make
# sure user cannot query using objects he could not view.
field_list, new_path = get_fields_from_path(model, path)
new_lookup = new_path
new_lookup = '__'.join([new_path, suffix])
return field_list, new_lookup
def get_field_from_lookup(self, model, lookup):
'''Method to match return type of single field, if needed.'''
field_list, new_lookup = self.get_fields_from_lookup(model, lookup)
return (field_list[-1], new_lookup)
def to_python_related(self, value):
value = force_str(value)
if value.lower() in ('none', 'null'):
return None
else:
return int(value)
def value_to_python_for_field(self, field, value):
if isinstance(field, models.BooleanField):
return to_python_boolean(value)
elif isinstance(field, (ForeignObjectRel, ManyToManyField, GenericForeignKey, ForeignKey)):
try:
return self.to_python_related(value)
except ValueError:
raise ParseError(_('Invalid {field_name} id: {field_id}').format(field_name=getattr(field, 'name', 'related field'), field_id=value))
else:
return field.to_python(value)
def value_to_python(self, model, lookup, value):
try:
lookup.encode("ascii")
except UnicodeEncodeError:
raise ValueError("%r is not an allowed field name. Must be ascii encodable." % lookup)
field_list, new_lookup = self.get_fields_from_lookup(model, lookup)
field = field_list[-1]
needs_distinct = not all(isinstance(f, self.NO_DUPLICATES_ALLOW_LIST) for f in field_list)
# Type names are stored without underscores internally, but are presented and
# serialized over the API containing underscores, so we remove `_`
# for polymorphic_ctype__model lookups.
if new_lookup.startswith('polymorphic_ctype__model'):
value = value.replace('_', '')
elif new_lookup.endswith('__isnull'):
value = to_python_boolean(value)
elif new_lookup.endswith('__in'):
items = []
if not value:
raise ValueError('cannot provide empty value for __in')
for item in value.split(','):
items.append(self.value_to_python_for_field(field, item))
value = items
elif new_lookup.endswith('__regex') or new_lookup.endswith('__iregex'):
try:
re.compile(value)
except re.error as e:
raise ValueError(e.args[0])
elif new_lookup.endswith('__iexact'):
if not isinstance(field, (CharField, TextField)):
raise ValueError(f'{field.name} is not a text field and cannot be filtered by case-insensitive search')
elif new_lookup.endswith('__search'):
related_model = getattr(field, 'related_model', None)
if not related_model:
raise ValueError('%s is not searchable' % new_lookup[:-8])
new_lookups = []
for rm_field in related_model._meta.fields:
if rm_field.name in ('username', 'first_name', 'last_name', 'email', 'name', 'description', 'playbook'):
new_lookups.append('{}__{}__icontains'.format(new_lookup[:-8], rm_field.name))
return value, new_lookups, needs_distinct
else:
if isinstance(field, JSONField):
new_lookup = new_lookup.replace(field.name, f'{field.name}_as_txt')
value = self.value_to_python_for_field(field, value)
return value, new_lookup, needs_distinct
def filter_queryset(self, request, queryset, view):
try:
# Apply filters specified via query_params. Each entry in the lists
# below is (negate, field, value).
and_filters = []
or_filters = []
chain_filters = []
role_filters = []
search_filters = {}
needs_distinct = False
# Can only have two values: 'AND', 'OR'
# If 'AND' is used, an item must satisfy all conditions to show up in the results.
# If 'OR' is used, an item just needs to satisfy one condition to appear in results.
search_filter_relation = 'OR'
for key, values in request.query_params.lists():
if key in self.RESERVED_NAMES:
continue
# HACK: make `created` available via API for the Django User ORM model
# so it keeps compatibility with other objects which expose the `created` attr.
if queryset.model._meta.object_name == 'User' and key.startswith('created'):
key = key.replace('created', 'date_joined')
# HACK: Make job event filtering by host name mostly work even
# when not capturing job event hosts M2M.
if queryset.model._meta.object_name == 'JobEvent' and key.startswith('hosts__name'):
key = key.replace('hosts__name', 'or__host__name')
or_filters.append((False, 'host__name__isnull', True))
# Custom __int filter suffix (internal use only).
q_int = False
if key.endswith('__int'):
key = key[:-5]
q_int = True
# RBAC filtering
if key == 'role_level':
role_filters.append(values[0])
continue
# Search across related objects.
if key.endswith('__search'):
if values and ',' in values[0]:
search_filter_relation = 'AND'
values = reduce(lambda list1, list2: list1 + list2, [i.split(',') for i in values])
for value in values:
search_value, new_keys, _ = self.value_to_python(queryset.model, key, force_str(value))
assert isinstance(new_keys, list)
search_filters[search_value] = new_keys
# by definition, search *only* joins across relations,
# so it _always_ needs a .distinct()
needs_distinct = True
continue
# Custom chain__ and or__ filters, mutually exclusive (both can
# precede not__).
q_chain = False
q_or = False
if key.startswith('chain__'):
key = key[7:]
q_chain = True
elif key.startswith('or__'):
key = key[4:]
q_or = True
# Custom not__ filter prefix.
q_not = False
if key.startswith('not__'):
key = key[5:]
q_not = True
# Convert value(s) to python and add to the appropriate list.
for value in values:
if q_int:
value = int(value)
value, new_key, distinct = self.value_to_python(queryset.model, key, value)
if distinct:
needs_distinct = True
if '_as_txt' in new_key:
fname = next(item for item in new_key.split('__') if item.endswith('_as_txt'))
queryset = queryset.annotate(**{fname: Cast(fname[:-7], output_field=TextField())})
if q_chain:
chain_filters.append((q_not, new_key, value))
elif q_or:
or_filters.append((q_not, new_key, value))
else:
and_filters.append((q_not, new_key, value))
# Now build Q objects for database query filter.
if and_filters or or_filters or chain_filters or role_filters or search_filters:
args = []
for n, k, v in and_filters:
if n:
args.append(~Q(**{k: v}))
else:
args.append(Q(**{k: v}))
for role_name in role_filters:
if not hasattr(queryset.model, 'accessible_pk_qs'):
raise ParseError(_('Cannot apply role_level filter to this list because its model ' 'does not use roles for access control.'))
args.append(Q(pk__in=queryset.model.accessible_pk_qs(request.user, role_name)))
if or_filters:
q = Q()
for n, k, v in or_filters:
if n:
q |= ~Q(**{k: v})
else:
q |= Q(**{k: v})
args.append(q)
if search_filters and search_filter_relation == 'OR':
q = Q()
for term, constrains in search_filters.items():
for constrain in constrains:
q |= Q(**{constrain: term})
args.append(q)
elif search_filters and search_filter_relation == 'AND':
for term, constrains in search_filters.items():
q_chain = Q()
for constrain in constrains:
q_chain |= Q(**{constrain: term})
queryset = queryset.filter(q_chain)
for n, k, v in chain_filters:
if n:
q = ~Q(**{k: v})
else:
q = Q(**{k: v})
queryset = queryset.filter(q)
queryset = queryset.filter(*args)
if needs_distinct:
queryset = queryset.distinct()
return queryset
except (FieldError, FieldDoesNotExist, ValueError, TypeError) as e:
raise ParseError(e.args[0])
except ValidationError as e:
raise ParseError(json.dumps(e.messages, ensure_ascii=False))
class OrderByBackend(BaseFilterBackend):
"""
Filter to apply ordering based on query string parameters.
"""
def filter_queryset(self, request, queryset, view):
try:
order_by = None
for key, value in request.query_params.items():
if key in ('order', 'order_by'):
order_by = value
if ',' in value:
order_by = value.split(',')
else:
order_by = (value,)
default_order_by = self.get_default_ordering(view)
# glue the order by and default order by together so that the default is the backup option
order_by = list(order_by or []) + list(default_order_by or [])
if order_by:
order_by = self._validate_ordering_fields(queryset.model, order_by)
# Special handling of the type field for ordering. In this
# case, we're not sorting exactly on the type field, but
# given the limited number of views with multiple types,
# sorting on polymorphic_ctype.model is effectively the same.
new_order_by = []
if 'polymorphic_ctype' in get_all_field_names(queryset.model):
for field in order_by:
if field == 'type':
new_order_by.append('polymorphic_ctype__model')
elif field == '-type':
new_order_by.append('-polymorphic_ctype__model')
else:
new_order_by.append(field)
else:
for field in order_by:
if field not in ('type', '-type'):
new_order_by.append(field)
queryset = queryset.order_by(*new_order_by)
return queryset
except FieldError as e:
# Return a 400 for invalid field names.
raise ParseError(*e.args)
def get_default_ordering(self, view):
ordering = getattr(view, 'ordering', None)
if isinstance(ordering, str):
return (ordering,)
return ordering
def _validate_ordering_fields(self, model, order_by):
for field_name in order_by:
# strip off the negation prefix `-` if it exists
prefix = ''
path = field_name
if field_name[0] == '-':
prefix = field_name[0]
path = field_name[1:]
try:
field, new_path = get_field_from_path(model, path)
new_path = '{}{}'.format(prefix, new_path)
except (FieldError, FieldDoesNotExist) as e:
raise ParseError(e.args[0])
yield new_path
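The FieldLookupBackend and OrderByBackend removed above turn query-string keys into ORM lookups (AWX now imports FieldLookupBackend from ansible_base.rest_filters, as the next file shows). A small sketch of the kinds of parameters they interpret; the /api/v2/hosts/ path and field names are only illustrative:

from urllib.parse import urlencode

params = {
    'name__icontains': 'web',         # plain lookup from SUPPORTED_LOOKUPS
    'or__enabled': 'true',            # or__ prefix: ORed with the other or__ filters
    'not__inventory__kind': 'smart',  # not__ prefix negates the lookup
    'created__gt': '2024-01-01',      # comparison lookup
    'inventory__search': 'prod',      # __search fans out to icontains on related name/description fields
    'order_by': '-created',           # consumed by OrderByBackend, not FieldLookupBackend
}
print('/api/v2/hosts/?' + urlencode(params))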

View File

@@ -30,12 +30,17 @@ from rest_framework.permissions import IsAuthenticated
from rest_framework.renderers import StaticHTMLRenderer
from rest_framework.negotiation import DefaultContentNegotiation
# django-ansible-base
from ansible_base.rest_filters.rest_framework.field_lookup_backend import FieldLookupBackend
from ansible_base.lib.utils.models import get_all_field_names
from ansible_base.rbac.models import RoleEvaluation, RoleDefinition
from ansible_base.rbac.permission_registry import permission_registry
# AWX
from awx.api.filters import FieldLookupBackend
from awx.main.models import UnifiedJob, UnifiedJobTemplate, User, Role, Credential, WorkflowJobTemplateNode, WorkflowApprovalTemplate
from awx.main.models.rbac import give_creator_permissions
from awx.main.access import optimize_queryset
from awx.main.utils import camelcase_to_underscore, get_search_fields, getattrd, get_object_or_400, decrypt_field, get_awx_version
from awx.main.utils.db import get_all_field_names
from awx.main.utils.licensing import server_product_name
from awx.main.views import ApiErrorView
from awx.api.serializers import ResourceAccessListElementSerializer, CopySerializer
@@ -90,7 +95,9 @@ class LoggedLoginView(auth_views.LoginView):
ret = super(LoggedLoginView, self).post(request, *args, **kwargs)
if request.user.is_authenticated:
logger.info(smart_str(u"User {} logged in from {}".format(self.request.user.username, request.META.get('REMOTE_ADDR', None))))
ret.set_cookie('userLoggedIn', 'true')
ret.set_cookie(
'userLoggedIn', 'true', secure=getattr(settings, 'SESSION_COOKIE_SECURE', False), samesite=getattr(settings, 'USER_COOKIE_SAMESITE', 'Lax')
)
ret.setdefault('X-API-Session-Cookie-Name', getattr(settings, 'SESSION_COOKIE_NAME', 'awx_sessionid'))
return ret
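The set_cookie changes above and below read their flags from Django settings; a minimal sketch of those settings, where the values are illustrative examples rather than AWX defaults (only the names SESSION_COOKIE_SECURE and USER_COOKIE_SAMESITE come from the change itself):

# Illustrative values only.
SESSION_COOKIE_SECURE = True      # send the userLoggedIn cookie only over HTTPS
USER_COOKIE_SAMESITE = 'Strict'   # or 'Lax' / 'None', per the cookie SameSite attribute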
@@ -106,7 +113,7 @@ class LoggedLogoutView(auth_views.LogoutView):
original_user = getattr(request, 'user', None)
ret = super(LoggedLogoutView, self).dispatch(request, *args, **kwargs)
current_user = getattr(request, 'user', None)
ret.set_cookie('userLoggedIn', 'false')
ret.set_cookie('userLoggedIn', 'false', secure=getattr(settings, 'SESSION_COOKIE_SECURE', False))
if (not current_user or not getattr(current_user, 'pk', True)) and current_user != original_user:
logger.info("User {} logged out.".format(original_user.username))
return ret
@@ -169,7 +176,7 @@ class APIView(views.APIView):
self.__init_request_error__ = exc
except UnsupportedMediaType as exc:
exc.detail = _(
'You did not use correct Content-Type in your HTTP request. ' 'If you are using our REST API, the Content-Type must be application/json'
'You did not use correct Content-Type in your HTTP request. If you are using our REST API, the Content-Type must be application/json'
)
self.__init_request_error__ = exc
return drf_request
@@ -232,7 +239,8 @@ class APIView(views.APIView):
response = super(APIView, self).finalize_response(request, response, *args, **kwargs)
time_started = getattr(self, 'time_started', None)
response['X-API-Product-Version'] = get_awx_version()
if request.user.is_authenticated:
response['X-API-Product-Version'] = get_awx_version()
response['X-API-Product-Name'] = server_product_name()
response['X-API-Node'] = settings.CLUSTER_HOST_ID
@@ -470,7 +478,11 @@ class ListAPIView(generics.ListAPIView, GenericAPIView):
class ListCreateAPIView(ListAPIView, generics.ListCreateAPIView):
# Base class for a list view that allows creating new objects.
pass
def perform_create(self, serializer):
super().perform_create(serializer)
if serializer.Meta.model in permission_registry.all_registered_models:
if self.request and self.request.user:
give_creator_permissions(self.request.user, serializer.instance)
class ParentMixin(object):
@@ -790,6 +802,7 @@ class RetrieveUpdateDestroyAPIView(RetrieveUpdateAPIView, DestroyAPIView):
class ResourceAccessList(ParentMixin, ListAPIView):
deprecated = True
serializer_class = ResourceAccessListElementSerializer
ordering = ('username',)
@@ -797,6 +810,15 @@ class ResourceAccessList(ParentMixin, ListAPIView):
obj = self.get_parent_object()
content_type = ContentType.objects.get_for_model(obj)
if settings.ANSIBLE_BASE_ROLE_SYSTEM_ACTIVATED:
ancestors = set(RoleEvaluation.objects.filter(content_type_id=content_type.id, object_id=obj.id).values_list('role_id', flat=True))
qs = User.objects.filter(has_roles__in=ancestors) | User.objects.filter(is_superuser=True)
auditor_role = RoleDefinition.objects.filter(name="System Auditor").first()
if auditor_role:
qs |= User.objects.filter(role_assignments__role_definition=auditor_role)
return qs.distinct()
roles = set(Role.objects.filter(content_type=content_type, object_id=obj.id))
ancestors = set()
@@ -956,7 +978,7 @@ class CopyAPIView(GenericAPIView):
None, None, self.model, obj, request.user, create_kwargs=create_kwargs, copy_name=serializer.validated_data.get('name', '')
)
if hasattr(new_obj, 'admin_role') and request.user not in new_obj.admin_role.members.all():
new_obj.admin_role.members.add(request.user)
give_creator_permissions(request.user, new_obj)
if sub_objs:
permission_check_func = None
if hasattr(type(self), 'deep_copy_permission_check_func'):

View File

@@ -36,11 +36,13 @@ class Metadata(metadata.SimpleMetadata):
field_info = OrderedDict()
field_info['type'] = self.label_lookup[field]
field_info['required'] = getattr(field, 'required', False)
field_info['hidden'] = getattr(field, 'hidden', False)
text_attrs = [
'read_only',
'label',
'help_text',
'warning_text',
'min_length',
'max_length',
'min_value',
@@ -71,7 +73,7 @@ class Metadata(metadata.SimpleMetadata):
'url': _('URL for this {}.'),
'related': _('Data structure with URLs of related resources.'),
'summary_fields': _(
'Data structure with name/description for related resources. ' 'The output for some objects may be limited for performance reasons.'
'Data structure with name/description for related resources. The output for some objects may be limited for performance reasons.'
),
'created': _('Timestamp when this {} was created.'),
'modified': _('Timestamp when this {} was last modified.'),

View File

@@ -6,7 +6,7 @@ import copy
import json
import logging
import re
from collections import OrderedDict
from collections import Counter, OrderedDict
from datetime import timedelta
from uuid import uuid4
@@ -43,9 +43,14 @@ from rest_framework.utils.serializer_helpers import ReturnList
# Django-Polymorphic
from polymorphic.models import PolymorphicModel
# django-ansible-base
from ansible_base.lib.utils.models import get_type_for_model
from ansible_base.rbac.models import RoleEvaluation, ObjectRole
from ansible_base.rbac import permission_registry
# AWX
from awx.main.access import get_user_capabilities
from awx.main.constants import ACTIVE_STATES, CENSOR_VALUE
from awx.main.constants import ACTIVE_STATES, CENSOR_VALUE, org_role_to_permission
from awx.main.models import (
ActivityStream,
AdHocCommand,
@@ -80,6 +85,7 @@ from awx.main.models import (
Project,
ProjectUpdate,
ProjectUpdateEvent,
ReceptorAddress,
RefreshToken,
Role,
Schedule,
@@ -99,10 +105,9 @@ from awx.main.models import (
CLOUD_INVENTORY_SOURCES,
)
from awx.main.models.base import VERBOSITY_CHOICES, NEW_JOB_TYPE_CHOICES
from awx.main.models.rbac import get_roles_on_resource, role_summary_fields_generator
from awx.main.models.rbac import role_summary_fields_generator, give_creator_permissions, get_role_codenames, to_permissions, get_role_from_object_role
from awx.main.fields import ImplicitRoleField
from awx.main.utils import (
get_type_for_model,
get_model_for_type,
camelcase_to_underscore,
getattrd,
@@ -189,6 +194,7 @@ SUMMARIZABLE_FK_FIELDS = {
'webhook_credential': DEFAULT_SUMMARY_FIELDS + ('kind', 'cloud', 'credential_type_id'),
'approved_or_denied_by': ('id', 'username', 'first_name', 'last_name'),
'credential_type': DEFAULT_SUMMARY_FIELDS,
'resource': ('ansible_id', 'resource_type'),
}
@@ -220,7 +226,7 @@ class CopySerializer(serializers.Serializer):
view = self.context.get('view', None)
obj = view.get_object()
if name == obj.name:
raise serializers.ValidationError(_('The original object is already named {}, a copy from' ' it cannot have the same name.'.format(name)))
raise serializers.ValidationError(_('The original object is already named {}, a copy from it cannot have the same name.'.format(name)))
return attrs
@@ -635,7 +641,7 @@ class BaseSerializer(serializers.ModelSerializer, metaclass=BaseSerializerMetacl
exclusions = self.get_validation_exclusions(self.instance)
obj = self.instance or self.Meta.model()
for k, v in attrs.items():
if k not in exclusions:
if k not in exclusions and k != 'canonical_address_port':
setattr(obj, k, v)
obj.full_clean(exclude=exclusions)
# full_clean may modify values on the instance; copy those changes
@@ -760,7 +766,7 @@ class UnifiedJobTemplateSerializer(BaseSerializer):
class UnifiedJobSerializer(BaseSerializer):
show_capabilities = ['start', 'delete']
event_processing_finished = serializers.BooleanField(
help_text=_('Indicates whether all of the events generated by this ' 'unified job have been saved to the database.'), read_only=True
help_text=_('Indicates whether all of the events generated by this unified job have been saved to the database.'), read_only=True
)
class Meta:
@@ -1579,7 +1585,7 @@ class ProjectPlaybooksSerializer(ProjectSerializer):
class ProjectInventoriesSerializer(ProjectSerializer):
inventory_files = serializers.ReadOnlyField(help_text=_('Array of inventory files and directories available within this project, ' 'not comprehensive.'))
inventory_files = serializers.ReadOnlyField(help_text=_('Array of inventory files and directories available within this project, not comprehensive.'))
class Meta:
model = Project
@@ -1629,8 +1635,8 @@ class ProjectUpdateDetailSerializer(ProjectUpdateSerializer):
fields = ('*', 'host_status_counts', 'playbook_counts')
def get_playbook_counts(self, obj):
task_count = obj.project_update_events.filter(event='playbook_on_task_start').count()
play_count = obj.project_update_events.filter(event='playbook_on_play_start').count()
task_count = obj.get_event_queryset().filter(event='playbook_on_task_start').count()
play_count = obj.get_event_queryset().filter(event='playbook_on_play_start').count()
data = {'play_count': play_count, 'task_count': task_count}
@@ -2201,6 +2207,99 @@ class BulkHostCreateSerializer(serializers.Serializer):
return return_data
class BulkHostDeleteSerializer(serializers.Serializer):
hosts = serializers.ListField(
allow_empty=False,
max_length=100000,
write_only=True,
help_text=_('List of hosts ids to be deleted, e.g. [105, 130, 131, 200]'),
)
class Meta:
model = Host
fields = ('hosts',)
def validate(self, attrs):
request = self.context.get('request', None)
max_hosts = settings.BULK_HOST_MAX_DELETE
# Validating the number of hosts to be deleted
if len(attrs['hosts']) > max_hosts:
raise serializers.ValidationError(
{
"ERROR": 'Number of hosts exceeds system setting BULK_HOST_MAX_DELETE',
"BULK_HOST_MAX_DELETE": max_hosts,
"Hosts_count": len(attrs['hosts']),
}
)
# Getting list of all host objects, filtered by the list of the hosts to delete
attrs['host_qs'] = Host.objects.get_queryset().filter(pk__in=attrs['hosts']).only('id', 'inventory_id', 'name')
# Converting the queryset data into a dict to reduce the number of queries when
# manipulating the data
attrs['hosts_data'] = attrs['host_qs'].values()
if len(attrs['host_qs']) == 0:
error_hosts = {host: "Hosts do not exist or you lack permission to delete it" for host in attrs['hosts']}
raise serializers.ValidationError({'hosts': error_hosts})
if len(attrs['host_qs']) < len(attrs['hosts']):
hosts_exists = [host['id'] for host in attrs['hosts_data']]
failed_hosts = list(set(attrs['hosts']).difference(hosts_exists))
error_hosts = {host: "Hosts do not exist or you lack permission to delete it" for host in failed_hosts}
raise serializers.ValidationError({'hosts': error_hosts})
# Getting all inventories that the hosts can be in
inv_list = list(set([host['inventory_id'] for host in attrs['hosts_data']]))
# Checking that the user has permission to all inventories
errors = dict()
for inv in Inventory.objects.get_queryset().filter(pk__in=inv_list):
if request and not request.user.is_superuser:
if request.user not in inv.admin_role:
errors[inv.name] = "Lack permissions to delete hosts from this inventory."
if errors != {}:
raise PermissionDenied({"inventories": errors})
# check the inventory type only if the user has permission to it.
errors = dict()
for inv in Inventory.objects.get_queryset().filter(pk__in=inv_list):
if inv.kind != '':
errors[inv.name] = "Hosts can only be deleted from manual inventories."
if errors != {}:
raise serializers.ValidationError({"inventories": errors})
attrs['inventories'] = inv_list
return attrs
def delete(self, validated_data):
result = {"hosts": dict()}
changes = {'deleted_hosts': dict()}
for inventory in validated_data['inventories']:
changes['deleted_hosts'][inventory] = list()
for host in validated_data['hosts_data']:
result["hosts"][host["id"]] = f"The host {host['name']} was deleted"
changes['deleted_hosts'][host["inventory_id"]].append({"host_id": host["id"], "host_name": host["name"]})
try:
validated_data['host_qs'].delete()
except Exception as e:
raise serializers.ValidationError({"detail": _(f"cannot delete hosts, host deletion error {e}")})
request = self.context.get('request', None)
for inventory in validated_data['inventories']:
activity_entry = ActivityStream.objects.create(
operation='update',
object1='inventory',
changes=json.dumps(changes['deleted_hosts'][inventory]),
actor=request.user,
)
activity_entry.inventory.add(inventory)
return result
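BulkHostDeleteSerializer above validates a flat list of host ids against BULK_HOST_MAX_DELETE and per-inventory admin access before deleting them in one queryset call. A hedged client-side sketch; the /api/v2/bulk/host_delete/ path and the token are assumptions used only for illustration, and only the payload shape comes from the serializer:

import json
import urllib.request

payload = json.dumps({'hosts': [105, 130, 131, 200]}).encode()
req = urllib.request.Request(
    'https://awx.example.com/api/v2/bulk/host_delete/',   # hypothetical endpoint
    data=payload,
    headers={'Content-Type': 'application/json', 'Authorization': 'Bearer <token>'},
    method='POST',
)
# urllib.request.urlopen(req) would return a per-host result mapping,
# e.g. {"hosts": {"105": "The host web01 was deleted", ...}}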
class GroupTreeSerializer(GroupSerializer):
children = serializers.SerializerMethodField()
@@ -2664,6 +2763,30 @@ class ResourceAccessListElementSerializer(UserSerializer):
if 'summary_fields' not in ret:
ret['summary_fields'] = {}
team_content_type = ContentType.objects.get_for_model(Team)
content_type = ContentType.objects.get_for_model(obj)
reversed_org_map = {}
for k, v in org_role_to_permission.items():
reversed_org_map[v] = k
reversed_role_map = {}
for k, v in to_permissions.items():
reversed_role_map[v] = k
def get_roles_from_perms(perm_list):
"""given a list of permission codenames return a list of role names"""
role_names = set()
for codename in perm_list:
action = codename.split('_', 1)[0]
if action in reversed_role_map:
role_names.add(reversed_role_map[action])
elif codename in reversed_org_map:
if isinstance(obj, Organization):
role_names.add(reversed_org_map[codename])
if 'view_organization' not in role_names:
role_names.add('read_role')
return list(role_names)
def format_role_perm(role):
role_dict = {'id': role.id, 'name': role.name, 'description': role.description}
try:
@@ -2679,13 +2802,21 @@ class ResourceAccessListElementSerializer(UserSerializer):
else:
# Singleton roles should not be managed from this view, as per copy/edit rework spec
role_dict['user_capabilities'] = {'unattach': False}
return {'role': role_dict, 'descendant_roles': get_roles_on_resource(obj, role)}
model_name = content_type.model
if isinstance(obj, Organization):
descendant_perms = [codename for codename in get_role_codenames(role) if codename.endswith(model_name) or codename.startswith('add_')]
else:
descendant_perms = [codename for codename in get_role_codenames(role) if codename.endswith(model_name)]
return {'role': role_dict, 'descendant_roles': get_roles_from_perms(descendant_perms)}
def format_team_role_perm(naive_team_role, permissive_role_ids):
ret = []
team = naive_team_role.content_object
team_role = naive_team_role
if naive_team_role.role_field == 'admin_role':
team_role = naive_team_role.content_object.member_role
team_role = team.member_role
for role in team_role.children.filter(id__in=permissive_role_ids).all():
role_dict = {
'id': role.id,
@@ -2705,13 +2836,87 @@ class ResourceAccessListElementSerializer(UserSerializer):
else:
# Singleton roles should not be managed from this view, as per copy/edit rework spec
role_dict['user_capabilities'] = {'unattach': False}
ret.append({'role': role_dict, 'descendant_roles': get_roles_on_resource(obj, team_role)})
descendant_perms = list(
RoleEvaluation.objects.filter(role__in=team.has_roles.all(), object_id=obj.id, content_type_id=content_type.id)
.values_list('codename', flat=True)
.distinct()
)
ret.append({'role': role_dict, 'descendant_roles': get_roles_from_perms(descendant_perms)})
return ret
team_content_type = ContentType.objects.get_for_model(Team)
content_type = ContentType.objects.get_for_model(obj)
gfk_kwargs = dict(content_type_id=content_type.id, object_id=obj.id)
direct_permissive_role_ids = Role.objects.filter(**gfk_kwargs).values_list('id', flat=True)
if settings.ANSIBLE_BASE_ROLE_SYSTEM_ACTIVATED:
ret['summary_fields']['direct_access'] = []
ret['summary_fields']['indirect_access'] = []
new_roles_seen = set()
all_team_roles = set()
all_permissive_role_ids = set()
for evaluation in RoleEvaluation.objects.filter(role__in=user.has_roles.all(), **gfk_kwargs).prefetch_related('role'):
new_role = evaluation.role
if new_role.id in new_roles_seen:
continue
new_roles_seen.add(new_role.id)
old_role = get_role_from_object_role(new_role)
all_permissive_role_ids.add(old_role.id)
if int(new_role.object_id) == obj.id and new_role.content_type_id == content_type.id:
ret['summary_fields']['direct_access'].append(format_role_perm(old_role))
elif new_role.content_type_id == team_content_type.id:
all_team_roles.add(old_role)
else:
ret['summary_fields']['indirect_access'].append(format_role_perm(old_role))
# Lazy role creation gives us a problem: some intermediate roles are not easy to find,
# for example when a team has indirect permission. So here we get all roles the user's teams have;
# these contribute to all potential permission-granting roles of the object
user_teams_qs = permission_registry.team_model.objects.filter(member_roles__in=ObjectRole.objects.filter(users=user))
team_obj_roles = ObjectRole.objects.filter(teams__in=user_teams_qs)
for evaluation in RoleEvaluation.objects.filter(role__in=team_obj_roles, **gfk_kwargs).prefetch_related('role'):
new_role = evaluation.role
if new_role.id in new_roles_seen:
continue
new_roles_seen.add(new_role.id)
old_role = get_role_from_object_role(new_role)
all_permissive_role_ids.add(old_role.id)
# In DAB RBAC, superuser is strictly a user flag, and global roles are not in the RoleEvaluation table
if user.is_superuser:
ret['summary_fields'].setdefault('indirect_access', [])
all_role_names = [field.name for field in obj._meta.get_fields() if isinstance(field, ImplicitRoleField)]
ret['summary_fields']['indirect_access'].append(
{
"role": {
"id": None,
"name": _("System Administrator"),
"description": _("Can manage all aspects of the system"),
"user_capabilities": {"unattach": False},
},
"descendant_roles": all_role_names,
}
)
elif user.is_system_auditor:
ret['summary_fields'].setdefault('indirect_access', [])
ret['summary_fields']['indirect_access'].append(
{
"role": {
"id": None,
"name": _("System Auditor"),
"description": _("Can view all aspects of the system"),
"user_capabilities": {"unattach": False},
},
"descendant_roles": ["read_role"],
}
)
ret['summary_fields']['direct_access'].extend([y for x in (format_team_role_perm(r, all_permissive_role_ids) for r in all_team_roles) for y in x])
return ret
direct_permissive_role_ids = Role.objects.filter(content_type=content_type, object_id=obj.id).values_list('id', flat=True)
all_permissive_role_ids = Role.objects.filter(content_type=content_type, object_id=obj.id).values_list('ancestors__id', flat=True)
direct_access_roles = user.roles.filter(id__in=direct_permissive_role_ids).all()
@@ -2905,7 +3110,7 @@ class CredentialSerializer(BaseSerializer):
):
if getattr(self.instance, related_objects).count() > 0:
raise ValidationError(
_('You cannot change the credential type of the credential, as it may break the functionality' ' of the resources using it.')
_('You cannot change the credential type of the credential, as it may break the functionality of the resources using it.')
)
return credential_type
@@ -2925,7 +3130,7 @@ class CredentialSerializerCreate(CredentialSerializer):
default=None,
write_only=True,
allow_null=True,
help_text=_('Write-only field used to add user to owner role. If provided, ' 'do not give either team or organization. Only valid for creation.'),
help_text=_('Write-only field used to add user to owner role. If provided, do not give either team or organization. Only valid for creation.'),
)
team = serializers.PrimaryKeyRelatedField(
queryset=Team.objects.all(),
@@ -2933,14 +3138,14 @@ class CredentialSerializerCreate(CredentialSerializer):
default=None,
write_only=True,
allow_null=True,
help_text=_('Write-only field used to add team to owner role. If provided, ' 'do not give either user or organization. Only valid for creation.'),
help_text=_('Write-only field used to add team to owner role. If provided, do not give either user or organization. Only valid for creation.'),
)
organization = serializers.PrimaryKeyRelatedField(
queryset=Organization.objects.all(),
required=False,
default=None,
allow_null=True,
help_text=_('Inherit permissions from organization roles. If provided on creation, ' 'do not give either user or team.'),
help_text=_('Inherit permissions from organization roles. If provided on creation, do not give either user or team.'),
)
class Meta:
@@ -2962,7 +3167,7 @@ class CredentialSerializerCreate(CredentialSerializer):
if len(owner_fields) > 1:
received = ", ".join(sorted(owner_fields))
raise serializers.ValidationError(
{"detail": _("Only one of 'user', 'team', or 'organization' should be provided, " "received {} fields.".format(received))}
{"detail": _("Only one of 'user', 'team', or 'organization' should be provided, received {} fields.".format(received))}
)
if attrs.get('team'):
@@ -2980,7 +3185,7 @@ class CredentialSerializerCreate(CredentialSerializer):
credential = super(CredentialSerializerCreate, self).create(validated_data)
if user:
credential.admin_role.members.add(user)
give_creator_permissions(user, credential)
if team:
if not credential.organization or team.organization.id != credential.organization.id:
raise serializers.ValidationError({"detail": _("Credential organization must be set and match before assigning to a team")})
@@ -3233,7 +3438,7 @@ class JobTemplateSerializer(JobTemplateMixin, UnifiedJobTemplateSerializer, JobO
if get_field_from_model_or_attrs('host_config_key') and not inventory:
raise serializers.ValidationError({'host_config_key': _("Cannot enable provisioning callback without an inventory set.")})
prompting_error_message = _("Must either set a default value or ask to prompt on launch.")
prompting_error_message = _("You must either set a default value or ask to prompt on launch.")
if project is None:
raise serializers.ValidationError({'project': _("Job Templates must have a project assigned.")})
elif inventory is None and not get_field_from_model_or_attrs('ask_inventory_on_launch'):
@@ -3622,7 +3827,7 @@ class SystemJobSerializer(UnifiedJobSerializer):
try:
return obj.result_stdout
except StdoutMaxBytesExceeded as e:
return _("Standard Output too large to display ({text_size} bytes), " "only download supported for sizes over {supported_size} bytes.").format(
return _("Standard Output too large to display ({text_size} bytes), only download supported for sizes over {supported_size} bytes.").format(
text_size=e.total, supported_size=e.supported
)
@@ -4536,7 +4741,7 @@ class JobLaunchSerializer(BaseSerializer):
if cred.unique_hash() in provided_mapping.keys():
continue # User replaced credential with new of same type
errors.setdefault('credentials', []).append(
_('Removing {} credential at launch time without replacement is not supported. ' 'Provided list lacked credential(s): {}.').format(
_('Removing {} credential at launch time without replacement is not supported. Provided list lacked credential(s): {}.').format(
cred.unique_hash(display=True), ', '.join([str(c) for c in removed_creds])
)
)
@@ -4686,12 +4891,11 @@ class BulkJobNodeSerializer(WorkflowJobNodeSerializer):
# many-to-many fields
credentials = serializers.ListField(child=serializers.IntegerField(min_value=1), required=False)
labels = serializers.ListField(child=serializers.IntegerField(min_value=1), required=False)
# TODO: Use instance group role added via PR 13584(once merged), for now everything related to instance group is commented
# instance_groups = serializers.ListField(child=serializers.IntegerField(min_value=1), required=False)
instance_groups = serializers.ListField(child=serializers.IntegerField(min_value=1), required=False)
class Meta:
model = WorkflowJobNode
fields = ('*', 'credentials', 'labels') # m2m fields are not canonical for WJ nodes, TODO: add instance_groups once supported
fields = ('*', 'credentials', 'labels', 'instance_groups') # m2m fields are not canonical for WJ nodes
def validate(self, attrs):
return super(LaunchConfigurationBaseSerializer, self).validate(attrs)
@@ -4751,21 +4955,21 @@ class BulkJobLaunchSerializer(serializers.Serializer):
requested_use_execution_environments = {job['execution_environment'] for job in attrs['jobs'] if 'execution_environment' in job}
requested_use_credentials = set()
requested_use_labels = set()
# requested_use_instance_groups = set()
requested_use_instance_groups = set()
for job in attrs['jobs']:
for cred in job.get('credentials', []):
requested_use_credentials.add(cred)
for label in job.get('labels', []):
requested_use_labels.add(label)
# for instance_group in job.get('instance_groups', []):
# requested_use_instance_groups.add(instance_group)
for instance_group in job.get('instance_groups', []):
requested_use_instance_groups.add(instance_group)
key_to_obj_map = {
"unified_job_template": {obj.id: obj for obj in UnifiedJobTemplate.objects.filter(id__in=requested_ujts)},
"inventory": {obj.id: obj for obj in Inventory.objects.filter(id__in=requested_use_inventories)},
"credentials": {obj.id: obj for obj in Credential.objects.filter(id__in=requested_use_credentials)},
"labels": {obj.id: obj for obj in Label.objects.filter(id__in=requested_use_labels)},
# "instance_groups": {obj.id: obj for obj in InstanceGroup.objects.filter(id__in=requested_use_instance_groups)},
"instance_groups": {obj.id: obj for obj in InstanceGroup.objects.filter(id__in=requested_use_instance_groups)},
"execution_environment": {obj.id: obj for obj in ExecutionEnvironment.objects.filter(id__in=requested_use_execution_environments)},
}
@@ -4792,7 +4996,7 @@ class BulkJobLaunchSerializer(serializers.Serializer):
self.check_list_permission(Credential, requested_use_credentials, 'use_role')
self.check_list_permission(Label, requested_use_labels)
# self.check_list_permission(InstanceGroup, requested_use_instance_groups) # TODO: change to use_role for conflict
self.check_list_permission(InstanceGroup, requested_use_instance_groups) # TODO: change to use_role for conflict
self.check_list_permission(ExecutionEnvironment, requested_use_execution_environments) # TODO: change if roles introduced
jobs_object = self.get_objectified_jobs(attrs, key_to_obj_map)
@@ -4839,7 +5043,7 @@ class BulkJobLaunchSerializer(serializers.Serializer):
node_m2m_object_types_to_through_model = {
'credentials': WorkflowJobNode.credentials.through,
'labels': WorkflowJobNode.labels.through,
# 'instance_groups': WorkflowJobNode.instance_groups.through,
'instance_groups': WorkflowJobNode.instance_groups.through,
}
node_deferred_attr_names = (
'limit',
@@ -4892,9 +5096,9 @@ class BulkJobLaunchSerializer(serializers.Serializer):
if field_name in node_m2m_objects[node_identifier] and field_name == 'labels':
for label in node_m2m_objects[node_identifier][field_name]:
through_model_objects.append(through_model(label=label, workflowjobnode=node_m2m_objects[node_identifier]['node']))
# if obj_type in node_m2m_objects[node_identifier] and obj_type == 'instance_groups':
# for instance_group in node_m2m_objects[node_identifier][obj_type]:
# through_model_objects.append(through_model(instancegroup=instance_group, workflowjobnode=node_m2m_objects[node_identifier]['node']))
if field_name in node_m2m_objects[node_identifier] and field_name == 'instance_groups':
for instance_group in node_m2m_objects[node_identifier][field_name]:
through_model_objects.append(through_model(instancegroup=instance_group, workflowjobnode=node_m2m_objects[node_identifier]['node']))
if through_model_objects:
through_model.objects.bulk_create(through_model_objects)
@@ -5019,7 +5223,7 @@ class NotificationTemplateSerializer(BaseSerializer):
for subevent in event_messages:
if subevent not in ('running', 'approved', 'timed_out', 'denied'):
error_list.append(
_("Workflow Approval event '{}' invalid, must be one of " "'running', 'approved', 'timed_out', or 'denied'").format(subevent)
_("Workflow Approval event '{}' invalid, must be one of 'running', 'approved', 'timed_out', or 'denied'").format(subevent)
)
continue
subevent_messages = event_messages[subevent]
@@ -5075,16 +5279,21 @@ class NotificationTemplateSerializer(BaseSerializer):
body = messages[event].get('body', {})
if body:
try:
rendered_body = (
sandbox.ImmutableSandboxedEnvironment(undefined=DescriptiveUndefined).from_string(body).render(JobNotificationMixin.context_stub())
)
potential_body = json.loads(rendered_body)
if not isinstance(potential_body, dict):
error_list.append(
_("Webhook body for '{}' should be a json dictionary. Found type '{}'.".format(event, type(potential_body).__name__))
)
except json.JSONDecodeError as exc:
error_list.append(_("Webhook body for '{}' is not a valid json dictionary ({}).".format(event, exc)))
sandbox.ImmutableSandboxedEnvironment(undefined=DescriptiveUndefined).from_string(body).render(JobNotificationMixin.context_stub())
# https://github.com/ansible/awx/issues/14410
# When rendering something such as "{{ job.id }}"
# the return type is not a dict, unlike "{{ job_metadata }}" which is a dict
# potential_body = json.loads(rendered_body)
# if not isinstance(potential_body, dict):
# error_list.append(
# _("Webhook body for '{}' should be a json dictionary. Found type '{}'.".format(event, type(potential_body).__name__))
# )
except Exception as exc:
error_list.append(_("Webhook body for '{}' is not valid. The following gave an error ({}).".format(event, exc)))
if error_list:
raise serializers.ValidationError(error_list)
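# A minimal standalone sketch (assumes only that jinja2 is installed; the context values are
# made up) of why the json.loads() dictionary check was dropped (see awx issue #14410): a body
# template such as "{{ job.id }}" renders to a bare scalar rather than a JSON dictionary, yet
# it is still a valid webhook body.
from jinja2 import sandbox as _sandbox_example

_env = _sandbox_example.ImmutableSandboxedEnvironment()
_ctx = {"job": {"id": 42}, "job_metadata": '{"id": 42}'}
print(_env.from_string("{{ job.id }}").render(_ctx))        # "42" -- json.loads() yields an int, not a dict
print(_env.from_string("{{ job_metadata }}").render(_ctx))  # '{"id": 42}' -- parses to a dict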
@@ -5357,10 +5566,24 @@ class ScheduleSerializer(LaunchConfigurationBaseSerializer, SchedulePreviewSeria
class InstanceLinkSerializer(BaseSerializer):
class Meta:
model = InstanceLink
fields = ('source', 'target', 'link_state')
fields = ('id', 'related', 'source', 'target', 'target_full_address', 'link_state')
source = serializers.SlugRelatedField(slug_field="hostname", read_only=True)
target = serializers.SlugRelatedField(slug_field="hostname", read_only=True)
source = serializers.SlugRelatedField(slug_field="hostname", queryset=Instance.objects.all())
target = serializers.SerializerMethodField()
target_full_address = serializers.SerializerMethodField()
def get_related(self, obj):
res = super(InstanceLinkSerializer, self).get_related(obj)
res['source_instance'] = self.reverse('api:instance_detail', kwargs={'pk': obj.source.id})
res['target_address'] = self.reverse('api:receptor_address_detail', kwargs={'pk': obj.target.id})
return res
def get_target(self, obj):
return obj.target.instance.hostname
def get_target_full_address(self, obj):
return obj.target.get_full_address()
class InstanceNodeSerializer(BaseSerializer):
@@ -5369,6 +5592,29 @@ class InstanceNodeSerializer(BaseSerializer):
fields = ('id', 'hostname', 'node_type', 'node_state', 'enabled')
class ReceptorAddressSerializer(BaseSerializer):
full_address = serializers.SerializerMethodField()
class Meta:
model = ReceptorAddress
fields = (
'id',
'url',
'address',
'port',
'protocol',
'websocket_path',
'is_internal',
'canonical',
'instance',
'peers_from_control_nodes',
'full_address',
)
def get_full_address(self, obj):
return obj.get_full_address()
class InstanceSerializer(BaseSerializer):
show_capabilities = ['edit']
@@ -5377,10 +5623,17 @@ class InstanceSerializer(BaseSerializer):
jobs_running = serializers.IntegerField(help_text=_('Count of jobs in the running or waiting state that are targeted for this instance'), read_only=True)
jobs_total = serializers.IntegerField(help_text=_('Count of all jobs that target this instance'), read_only=True)
health_check_pending = serializers.SerializerMethodField()
peers = serializers.PrimaryKeyRelatedField(
help_text=_('Primary keys of receptor addresses to peer to.'), many=True, required=False, queryset=ReceptorAddress.objects.all()
)
reverse_peers = serializers.SerializerMethodField()
listener_port = serializers.IntegerField(source='canonical_address_port', required=False, allow_null=True)
peers_from_control_nodes = serializers.BooleanField(source='canonical_address_peers_from_control_nodes', required=False)
protocol = serializers.SerializerMethodField()
class Meta:
model = Instance
read_only_fields = ('ip_address', 'uuid', 'version')
read_only_fields = ('ip_address', 'uuid', 'version', 'managed', 'reverse_peers')
fields = (
'id',
'hostname',
@@ -5411,8 +5664,13 @@ class InstanceSerializer(BaseSerializer):
'managed_by_policy',
'node_type',
'node_state',
'managed',
'ip_address',
'peers',
'reverse_peers',
'listener_port',
'peers_from_control_nodes',
'protocol',
)
extra_kwargs = {
'node_type': {'initial': Instance.Types.EXECUTION, 'default': Instance.Types.EXECUTION},
@@ -5434,16 +5692,54 @@ class InstanceSerializer(BaseSerializer):
def get_related(self, obj):
res = super(InstanceSerializer, self).get_related(obj)
res['receptor_addresses'] = self.reverse('api:instance_receptor_addresses_list', kwargs={'pk': obj.pk})
res['jobs'] = self.reverse('api:instance_unified_jobs_list', kwargs={'pk': obj.pk})
res['instance_groups'] = self.reverse('api:instance_instance_groups_list', kwargs={'pk': obj.pk})
if settings.IS_K8S and obj.node_type in (Instance.Types.EXECUTION,):
res['install_bundle'] = self.reverse('api:instance_install_bundle', kwargs={'pk': obj.pk})
res['peers'] = self.reverse('api:instance_peers_list', kwargs={"pk": obj.pk})
res['instance_groups'] = self.reverse('api:instance_instance_groups_list', kwargs={'pk': obj.pk})
if obj.node_type in [Instance.Types.EXECUTION, Instance.Types.HOP] and not obj.managed:
res['install_bundle'] = self.reverse('api:instance_install_bundle', kwargs={'pk': obj.pk})
if self.context['request'].user.is_superuser or self.context['request'].user.is_system_auditor:
if obj.node_type == 'execution':
res['health_check'] = self.reverse('api:instance_health_check', kwargs={'pk': obj.pk})
return res
def create_or_update(self, validated_data, obj=None, create=True):
# create a managed receptor address if listener port is defined
port = validated_data.pop('listener_port', -1)
peers_from_control_nodes = validated_data.pop('peers_from_control_nodes', -1)
# delete the receptor address if the port is explicitly set to None
if obj and port is None:
obj.receptor_addresses.filter(address=obj.hostname).delete()
if create:
instance = super(InstanceSerializer, self).create(validated_data)
else:
instance = super(InstanceSerializer, self).update(obj, validated_data)
instance.refresh_from_db() # instance canonical address lookup is deferred, so needs to be reloaded
# only create or update if port is defined in validated_data or already exists in the
# canonical address
# this prevents creating a receptor address if peers_from_control_nodes is in
# validated_data but a port is not set
if (port is not None and port != -1) or instance.canonical_address_port:
kwargs = {}
if port != -1:
kwargs['port'] = port
if peers_from_control_nodes != -1:
kwargs['peers_from_control_nodes'] = peers_from_control_nodes
if kwargs:
kwargs['canonical'] = True
instance.receptor_addresses.update_or_create(address=instance.hostname, defaults=kwargs)
return instance
def create(self, validated_data):
return self.create_or_update(validated_data, create=True)
def update(self, obj, validated_data):
return self.create_or_update(validated_data, obj, create=False)
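# A minimal standalone sketch (the controller URL, instance id, and token are hypothetical) of
# how the listener_port handling above behaves through the API: PATCHing a port creates or
# updates the instance's canonical ReceptorAddress, while PATCHing listener_port to null
# deletes it.
import requests

_awx = "https://awx.example.com"                       # assumed controller URL
_headers = {"Authorization": "Bearer <oauth2-token>"}  # assumed token auth

# Add a listener on port 27199 and let control nodes peer to it (creates the canonical address).
requests.patch(f"{_awx}/api/v2/instances/42/", headers=_headers,
               json={"listener_port": 27199, "peers_from_control_nodes": True})

# Remove the listener again (an explicit null deletes the canonical receptor address).
requests.patch(f"{_awx}/api/v2/instances/42/", headers=_headers,
               json={"listener_port": None})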
def get_summary_fields(self, obj):
summary = super().get_summary_fields(obj)
@@ -5453,6 +5749,16 @@ class InstanceSerializer(BaseSerializer):
return summary
def get_reverse_peers(self, obj):
return Instance.objects.prefetch_related('peers').filter(peers__in=obj.receptor_addresses.all()).values_list('id', flat=True)
def get_protocol(self, obj):
# note: don't create a different query for receptor addresses, as this is prefetched on the View for optimization
for addr in obj.receptor_addresses.all():
if addr.canonical:
return addr.protocol
return ""
def get_consumed_capacity(self, obj):
return obj.consumed_capacity
@@ -5465,22 +5771,30 @@ class InstanceSerializer(BaseSerializer):
def get_health_check_pending(self, obj):
return obj.health_check_pending
def validate(self, data):
if self.instance:
if self.instance.node_type == Instance.Types.HOP:
raise serializers.ValidationError("Hop node instances may not be changed.")
else:
if not settings.IS_K8S:
raise serializers.ValidationError("Can only create instances on Kubernetes or OpenShift.")
return data
def validate(self, attrs):
# Oddly, using 'source' on a DRF field populates attrs with the source name, so we should rename it back
if 'canonical_address_port' in attrs:
attrs['listener_port'] = attrs.pop('canonical_address_port')
if 'canonical_address_peers_from_control_nodes' in attrs:
attrs['peers_from_control_nodes'] = attrs.pop('canonical_address_peers_from_control_nodes')
if not self.instance and not settings.IS_K8S:
raise serializers.ValidationError(_("Can only create instances on Kubernetes or OpenShift."))
# cannot enable peers_from_control_nodes if listener_port is not set
if attrs.get('peers_from_control_nodes'):
port = attrs.get('listener_port', -1) # -1 denotes missing, None denotes explicit null
if (port is None) or (port == -1 and self.instance and self.instance.canonical_address is None):
raise serializers.ValidationError(_("Cannot enable peers_from_control_nodes if listener_port is not set."))
return super().validate(attrs)
def validate_node_type(self, value):
if not self.instance:
if value not in (Instance.Types.EXECUTION,):
raise serializers.ValidationError("Can only create execution nodes.")
else:
if self.instance.node_type != value:
raise serializers.ValidationError("Cannot change node type.")
if not self.instance and value not in [Instance.Types.HOP, Instance.Types.EXECUTION]:
raise serializers.ValidationError(_("Can only create execution or hop nodes."))
if self.instance and self.instance.node_type != value:
raise serializers.ValidationError(_("Cannot change node type."))
return value
@@ -5488,30 +5802,71 @@ class InstanceSerializer(BaseSerializer):
if self.instance:
if value != self.instance.node_state:
if not settings.IS_K8S:
raise serializers.ValidationError("Can only change the state on Kubernetes or OpenShift.")
raise serializers.ValidationError(_("Can only change the state on Kubernetes or OpenShift."))
if value != Instance.States.DEPROVISIONING:
raise serializers.ValidationError("Can only change instances to the 'deprovisioning' state.")
if self.instance.node_type not in (Instance.Types.EXECUTION,):
raise serializers.ValidationError("Can only deprovision execution nodes.")
raise serializers.ValidationError(_("Can only change instances to the 'deprovisioning' state."))
if self.instance.managed:
raise serializers.ValidationError(_("Cannot deprovision managed nodes."))
else:
if value and value != Instance.States.INSTALLED:
raise serializers.ValidationError("Can only create instances in the 'installed' state.")
raise serializers.ValidationError(_("Can only create instances in the 'installed' state."))
return value
def validate_hostname(self, value):
"""
- Hostname cannot be "localhost" - but can be something like localhost.domain
- Cannot change the hostname of an already-instantiated & initialized Instance object
Cannot change the hostname
"""
if self.instance and self.instance.hostname != value:
raise serializers.ValidationError("Cannot change hostname.")
raise serializers.ValidationError(_("Cannot change hostname."))
return value
def validate_listener_port(self, value):
if self.instance and self.instance.listener_port != value:
raise serializers.ValidationError("Cannot change listener port.")
"""
Cannot change the listener port, unless going from None to an integer, and vice versa.
If the instance is managed, the listener port cannot be changed at all.
"""
if self.instance:
canonical_address_port = self.instance.canonical_address_port
if value and canonical_address_port and canonical_address_port != value:
raise serializers.ValidationError(_("Cannot change listener port."))
if self.instance.managed and value != canonical_address_port:
raise serializers.ValidationError(_("Cannot change listener port for managed nodes."))
return value
def validate_peers(self, value):
# cannot peer to an instance more than once
peers_instances = Counter(p.instance_id for p in value)
if any(count > 1 for count in peers_instances.values()):
raise serializers.ValidationError(_("Cannot peer to the same instance more than once."))
if self.instance:
instance_addresses = set(self.instance.receptor_addresses.all())
setting_peers = set(value)
peers_changed = set(self.instance.peers.all()) != setting_peers
if not settings.IS_K8S and peers_changed:
raise serializers.ValidationError(_("Cannot change peers."))
if self.instance.managed and peers_changed:
raise serializers.ValidationError(_("Setting peers manually for managed nodes is not allowed."))
# cannot peer to self
if instance_addresses & setting_peers:
raise serializers.ValidationError(_("Instance cannot peer to its own address."))
# cannot peer to an instance that is already peered to this instance
if instance_addresses:
for p in setting_peers:
if set(p.instance.peers.all()) & instance_addresses:
raise serializers.ValidationError(_(f"Instance {p.instance.hostname} is already peered to this instance."))
return value
def validate_peers_from_control_nodes(self, value):
if self.instance and self.instance.managed and self.instance.canonical_address_peers_from_control_nodes != value:
raise serializers.ValidationError(_("Cannot change peers_from_control_nodes for managed nodes."))
return value
@@ -5519,7 +5874,19 @@ class InstanceSerializer(BaseSerializer):
class InstanceHealthCheckSerializer(BaseSerializer):
class Meta:
model = Instance
read_only_fields = ('uuid', 'hostname', 'version', 'last_health_check', 'errors', 'cpu', 'memory', 'cpu_capacity', 'mem_capacity', 'capacity')
read_only_fields = (
'uuid',
'hostname',
'ip_address',
'version',
'last_health_check',
'errors',
'cpu',
'memory',
'cpu_capacity',
'mem_capacity',
'capacity',
)
fields = read_only_fields
@@ -5559,7 +5926,7 @@ class InstanceGroupSerializer(BaseSerializer):
instances = serializers.SerializerMethodField()
is_container_group = serializers.BooleanField(
required=False,
help_text=_('Indicates whether instances in this group are containerized.' 'Containerized groups have a designated Openshift or Kubernetes cluster.'),
help_text=_('Indicates whether instances in this group are containerized. Containerized groups have a designated OpenShift or Kubernetes cluster.'),
)
# NOTE: help_text is duplicated from field definitions, no obvious way of
# both defining field details here and also getting the field's help_text
@@ -5570,7 +5937,7 @@ class InstanceGroupSerializer(BaseSerializer):
required=False,
initial=0,
label=_('Policy Instance Percentage'),
help_text=_("Minimum percentage of all instances that will be automatically assigned to " "this group when new instances come online."),
help_text=_("Minimum percentage of all instances that will be automatically assigned to this group when new instances come online."),
)
policy_instance_minimum = serializers.IntegerField(
default=0,
@@ -5578,7 +5945,7 @@ class InstanceGroupSerializer(BaseSerializer):
required=False,
initial=0,
label=_('Policy Instance Minimum'),
help_text=_("Static minimum number of Instances that will be automatically assign to " "this group when new instances come online."),
help_text=_("Static minimum number of Instances that will be automatically assign to this group when new instances come online."),
)
max_concurrent_jobs = serializers.IntegerField(
default=0,

View File

@@ -1,16 +1,10 @@
import json
import warnings
from coreapi.document import Object, Link
from rest_framework import exceptions
from rest_framework.permissions import AllowAny
from rest_framework.renderers import CoreJSONRenderer
from rest_framework.response import Response
from rest_framework.schemas import SchemaGenerator, AutoSchema as DRFAuthSchema
from rest_framework.views import APIView
from rest_framework_swagger import renderers
from drf_yasg.views import get_schema_view
from drf_yasg import openapi
class SuperUserSchemaGenerator(SchemaGenerator):
@@ -55,43 +49,15 @@ class AutoSchema(DRFAuthSchema):
return description
class SwaggerSchemaView(APIView):
_ignore_model_permissions = True
exclude_from_schema = True
permission_classes = [AllowAny]
renderer_classes = [CoreJSONRenderer, renderers.OpenAPIRenderer, renderers.SwaggerUIRenderer]
def get(self, request):
generator = SuperUserSchemaGenerator(title='Ansible Automation Platform controller API', patterns=None, urlconf=None)
schema = generator.get_schema(request=request)
# python core-api doesn't support the deprecation yet, so track it
# ourselves and return it in a response header
_deprecated = []
# By default, DRF OpenAPI serialization places all endpoints in
# a single node based on their root path (/api). Instead, we want to
# group them by topic/tag so that they're categorized in the rendered
# output
document = schema._data.pop('api')
for path, node in document.items():
if isinstance(node, Object):
for action in node.values():
topic = getattr(action, 'topic', None)
if topic:
schema._data.setdefault(topic, Object())
schema._data[topic]._data[path] = node
if isinstance(action, Object):
for link in action.links.values():
if link.deprecated:
_deprecated.append(link.url)
elif isinstance(node, Link):
topic = getattr(node, 'topic', None)
if topic:
schema._data.setdefault(topic, Object())
schema._data[topic]._data[path] = node
if not schema:
raise exceptions.ValidationError('The schema generator did not return a schema Document')
return Response(schema, headers={'X-Deprecated-Paths': json.dumps(_deprecated)})
schema_view = get_schema_view(
openapi.Info(
title="Snippets API",
default_version='v1',
description="Test description",
terms_of_service="https://www.google.com/policies/terms/",
contact=openapi.Contact(email="contact@snippets.local"),
license=openapi.License(name="BSD License"),
),
public=True,
permission_classes=[AllowAny],
)

View File

@@ -0,0 +1,22 @@
# Bulk Host Delete
This endpoint allows the client to delete multiple hosts from inventories.
They may do this by providing a list of host IDs to be deleted.
Example:
{
"hosts": [1, 2, 3, 4, 5]
}
Return data:
{
"hosts": {
"1": "The host a1 was deleted",
"2": "The host a2 was deleted",
"3": "The host a3 was deleted",
"4": "The host a4 was deleted",
"5": "The host a5 was deleted",
}
}
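A minimal client-side sketch (the controller URL and token below are hypothetical) of calling this endpoint with the Python requests library; the path /api/v2/bulk/host_delete/ is the URL registered for the bulk host delete view:

import requests

awx_url = "https://awx.example.com"            # assumed controller URL
headers = {"Authorization": "Bearer <token>"}  # assumed OAuth2 token auth

resp = requests.post(f"{awx_url}/api/v2/bulk/host_delete/", headers=headers, json={"hosts": [1, 2, 3, 4, 5]})
resp.raise_for_status()
print(resp.json())  # {"hosts": {"1": "The host a1 was deleted", ...}}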

View File

@@ -3,21 +3,34 @@ receptor_group: awx
receptor_verify: true
receptor_tls: true
receptor_mintls13: false
{% if instance.node_type == "execution" %}
receptor_work_commands:
ansible-runner:
command: ansible-runner
params: worker
allowruntimeparams: true
verifysignature: true
custom_worksign_public_keyfile: receptor/work-public-key.pem
additional_python_packages:
- ansible-runner
{% endif %}
custom_worksign_public_keyfile: receptor/work_public_key.pem
custom_tls_certfile: receptor/tls/receptor.crt
custom_tls_keyfile: receptor/tls/receptor.key
custom_ca_certfile: receptor/tls/ca/receptor-ca.crt
receptor_protocol: 'tcp'
custom_ca_certfile: receptor/tls/ca/mesh-CA.crt
{% if listener_port %}
receptor_protocol: {{ listener_protocol }}
receptor_listener: true
receptor_port: {{ instance.listener_port }}
receptor_dependencies:
- python39-pip
receptor_port: {{ listener_port }}
{% else %}
receptor_listener: false
{% endif %}
{% if peers %}
receptor_peers:
{% for peer in peers %}
- address: {{ peer.address }}
protocol: {{ peer.protocol }}
{% endfor %}
{% endif %}
{% verbatim %}
podman_user: "{{ receptor_user }}"
podman_group: "{{ receptor_group }}"

View File

@@ -1,20 +1,16 @@
{% verbatim %}
---
- hosts: all
become: yes
tasks:
- name: Create the receptor user
user:
{% verbatim %}
name: "{{ receptor_user }}"
{% endverbatim %}
shell: /bin/bash
- name: Enable Copr repo for Receptor
command: dnf copr enable ansible-awx/receptor -y
{% if instance.node_type == "execution" %}
- import_role:
name: ansible.receptor.podman
{% endif %}
- import_role:
name: ansible.receptor.setup
- name: Install ansible-runner
pip:
name: ansible-runner
executable: pip3.9
{% endverbatim %}

View File

@@ -1,4 +1,4 @@
---
collections:
- name: ansible.receptor
version: 1.1.0
version: 2.0.3

View File

@@ -10,6 +10,7 @@ from awx.api.views import (
InstanceInstanceGroupsList,
InstanceHealthCheck,
InstancePeersList,
InstanceReceptorAddressesList,
)
from awx.api.views.instance_install_bundle import InstanceInstallBundle
@@ -21,6 +22,7 @@ urls = [
re_path(r'^(?P<pk>[0-9]+)/instance_groups/$', InstanceInstanceGroupsList.as_view(), name='instance_instance_groups_list'),
re_path(r'^(?P<pk>[0-9]+)/health_check/$', InstanceHealthCheck.as_view(), name='instance_health_check'),
re_path(r'^(?P<pk>[0-9]+)/peers/$', InstancePeersList.as_view(), name='instance_peers_list'),
re_path(r'^(?P<pk>[0-9]+)/receptor_addresses/$', InstanceReceptorAddressesList.as_view(), name='instance_receptor_addresses_list'),
re_path(r'^(?P<pk>[0-9]+)/install_bundle/$', InstanceInstallBundle.as_view(), name='instance_install_bundle'),
]

View File

@@ -0,0 +1,17 @@
# Copyright (c) 2017 Ansible, Inc.
# All Rights Reserved.
from django.urls import re_path
from awx.api.views import (
ReceptorAddressesList,
ReceptorAddressDetail,
)
urls = [
re_path(r'^$', ReceptorAddressesList.as_view(), name='receptor_addresses_list'),
re_path(r'^(?P<pk>[0-9]+)/$', ReceptorAddressDetail.as_view(), name='receptor_address_detail'),
]
__all__ = ['urls']

View File

@@ -30,12 +30,13 @@ from awx.api.views import (
OAuth2TokenList,
ApplicationOAuth2TokenList,
OAuth2ApplicationDetail,
# HostMetricSummaryMonthlyList, # It will be enabled in a future version of AWX
HostMetricSummaryMonthlyList,
)
from awx.api.views.bulk import (
BulkView,
BulkHostCreateView,
BulkHostDeleteView,
BulkJobLaunchView,
)
@@ -84,6 +85,7 @@ from .oauth2_root import urls as oauth2_root_urls
from .workflow_approval_template import urls as workflow_approval_template_urls
from .workflow_approval import urls as workflow_approval_urls
from .analytics import urls as analytics_urls
from .receptor_address import urls as receptor_address_urls
v2_urls = [
re_path(r'^$', ApiV2RootView.as_view(), name='api_v2_root_view'),
@@ -123,8 +125,7 @@ v2_urls = [
re_path(r'^constructed_inventories/', include(constructed_inventory_urls)),
re_path(r'^hosts/', include(host_urls)),
re_path(r'^host_metrics/', include(host_metric_urls)),
# It will be enabled in a future version of AWX
# re_path(r'^host_metric_summary_monthly/$', HostMetricSummaryMonthlyList.as_view(), name='host_metric_summary_monthly_list'),
re_path(r'^host_metric_summary_monthly/$', HostMetricSummaryMonthlyList.as_view(), name='host_metric_summary_monthly_list'),
re_path(r'^groups/', include(group_urls)),
re_path(r'^inventory_sources/', include(inventory_source_urls)),
re_path(r'^inventory_updates/', include(inventory_update_urls)),
@@ -153,7 +154,9 @@ v2_urls = [
re_path(r'^workflow_approvals/', include(workflow_approval_urls)),
re_path(r'^bulk/$', BulkView.as_view(), name='bulk'),
re_path(r'^bulk/host_create/$', BulkHostCreateView.as_view(), name='bulk_host_create'),
re_path(r'^bulk/host_delete/$', BulkHostDeleteView.as_view(), name='bulk_host_delete'),
re_path(r'^bulk/job_launch/$', BulkJobLaunchView.as_view(), name='bulk_job_launch'),
re_path(r'^receptor_addresses/', include(receptor_address_urls)),
]
@@ -167,10 +170,13 @@ urlpatterns = [
]
if MODE == 'development':
# Only include these if we are in the development environment
from awx.api.swagger import SwaggerSchemaView
urlpatterns += [re_path(r'^swagger/$', SwaggerSchemaView.as_view(), name='swagger_view')]
from awx.api.swagger import schema_view
from awx.api.urls.debug import urls as debug_urls
urlpatterns += [re_path(r'^debug/', include(debug_urls))]
urlpatterns += [
re_path(r'^swagger(?P<format>\.json|\.yaml)/$', schema_view.without_ui(cache_timeout=0), name='schema-json'),
re_path(r'^swagger/$', schema_view.with_ui('swagger', cache_timeout=0), name='schema-swagger-ui'),
re_path(r'^redoc/$', schema_view.with_ui('redoc', cache_timeout=0), name='schema-redoc'),
]

View File

@@ -1,10 +1,11 @@
from django.urls import re_path
from awx.api.views.webhooks import WebhookKeyView, GithubWebhookReceiver, GitlabWebhookReceiver
from awx.api.views.webhooks import WebhookKeyView, GithubWebhookReceiver, GitlabWebhookReceiver, BitbucketDcWebhookReceiver
urlpatterns = [
re_path(r'^webhook_key/$', WebhookKeyView.as_view(), name='webhook_key'),
re_path(r'^github/$', GithubWebhookReceiver.as_view(), name='webhook_receiver_github'),
re_path(r'^gitlab/$', GitlabWebhookReceiver.as_view(), name='webhook_receiver_gitlab'),
re_path(r'^bitbucket_dc/$', BitbucketDcWebhookReceiver.as_view(), name='webhook_receiver_bitbucket_dc'),
]

View File

@@ -2,28 +2,21 @@
# All Rights Reserved.
from django.conf import settings
from django.urls import NoReverseMatch
from rest_framework.reverse import _reverse
from rest_framework.reverse import reverse as drf_reverse
from rest_framework.versioning import URLPathVersioning as BaseVersioning
def drf_reverse(viewname, args=None, kwargs=None, request=None, format=None, **extra):
"""
Copy and monkey-patch `rest_framework.reverse.reverse` to prevent adding unwarranted
query string parameters.
"""
scheme = getattr(request, 'versioning_scheme', None)
if scheme is not None:
try:
url = scheme.reverse(viewname, args, kwargs, request, format, **extra)
except NoReverseMatch:
# In case the versioning scheme reversal fails, fallback to the
# default implementation
url = _reverse(viewname, args, kwargs, request, format, **extra)
else:
url = _reverse(viewname, args, kwargs, request, format, **extra)
def is_optional_api_urlpattern_prefix_request(request):
if settings.OPTIONAL_API_URLPATTERN_PREFIX and request:
if request.path.startswith(f"/api/{settings.OPTIONAL_API_URLPATTERN_PREFIX}"):
return True
return False
def transform_optional_api_urlpattern_prefix_url(request, url):
if is_optional_api_urlpattern_prefix_request(request):
url = url.replace('/api', f"/api/{settings.OPTIONAL_API_URLPATTERN_PREFIX}")
return url

View File

@@ -60,6 +60,9 @@ from oauth2_provider.models import get_access_token_model
import pytz
from wsgiref.util import FileWrapper
# django-ansible-base
from ansible_base.rbac.models import RoleEvaluation, ObjectRole
# AWX
from awx.main.tasks.system import send_notifications, update_inventory_computed_fields
from awx.main.access import get_user_queryset
@@ -87,6 +90,7 @@ from awx.api.generics import (
from awx.api.views.labels import LabelSubListCreateAttachDetachView
from awx.api.versioning import reverse
from awx.main import models
from awx.main.models.rbac import get_role_definition
from awx.main.utils import (
camelcase_to_underscore,
extract_ansible_vars,
@@ -128,6 +132,10 @@ logger = logging.getLogger('awx.api.views')
def unpartitioned_event_horizon(cls):
with connection.cursor() as cursor:
cursor.execute(f"SELECT 1 FROM INFORMATION_SCHEMA.TABLES WHERE table_name = '_unpartitioned_{cls._meta.db_table}';")
if not cursor.fetchone():
return 0
with connection.cursor() as cursor:
try:
cursor.execute(f'SELECT MAX(id) FROM _unpartitioned_{cls._meta.db_table}')
@@ -268,16 +276,24 @@ class DashboardJobsGraphView(APIView):
success_query = user_unified_jobs.filter(status='successful')
failed_query = user_unified_jobs.filter(status='failed')
canceled_query = user_unified_jobs.filter(status='canceled')
error_query = user_unified_jobs.filter(status='error')
if job_type == 'inv_sync':
success_query = success_query.filter(instance_of=models.InventoryUpdate)
failed_query = failed_query.filter(instance_of=models.InventoryUpdate)
canceled_query = canceled_query.filter(instance_of=models.InventoryUpdate)
error_query = error_query.filter(instance_of=models.InventoryUpdate)
elif job_type == 'playbook_run':
success_query = success_query.filter(instance_of=models.Job)
failed_query = failed_query.filter(instance_of=models.Job)
canceled_query = canceled_query.filter(instance_of=models.Job)
error_query = error_query.filter(instance_of=models.Job)
elif job_type == 'scm_update':
success_query = success_query.filter(instance_of=models.ProjectUpdate)
failed_query = failed_query.filter(instance_of=models.ProjectUpdate)
canceled_query = canceled_query.filter(instance_of=models.ProjectUpdate)
error_query = error_query.filter(instance_of=models.ProjectUpdate)
end = now()
interval = 'day'
@@ -293,10 +309,12 @@ class DashboardJobsGraphView(APIView):
else:
return Response({'error': _('Unknown period "%s"') % str(period)}, status=status.HTTP_400_BAD_REQUEST)
dashboard_data = {"jobs": {"successful": [], "failed": []}}
dashboard_data = {"jobs": {"successful": [], "failed": [], "canceled": [], "error": []}}
succ_list = dashboard_data['jobs']['successful']
fail_list = dashboard_data['jobs']['failed']
canceled_list = dashboard_data['jobs']['canceled']
error_list = dashboard_data['jobs']['error']
qs_s = (
success_query.filter(finished__range=(start, end))
@@ -314,6 +332,22 @@ class DashboardJobsGraphView(APIView):
.annotate(agg=Count('id', distinct=True))
)
data_f = {item['d']: item['agg'] for item in qs_f}
qs_c = (
canceled_query.filter(finished__range=(start, end))
.annotate(d=Trunc('finished', interval, tzinfo=end.tzinfo))
.order_by()
.values('d')
.annotate(agg=Count('id', distinct=True))
)
data_c = {item['d']: item['agg'] for item in qs_c}
qs_e = (
error_query.filter(finished__range=(start, end))
.annotate(d=Trunc('finished', interval, tzinfo=end.tzinfo))
.order_by()
.values('d')
.annotate(agg=Count('id', distinct=True))
)
data_e = {item['d']: item['agg'] for item in qs_e}
start_date = start.replace(hour=0, minute=0, second=0, microsecond=0)
for d in itertools.count():
@@ -322,6 +356,8 @@ class DashboardJobsGraphView(APIView):
break
succ_list.append([time.mktime(date.timetuple()), data_s.get(date, 0)])
fail_list.append([time.mktime(date.timetuple()), data_f.get(date, 0)])
canceled_list.append([time.mktime(date.timetuple()), data_c.get(date, 0)])
error_list.append([time.mktime(date.timetuple()), data_e.get(date, 0)])
return Response(dashboard_data)
@@ -333,25 +369,34 @@ class InstanceList(ListCreateAPIView):
search_fields = ('hostname',)
ordering = ('id',)
def get_queryset(self):
qs = super().get_queryset().prefetch_related('receptor_addresses')
return qs
class InstanceDetail(RetrieveUpdateAPIView):
name = _("Instance Detail")
model = models.Instance
serializer_class = serializers.InstanceSerializer
def get_queryset(self):
qs = super().get_queryset().prefetch_related('receptor_addresses')
return qs
def update_raw_data(self, data):
# these fields are only valid on creation of an instance, so they are unwanted on the detail view
data.pop('listener_port', None)
data.pop('node_type', None)
data.pop('hostname', None)
data.pop('ip_address', None)
return super(InstanceDetail, self).update_raw_data(data)
def update(self, request, *args, **kwargs):
r = super(InstanceDetail, self).update(request, *args, **kwargs)
if status.is_success(r.status_code):
obj = self.get_object()
obj.set_capacity_value()
obj.save(update_fields=['capacity'])
capacity_changed = obj.set_capacity_value()
if capacity_changed:
obj.save(update_fields=['capacity'])
r.data = serializers.InstanceSerializer(obj, context=self.get_serializer_context()).to_representation(obj)
return r
@@ -370,13 +415,37 @@ class InstanceUnifiedJobsList(SubListAPIView):
class InstancePeersList(SubListAPIView):
name = _("Instance Peers")
name = _("Peers")
model = models.ReceptorAddress
serializer_class = serializers.ReceptorAddressSerializer
parent_model = models.Instance
model = models.Instance
serializer_class = serializers.InstanceSerializer
parent_access = 'read'
search_fields = {'hostname'}
relationship = 'peers'
search_fields = ('address',)
class InstanceReceptorAddressesList(SubListAPIView):
name = _("Receptor Addresses")
model = models.ReceptorAddress
parent_key = 'instance'
parent_model = models.Instance
serializer_class = serializers.ReceptorAddressSerializer
search_fields = ('address',)
class ReceptorAddressesList(ListAPIView):
name = _("Receptor Addresses")
model = models.ReceptorAddress
serializer_class = serializers.ReceptorAddressSerializer
search_fields = ('address',)
class ReceptorAddressDetail(RetrieveAPIView):
name = _("Receptor Address Detail")
model = models.ReceptorAddress
serializer_class = serializers.ReceptorAddressSerializer
parent_model = models.Instance
relationship = 'receptor_addresses'
class InstanceInstanceGroupsList(InstanceGroupMembershipMixin, SubListCreateAttachDetachAPIView):
@@ -471,6 +540,7 @@ class InstanceGroupAccessList(ResourceAccessList):
class InstanceGroupObjectRolesList(SubListAPIView):
deprecated = True
model = models.Role
serializer_class = serializers.RoleSerializer
parent_model = models.InstanceGroup
@@ -565,7 +635,7 @@ class LaunchConfigCredentialsBase(SubListAttachDetachAPIView):
if self.relationship not in ask_mapping:
return {"msg": _("Related template cannot accept {} on launch.").format(self.relationship)}
elif sub.passwords_needed:
return {"msg": _("Credential that requires user input on launch " "cannot be used in saved launch configuration.")}
return {"msg": _("Credential that requires user input on launch cannot be used in saved launch configuration.")}
ask_field_name = ask_mapping[self.relationship]
@@ -659,6 +729,7 @@ class TeamUsersList(BaseUsersList):
class TeamRolesList(SubListAttachDetachAPIView):
deprecated = True
model = models.Role
serializer_class = serializers.RoleSerializerWithParentAccess
metadata_class = RoleMetadata
@@ -698,10 +769,12 @@ class TeamRolesList(SubListAttachDetachAPIView):
class TeamObjectRolesList(SubListAPIView):
deprecated = True
model = models.Role
serializer_class = serializers.RoleSerializer
parent_model = models.Team
search_fields = ('role_field', 'content_type__model')
deprecated = True
def get_queryset(self):
po = self.get_parent_object()
@@ -719,8 +792,15 @@ class TeamProjectsList(SubListAPIView):
self.check_parent_access(team)
model_ct = ContentType.objects.get_for_model(self.model)
parent_ct = ContentType.objects.get_for_model(self.parent_model)
proj_roles = models.Role.objects.filter(Q(ancestors__content_type=parent_ct) & Q(ancestors__object_id=team.pk), content_type=model_ct)
return self.model.accessible_objects(self.request.user, 'read_role').filter(pk__in=[t.content_object.pk for t in proj_roles])
rd = get_role_definition(team.member_role)
role = ObjectRole.objects.filter(object_id=team.id, content_type=parent_ct, role_definition=rd).first()
if role is None:
# Team has no permissions, therefore team has no projects
return self.model.objects.none()
else:
project_qs = self.model.accessible_objects(self.request.user, 'read_role')
return project_qs.filter(id__in=RoleEvaluation.objects.filter(content_type_id=model_ct.id, role=role).values_list('object_id'))
class TeamActivityStreamList(SubListAPIView):
@@ -735,10 +815,23 @@ class TeamActivityStreamList(SubListAPIView):
self.check_parent_access(parent)
qs = self.request.user.get_queryset(self.model)
return qs.filter(
Q(team=parent)
| Q(project__in=models.Project.accessible_objects(parent, 'read_role'))
| Q(credential__in=models.Credential.accessible_objects(parent, 'read_role'))
| Q(
project__in=RoleEvaluation.objects.filter(
role__in=parent.has_roles.all(), content_type_id=ContentType.objects.get_for_model(models.Project).id, codename='view_project'
)
.values_list('object_id')
.distinct()
)
| Q(
credential__in=RoleEvaluation.objects.filter(
role__in=parent.has_roles.all(), content_type_id=ContentType.objects.get_for_model(models.Credential).id, codename='view_credential'
)
.values_list('object_id')
.distinct()
)
)
@@ -990,10 +1083,12 @@ class ProjectAccessList(ResourceAccessList):
class ProjectObjectRolesList(SubListAPIView):
deprecated = True
model = models.Role
serializer_class = serializers.RoleSerializer
parent_model = models.Project
search_fields = ('role_field', 'content_type__model')
deprecated = True
def get_queryset(self):
po = self.get_parent_object()
@@ -1151,6 +1246,7 @@ class UserTeamsList(SubListAPIView):
class UserRolesList(SubListAttachDetachAPIView):
deprecated = True
model = models.Role
serializer_class = serializers.RoleSerializerWithParentAccess
metadata_class = RoleMetadata
@@ -1392,7 +1488,7 @@ class OrganizationCredentialList(SubListCreateAPIView):
self.check_parent_access(organization)
user_visible = models.Credential.accessible_objects(self.request.user, 'read_role').all()
org_set = models.Credential.accessible_objects(organization.admin_role, 'read_role').all()
org_set = models.Credential.objects.filter(organization=organization)
if self.request.user.is_superuser or self.request.user.is_system_auditor:
return org_set
@@ -1425,10 +1521,12 @@ class CredentialAccessList(ResourceAccessList):
class CredentialObjectRolesList(SubListAPIView):
deprecated = True
model = models.Role
serializer_class = serializers.RoleSerializer
parent_model = models.Credential
search_fields = ('role_field', 'content_type__model')
deprecated = True
def get_queryset(self):
po = self.get_parent_object()
@@ -1564,16 +1662,15 @@ class HostMetricDetail(RetrieveDestroyAPIView):
return Response(status=status.HTTP_204_NO_CONTENT)
# It will be enabled in a future version of AWX
# class HostMetricSummaryMonthlyList(ListAPIView):
# name = _("Host Metrics Summary Monthly")
# model = models.HostMetricSummaryMonthly
# serializer_class = serializers.HostMetricSummaryMonthlySerializer
# permission_classes = (IsSystemAdminOrAuditor,)
# search_fields = ('date',)
#
# def get_queryset(self):
# return self.model.objects.all()
class HostMetricSummaryMonthlyList(ListAPIView):
name = _("Host Metrics Summary Monthly")
model = models.HostMetricSummaryMonthly
serializer_class = serializers.HostMetricSummaryMonthlySerializer
permission_classes = (IsSystemAdminOrAuditor,)
search_fields = ('date',)
def get_queryset(self):
return self.model.objects.all()
class HostList(HostRelatedSearchMixin, ListCreateAPIView):
@@ -2216,13 +2313,6 @@ class JobTemplateList(ListCreateAPIView):
serializer_class = serializers.JobTemplateSerializer
always_allow_superuser = False
def post(self, request, *args, **kwargs):
ret = super(JobTemplateList, self).post(request, *args, **kwargs)
if ret.status_code == 201:
job_template = models.JobTemplate.objects.get(id=ret.data['id'])
job_template.admin_role.members.add(request.user)
return ret
class JobTemplateDetail(RelatedJobsPreventDeleteMixin, RetrieveUpdateDestroyAPIView):
model = models.JobTemplate
@@ -2501,7 +2591,7 @@ class JobTemplateSurveySpec(GenericAPIView):
return Response(
dict(
error=_(
"$encrypted$ is a reserved keyword for password question defaults, " "survey question {idx} is type {survey_item[type]}."
"$encrypted$ is a reserved keyword for password question defaults, survey question {idx} is type {survey_item[type]}."
).format(**context)
),
status=status.HTTP_400_BAD_REQUEST,
@@ -2768,10 +2858,12 @@ class JobTemplateAccessList(ResourceAccessList):
class JobTemplateObjectRolesList(SubListAPIView):
deprecated = True
model = models.Role
serializer_class = serializers.RoleSerializer
parent_model = models.JobTemplate
search_fields = ('role_field', 'content_type__model')
deprecated = True
def get_queryset(self):
po = self.get_parent_object()
@@ -3154,10 +3246,12 @@ class WorkflowJobTemplateAccessList(ResourceAccessList):
class WorkflowJobTemplateObjectRolesList(SubListAPIView):
deprecated = True
model = models.Role
serializer_class = serializers.RoleSerializer
parent_model = models.WorkflowJobTemplate
search_fields = ('role_field', 'content_type__model')
deprecated = True
def get_queryset(self):
po = self.get_parent_object()
@@ -3333,7 +3427,6 @@ class JobLabelList(SubListAPIView):
serializer_class = serializers.LabelSerializer
parent_model = models.Job
relationship = 'labels'
parent_key = 'job'
class WorkflowJobLabelList(JobLabelList):
@@ -4056,7 +4149,7 @@ class UnifiedJobStdout(RetrieveAPIView):
return super(UnifiedJobStdout, self).retrieve(request, *args, **kwargs)
except models.StdoutMaxBytesExceeded as e:
response_message = _(
"Standard Output too large to display ({text_size} bytes), " "only download supported for sizes over {supported_size} bytes."
"Standard Output too large to display ({text_size} bytes), only download supported for sizes over {supported_size} bytes."
).format(text_size=e.total, supported_size=e.supported)
if request.accepted_renderer.format == 'json':
return Response({'range': {'start': 0, 'end': 1, 'absolute_end': 1}, 'content': response_message})
@@ -4167,6 +4260,7 @@ class ActivityStreamDetail(RetrieveAPIView):
class RoleList(ListAPIView):
deprecated = True
model = models.Role
serializer_class = serializers.RoleSerializer
permission_classes = (IsAuthenticated,)
@@ -4174,11 +4268,13 @@ class RoleList(ListAPIView):
class RoleDetail(RetrieveAPIView):
deprecated = True
model = models.Role
serializer_class = serializers.RoleSerializer
class RoleUsersList(SubListAttachDetachAPIView):
deprecated = True
model = models.User
serializer_class = serializers.UserSerializer
parent_model = models.Role
@@ -4213,6 +4309,7 @@ class RoleUsersList(SubListAttachDetachAPIView):
class RoleTeamsList(SubListAttachDetachAPIView):
deprecated = True
model = models.Team
serializer_class = serializers.TeamSerializer
parent_model = models.Role
@@ -4257,10 +4354,12 @@ class RoleTeamsList(SubListAttachDetachAPIView):
team.member_role.children.remove(role)
else:
team.member_role.children.add(role)
return Response(status=status.HTTP_204_NO_CONTENT)
class RoleParentsList(SubListAPIView):
deprecated = True
model = models.Role
serializer_class = serializers.RoleSerializer
parent_model = models.Role
@@ -4274,6 +4373,7 @@ class RoleParentsList(SubListAPIView):
class RoleChildrenList(SubListAPIView):
deprecated = True
model = models.Role
serializer_class = serializers.RoleSerializer
parent_model = models.Role

View File

@@ -48,23 +48,23 @@ class AnalyticsRootView(APIView):
def get(self, request, format=None):
data = OrderedDict()
data['authorized'] = reverse('api:analytics_authorized')
data['reports'] = reverse('api:analytics_reports_list')
data['report_options'] = reverse('api:analytics_report_options_list')
data['adoption_rate'] = reverse('api:analytics_adoption_rate')
data['adoption_rate_options'] = reverse('api:analytics_adoption_rate_options')
data['event_explorer'] = reverse('api:analytics_event_explorer')
data['event_explorer_options'] = reverse('api:analytics_event_explorer_options')
data['host_explorer'] = reverse('api:analytics_host_explorer')
data['host_explorer_options'] = reverse('api:analytics_host_explorer_options')
data['job_explorer'] = reverse('api:analytics_job_explorer')
data['job_explorer_options'] = reverse('api:analytics_job_explorer_options')
data['probe_templates'] = reverse('api:analytics_probe_templates_explorer')
data['probe_templates_options'] = reverse('api:analytics_probe_templates_options')
data['probe_template_for_hosts'] = reverse('api:analytics_probe_template_for_hosts_explorer')
data['probe_template_for_hosts_options'] = reverse('api:analytics_probe_template_for_hosts_options')
data['roi_templates'] = reverse('api:analytics_roi_templates_explorer')
data['roi_templates_options'] = reverse('api:analytics_roi_templates_options')
data['authorized'] = reverse('api:analytics_authorized', request=request)
data['reports'] = reverse('api:analytics_reports_list', request=request)
data['report_options'] = reverse('api:analytics_report_options_list', request=request)
data['adoption_rate'] = reverse('api:analytics_adoption_rate', request=request)
data['adoption_rate_options'] = reverse('api:analytics_adoption_rate_options', request=request)
data['event_explorer'] = reverse('api:analytics_event_explorer', request=request)
data['event_explorer_options'] = reverse('api:analytics_event_explorer_options', request=request)
data['host_explorer'] = reverse('api:analytics_host_explorer', request=request)
data['host_explorer_options'] = reverse('api:analytics_host_explorer_options', request=request)
data['job_explorer'] = reverse('api:analytics_job_explorer', request=request)
data['job_explorer_options'] = reverse('api:analytics_job_explorer_options', request=request)
data['probe_templates'] = reverse('api:analytics_probe_templates_explorer', request=request)
data['probe_templates_options'] = reverse('api:analytics_probe_templates_options', request=request)
data['probe_template_for_hosts'] = reverse('api:analytics_probe_template_for_hosts_explorer', request=request)
data['probe_template_for_hosts_options'] = reverse('api:analytics_probe_template_for_hosts_options', request=request)
data['roi_templates'] = reverse('api:analytics_roi_templates_explorer', request=request)
data['roi_templates_options'] = reverse('api:analytics_roi_templates_options', request=request)
return Response(data)
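Passing request into reverse() is what switches DRF from returning bare paths to fully-qualified URLs, which is the point of the hunk above. A minimal sketch of the difference, reusing the ping view name from this API (the host shown is illustrative):

# Minimal sketch: rest_framework.reverse.reverse with and without a request.
# Assumes a configured Django/DRF project that routes 'api:api_v2_ping_view'.
from rest_framework.reverse import reverse

def build_links(request):
    # Without request, only the path is returned, e.g. "/api/v2/ping/".
    relative = reverse('api:api_v2_ping_view', kwargs={'version': 'v2'})
    # With request, DRF calls request.build_absolute_uri(), e.g.
    # "https://awx.example.org/api/v2/ping/".
    absolute = reverse('api:api_v2_ping_view', kwargs={'version': 'v2'}, request=request)
    return {'relative': relative, 'absolute': absolute}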


@@ -1,5 +1,7 @@
from collections import OrderedDict
from django.utils.translation import gettext_lazy as _
from rest_framework.permissions import IsAuthenticated
from rest_framework.renderers import JSONRenderer
from rest_framework.reverse import reverse
@@ -18,6 +20,9 @@ from awx.api import (
class BulkView(APIView):
name = _('Bulk')
swagger_topic = 'Bulk'
permission_classes = [IsAuthenticated]
renderer_classes = [
renderers.BrowsableAPIRenderer,
@@ -29,6 +34,7 @@ class BulkView(APIView):
'''List top level resources'''
data = OrderedDict()
data['host_create'] = reverse('api:bulk_host_create', request=request)
data['host_delete'] = reverse('api:bulk_host_delete', request=request)
data['job_launch'] = reverse('api:bulk_job_launch', request=request)
return Response(data)
@@ -67,3 +73,20 @@ class BulkHostCreateView(GenericAPIView):
result = serializer.create(serializer.validated_data)
return Response(result, status=status.HTTP_201_CREATED)
return Response(serializer.errors, status=status.HTTP_400_BAD_REQUEST)
class BulkHostDeleteView(GenericAPIView):
permission_classes = [IsAuthenticated]
model = Host
serializer_class = serializers.BulkHostDeleteSerializer
allowed_methods = ['GET', 'POST', 'OPTIONS']
def get(self, request):
return Response({"detail": "Bulk delete hosts with this endpoint"}, status=status.HTTP_200_OK)
def post(self, request):
serializer = serializers.BulkHostDeleteSerializer(data=request.data, context={'request': request})
if serializer.is_valid():
result = serializer.delete(serializer.validated_data)
return Response(result, status=status.HTTP_201_CREATED)
return Response(serializer.errors, status=status.HTTP_400_BAD_REQUEST)
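For context, a hedged sketch of calling the new bulk host delete endpoint from a client. The payload shape (host IDs under a "hosts" key) and the URL are assumptions inferred from the view and serializer names; BulkHostDeleteSerializer defines the real contract.

# Hypothetical client call; endpoint path and payload shape are assumptions.
import requests

resp = requests.post(
    "https://awx.example.org/api/v2/bulk/host_delete/",
    auth=("admin", "password"),        # or token-based auth headers
    json={"hosts": [1, 2, 3]},         # IDs of the hosts to remove in one request
    timeout=30,
)
print(resp.status_code, resp.json())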


@@ -6,6 +6,8 @@ import io
import ipaddress
import os
import tarfile
import time
import re
import asn1
from awx.api import serializers
@@ -40,6 +42,8 @@ RECEPTOR_OID = "1.3.6.1.4.1.2312.19.1"
# │ │ └── receptor.key
# │ └── work-public-key.pem
# └── requirements.yml
class InstanceInstallBundle(GenericAPIView):
name = _('Install Bundle')
model = models.Instance
@@ -49,56 +53,54 @@ class InstanceInstallBundle(GenericAPIView):
def get(self, request, *args, **kwargs):
instance_obj = self.get_object()
if instance_obj.node_type not in ('execution',):
if instance_obj.node_type not in ('execution', 'hop'):
return Response(
data=dict(msg=_('Install bundle can only be generated for execution nodes.')),
data=dict(msg=_('Install bundle can only be generated for execution or hop nodes.')),
status=status.HTTP_400_BAD_REQUEST,
)
with io.BytesIO() as f:
with tarfile.open(fileobj=f, mode='w:gz') as tar:
# copy /etc/receptor/tls/ca/receptor-ca.crt to receptor/tls/ca in the tar file
tar.add(
os.path.realpath('/etc/receptor/tls/ca/receptor-ca.crt'), arcname=f"{instance_obj.hostname}_install_bundle/receptor/tls/ca/receptor-ca.crt"
)
# copy /etc/receptor/tls/ca/mesh-CA.crt to receptor/tls/ca in the tar file
tar.add(os.path.realpath('/etc/receptor/tls/ca/mesh-CA.crt'), arcname=f"{instance_obj.hostname}_install_bundle/receptor/tls/ca/mesh-CA.crt")
# copy /etc/receptor/signing/work-public-key.pem to receptor/work-public-key.pem
tar.add('/etc/receptor/signing/work-public-key.pem', arcname=f"{instance_obj.hostname}_install_bundle/receptor/work-public-key.pem")
# copy /etc/receptor/work_public_key.pem to receptor/work_public_key.pem
tar.add('/etc/receptor/work_public_key.pem', arcname=f"{instance_obj.hostname}_install_bundle/receptor/work_public_key.pem")
# generate and write the receptor key to receptor/tls/receptor.key in the tar file
key, cert = generate_receptor_tls(instance_obj)
def tar_addfile(tarinfo, filecontent):
tarinfo.mtime = time.time()
tarinfo.size = len(filecontent)
tar.addfile(tarinfo, io.BytesIO(filecontent))
key_tarinfo = tarfile.TarInfo(f"{instance_obj.hostname}_install_bundle/receptor/tls/receptor.key")
key_tarinfo.size = len(key)
tar.addfile(key_tarinfo, io.BytesIO(key))
tar_addfile(key_tarinfo, key)
cert_tarinfo = tarfile.TarInfo(f"{instance_obj.hostname}_install_bundle/receptor/tls/receptor.crt")
cert_tarinfo.size = len(cert)
tar.addfile(cert_tarinfo, io.BytesIO(cert))
tar_addfile(cert_tarinfo, cert)
# generate and write install_receptor.yml to the tar file
playbook = generate_playbook().encode('utf-8')
playbook = generate_playbook(instance_obj).encode('utf-8')
playbook_tarinfo = tarfile.TarInfo(f"{instance_obj.hostname}_install_bundle/install_receptor.yml")
playbook_tarinfo.size = len(playbook)
tar.addfile(playbook_tarinfo, io.BytesIO(playbook))
tar_addfile(playbook_tarinfo, playbook)
# generate and write inventory.yml to the tar file
inventory_yml = generate_inventory_yml(instance_obj).encode('utf-8')
inventory_yml_tarinfo = tarfile.TarInfo(f"{instance_obj.hostname}_install_bundle/inventory.yml")
inventory_yml_tarinfo.size = len(inventory_yml)
tar.addfile(inventory_yml_tarinfo, io.BytesIO(inventory_yml))
tar_addfile(inventory_yml_tarinfo, inventory_yml)
# generate and write group_vars/all.yml to the tar file
group_vars = generate_group_vars_all_yml(instance_obj).encode('utf-8')
group_vars_tarinfo = tarfile.TarInfo(f"{instance_obj.hostname}_install_bundle/group_vars/all.yml")
group_vars_tarinfo.size = len(group_vars)
tar.addfile(group_vars_tarinfo, io.BytesIO(group_vars))
tar_addfile(group_vars_tarinfo, group_vars)
# generate and write requirements.yml to the tar file
requirements_yml = generate_requirements_yml().encode('utf-8')
requirements_yml_tarinfo = tarfile.TarInfo(f"{instance_obj.hostname}_install_bundle/requirements.yml")
requirements_yml_tarinfo.size = len(requirements_yml)
tar.addfile(requirements_yml_tarinfo, io.BytesIO(requirements_yml))
tar_addfile(requirements_yml_tarinfo, requirements_yml)
# respond with the tarfile
f.seek(0)
@@ -107,8 +109,10 @@ class InstanceInstallBundle(GenericAPIView):
return response
def generate_playbook():
return render_to_string("instance_install_bundle/install_receptor.yml")
def generate_playbook(instance_obj):
playbook_yaml = render_to_string("instance_install_bundle/install_receptor.yml", context=dict(instance=instance_obj))
# collapse consecutive newlines into a single newline
return re.sub(r'\n+', '\n', playbook_yaml)
def generate_requirements_yml():
@@ -120,7 +124,21 @@ def generate_inventory_yml(instance_obj):
def generate_group_vars_all_yml(instance_obj):
return render_to_string("instance_install_bundle/group_vars/all.yml", context=dict(instance=instance_obj))
# get peers
peers = []
for addr in instance_obj.peers.select_related('instance'):
peers.append(dict(address=addr.get_full_address(), protocol=addr.protocol))
context = dict(instance=instance_obj, peers=peers)
canonical_addr = instance_obj.canonical_address
if canonical_addr:
context['listener_port'] = canonical_addr.port
protocol = canonical_addr.protocol if canonical_addr.protocol != 'wss' else 'ws'
context['listener_protocol'] = protocol
all_yaml = render_to_string("instance_install_bundle/group_vars/all.yml", context=context)
# collapse consecutive newlines into a single newline
return re.sub(r'\n+', '\n', all_yaml)
def generate_receptor_tls(instance_obj):
@@ -161,14 +179,14 @@ def generate_receptor_tls(instance_obj):
.sign(key, hashes.SHA256())
)
# sign csr with the receptor ca key from /etc/receptor/ca/receptor-ca.key
with open('/etc/receptor/tls/ca/receptor-ca.key', 'rb') as f:
# sign csr with the receptor ca key from /etc/receptor/ca/mesh-CA.key
with open('/etc/receptor/tls/ca/mesh-CA.key', 'rb') as f:
ca_key = serialization.load_pem_private_key(
f.read(),
password=None,
)
with open('/etc/receptor/tls/ca/receptor-ca.crt', 'rb') as f:
with open('/etc/receptor/tls/ca/mesh-CA.crt', 'rb') as f:
ca_cert = x509.load_pem_x509_certificate(f.read())
cert = (
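The tar_addfile helper introduced above stamps an mtime and a size before adding in-memory bytes to the archive. A standalone sketch of the same pattern, not tied to the receptor bundle:

# Self-contained sketch: write generated content into a gzipped tar without
# touching the filesystem. TarInfo.size is mandatory; mtime avoids 1970 dates.
import io
import tarfile
import time

def build_bundle(files):
    buf = io.BytesIO()
    with tarfile.open(fileobj=buf, mode='w:gz') as tar:
        for name, content in files.items():
            info = tarfile.TarInfo(name)
            info.size = len(content)
            info.mtime = time.time()
            tar.addfile(info, io.BytesIO(content))
    buf.seek(0)
    return buf.read()

bundle = build_bundle({'bundle/inventory.yml': b'all:\n  hosts: {}\n'})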


@@ -152,6 +152,7 @@ class InventoryObjectRolesList(SubListAPIView):
serializer_class = RoleSerializer
parent_model = Inventory
search_fields = ('role_field', 'content_type__model')
deprecated = True
def get_queryset(self):
po = self.get_parent_object()


@@ -17,7 +17,7 @@ class MeshVisualizer(APIView):
def get(self, request, format=None):
data = {
'nodes': InstanceNodeSerializer(Instance.objects.all(), many=True).data,
'links': InstanceLinkSerializer(InstanceLink.objects.select_related('target', 'source'), many=True).data,
'links': InstanceLinkSerializer(InstanceLink.objects.select_related('target__instance', 'source'), many=True).data,
}
return Response(data)


@@ -50,7 +50,7 @@ class UnifiedJobDeletionMixin(object):
return Response({"error": _("Job has not finished processing events.")}, status=status.HTTP_400_BAD_REQUEST)
else:
# if it has been > 1 minute, events are probably lost
logger.warning('Allowing deletion of {} through the API without all events ' 'processed.'.format(obj.log_format))
logger.warning('Allowing deletion of {} through the API without all events processed.'.format(obj.log_format))
# Manually cascade delete events if unpartitioned job
if obj.has_unpartitioned_events:


@@ -226,6 +226,7 @@ class OrganizationObjectRolesList(SubListAPIView):
serializer_class = RoleSerializer
parent_model = Organization
search_fields = ('role_field', 'content_type__model')
deprecated = True
def get_queryset(self):
po = self.get_parent_object()


@@ -13,6 +13,7 @@ from django.utils.decorators import method_decorator
from django.views.decorators.csrf import ensure_csrf_cookie
from django.template.loader import render_to_string
from django.utils.translation import gettext_lazy as _
from django.urls import reverse as django_reverse
from rest_framework.permissions import AllowAny, IsAuthenticated
from rest_framework.response import Response
@@ -20,13 +21,14 @@ from rest_framework import status
import requests
from awx import MODE
from awx.api.generics import APIView
from awx.conf.registry import settings_registry
from awx.main.analytics import all_collectors
from awx.main.ha import is_ha_environment
from awx.main.utils import get_awx_version, get_custom_venv_choices
from awx.main.utils.licensing import validate_entitlement_manifest
from awx.api.versioning import reverse, drf_reverse
from awx.api.versioning import URLPathVersioning, is_optional_api_urlpattern_prefix_request, reverse, drf_reverse
from awx.main.constants import PRIVILEGE_ESCALATION_METHODS
from awx.main.models import Project, Organization, Instance, InstanceGroup, JobTemplate
from awx.main.utils import set_environ
@@ -38,22 +40,24 @@ logger = logging.getLogger('awx.api.views.root')
class ApiRootView(APIView):
permission_classes = (AllowAny,)
name = _('REST API')
versioning_class = None
versioning_class = URLPathVersioning
swagger_topic = 'Versioning'
@method_decorator(ensure_csrf_cookie)
def get(self, request, format=None):
'''List supported API versions'''
v2 = reverse('api:api_v2_root_view', kwargs={'version': 'v2'})
v2 = reverse('api:api_v2_root_view', request=request, kwargs={'version': 'v2'})
data = OrderedDict()
data['description'] = _('AWX REST API')
data['current_version'] = v2
data['available_versions'] = dict(v2=v2)
data['oauth2'] = drf_reverse('api:oauth_authorization_root_view')
if not is_optional_api_urlpattern_prefix_request(request):
data['oauth2'] = drf_reverse('api:oauth_authorization_root_view')
data['custom_logo'] = settings.CUSTOM_LOGO
data['custom_login_info'] = settings.CUSTOM_LOGIN_INFO
data['login_redirect_override'] = settings.LOGIN_REDIRECT_OVERRIDE
if MODE == 'development':
data['swagger'] = drf_reverse('api:schema-swagger-ui')
return Response(data)
@@ -81,6 +85,7 @@ class ApiVersionRootView(APIView):
data['ping'] = reverse('api:api_v2_ping_view', request=request)
data['instances'] = reverse('api:instance_list', request=request)
data['instance_groups'] = reverse('api:instance_group_list', request=request)
data['receptor_addresses'] = reverse('api:receptor_addresses_list', request=request)
data['config'] = reverse('api:api_v2_config_view', request=request)
data['settings'] = reverse('api:setting_category_list', request=request)
data['me'] = reverse('api:user_me_list', request=request)
@@ -104,8 +109,7 @@ class ApiVersionRootView(APIView):
data['groups'] = reverse('api:group_list', request=request)
data['hosts'] = reverse('api:host_list', request=request)
data['host_metrics'] = reverse('api:host_metric_list', request=request)
# It will be enabled in future version of the AWX
# data['host_metric_summary_monthly'] = reverse('api:host_metric_summary_monthly_list', request=request)
data['host_metric_summary_monthly'] = reverse('api:host_metric_summary_monthly_list', request=request)
data['job_templates'] = reverse('api:job_template_list', request=request)
data['jobs'] = reverse('api:job_list', request=request)
data['ad_hoc_commands'] = reverse('api:ad_hoc_command_list', request=request)
@@ -127,6 +131,10 @@ class ApiVersionRootView(APIView):
data['mesh_visualizer'] = reverse('api:mesh_visualizer_view', request=request)
data['bulk'] = reverse('api:bulk', request=request)
data['analytics'] = reverse('api:analytics_root_view', request=request)
data['service_index'] = django_reverse('service-index-root')
data['role_definitions'] = django_reverse('roledefinition-list')
data['role_user_assignments'] = django_reverse('roleuserassignment-list')
data['role_team_assignments'] = django_reverse('roleteamassignment-list')
return Response(data)


@@ -1,4 +1,4 @@
from hashlib import sha1
from hashlib import sha1, sha256
import hmac
import logging
import urllib.parse
@@ -99,14 +99,31 @@ class WebhookReceiverBase(APIView):
def get_signature(self):
raise NotImplementedError
def must_check_signature(self):
return True
def is_ignored_request(self):
return False
def check_signature(self, obj):
if not obj.webhook_key:
raise PermissionDenied
if not self.must_check_signature():
logger.debug("skipping signature validation")
return
mac = hmac.new(force_bytes(obj.webhook_key), msg=force_bytes(self.request.body), digestmod=sha1)
logger.debug("header signature: %s", self.get_signature())
hash_alg, expected_digest = self.get_signature()
if hash_alg == 'sha1':
mac = hmac.new(force_bytes(obj.webhook_key), msg=force_bytes(self.request.body), digestmod=sha1)
elif hash_alg == 'sha256':
mac = hmac.new(force_bytes(obj.webhook_key), msg=force_bytes(self.request.body), digestmod=sha256)
else:
logger.debug("Unsupported signature type, supported: sha1, sha256, received: {}".format(hash_alg))
raise PermissionDenied
logger.debug("header signature: %s", expected_digest)
logger.debug("calculated signature: %s", force_bytes(mac.hexdigest()))
if not hmac.compare_digest(force_bytes(mac.hexdigest()), self.get_signature()):
if not hmac.compare_digest(force_bytes(mac.hexdigest()), expected_digest):
raise PermissionDenied
@csrf_exempt
@@ -114,10 +131,14 @@ class WebhookReceiverBase(APIView):
# Ensure that the full contents of the request are captured for multiple uses.
request.body
logger.debug("headers: {}\n" "data: {}\n".format(request.headers, request.data))
logger.debug("headers: {}\ndata: {}\n".format(request.headers, request.data))
obj = self.get_object()
self.check_signature(obj)
if self.is_ignored_request():
# This was an ignored request type (e.g. ping), don't act on it
return Response({'message': _("Webhook ignored")}, status=status.HTTP_200_OK)
event_type = self.get_event_type()
event_guid = self.get_event_guid()
event_ref = self.get_event_ref()
@@ -186,7 +207,7 @@ class GithubWebhookReceiver(WebhookReceiverBase):
if hash_alg != 'sha1':
logger.debug("Unsupported signature type, expected: sha1, received: {}".format(hash_alg))
raise PermissionDenied
return force_bytes(signature)
return hash_alg, force_bytes(signature)
class GitlabWebhookReceiver(WebhookReceiverBase):
@@ -214,15 +235,73 @@ class GitlabWebhookReceiver(WebhookReceiverBase):
return "{}://{}/api/v4/projects/{}/statuses/{}".format(parsed.scheme, parsed.netloc, project['id'], self.get_event_ref())
def get_signature(self):
return force_bytes(self.request.META.get('HTTP_X_GITLAB_TOKEN') or '')
def check_signature(self, obj):
if not obj.webhook_key:
raise PermissionDenied
token_from_request = force_bytes(self.request.META.get('HTTP_X_GITLAB_TOKEN') or '')
# GitLab only returns the secret token, not an hmac hash. Use
# the hmac `compare_digest` helper function to prevent timing
# analysis by attackers.
if not hmac.compare_digest(force_bytes(obj.webhook_key), self.get_signature()):
if not hmac.compare_digest(force_bytes(obj.webhook_key), token_from_request):
raise PermissionDenied
class BitbucketDcWebhookReceiver(WebhookReceiverBase):
service = 'bitbucket_dc'
ref_keys = {
'repo:refs_changed': 'changes.0.toHash',
'mirror:repo_synchronized': 'changes.0.toHash',
'pr:opened': 'pullRequest.toRef.latestCommit',
'pr:from_ref_updated': 'pullRequest.toRef.latestCommit',
'pr:modified': 'pullRequest.toRef.latestCommit',
}
def get_event_type(self):
return self.request.META.get('HTTP_X_EVENT_KEY')
def get_event_guid(self):
return self.request.META.get('HTTP_X_REQUEST_ID')
def get_event_status_api(self):
# https://<bitbucket-base-url>/rest/build-status/1.0/commits/<commit-hash>
if self.get_event_type() not in self.ref_keys.keys():
return
if self.get_event_ref() is None:
return
any_url = None
if 'actor' in self.request.data:
any_url = self.request.data['actor'].get('links', {}).get('self')
if any_url is None and 'repository' in self.request.data:
any_url = self.request.data['repository'].get('links', {}).get('self')
if any_url is None:
return
any_url = any_url[0].get('href')
if any_url is None:
return
parsed = urllib.parse.urlparse(any_url)
return "{}://{}/rest/build-status/1.0/commits/{}".format(parsed.scheme, parsed.netloc, self.get_event_ref())
def is_ignored_request(self):
return self.get_event_type() not in [
'repo:refs_changed',
'mirror:repo_synchronized',
'pr:opened',
'pr:from_ref_updated',
'pr:modified',
]
def must_check_signature(self):
# Bitbucket does not sign ping requests...
return self.get_event_type() != 'diagnostics:ping'
def get_signature(self):
header_sig = self.request.META.get('HTTP_X_HUB_SIGNATURE')
if not header_sig:
logger.debug("Expected signature missing from header key HTTP_X_HUB_SIGNATURE")
raise PermissionDenied
hash_alg, signature = header_sig.split('=')
return hash_alg, force_bytes(signature)
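The sha256 support and the Bitbucket Data Center receiver above both use the same header scheme; a minimal sketch of how such a signature is produced and verified (secret and body are made up):

# Sender computes HMAC(secret, raw_body) and ships it as "<alg>=<hexdigest>";
# receiver recomputes it and compares with hmac.compare_digest to avoid timing leaks.
import hmac
from hashlib import sha256

webhook_key = b'shared-secret'
body = b'{"eventKey": "repo:refs_changed"}'

header = 'sha256=' + hmac.new(webhook_key, body, sha256).hexdigest()   # sender side

hash_alg, expected_digest = header.split('=')                          # receiver side
mac = hmac.new(webhook_key, body, sha256)
assert hash_alg == 'sha256'
assert hmac.compare_digest(mac.hexdigest(), expected_digest)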


@@ -14,7 +14,7 @@ class ConfConfig(AppConfig):
def ready(self):
self.module.autodiscover()
if not set(sys.argv) & {'migrate', 'check_migrations'}:
if not set(sys.argv) & {'migrate', 'check_migrations', 'showmigrations'}:
from .settings import SettingsWrapper
SettingsWrapper.initialize()

View File

@@ -55,6 +55,7 @@ register(
# Optional; category_slug will be slugified version of category if not
# explicitly provided.
category_slug='cows',
hidden=True,
)


@@ -61,6 +61,10 @@ class StringListBooleanField(ListField):
def to_representation(self, value):
try:
if isinstance(value, str):
# https://github.com/encode/django-rest-framework/commit/a180bde0fd965915718b070932418cabc831cee1
# DRF changed truthy and falsy lists to be capitalized
value = value.lower()
if isinstance(value, (list, tuple)):
return super(StringListBooleanField, self).to_representation(value)
elif value in BooleanField.TRUE_VALUES:
@@ -78,6 +82,8 @@ class StringListBooleanField(ListField):
def to_internal_value(self, data):
try:
if isinstance(data, str):
data = data.lower()
if isinstance(data, (list, tuple)):
return super(StringListBooleanField, self).to_internal_value(data)
elif data in BooleanField.TRUE_VALUES:


@@ -0,0 +1,17 @@
# Generated by Django 4.2 on 2023-06-09 19:51
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('conf', '0009_rename_proot_settings'),
]
operations = [
migrations.AlterField(
model_name='setting',
name='value',
field=models.JSONField(null=True),
),
]


@@ -7,9 +7,10 @@ import json
# Django
from django.db import models
from ansible_base.lib.utils.models import prevent_search
# AWX
from awx.main.fields import JSONBlob
from awx.main.models.base import CreatedModifiedModel, prevent_search
from awx.main.models.base import CreatedModifiedModel
from awx.main.utils import encrypt_field
from awx.conf import settings_registry
@@ -18,7 +19,7 @@ __all__ = ['Setting']
class Setting(CreatedModifiedModel):
key = models.CharField(max_length=255)
value = JSONBlob(null=True)
value = models.JSONField(null=True)
user = prevent_search(models.ForeignKey('auth.User', related_name='settings', default=None, null=True, editable=False, on_delete=models.CASCADE))
def __str__(self):


@@ -127,6 +127,8 @@ class SettingsRegistry(object):
encrypted = bool(field_kwargs.pop('encrypted', False))
defined_in_file = bool(field_kwargs.pop('defined_in_file', False))
unit = field_kwargs.pop('unit', None)
hidden = field_kwargs.pop('hidden', False)
warning_text = field_kwargs.pop('warning_text', None)
if getattr(field_kwargs.get('child', None), 'source', None) is not None:
field_kwargs['child'].source = None
field_instance = field_class(**field_kwargs)
@@ -134,12 +136,14 @@ class SettingsRegistry(object):
field_instance.category = category
field_instance.depends_on = depends_on
field_instance.unit = unit
field_instance.hidden = hidden
if placeholder is not empty:
field_instance.placeholder = placeholder
field_instance.defined_in_file = defined_in_file
if field_instance.defined_in_file:
field_instance.help_text = str(_('This value has been set manually in a settings file.')) + '\n\n' + str(field_instance.help_text)
field_instance.encrypted = encrypted
field_instance.warning_text = warning_text
original_field_instance = field_instance
if field_class != original_field_class:
original_field_instance = original_field_class(**field_kwargs)


@@ -1,6 +1,7 @@
# Python
import contextlib
import logging
import psycopg
import threading
import time
import os
@@ -13,7 +14,7 @@ from django.conf import settings, UserSettingsHolder
from django.core.cache import cache as django_cache
from django.core.exceptions import ImproperlyConfigured, SynchronousOnlyOperation
from django.db import transaction, connection
from django.db.utils import Error as DBError, ProgrammingError
from django.db.utils import DatabaseError, ProgrammingError
from django.utils.functional import cached_property
# Django REST Framework
@@ -80,18 +81,26 @@ def _ctit_db_wrapper(trans_safe=False):
logger.debug('Obtaining database settings in spite of broken transaction.')
transaction.set_rollback(False)
yield
except DBError as exc:
except ProgrammingError as e:
# Exception raised for programming errors
# Examples may be table not found or already exists,
# this generally means we can't fetch Tower configuration
# because the database hasn't actually finished migrating yet;
# this is usually a sign that a service in a container (such as ws_broadcast)
# has come up *before* the database has finished migrating, and
# especially that the conf.settings table doesn't exist yet
# syntax error in the SQL statement, wrong number of parameters specified, etc.
if trans_safe:
level = logger.warning
if isinstance(exc, ProgrammingError):
if 'relation' in str(exc) and 'does not exist' in str(exc):
# this generally means we can't fetch Tower configuration
# because the database hasn't actually finished migrating yet;
# this is usually a sign that a service in a container (such as ws_broadcast)
# has come up *before* the database has finished migrating, and
# especially that the conf.settings table doesn't exist yet
level = logger.debug
level(f'Database settings are not available, using defaults. error: {str(exc)}')
logger.debug(f'Database settings are not available, using defaults. error: {str(e)}')
else:
logger.exception('Error modifying something related to database settings.')
except DatabaseError as e:
if trans_safe:
cause = e.__cause__
if cause and hasattr(cause, 'sqlstate'):
sqlstate = cause.sqlstate
sqlstate_str = psycopg.errors.lookup(sqlstate)
logger.error('SQL Error state: {} - {}'.format(sqlstate, sqlstate_str))
else:
logger.exception('Error modifying something related to database settings.')
finally:
@@ -418,6 +427,10 @@ class SettingsWrapper(UserSettingsHolder):
"""Get value while accepting the in-memory cache if key is available"""
with _ctit_db_wrapper(trans_safe=True):
return self._get_local(name)
# If the last line did not return, that means we hit a database error
# in that case, we should not have a local cache value
# thus, return empty as a signal to use the default
return empty
def __getattr__(self, name):
value = empty
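The new DatabaseError branch above maps the SQLSTATE carried by the underlying psycopg 3 exception back to a named error class; a tiny sketch of that lookup on its own:

# psycopg 3 exposes SQLSTATE codes and a lookup() back to exception classes.
import psycopg

sqlstate = '42P01'                          # "relation does not exist"
exc_class = psycopg.errors.lookup(sqlstate)
print(sqlstate, exc_class.__name__)         # 42P01 UndefinedTable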


@@ -35,7 +35,7 @@ class TestStringListBooleanField:
field = StringListBooleanField()
with pytest.raises(ValidationError) as e:
field.to_internal_value(value)
assert e.value.detail[0] == "Expected None, True, False, a string or list " "of strings but got {} instead.".format(type(value))
assert e.value.detail[0] == "Expected None, True, False, a string or list of strings but got {} instead.".format(type(value))
@pytest.mark.parametrize("value_in, value_known", FIELD_VALUES)
def test_to_representation_valid(self, value_in, value_known):
@@ -48,7 +48,7 @@ class TestStringListBooleanField:
field = StringListBooleanField()
with pytest.raises(ValidationError) as e:
field.to_representation(value)
assert e.value.detail[0] == "Expected None, True, False, a string or list " "of strings but got {} instead.".format(type(value))
assert e.value.detail[0] == "Expected None, True, False, a string or list of strings but got {} instead.".format(type(value))
class TestListTuplesField:
@@ -67,7 +67,7 @@ class TestListTuplesField:
field = ListTuplesField()
with pytest.raises(ValidationError) as e:
field.to_internal_value(value)
assert e.value.detail[0] == "Expected a list of tuples of max length 2 " "but got {} instead.".format(t)
assert e.value.detail[0] == "Expected a list of tuples of max length 2 but got {} instead.".format(t)
class TestStringListPathField:


@@ -13,6 +13,7 @@ from unittest import mock
from django.conf import LazySettings
from django.core.cache.backends.locmem import LocMemCache
from django.core.exceptions import ImproperlyConfigured
from django.db.utils import Error as DBError
from django.utils.translation import gettext_lazy as _
import pytest
@@ -331,3 +332,18 @@ def test_in_memory_cache_works(settings):
with mock.patch.object(settings, '_get_local') as mock_get:
assert settings.AWX_VAR == 'DEFAULT'
mock_get.assert_not_called()
@pytest.mark.defined_in_file(AWX_VAR=[])
def test_getattr_with_database_error(settings):
"""
If a setting is defined via the registry and has a null-ish default which is not None
then referencing that setting during a database outage should give that default
this is regression testing for a bug where it would return None
"""
settings.registry.register('AWX_VAR', field_class=fields.StringListField, default=[], category=_('System'), category_slug='system')
settings._awx_conf_memoizedcache.clear()
with mock.patch('django.db.backends.base.base.BaseDatabaseWrapper.ensure_connection') as mock_ensure:
mock_ensure.side_effect = DBError('for test')
assert settings.AWX_VAR == []


@@ -20,11 +20,15 @@ from rest_framework.exceptions import ParseError, PermissionDenied
# Django OAuth Toolkit
from awx.main.models.oauth import OAuth2Application, OAuth2AccessToken
# django-ansible-base
from ansible_base.lib.utils.validation import to_python_boolean
from ansible_base.rbac.models import RoleEvaluation
from ansible_base.rbac import permission_registry
# AWX
from awx.main.utils import (
get_object_or_400,
get_pk_from_dict,
to_python_boolean,
get_licenser,
)
from awx.main.models import (
@@ -56,6 +60,7 @@ from awx.main.models import (
Project,
ProjectUpdate,
ProjectUpdateEvent,
ReceptorAddress,
Role,
Schedule,
SystemJob,
@@ -70,8 +75,6 @@ from awx.main.models import (
WorkflowJobTemplateNode,
WorkflowApproval,
WorkflowApprovalTemplate,
ROLE_SINGLETON_SYSTEM_ADMINISTRATOR,
ROLE_SINGLETON_SYSTEM_AUDITOR,
)
from awx.main.models.mixins import ResourceMixin
@@ -79,7 +82,6 @@ __all__ = [
'get_user_queryset',
'check_user_access',
'check_user_access_with_errors',
'user_accessible_objects',
'consumer_access',
]
@@ -136,10 +138,6 @@ def register_access(model_class, access_class):
access_registry[model_class] = access_class
def user_accessible_objects(user, role_name):
return ResourceMixin._accessible_objects(User, user, role_name)
def get_user_queryset(user, model_class):
"""
Return a queryset for the given model_class containing only the instances
@@ -267,7 +265,11 @@ class BaseAccess(object):
return self.can_change(obj, data)
def can_delete(self, obj):
return self.user.is_superuser
if self.user.is_superuser:
return True
if obj._meta.model_name in [cls._meta.model_name for cls in permission_registry.all_registered_models]:
return self.user.has_obj_perm(obj, 'delete')
return False
def can_copy(self, obj):
return self.can_add({'reference_obj': obj})
@@ -366,9 +368,9 @@ class BaseAccess(object):
report_violation = lambda message: None
else:
report_violation = lambda message: logger.warning(message)
if validation_info.get('trial', False) is True or validation_info['instance_count'] == 10: # basic 10 license
if validation_info.get('trial', False) is True:
def report_violation(message):
def report_violation(message): # noqa
raise PermissionDenied(message)
if check_expiration and validation_info.get('time_remaining', None) is None:
@@ -642,7 +644,10 @@ class UserAccess(BaseAccess):
"""
model = User
prefetch_related = ('profile',)
prefetch_related = (
'profile',
'resource',
)
def filtered_queryset(self):
if settings.ORG_ADMINS_CAN_SEE_ALL_USERS and (self.user.admin_of_organizations.exists() or self.user.auditor_of_organizations.exists()):
@@ -651,9 +656,7 @@ class UserAccess(BaseAccess):
qs = (
User.objects.filter(pk__in=Organization.accessible_objects(self.user, 'read_role').values('member_role__members'))
| User.objects.filter(pk=self.user.id)
| User.objects.filter(
pk__in=Role.objects.filter(singleton_name__in=[ROLE_SINGLETON_SYSTEM_ADMINISTRATOR, ROLE_SINGLETON_SYSTEM_AUDITOR]).values('members')
)
| User.objects.filter(is_superuser=True)
).distinct()
return qs
@@ -711,6 +714,15 @@ class UserAccess(BaseAccess):
if not allow_orphans:
# in these cases only superusers can modify orphan users
return False
if settings.ANSIBLE_BASE_ROLE_SYSTEM_ACTIVATED:
# Permission granted if the user has all permissions that the target user has
target_perms = set(
RoleEvaluation.objects.filter(role__in=obj.has_roles.all()).values_list('object_id', 'content_type_id', 'codename').distinct()
)
user_perms = set(
RoleEvaluation.objects.filter(role__in=self.user.has_roles.all()).values_list('object_id', 'content_type_id', 'codename').distinct()
)
return not (target_perms - user_perms)
return not obj.roles.all().exclude(ancestors__in=self.user.roles.all()).exists()
else:
return self.is_all_org_admin(obj)
@@ -838,6 +850,7 @@ class OrganizationAccess(NotificationAttachMixin, BaseAccess):
prefetch_related = (
'created_by',
'modified_by',
'resource', # dab_resource_registry
)
# organization admin_role is not a parent of organization auditor_role
notification_attach_roles = ['admin_role', 'auditor_role']
@@ -948,9 +961,6 @@ class InventoryAccess(BaseAccess):
def can_update(self, obj):
return self.user in obj.update_role
def can_delete(self, obj):
return self.can_admin(obj, None)
def can_run_ad_hoc_commands(self, obj):
return self.user in obj.adhoc_role
@@ -1306,6 +1316,7 @@ class TeamAccess(BaseAccess):
'created_by',
'modified_by',
'organization',
'resource', # dab_resource_registry
)
def filtered_queryset(self):
@@ -1403,8 +1414,12 @@ class ExecutionEnvironmentAccess(BaseAccess):
def can_change(self, obj, data):
if obj and obj.organization_id is None:
raise PermissionDenied
if self.user not in obj.organization.execution_environment_admin_role:
raise PermissionDenied
if settings.ANSIBLE_BASE_ROLE_SYSTEM_ACTIVATED:
if not self.user.has_obj_perm(obj, 'change'):
raise PermissionDenied
else:
if self.user not in obj.organization.execution_environment_admin_role:
raise PermissionDenied
if data and 'organization' in data:
new_org = get_object_from_data('organization', Organization, data, obj=obj)
if not new_org or self.user not in new_org.execution_environment_admin_role:
@@ -2234,7 +2249,7 @@ class WorkflowJobAccess(BaseAccess):
if not node_access.can_add({'reference_obj': node}):
wj_add_perm = False
if not wj_add_perm and self.save_messages:
self.messages['workflow_job_template'] = _('You do not have permission to the workflow job ' 'resources required for relaunch.')
self.messages['workflow_job_template'] = _('You do not have permission to the workflow job resources required for relaunch.')
return wj_add_perm
def can_cancel(self, obj):
@@ -2434,6 +2449,29 @@ class InventoryUpdateEventAccess(BaseAccess):
return False
class ReceptorAddressAccess(BaseAccess):
"""
I can see receptor address records whenever I can access the instance
"""
model = ReceptorAddress
def filtered_queryset(self):
return self.model.objects.filter(Q(instance__in=Instance.accessible_pk_qs(self.user, 'read_role')))
@check_superuser
def can_add(self, data):
return False
@check_superuser
def can_change(self, obj, data):
return False
@check_superuser
def can_delete(self, obj):
return False
class SystemJobEventAccess(BaseAccess):
"""
I can only see/manage System Job events if I'm a super user
@@ -2567,6 +2605,8 @@ class ScheduleAccess(UnifiedCredentialsMixin, BaseAccess):
if not JobLaunchConfigAccess(self.user).can_add(data):
return False
if not data:
if settings.ANSIBLE_BASE_ROLE_SYSTEM_ACTIVATED:
return self.user.has_roles.filter(permission_partials__codename__in=['execute_jobtemplate', 'update_project', 'update_inventory']).exists()
return Role.objects.filter(role_field__in=['update_role', 'execute_role'], ancestors__in=self.user.roles.all()).exists()
return self.check_related('unified_job_template', UnifiedJobTemplate, data, role_field='execute_role', mandatory=True)
@@ -2595,6 +2635,8 @@ class NotificationTemplateAccess(BaseAccess):
prefetch_related = ('created_by', 'modified_by', 'organization')
def filtered_queryset(self):
if settings.ANSIBLE_BASE_ROLE_SYSTEM_ACTIVATED:
return self.model.access_qs(self.user, 'view')
return self.model.objects.filter(
Q(organization__in=Organization.accessible_objects(self.user, 'notification_admin_role')) | Q(organization__in=self.user.auditor_of_organizations)
).distinct()
@@ -2763,7 +2805,7 @@ class ActivityStreamAccess(BaseAccess):
| Q(notification_template__organization__in=auditing_orgs)
| Q(notification__notification_template__organization__in=auditing_orgs)
| Q(label__organization__in=auditing_orgs)
| Q(role__in=Role.objects.filter(ancestors__in=self.user.roles.all()) if auditing_orgs else [])
| Q(role__in=Role.visible_roles(self.user) if auditing_orgs else [])
)
project_set = Project.accessible_pk_qs(self.user, 'read_role')
@@ -2820,13 +2862,10 @@ class RoleAccess(BaseAccess):
def filtered_queryset(self):
result = Role.visible_roles(self.user)
# Sanity check: is the requesting user an orphaned non-admin/auditor?
# if yes, make system admin/auditor mandatorily visible.
if not self.user.is_superuser and not self.user.is_system_auditor and not self.user.organizations.exists():
mandatories = ('system_administrator', 'system_auditor')
super_qs = Role.objects.filter(singleton_name__in=mandatories)
result = result | super_qs
return result
# Make system admin/auditor mandatorily visible.
mandatories = ('system_administrator', 'system_auditor')
super_qs = Role.objects.filter(singleton_name__in=mandatories)
return result | super_qs
def can_add(self, obj, data):
# Unsupported for now
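The RoleEvaluation-based branch added to UserAccess.can_change earlier in this file reduces to a tuple-set superset test; an illustration with made-up (object_id, content_type_id, codename) tuples:

# The edit is allowed only when the target user holds no permission tuple that
# the requesting user lacks.
target_perms = {(42, 7, 'view_inventory'), (42, 7, 'change_inventory')}
user_perms = {(42, 7, 'view_inventory'), (42, 7, 'change_inventory'), (1, 3, 'execute_jobtemplate')}

can_change = not (target_perms - user_perms)
print(can_change)   # True: the target's permissions are a subset of ours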


@@ -2,7 +2,7 @@
import logging
# AWX
from awx.main.analytics.subsystem_metrics import Metrics
from awx.main.analytics.subsystem_metrics import DispatcherMetrics, CallbackReceiverMetrics
from awx.main.dispatch.publish import task
from awx.main.dispatch import get_task_queuename
@@ -11,4 +11,5 @@ logger = logging.getLogger('awx.main.scheduler')
@task(queue=get_task_queuename)
def send_subsystem_metrics():
Metrics().send_metrics()
DispatcherMetrics().send_metrics()
CallbackReceiverMetrics().send_metrics()


@@ -399,7 +399,10 @@ def _copy_table(table, query, path):
file_path = os.path.join(path, table + '_table.csv')
file = FileSplitter(filespec=file_path)
with connection.cursor() as cursor:
cursor.copy_expert(query, file)
with cursor.copy(query) as copy:
while data := copy.read():
byte_data = bytes(data)
file.write(byte_data.decode())
return file.file_list()
@@ -416,7 +419,7 @@ def _events_table(since, full_path, until, tbl, where_column, project_job_create
resolved_action,
resolved_role,
-- '-' operator listed here:
-- https://www.postgresql.org/docs/12/functions-json.html
-- https://www.postgresql.org/docs/15/functions-json.html
-- note that operator is only supported by jsonb objects
-- https://www.postgresql.org/docs/current/datatype-json.html
(CASE WHEN event = 'playbook_on_stats' THEN {event_data} - 'artifact_data' END) as playbook_on_stats,
@@ -610,3 +613,20 @@ def host_metric_table(since, full_path, until, **kwargs):
since.isoformat(), until.isoformat(), since.isoformat(), until.isoformat()
)
return _copy_table(table='host_metric', query=host_metric_query, path=full_path)
@register('host_metric_summary_monthly_table', '1.0', format='csv', description=_('HostMetricSummaryMonthly export, full sync'), expensive=trivial_slicing)
def host_metric_summary_monthly_table(since, full_path, **kwargs):
query = '''
COPY (SELECT main_hostmetricsummarymonthly.id,
main_hostmetricsummarymonthly.date,
main_hostmetricsummarymonthly.license_capacity,
main_hostmetricsummarymonthly.license_consumed,
main_hostmetricsummarymonthly.hosts_added,
main_hostmetricsummarymonthly.hosts_deleted,
main_hostmetricsummarymonthly.indirectly_managed_hosts
FROM main_hostmetricsummarymonthly
ORDER BY main_hostmetricsummarymonthly.id ASC) TO STDOUT WITH CSV HEADER
'''
return _copy_table(table='host_metric_summary_monthly', query=query, path=full_path)
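psycopg 3 has no copy_expert(), which is why _copy_table above switches to cursor.copy() and drains it chunk by chunk. A hedged sketch of the same streaming pattern against an assumed AWX database (connection string and output path are illustrative):

# Stream a COPY ... TO STDOUT result to a local file in chunks.
import psycopg

query = "COPY (SELECT id, hostname FROM main_instance ORDER BY id) TO STDOUT WITH CSV HEADER"

with psycopg.connect("dbname=awx") as conn:
    with conn.cursor() as cursor:
        with cursor.copy(query) as copy, open('instances.csv', 'w') as out:
            while data := copy.read():          # returns empty bytes once the stream ends
                out.write(bytes(data).decode())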


@@ -1,10 +1,15 @@
import itertools
import redis
import json
import time
import logging
import prometheus_client
from prometheus_client.core import GaugeMetricFamily, HistogramMetricFamily
from prometheus_client.registry import CollectorRegistry
from django.conf import settings
from django.apps import apps
from django.http import HttpRequest
from rest_framework.request import Request
from awx.main.consumers import emit_channel_notification
from awx.main.utils import is_testing
@@ -13,6 +18,30 @@ root_key = settings.SUBSYSTEM_METRICS_REDIS_KEY_PREFIX
logger = logging.getLogger('awx.main.analytics')
class MetricsNamespace:
def __init__(self, namespace):
self._namespace = namespace
class MetricsServerSettings(MetricsNamespace):
def port(self):
return settings.METRICS_SUBSYSTEM_CONFIG['server'][self._namespace]['port']
class MetricsServer(MetricsServerSettings):
def __init__(self, namespace, registry):
MetricsNamespace.__init__(self, namespace)
self._registry = registry
def start(self):
try:
# TODO: addr for ipv6 ?
prometheus_client.start_http_server(self.port(), addr='localhost', registry=self._registry)
except Exception:
logger.error(f"MetricsServer failed to start for service '{self._namespace}.")
raise
class BaseM:
def __init__(self, field, help_text):
self.field = field
@@ -148,71 +177,40 @@ class HistogramM(BaseM):
return output_text
class Metrics:
def __init__(self, auto_pipe_execute=False, instance_name=None):
class Metrics(MetricsNamespace):
# metric name, help_text
METRICSLIST = []
_METRICSLIST = [
FloatM('subsystem_metrics_pipe_execute_seconds', 'Time spent saving metrics to redis'),
IntM('subsystem_metrics_pipe_execute_calls', 'Number of calls to pipe_execute'),
FloatM('subsystem_metrics_send_metrics_seconds', 'Time spent sending metrics to other nodes'),
]
def __init__(self, namespace, auto_pipe_execute=False, instance_name=None, metrics_have_changed=True, **kwargs):
MetricsNamespace.__init__(self, namespace)
self.pipe = redis.Redis.from_url(settings.BROKER_URL).pipeline()
self.conn = redis.Redis.from_url(settings.BROKER_URL)
self.last_pipe_execute = time.time()
# track if metrics have been modified since last saved to redis
# start with True so that we get an initial save to redis
self.metrics_have_changed = True
self.metrics_have_changed = metrics_have_changed
self.pipe_execute_interval = settings.SUBSYSTEM_METRICS_INTERVAL_SAVE_TO_REDIS
self.send_metrics_interval = settings.SUBSYSTEM_METRICS_INTERVAL_SEND_METRICS
# auto pipe execute will commit transaction of metric data to redis
# at a regular interval (pipe_execute_interval). If set to False,
# the calling function should call .pipe_execute() explicitly
self.auto_pipe_execute = auto_pipe_execute
Instance = apps.get_model('main', 'Instance')
if instance_name:
self.instance_name = instance_name
elif is_testing():
self.instance_name = "awx_testing"
else:
self.instance_name = Instance.objects.my_hostname()
self.instance_name = settings.CLUSTER_HOST_ID # Same as Instance.objects.my_hostname() BUT we do not need to import Instance
# metric name, help_text
METRICSLIST = [
SetIntM('callback_receiver_events_queue_size_redis', 'Current number of events in redis queue'),
IntM('callback_receiver_events_popped_redis', 'Number of events popped from redis'),
IntM('callback_receiver_events_in_memory', 'Current number of events in memory (in transfer from redis to db)'),
IntM('callback_receiver_batch_events_errors', 'Number of times batch insertion failed'),
FloatM('callback_receiver_events_insert_db_seconds', 'Total time spent saving events to database'),
IntM('callback_receiver_events_insert_db', 'Number of events batch inserted into database'),
IntM('callback_receiver_events_broadcast', 'Number of events broadcast to other control plane nodes'),
HistogramM(
'callback_receiver_batch_events_insert_db', 'Number of events batch inserted into database', settings.SUBSYSTEM_METRICS_BATCH_INSERT_BUCKETS
),
SetFloatM('callback_receiver_event_processing_avg_seconds', 'Average processing time per event per callback receiver batch'),
FloatM('subsystem_metrics_pipe_execute_seconds', 'Time spent saving metrics to redis'),
IntM('subsystem_metrics_pipe_execute_calls', 'Number of calls to pipe_execute'),
FloatM('subsystem_metrics_send_metrics_seconds', 'Time spent sending metrics to other nodes'),
SetFloatM('task_manager_get_tasks_seconds', 'Time spent in loading tasks from db'),
SetFloatM('task_manager_start_task_seconds', 'Time spent starting task'),
SetFloatM('task_manager_process_running_tasks_seconds', 'Time spent processing running tasks'),
SetFloatM('task_manager_process_pending_tasks_seconds', 'Time spent processing pending tasks'),
SetFloatM('task_manager__schedule_seconds', 'Time spent in running the entire _schedule'),
IntM('task_manager__schedule_calls', 'Number of calls to _schedule, after lock is acquired'),
SetFloatM('task_manager_recorded_timestamp', 'Unix timestamp when metrics were last recorded'),
SetIntM('task_manager_tasks_started', 'Number of tasks started'),
SetIntM('task_manager_running_processed', 'Number of running tasks processed'),
SetIntM('task_manager_pending_processed', 'Number of pending tasks processed'),
SetIntM('task_manager_tasks_blocked', 'Number of tasks blocked from running'),
SetFloatM('task_manager_commit_seconds', 'Time spent in db transaction, including on_commit calls'),
SetFloatM('dependency_manager_get_tasks_seconds', 'Time spent loading pending tasks from db'),
SetFloatM('dependency_manager_generate_dependencies_seconds', 'Time spent generating dependencies for pending tasks'),
SetFloatM('dependency_manager__schedule_seconds', 'Time spent in running the entire _schedule'),
IntM('dependency_manager__schedule_calls', 'Number of calls to _schedule, after lock is acquired'),
SetFloatM('dependency_manager_recorded_timestamp', 'Unix timestamp when metrics were last recorded'),
SetIntM('dependency_manager_pending_processed', 'Number of pending tasks processed'),
SetFloatM('workflow_manager__schedule_seconds', 'Time spent in running the entire _schedule'),
IntM('workflow_manager__schedule_calls', 'Number of calls to _schedule, after lock is acquired'),
SetFloatM('workflow_manager_recorded_timestamp', 'Unix timestamp when metrics were last recorded'),
SetFloatM('workflow_manager_spawn_workflow_graph_jobs_seconds', 'Time spent spawning workflow tasks'),
SetFloatM('workflow_manager_get_tasks_seconds', 'Time spent loading workflow tasks from db'),
]
# turn metric list into dictionary with the metric name as a key
self.METRICS = {}
for m in METRICSLIST:
for m in itertools.chain(self.METRICSLIST, self._METRICSLIST):
self.METRICS[m.field] = m
# track last time metrics were sent to other nodes
@@ -225,7 +223,7 @@ class Metrics:
m.reset_value(self.conn)
self.metrics_have_changed = True
self.conn.delete(root_key + "_lock")
for m in self.conn.scan_iter(root_key + '_instance_*'):
for m in self.conn.scan_iter(root_key + '-' + self._namespace + '_instance_*'):
self.conn.delete(m)
def inc(self, field, value):
@@ -292,7 +290,7 @@ class Metrics:
def send_metrics(self):
# more than one thread could be calling this at the same time, so should
# acquire redis lock before sending metrics
lock = self.conn.lock(root_key + '_lock')
lock = self.conn.lock(root_key + '-' + self._namespace + '_lock')
if not lock.acquire(blocking=False):
return
try:
@@ -302,9 +300,10 @@ class Metrics:
payload = {
'instance': self.instance_name,
'metrics': serialized_metrics,
'metrics_namespace': self._namespace,
}
# store the serialized data locally as well, so that load_other_metrics will read it
self.conn.set(root_key + '_instance_' + self.instance_name, serialized_metrics)
self.conn.set(root_key + '-' + self._namespace + '_instance_' + self.instance_name, serialized_metrics)
emit_channel_notification("metrics", payload)
self.previous_send_metrics.set(current_time)
@@ -326,14 +325,14 @@ class Metrics:
instances_filter = request.query_params.getlist("node")
# get a sorted list of instance names
instance_names = [self.instance_name]
for m in self.conn.scan_iter(root_key + '_instance_*'):
for m in self.conn.scan_iter(root_key + '-' + self._namespace + '_instance_*'):
instance_names.append(m.decode('UTF-8').split('_instance_')[1])
instance_names.sort()
# load data, including data from the this local instance
instance_data = {}
for instance in instance_names:
if len(instances_filter) == 0 or instance in instances_filter:
instance_data_from_redis = self.conn.get(root_key + '_instance_' + instance)
instance_data_from_redis = self.conn.get(root_key + '-' + self._namespace + '_instance_' + instance)
# data from other instances may not be available. That is OK.
if instance_data_from_redis:
instance_data[instance] = json.loads(instance_data_from_redis.decode('UTF-8'))
@@ -352,6 +351,120 @@ class Metrics:
return output_text
class DispatcherMetrics(Metrics):
METRICSLIST = [
SetFloatM('task_manager_get_tasks_seconds', 'Time spent in loading tasks from db'),
SetFloatM('task_manager_start_task_seconds', 'Time spent starting task'),
SetFloatM('task_manager_process_running_tasks_seconds', 'Time spent processing running tasks'),
SetFloatM('task_manager_process_pending_tasks_seconds', 'Time spent processing pending tasks'),
SetFloatM('task_manager__schedule_seconds', 'Time spent in running the entire _schedule'),
IntM('task_manager__schedule_calls', 'Number of calls to _schedule, after lock is acquired'),
SetFloatM('task_manager_recorded_timestamp', 'Unix timestamp when metrics were last recorded'),
SetIntM('task_manager_tasks_started', 'Number of tasks started'),
SetIntM('task_manager_running_processed', 'Number of running tasks processed'),
SetIntM('task_manager_pending_processed', 'Number of pending tasks processed'),
SetIntM('task_manager_tasks_blocked', 'Number of tasks blocked from running'),
SetFloatM('task_manager_commit_seconds', 'Time spent in db transaction, including on_commit calls'),
SetFloatM('dependency_manager_get_tasks_seconds', 'Time spent loading pending tasks from db'),
SetFloatM('dependency_manager_generate_dependencies_seconds', 'Time spent generating dependencies for pending tasks'),
SetFloatM('dependency_manager__schedule_seconds', 'Time spent in running the entire _schedule'),
IntM('dependency_manager__schedule_calls', 'Number of calls to _schedule, after lock is acquired'),
SetFloatM('dependency_manager_recorded_timestamp', 'Unix timestamp when metrics were last recorded'),
SetIntM('dependency_manager_pending_processed', 'Number of pending tasks processed'),
SetFloatM('workflow_manager__schedule_seconds', 'Time spent in running the entire _schedule'),
IntM('workflow_manager__schedule_calls', 'Number of calls to _schedule, after lock is acquired'),
SetFloatM('workflow_manager_recorded_timestamp', 'Unix timestamp when metrics were last recorded'),
SetFloatM('workflow_manager_spawn_workflow_graph_jobs_seconds', 'Time spent spawning workflow tasks'),
SetFloatM('workflow_manager_get_tasks_seconds', 'Time spent loading workflow tasks from db'),
# dispatcher subsystem metrics
SetIntM('dispatcher_pool_scale_up_events', 'Number of times local dispatcher scaled up a worker since startup'),
SetIntM('dispatcher_pool_active_task_count', 'Number of active tasks in the worker pool when last task was submitted'),
SetIntM('dispatcher_pool_max_worker_count', 'Highest number of workers in worker pool in last collection interval, about 20s'),
SetFloatM('dispatcher_availability', 'Fraction of time (in last collection interval) dispatcher was able to receive messages'),
]
def __init__(self, *args, **kwargs):
super().__init__(settings.METRICS_SERVICE_DISPATCHER, *args, **kwargs)
class CallbackReceiverMetrics(Metrics):
METRICSLIST = [
SetIntM('callback_receiver_events_queue_size_redis', 'Current number of events in redis queue'),
IntM('callback_receiver_events_popped_redis', 'Number of events popped from redis'),
IntM('callback_receiver_events_in_memory', 'Current number of events in memory (in transfer from redis to db)'),
IntM('callback_receiver_batch_events_errors', 'Number of times batch insertion failed'),
FloatM('callback_receiver_events_insert_db_seconds', 'Total time spent saving events to database'),
IntM('callback_receiver_events_insert_db', 'Number of events batch inserted into database'),
IntM('callback_receiver_events_broadcast', 'Number of events broadcast to other control plane nodes'),
HistogramM(
'callback_receiver_batch_events_insert_db', 'Number of events batch inserted into database', settings.SUBSYSTEM_METRICS_BATCH_INSERT_BUCKETS
),
SetFloatM('callback_receiver_event_processing_avg_seconds', 'Average processing time per event per callback receiver batch'),
]
def __init__(self, *args, **kwargs):
super().__init__(settings.METRICS_SERVICE_CALLBACK_RECEIVER, *args, **kwargs)
def metrics(request):
m = Metrics()
return m.generate_metrics(request)
output_text = ''
for m in [DispatcherMetrics(), CallbackReceiverMetrics()]:
output_text += m.generate_metrics(request)
return output_text
class CustomToPrometheusMetricsCollector(prometheus_client.registry.Collector):
"""
Takes the metric data from redis -> our custom metric fields -> prometheus
library metric fields.
The plan is to get rid of the use of redis, our custom metric fields, and
to switch fully to the prometheus library. At that point, this translation
code will be deleted.
"""
def __init__(self, metrics_obj, *args, **kwargs):
super().__init__(*args, **kwargs)
self._metrics = metrics_obj
def collect(self):
my_hostname = settings.CLUSTER_HOST_ID
instance_data = self._metrics.load_other_metrics(Request(HttpRequest()))
if not instance_data:
logger.debug(f"No metric data not found in redis for metric namespace '{self._metrics._namespace}'")
return None
host_metrics = instance_data.get(my_hostname)
for _, metric in self._metrics.METRICS.items():
entry = host_metrics.get(metric.field)
if not entry:
logger.debug(f"{self._metrics._namespace} metric '{metric.field}' not found in redis data payload {json.dumps(instance_data, indent=2)}")
continue
if isinstance(metric, HistogramM):
buckets = list(zip(metric.buckets, entry['counts']))
buckets = [[str(i[0]), str(i[1])] for i in buckets]
yield HistogramMetricFamily(metric.field, metric.help_text, buckets=buckets, sum_value=entry['sum'])
else:
yield GaugeMetricFamily(metric.field, metric.help_text, value=entry)
class CallbackReceiverMetricsServer(MetricsServer):
def __init__(self):
registry = CollectorRegistry(auto_describe=True)
registry.register(CustomToPrometheusMetricsCollector(DispatcherMetrics(metrics_have_changed=False)))
super().__init__(settings.METRICS_SERVICE_CALLBACK_RECEIVER, registry)
class DispatcherMetricsServer(MetricsServer):
def __init__(self):
registry = CollectorRegistry(auto_describe=True)
registry.register(CustomToPrometheusMetricsCollector(CallbackReceiverMetrics(metrics_have_changed=False)))
super().__init__(settings.METRICS_SERVICE_DISPATCHER, registry)
class WebsocketsMetricsServer(MetricsServer):
def __init__(self):
registry = CollectorRegistry(auto_describe=True)
# registry.register()
super().__init__(settings.METRICS_SERVICE_WEBSOCKETS, registry)
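The collector and server classes above follow the standard prometheus_client pattern of a registry plus a custom Collector; a self-contained sketch of that pattern with a dummy data source:

# A Collector that yields Gauge samples from any data source, exposed by the
# prometheus_client HTTP server on localhost.
import time
import prometheus_client
from prometheus_client.core import GaugeMetricFamily
from prometheus_client.registry import Collector, CollectorRegistry

class DictCollector(Collector):
    def __init__(self, data):
        self._data = data

    def collect(self):
        for name, value in self._data.items():
            yield GaugeMetricFamily(name, f'example gauge for {name}', value=value)

registry = CollectorRegistry(auto_describe=True)
registry.register(DictCollector({'example_queue_size': 42.0}))
prometheus_client.start_http_server(9102, addr='localhost', registry=registry)
time.sleep(5)   # scrape http://localhost:9102/metrics while the process is alive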


@@ -1,7 +1,40 @@
from django.apps import AppConfig
from django.utils.translation import gettext_lazy as _
from awx.main.utils.named_url_graph import _customize_graph, generate_graph
from awx.conf import register, fields
class MainConfig(AppConfig):
name = 'awx.main'
verbose_name = _('Main')
def load_named_url_feature(self):
models = [m for m in self.get_models() if hasattr(m, 'get_absolute_url')]
generate_graph(models)
_customize_graph()
register(
'NAMED_URL_FORMATS',
field_class=fields.DictField,
read_only=True,
label=_('Formats of all available named urls'),
help_text=_('Read-only list of key-value pairs that shows the standard format of all available named URLs.'),
category=_('Named URL'),
category_slug='named-url',
)
register(
'NAMED_URL_GRAPH_NODES',
field_class=fields.DictField,
read_only=True,
label=_('List of all named url graph nodes.'),
help_text=_(
'Read-only list of key-value pairs that exposes named URL graph topology.'
' Use this list to programmatically generate named URLs for resources'
),
category=_('Named URL'),
category_slug='named-url',
)
def ready(self):
super().ready()
self.load_named_url_feature()
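Once registered, the two read-only settings surface under the named-url category of the settings API; a hedged example of reading them back (host, credentials, and the exact format strings returned are illustrative):

# Hypothetical inspection of the read-only named URL settings over the API.
import requests

resp = requests.get(
    "https://awx.example.org/api/v2/settings/named-url/",
    auth=("admin", "password"),
    timeout=30,
)
data = resp.json()
print(data["NAMED_URL_FORMATS"])        # mapping of resource -> named URL format string
print(data["NAMED_URL_GRAPH_NODES"])    # adjacency data describing the graph topology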

awx/main/cache.py (new file, 87 lines)

@@ -0,0 +1,87 @@
import functools
from django.conf import settings
from django.core.cache.backends.base import DEFAULT_TIMEOUT
from django.core.cache.backends.redis import RedisCache
from redis.exceptions import ConnectionError, ResponseError, TimeoutError
import socket
# This list comes from what django-redis ignores and the behavior we are trying
# to retain while dropping the dependency on django-redis.
IGNORED_EXCEPTIONS = (TimeoutError, ResponseError, ConnectionError, socket.timeout)
CONNECTION_INTERRUPTED_SENTINEL = object()
def optionally_ignore_exceptions(func=None, return_value=None):
if func is None:
return functools.partial(optionally_ignore_exceptions, return_value=return_value)
@functools.wraps(func)
def wrapper(*args, **kwargs):
try:
return func(*args, **kwargs)
except IGNORED_EXCEPTIONS as e:
if settings.DJANGO_REDIS_IGNORE_EXCEPTIONS:
return return_value
raise e.__cause__ or e
return wrapper
class AWXRedisCache(RedisCache):
"""
We just want to wrap the upstream RedisCache class so that we can ignore
the exceptions that it raises when the cache is unavailable.
"""
@optionally_ignore_exceptions
def add(self, key, value, timeout=DEFAULT_TIMEOUT, version=None):
return super().add(key, value, timeout, version)
@optionally_ignore_exceptions(return_value=CONNECTION_INTERRUPTED_SENTINEL)
def _get(self, key, default=None, version=None):
return super().get(key, default, version)
def get(self, key, default=None, version=None):
value = self._get(key, default, version)
if value is CONNECTION_INTERRUPTED_SENTINEL:
return default
return value
@optionally_ignore_exceptions
def set(self, key, value, timeout=DEFAULT_TIMEOUT, version=None):
return super().set(key, value, timeout, version)
@optionally_ignore_exceptions
def touch(self, key, timeout=DEFAULT_TIMEOUT, version=None):
return super().touch(key, timeout, version)
@optionally_ignore_exceptions
def delete(self, key, version=None):
return super().delete(key, version)
@optionally_ignore_exceptions
def get_many(self, keys, version=None):
return super().get_many(keys, version)
@optionally_ignore_exceptions
def has_key(self, key, version=None):
return super().has_key(key, version)
@optionally_ignore_exceptions
def incr(self, key, delta=1, version=None):
return super().incr(key, delta, version)
@optionally_ignore_exceptions
def set_many(self, data, timeout=DEFAULT_TIMEOUT, version=None):
return super().set_many(data, timeout, version)
@optionally_ignore_exceptions
def delete_many(self, keys, version=None):
return super().delete_many(keys, version)
@optionally_ignore_exceptions
def clear(self):
return super().clear()
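A hedged sketch of wiring this backend into Django's CACHES setting; the Redis URL below is a placeholder, and DJANGO_REDIS_IGNORE_EXCEPTIONS controls whether the wrapped methods swallow the IGNORED_EXCEPTIONS listed above:

    # settings snippet (illustrative)
    CACHES = {
        'default': {
            'BACKEND': 'awx.main.cache.AWXRedisCache',   # module path from this changeset
            'LOCATION': 'redis://localhost:6379/1',      # placeholder Redis URL
        }
    }
    # When True, a Redis outage makes cache get/set calls degrade to no-ops
    # instead of raising, matching the old django-redis behavior.
    DJANGO_REDIS_IGNORE_EXCEPTIONS = True
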

View File

@@ -2,6 +2,7 @@
import logging
# Django
from django.core.checks import Error
from django.utils.translation import gettext_lazy as _
# Django REST Framework
@@ -92,6 +93,21 @@ register(
),
category=_('System'),
category_slug='system',
required=False,
)
register(
'CSRF_TRUSTED_ORIGINS',
default=[],
field_class=fields.StringListField,
label=_('CSRF Trusted Origins List'),
help_text=_(
"If the service is behind a reverse proxy/load balancer, use this setting "
"to configure the schema://addresses from which the service should trust "
"Origin header values. "
),
category=_('System'),
category_slug='system',
)
register(
@@ -680,15 +696,33 @@ register(
category_slug='logging',
)
register(
'LOG_AGGREGATOR_MAX_DISK_USAGE_GB',
'LOG_AGGREGATOR_ACTION_QUEUE_SIZE',
field_class=fields.IntegerField,
default=131072,
min_value=1,
label=_('Maximum number of messages that can be stored in the log action queue'),
help_text=_(
'Defines how large the rsyslog action queue can grow in number of messages '
'stored. This can have an impact on memory utilization. When the queue '
'reaches 75% of this number, the queue will start writing to disk '
'(queue.highWatermark in rsyslog). When it reaches 90%, NOTICE, INFO, and '
'DEBUG messages will start to be discarded (queue.discardMark with '
'queue.discardSeverity=5).'
),
category=_('Logging'),
category_slug='logging',
)
register(
'LOG_AGGREGATOR_ACTION_MAX_DISK_USAGE_GB',
field_class=fields.IntegerField,
default=1,
min_value=1,
label=_('Maximum disk persistance for external log aggregation (in GB)'),
label=_('Maximum disk persistence for rsyslogd action queuing (in GB)'),
help_text=_(
'Amount of data to store (in gigabytes) during an outage of '
'the external log aggregator (defaults to 1). '
'Equivalent to the rsyslogd queue.maxdiskspace setting.'
'Amount of data to store (in gigabytes) if an rsyslog action takes time '
'to process an incoming message (defaults to 1). '
'Equivalent to the rsyslogd queue.maxdiskspace setting on the action (e.g. omhttp). '
'It stores files in the directory specified by LOG_AGGREGATOR_MAX_DISK_USAGE_PATH.'
),
category=_('Logging'),
category_slug='logging',
@@ -742,6 +776,7 @@ register(
allow_null=True,
category=_('System'),
category_slug='system',
required=False,
)
register(
'AUTOMATION_ANALYTICS_LAST_ENTRIES',
@@ -783,6 +818,7 @@ register(
help_text=_('Max jobs to allow bulk jobs to launch'),
category=_('Bulk Actions'),
category_slug='bulk',
hidden=True,
)
register(
@@ -793,6 +829,18 @@ register(
help_text=_('Max number of hosts to allow to be created in a single bulk action'),
category=_('Bulk Actions'),
category_slug='bulk',
hidden=True,
)
register(
'BULK_HOST_MAX_DELETE',
field_class=fields.IntegerField,
default=250,
label=_('Max number of hosts to allow to be deleted in a single bulk action'),
help_text=_('Max number of hosts to allow to be deleted in a single bulk action'),
category=_('Bulk Actions'),
category_slug='bulk',
hidden=True,
)
register(
@@ -803,6 +851,7 @@ register(
help_text=_('Enable preview of new user interface.'),
category=_('System'),
category_slug='system',
hidden=True,
)
register(
@@ -831,6 +880,55 @@ register(
category_slug='system',
)
register(
'HOST_METRIC_SUMMARY_TASK_LAST_TS',
field_class=fields.DateTimeField,
label=_('Last computing date of HostMetricSummaryMonthly'),
allow_null=True,
category=_('System'),
category_slug='system',
)
register(
'AWX_CLEANUP_PATHS',
field_class=fields.BooleanField,
label=_('Enable or Disable tmp dir cleanup'),
default=True,
help_text=_('Enable or Disable TMP Dir cleanup'),
category=('Debug'),
category_slug='debug',
)
register(
'AWX_REQUEST_PROFILE',
field_class=fields.BooleanField,
label=_('Debug Web Requests'),
default=False,
help_text=_('Debug web request python timing'),
category=('Debug'),
category_slug='debug',
)
register(
'DEFAULT_CONTAINER_RUN_OPTIONS',
field_class=fields.StringListField,
label=_('Container Run Options'),
default=['--network', 'slirp4netns:enable_ipv6=true'],
help_text=_("List of options to pass to podman run example: ['--network', 'slirp4netns:enable_ipv6=true', '--log-level', 'debug']"),
category=('Jobs'),
category_slug='jobs',
)
register(
'RECEPTOR_RELEASE_WORK',
field_class=fields.BooleanField,
label=_('Release Receptor Work'),
default=True,
help_text=_('Release receptor work'),
category=('Debug'),
category_slug='debug',
)
def logging_validate(serializer, attrs):
if not serializer.instance or not hasattr(serializer.instance, 'LOG_AGGREGATOR_HOST') or not hasattr(serializer.instance, 'LOG_AGGREGATOR_TYPE'):
@@ -857,3 +955,27 @@ def logging_validate(serializer, attrs):
register_validate('logging', logging_validate)
def csrf_trusted_origins_validate(serializer, attrs):
if not serializer.instance or not hasattr(serializer.instance, 'CSRF_TRUSTED_ORIGINS'):
return attrs
if 'CSRF_TRUSTED_ORIGINS' not in attrs:
return attrs
errors = []
for origin in attrs['CSRF_TRUSTED_ORIGINS']:
if "://" not in origin:
errors.append(
Error(
"As of Django 4.0, the values in the CSRF_TRUSTED_ORIGINS "
"setting must start with a scheme (usually http:// or "
"https://) but found %s. See the release notes for details." % origin,
)
)
if errors:
error_messages = [error.msg for error in errors]
raise serializers.ValidationError(_('\n'.join(error_messages)))
return attrs
register_validate('system', csrf_trusted_origins_validate)
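Illustrative values that pass and fail the validator above:

    # Passes: every origin carries a scheme
    CSRF_TRUSTED_ORIGINS = ['https://awx.example.com', 'https://10.0.0.5:8443']

    # Fails csrf_trusted_origins_validate: no "://" in the value
    CSRF_TRUSTED_ORIGINS = ['awx.example.com']
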

View File

@@ -14,7 +14,7 @@ __all__ = [
'STANDARD_INVENTORY_UPDATE_ENV',
]
CLOUD_PROVIDERS = ('azure_rm', 'ec2', 'gce', 'vmware', 'openstack', 'rhv', 'satellite6', 'controller', 'insights')
CLOUD_PROVIDERS = ('azure_rm', 'ec2', 'gce', 'vmware', 'openstack', 'rhv', 'satellite6', 'controller', 'insights', 'terraform')
PRIVILEGE_ESCALATION_METHODS = [
('sudo', _('Sudo')),
('su', _('Su')),
@@ -114,3 +114,28 @@ SUBSCRIPTION_USAGE_MODEL_UNIQUE_HOSTS = 'unique_managed_hosts'
# Shared prefetch to use for creating a queryset for the purpose of writing or saving facts
HOST_FACTS_FIELDS = ('name', 'ansible_facts', 'ansible_facts_modified', 'modified', 'inventory_id')
# Data for RBAC compatibility layer
role_name_to_perm_mapping = {
'adhoc_role': ['adhoc_'],
'approval_role': ['approve_'],
'auditor_role': ['audit_'],
'admin_role': ['change_', 'add_', 'delete_'],
'execute_role': ['execute_'],
'read_role': ['view_'],
'update_role': ['update_'],
'member_role': ['member_'],
'use_role': ['use_'],
}
org_role_to_permission = {
'notification_admin_role': 'add_notificationtemplate',
'project_admin_role': 'add_project',
'execute_role': 'execute_jobtemplate',
'inventory_admin_role': 'add_inventory',
'credential_admin_role': 'add_credential',
'workflow_admin_role': 'add_workflowjobtemplate',
'job_template_admin_role': 'change_jobtemplate', # TODO: this doesn't really work, solution not clear
'execution_environment_admin_role': 'add_executionenvironment',
'auditor_role': 'view_project', # TODO: also doesn't really work
}
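A hypothetical helper showing how the first table might be expanded into permission codenames for a specific model (not part of this changeset):

    def permissions_for_role(role_name, model_name):
        """Hypothetical: expand an old-style role field name into permission codenames."""
        prefixes = role_name_to_perm_mapping.get(role_name, [])
        return [f'{prefix}{model_name}' for prefix in prefixes]

    # permissions_for_role('admin_role', 'jobtemplate')
    # -> ['change_jobtemplate', 'add_jobtemplate', 'delete_jobtemplate']
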

View File

@@ -106,7 +106,7 @@ class RelayConsumer(AsyncJsonWebsocketConsumer):
if group == "metrics":
message = json.loads(message['text'])
conn = redis.Redis.from_url(settings.BROKER_URL)
conn.set(settings.SUBSYSTEM_METRICS_REDIS_KEY_PREFIX + "_instance_" + message['instance'], message['metrics'])
conn.set(settings.SUBSYSTEM_METRICS_REDIS_KEY_PREFIX + "-" + message['metrics_namespace'] + "_instance_" + message['instance'], message['metrics'])
else:
await self.channel_layer.group_send(group, message)

View File

@@ -58,7 +58,7 @@ aim_inputs = {
'id': 'object_property',
'label': _('Object Property'),
'type': 'string',
'help_text': _('The property of the object to return. Default: Content Ex: Username, Address, etc.'),
'help_text': _('The property of the object to return. Available properties: Username, Password and Address.'),
},
{
'id': 'reason',
@@ -111,8 +111,12 @@ def aim_backend(**kwargs):
object_property = 'Content'
elif object_property.lower() == 'username':
object_property = 'UserName'
elif object_property.lower() == 'password':
object_property = 'Content'
elif object_property.lower() == 'address':
object_property = 'Address'
elif object_property not in res:
raise KeyError('Property {} not found in object'.format(object_property))
raise KeyError('Property {} not found in object, available properties: Username, Password and Address'.format(object_property))
else:
object_property = object_property.capitalize()

View File

@@ -0,0 +1,65 @@
import boto3
from botocore.exceptions import ClientError
from .plugin import CredentialPlugin
from django.utils.translation import gettext_lazy as _
secrets_manager_inputs = {
'fields': [
{
'id': 'aws_access_key',
'label': _('AWS Access Key'),
'type': 'string',
},
{
'id': 'aws_secret_key',
'label': _('AWS Secret Key'),
'type': 'string',
'secret': True,
},
],
'metadata': [
{
'id': 'region_name',
'label': _('AWS Secrets Manager Region'),
'type': 'string',
'help_text': _('Region which the secrets manager is located'),
},
{
'id': 'secret_name',
'label': _('AWS Secret Name'),
'type': 'string',
},
],
'required': ['aws_access_key', 'aws_secret_key', 'region_name', 'secret_name'],
}
def aws_secretsmanager_backend(**kwargs):
secret_name = kwargs['secret_name']
region_name = kwargs['region_name']
aws_secret_access_key = kwargs['aws_secret_key']
aws_access_key_id = kwargs['aws_access_key']
session = boto3.session.Session()
client = session.client(
service_name='secretsmanager', region_name=region_name, aws_secret_access_key=aws_secret_access_key, aws_access_key_id=aws_access_key_id
)
try:
get_secret_value_response = client.get_secret_value(SecretId=secret_name)
except ClientError as e:
raise e
# Secrets Manager decrypts the secret value using the associated KMS CMK
# Depending on whether the secret was a string or binary, only one of these fields will be populated
if 'SecretString' in get_secret_value_response:
secret = get_secret_value_response['SecretString']
else:
secret = get_secret_value_response['SecretBinary']
return secret
aws_secretmanager_plugin = CredentialPlugin('AWS Secrets Manager lookup', inputs=secrets_manager_inputs, backend=aws_secretsmanager_backend)
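For illustration, the backend can be called directly with keyword arguments matching its declared inputs (placeholder values; this performs a live AWS API call):

    secret = aws_secretsmanager_backend(
        aws_access_key='AKIA...PLACEHOLDER',
        aws_secret_key='PLACEHOLDER',
        region_name='us-east-1',
        secret_name='example/app/password',   # placeholder secret name
    )
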

View File

@@ -1,9 +1,10 @@
from azure.keyvault.secrets import SecretClient
from azure.identity import ClientSecretCredential
from msrestazure import azure_cloud
from .plugin import CredentialPlugin
from django.utils.translation import gettext_lazy as _
from azure.keyvault import KeyVaultClient, KeyVaultAuthentication
from azure.common.credentials import ServicePrincipalCredentials
from msrestazure import azure_cloud
# https://github.com/Azure/msrestazure-for-python/blob/master/msrestazure/azure_cloud.py
@@ -54,22 +55,9 @@ azure_keyvault_inputs = {
def azure_keyvault_backend(**kwargs):
url = kwargs['url']
[cloud] = [c for c in clouds if c.name == kwargs.get('cloud_name', default_cloud.name)]
def auth_callback(server, resource, scope):
credentials = ServicePrincipalCredentials(
url=url,
client_id=kwargs['client'],
secret=kwargs['secret'],
tenant=kwargs['tenant'],
resource=f"https://{cloud.suffixes.keyvault_dns.split('.', 1).pop()}",
)
token = credentials.token
return token['token_type'], token['access_token']
kv = KeyVaultClient(KeyVaultAuthentication(auth_callback))
return kv.get_secret(url, kwargs['secret_field'], kwargs.get('secret_version', '')).value
csc = ClientSecretCredential(tenant_id=kwargs['tenant'], client_id=kwargs['client'], client_secret=kwargs['secret'])
kv = SecretClient(credential=csc, vault_url=kwargs['url'])
return kv.get_secret(name=kwargs['secret_field'], version=kwargs.get('secret_version', '')).value
azure_keyvault_plugin = CredentialPlugin('Microsoft Azure Key Vault', inputs=azure_keyvault_inputs, backend=azure_keyvault_backend)

View File

@@ -4,6 +4,8 @@ from urllib.parse import urljoin, quote
from django.utils.translation import gettext_lazy as _
import requests
import base64
import binascii
conjur_inputs = {
@@ -50,6 +52,13 @@ conjur_inputs = {
}
def _is_base64(s: str) -> bool:
try:
return base64.b64encode(base64.b64decode(s.encode("utf-8"))) == s.encode("utf-8")
except binascii.Error:
return False
def conjur_backend(**kwargs):
url = kwargs['url']
api_key = kwargs['api_key']
@@ -77,7 +86,7 @@ def conjur_backend(**kwargs):
token = resp.content.decode('utf-8')
lookup_kwargs = {
'headers': {'Authorization': 'Token token="{}"'.format(token)},
'headers': {'Authorization': 'Token token="{}"'.format(token if _is_base64(token) else base64.b64encode(token.encode('utf-8')).decode('utf-8'))},
'allow_redirects': False,
}
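The effect of the helper on the Authorization header, in short: an already-base64 token is passed through unchanged, while anything else is encoded first. For example:

    import base64

    raw_token = 'example-conjur-session-token'          # placeholder
    encoded = base64.b64encode(raw_token.encode()).decode()
    assert _is_base64(encoded) is True                   # used as-is in the header
    assert _is_base64('not base64!!') is False           # would be encoded before use
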

View File

@@ -2,25 +2,29 @@ from .plugin import CredentialPlugin
from django.conf import settings
from django.utils.translation import gettext_lazy as _
from thycotic.secrets.vault import SecretsVault
from delinea.secrets.vault import PasswordGrantAuthorizer, SecretsVault
from base64 import b64decode
dsv_inputs = {
'fields': [
{
'id': 'tenant',
'label': _('Tenant'),
'help_text': _('The tenant e.g. "ex" when the URL is https://ex.secretservercloud.com'),
'help_text': _('The tenant e.g. "ex" when the URL is https://ex.secretsvaultcloud.com'),
'type': 'string',
},
{
'id': 'tld',
'label': _('Top-level Domain (TLD)'),
'help_text': _('The TLD of the tenant e.g. "com" when the URL is https://ex.secretservercloud.com'),
'choices': ['ca', 'com', 'com.au', 'com.sg', 'eu'],
'help_text': _('The TLD of the tenant e.g. "com" when the URL is https://ex.secretsvaultcloud.com'),
'choices': ['ca', 'com', 'com.au', 'eu'],
'default': 'com',
},
{'id': 'client_id', 'label': _('Client ID'), 'type': 'string'},
{
'id': 'client_id',
'label': _('Client ID'),
'type': 'string',
},
{
'id': 'client_secret',
'label': _('Client Secret'),
@@ -41,8 +45,16 @@ dsv_inputs = {
'help_text': _('The field to extract from the secret'),
'type': 'string',
},
{
'id': 'secret_decoding',
'label': _('Should the secret be base64 decoded?'),
'help_text': _('Specify whether the secret should be base64 decoded, typically used for storing files, such as SSH keys'),
'choices': ['No Decoding', 'Decode Base64'],
'type': 'string',
'default': 'No Decoding',
},
],
'required': ['tenant', 'client_id', 'client_secret', 'path', 'secret_field'],
'required': ['tenant', 'client_id', 'client_secret', 'path', 'secret_field', 'secret_decoding'],
}
if settings.DEBUG:
@@ -51,12 +63,32 @@ if settings.DEBUG:
'id': 'url_template',
'label': _('URL template'),
'type': 'string',
'default': 'https://{}.secretsvaultcloud.{}/v1',
'default': 'https://{}.secretsvaultcloud.{}',
}
)
dsv_plugin = CredentialPlugin(
'Thycotic DevOps Secrets Vault',
dsv_inputs,
lambda **kwargs: SecretsVault(**{k: v for (k, v) in kwargs.items() if k in [field['id'] for field in dsv_inputs['fields']]}).get_secret(kwargs['path'])['data'][kwargs['secret_field']], # fmt: skip
)
def dsv_backend(**kwargs):
tenant_name = kwargs['tenant']
tenant_tld = kwargs.get('tld', 'com')
tenant_url_template = kwargs.get('url_template', 'https://{}.secretsvaultcloud.{}')
client_id = kwargs['client_id']
client_secret = kwargs['client_secret']
secret_path = kwargs['path']
secret_field = kwargs['secret_field']
# providing a default value to remain backward compatible for secrets that have not specified this option
secret_decoding = kwargs.get('secret_decoding', 'No Decoding')
tenant_url = tenant_url_template.format(tenant_name, tenant_tld.strip("."))
authorizer = PasswordGrantAuthorizer(tenant_url, client_id, client_secret)
dsv_secret = SecretsVault(tenant_url, authorizer).get_secret(secret_path)
# files can be uploaded to DSV base64-encoded, so only decode when explicitly asked to
if secret_decoding == 'Decode Base64':
return b64decode(dsv_secret['data'][secret_field]).decode()
return dsv_secret['data'][secret_field]
dsv_plugin = CredentialPlugin(name='Thycotic DevOps Secrets Vault', inputs=dsv_inputs, backend=dsv_backend)
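An illustrative call of the rewritten backend (tenant, credentials, and paths are placeholders; this performs live requests against the DSV API):

    ssh_key = dsv_backend(
        tenant='ex',                       # placeholder tenant
        tld='com',
        client_id='PLACEHOLDER',
        client_secret='PLACEHOLDER',
        path='/devops/ci/ssh-key',         # placeholder secret path
        secret_field='private_key',        # placeholder field within the secret data
        secret_decoding='Decode Base64',   # base64-decode the stored value
    )
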

View File

@@ -41,6 +41,34 @@ base_inputs = {
'secret': True,
'help_text': _('The Secret ID for AppRole Authentication'),
},
{
'id': 'client_cert_public',
'label': _('Client Certificate'),
'type': 'string',
'multiline': True,
'help_text': _(
'The PEM-encoded client certificate used for TLS client authentication.'
' This should include the certificate and any intermediate certificates.'
),
},
{
'id': 'client_cert_private',
'label': _('Client Certificate Key'),
'type': 'string',
'multiline': True,
'secret': True,
'help_text': _('The certificate private key used for TLS client authentication.'),
},
{
'id': 'client_cert_role',
'label': _('TLS Authentication Role'),
'type': 'string',
'multiline': False,
'help_text': _(
'The role configured in Hashicorp Vault for TLS client authentication.'
' If not provided, Hashicorp Vault may assign roles based on the certificate used.'
),
},
{
'id': 'namespace',
'label': _('Namespace name (Vault Enterprise only)'),
@@ -59,6 +87,20 @@ base_inputs = {
' see https://www.vaultproject.io/docs/auth/kubernetes#configuration'
),
},
{
'id': 'username',
'label': _('Username'),
'type': 'string',
'secret': False,
'help_text': _('Username for user authentication.'),
},
{
'id': 'password',
'label': _('Password'),
'type': 'string',
'secret': True,
'help_text': _('Password for user authentication.'),
},
{
'id': 'default_auth_path',
'label': _('Path to Auth'),
@@ -157,19 +199,25 @@ hashi_ssh_inputs['required'].extend(['public_key', 'role'])
def handle_auth(**kwargs):
token = None
if kwargs.get('token'):
token = kwargs['token']
elif kwargs.get('username') and kwargs.get('password'):
token = method_auth(**kwargs, auth_param=userpass_auth(**kwargs))
elif kwargs.get('role_id') and kwargs.get('secret_id'):
token = method_auth(**kwargs, auth_param=approle_auth(**kwargs))
elif kwargs.get('kubernetes_role'):
token = method_auth(**kwargs, auth_param=kubernetes_auth(**kwargs))
elif kwargs.get('client_cert_public') and kwargs.get('client_cert_private'):
token = method_auth(**kwargs, auth_param=client_cert_auth(**kwargs))
else:
raise Exception('Either token or AppRole/Kubernetes authentication parameters must be set')
raise Exception('Token, Username/Password, AppRole, Kubernetes, or TLS authentication parameters must be set')
return token
def userpass_auth(**kwargs):
return {'username': kwargs['username'], 'password': kwargs['password']}
def approle_auth(**kwargs):
return {'role_id': kwargs['role_id'], 'secret_id': kwargs['secret_id']}
@@ -181,6 +229,10 @@ def kubernetes_auth(**kwargs):
return {'role': kwargs['kubernetes_role'], 'jwt': jwt}
def client_cert_auth(**kwargs):
return {'name': kwargs.get('client_cert_role')}
def method_auth(**kwargs):
# get auth method specific params
request_kwargs = {'json': kwargs['auth_param'], 'timeout': 30}
@@ -193,13 +245,25 @@ def method_auth(**kwargs):
cacert = kwargs.get('cacert', None)
sess = requests.Session()
sess.mount(url, requests.adapters.HTTPAdapter(max_retries=5))
# Namespace support
if kwargs.get('namespace'):
sess.headers['X-Vault-Namespace'] = kwargs['namespace']
request_url = '/'.join([url, 'auth', auth_path, 'login']).rstrip('/')
if kwargs['auth_param'].get('username'):
request_url = request_url + '/' + (kwargs['username'])
with CertFiles(cacert) as cert:
request_kwargs['verify'] = cert
resp = sess.post(request_url, **request_kwargs)
# TLS client certificate support
if kwargs.get('client_cert_public') and kwargs.get('client_cert_private'):
# Add client cert to requests Session before making call
with CertFiles(kwargs['client_cert_public'], key=kwargs['client_cert_private']) as client_cert:
sess.cert = client_cert
resp = sess.post(request_url, **request_kwargs)
else:
# Make call without client certificate
resp = sess.post(request_url, **request_kwargs)
resp.raise_for_status()
token = resp.json()['auth']['client_token']
return token
@@ -220,6 +284,7 @@ def kv_backend(**kwargs):
}
sess = requests.Session()
sess.mount(url, requests.adapters.HTTPAdapter(max_retries=5))
sess.headers['Authorization'] = 'Bearer {}'.format(token)
# Compatibility header for older installs of Hashicorp Vault
sess.headers['X-Vault-Token'] = token
@@ -265,6 +330,8 @@ def kv_backend(**kwargs):
if secret_key:
try:
if (secret_key != 'data') and (secret_key not in json['data']) and ('data' in json['data']):
return json['data']['data'][secret_key]
return json['data'][secret_key]
except KeyError:
raise RuntimeError('{} is not present at {}'.format(secret_key, secret_path))
@@ -288,6 +355,7 @@ def ssh_backend(**kwargs):
request_kwargs['json']['valid_principals'] = kwargs['valid_principals']
sess = requests.Session()
sess.mount(url, requests.adapters.HTTPAdapter(max_retries=5))
sess.headers['Authorization'] = 'Bearer {}'.format(token)
if kwargs.get('namespace'):
sess.headers['X-Vault-Namespace'] = kwargs['namespace']
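For reference on the TLS branch above: requests attaches a client certificate by assigning a (cert_path, key_path) tuple to Session.cert, which is what the CertFiles context manager is being used to supply here. A standalone sketch with placeholder paths and URL:

    import requests

    sess = requests.Session()
    sess.cert = ('/tmp/client_cert.pem', '/tmp/client_key.pem')   # placeholder file paths
    # resp = sess.post('https://vault.example.com/v1/auth/cert/login',
    #                  json={'name': 'my-tls-role'}, timeout=30)   # placeholder Vault URL and role
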

View File

@@ -1,7 +1,10 @@
from .plugin import CredentialPlugin
from django.utils.translation import gettext_lazy as _
from thycotic.secrets.server import DomainPasswordGrantAuthorizer, PasswordGrantAuthorizer, SecretServer, ServerSecret
try:
from delinea.secrets.server import DomainPasswordGrantAuthorizer, PasswordGrantAuthorizer, SecretServer, ServerSecret
except ImportError:
from thycotic.secrets.server import DomainPasswordGrantAuthorizer, PasswordGrantAuthorizer, SecretServer, ServerSecret
tss_inputs = {
'fields': [
@@ -50,8 +53,10 @@ tss_inputs = {
def tss_backend(**kwargs):
if 'domain' in kwargs:
authorizer = DomainPasswordGrantAuthorizer(kwargs['server_url'], kwargs['username'], kwargs['password'], kwargs['domain'])
if kwargs.get("domain"):
authorizer = DomainPasswordGrantAuthorizer(
base_url=kwargs['server_url'], username=kwargs['username'], domain=kwargs['domain'], password=kwargs['password']
)
else:
authorizer = PasswordGrantAuthorizer(kwargs['server_url'], kwargs['username'], kwargs['password'])
secret_server = SecretServer(kwargs['server_url'], authorizer)

View File

@@ -87,7 +87,7 @@ class RecordedQueryLog(object):
)
log.commit()
log.execute(
'INSERT INTO queries (pid, version, argv, time, sql, explain, bt) ' 'VALUES (?, ?, ?, ?, ?, ?, ?);',
'INSERT INTO queries (pid, version, argv, time, sql, explain, bt) VALUES (?, ?, ?, ?, ?, ?, ?);',
(os.getpid(), version, ' '.join(sys.argv), seconds, sql, explain, bt),
)
log.commit()

View File

@@ -1,6 +1,7 @@
import os
import psycopg2
import psycopg
import select
from copy import deepcopy
from contextlib import contextmanager
@@ -40,8 +41,12 @@ def get_task_queuename():
class PubSub(object):
def __init__(self, conn):
def __init__(self, conn, select_timeout=None):
self.conn = conn
if select_timeout is None:
self.select_timeout = 5
else:
self.select_timeout = select_timeout
def listen(self, channel):
with self.conn.cursor() as cur:
@@ -55,25 +60,62 @@ class PubSub(object):
with self.conn.cursor() as cur:
cur.execute('SELECT pg_notify(%s, %s);', (channel, payload))
def events(self, select_timeout=5, yield_timeouts=False):
@staticmethod
def current_notifies(conn):
"""
Altered version of .notifies method from psycopg library
This removes the outer while True loop so that we only process
queued notifications
"""
with conn.lock:
try:
ns = conn.wait(psycopg.generators.notifies(conn.pgconn))
except psycopg.errors._NO_TRACEBACK as ex:
raise ex.with_traceback(None)
enc = psycopg._encodings.pgconn_encoding(conn.pgconn)
for pgn in ns:
n = psycopg.connection.Notify(pgn.relname.decode(enc), pgn.extra.decode(enc), pgn.be_pid)
yield n
def events(self, yield_timeouts=False):
if not self.conn.autocommit:
raise RuntimeError('Listening for events can only be done in autocommit mode')
while True:
if select.select([self.conn], [], [], select_timeout) == NOT_READY:
if select.select([self.conn], [], [], self.select_timeout) == NOT_READY:
if yield_timeouts:
yield None
else:
self.conn.poll()
while self.conn.notifies:
yield self.conn.notifies.pop(0)
notification_generator = self.current_notifies(self.conn)
for notification in notification_generator:
yield notification
def close(self):
self.conn.close()
def create_listener_connection():
conf = deepcopy(settings.DATABASES['default'])
conf['OPTIONS'] = deepcopy(conf.get('OPTIONS', {}))
# Modify the application name to distinguish from other connections the process might use
conf['OPTIONS']['application_name'] = get_application_name(settings.CLUSTER_HOST_ID, function='listener')
# Apply overrides specifically for the listener connection
for k, v in settings.LISTENER_DATABASES.get('default', {}).items():
conf[k] = v
for k, v in settings.LISTENER_DATABASES.get('default', {}).get('OPTIONS', {}).items():
conf['OPTIONS'][k] = v
# Allow password-less authentication
if 'PASSWORD' in conf:
conf['OPTIONS']['password'] = conf.pop('PASSWORD')
connection_data = f"dbname={conf['NAME']} host={conf['HOST']} user={conf['USER']} port={conf['PORT']}"
return psycopg.connect(connection_data, autocommit=True, **conf['OPTIONS'])
@contextmanager
def pg_bus_conn(new_connection=False):
def pg_bus_conn(new_connection=False, select_timeout=None):
'''
Any listeners probably want to establish a new database connection,
separate from the Django connection used for queries, because that will prevent
@@ -85,13 +127,7 @@ def pg_bus_conn(new_connection=False):
'''
if new_connection:
conf = settings.DATABASES['default'].copy()
conf['OPTIONS'] = conf.get('OPTIONS', {}).copy()
# Modify the application name to distinguish from other connections the process might use
conf['OPTIONS']['application_name'] = get_application_name(settings.CLUSTER_HOST_ID, function='listener')
conn = psycopg2.connect(dbname=conf['NAME'], host=conf['HOST'], user=conf['USER'], password=conf['PASSWORD'], port=conf['PORT'], **conf['OPTIONS'])
# Django connection.cursor().connection doesn't have autocommit=True on by default
conn.set_session(autocommit=True)
conn = create_listener_connection()
else:
if pg_connection.connection is None:
pg_connection.connect()
@@ -99,7 +135,7 @@ def pg_bus_conn(new_connection=False):
raise RuntimeError('Unexpectedly could not connect to postgres for pg_notify actions')
conn = pg_connection.connection
pubsub = PubSub(conn)
pubsub = PubSub(conn, select_timeout=select_timeout)
yield pubsub
if new_connection:
conn.close()
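A minimal sketch of driving the listener API above from an AWX-configured Python shell (channel name and payload are illustrative):

    from awx.main.dispatch import pg_bus_conn   # import path as used elsewhere in this changeset

    with pg_bus_conn(new_connection=True, select_timeout=5) as pubsub:
        pubsub.listen('example_channel')                     # illustrative channel
        pubsub.notify('example_channel', '{"msg": "hello"}')
        for event in pubsub.events(yield_timeouts=True):
            if event is None:
                break                                        # select timed out, nothing pending
            print(event.payload)
            break
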

View File

@@ -37,8 +37,14 @@ class Control(object):
def running(self, *args, **kwargs):
return self.control_with_reply('running', *args, **kwargs)
def cancel(self, task_ids, *args, **kwargs):
return self.control_with_reply('cancel', *args, extra_data={'task_ids': task_ids}, **kwargs)
def cancel(self, task_ids, with_reply=True):
if with_reply:
return self.control_with_reply('cancel', extra_data={'task_ids': task_ids})
else:
self.control({'control': 'cancel', 'task_ids': task_ids, 'reply_to': None}, extra_data={'task_ids': task_ids})
def schedule(self, *args, **kwargs):
return self.control_with_reply('schedule', *args, **kwargs)
@classmethod
def generate_reply_queue_name(cls):
@@ -52,14 +58,14 @@ class Control(object):
if not connection.get_autocommit():
raise RuntimeError('Control-with-reply messages can only be done in autocommit mode')
with pg_bus_conn() as conn:
with pg_bus_conn(select_timeout=timeout) as conn:
conn.listen(reply_queue)
send_data = {'control': command, 'reply_to': reply_queue}
if extra_data:
send_data.update(extra_data)
conn.notify(self.queuename, json.dumps(send_data))
for reply in conn.events(select_timeout=timeout, yield_timeouts=True):
for reply in conn.events(yield_timeouts=True):
if reply is None:
logger.error(f'{self.service} did not reply within {timeout}s')
raise RuntimeError(f"{self.service} did not reply within {timeout}s")
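A hedged usage sketch of the reworked cancel() (the import path and 'dispatcher' service name are assumptions based on surrounding AWX conventions, and the task id is a placeholder):

    from awx.main.dispatch.control import Control   # assumed module path

    ctl = Control('dispatcher')                      # assumed service/queue name
    # Waits (default timeout) for a reply listing which tasks were cancelled:
    cancelled = ctl.cancel(['8e3b1c2a-0000-0000-0000-000000000000'])   # placeholder uuid
    # Fire-and-forget: publish the cancel message without listening for a reply:
    ctl.cancel(['8e3b1c2a-0000-0000-0000-000000000000'], with_reply=False)
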

View File

@@ -1,57 +1,142 @@
import logging
import os
import time
from multiprocessing import Process
import yaml
from datetime import datetime
from django.conf import settings
from django.db import connections
from schedule import Scheduler
from django_guid import set_guid
from django_guid.utils import generate_guid
from awx.main.dispatch.worker import TaskWorker
from awx.main.utils.db import set_connection_name
logger = logging.getLogger('awx.main.dispatch.periodic')
class Scheduler(Scheduler):
def run_continuously(self):
idle_seconds = max(1, min(self.jobs).period.total_seconds() / 2)
class ScheduledTask:
"""
Class representing schedules, very loosely modeled after python schedule library Job
the idea of this class is to:
- only deal in relative times (time since the scheduler global start)
- only deal in integer math for target runtimes, but float for current relative time
def run():
ppid = os.getppid()
logger.warning('periodic beat started')
Missed schedule policy:
Invariant target times are maintained, meaning that if interval=10s offset=0
and it runs at t=7s, then it calls for next run in 3s.
However, if a complete interval has passed, that is counted as a missed run,
and missed runs are abandoned (no catch-up runs).
"""
set_connection_name('periodic') # set application_name to distinguish from other dispatcher processes
def __init__(self, name: str, data: dict):
# parameters needed for schedule computation
self.interval = int(data['schedule'].total_seconds())
self.offset = 0 # offset relative to start time this schedule begins
self.index = 0 # number of periods of the schedule that has passed
while True:
if os.getppid() != ppid:
# if the parent PID changes, this process has been orphaned
# via e.g., segfault or sigkill, we should exit too
pid = os.getpid()
logger.warning(f'periodic beat exiting gracefully pid:{pid}')
raise SystemExit()
try:
for conn in connections.all():
# If the database connection has a hiccup, re-establish a new
# connection
conn.close_if_unusable_or_obsolete()
set_guid(generate_guid())
self.run_pending()
except Exception:
logger.exception('encountered an error while scheduling periodic tasks')
time.sleep(idle_seconds)
# parameters that do not affect scheduling logic
self.last_run = None # time of last run, only used for debug
self.completed_runs = 0 # number of times schedule is known to run
self.name = name
self.data = data # used by caller to know what to run
process = Process(target=run)
process.daemon = True
process.start()
@property
def next_run(self):
"Time until the next run with t=0 being the global_start of the scheduler class"
return (self.index + 1) * self.interval + self.offset
def due_to_run(self, relative_time):
return bool(self.next_run <= relative_time)
def expected_runs(self, relative_time):
return int((relative_time - self.offset) / self.interval)
def mark_run(self, relative_time):
self.last_run = relative_time
self.completed_runs += 1
new_index = self.expected_runs(relative_time)
if new_index > self.index + 1:
logger.warning(f'Missed {new_index - self.index - 1} schedules of {self.name}')
self.index = new_index
def missed_runs(self, relative_time):
"Number of times job was supposed to ran but failed to, only used for debug"
missed_ct = self.expected_runs(relative_time) - self.completed_runs
# if this is currently due to run do not count that as a missed run
if missed_ct and self.due_to_run(relative_time):
missed_ct -= 1
return missed_ct
def run_continuously():
scheduler = Scheduler()
for task in settings.CELERYBEAT_SCHEDULE.values():
apply_async = TaskWorker.resolve_callable(task['task']).apply_async
total_seconds = task['schedule'].total_seconds()
scheduler.every(total_seconds).seconds.do(apply_async)
scheduler.run_continuously()
class Scheduler:
def __init__(self, schedule):
"""
Expects schedule in the form of a dictionary like
{
'job1': {'schedule': timedelta(seconds=50), 'other': 'stuff'}
}
Only the schedule nearest-second value is used for scheduling,
the rest of the data is for use by the caller to know what to run.
"""
self.jobs = [ScheduledTask(name, data) for name, data in schedule.items()]
min_interval = min(job.interval for job in self.jobs)
num_jobs = len(self.jobs)
# this is intentionally opinionated against spammy schedules
# a core goal is to spread out the scheduled tasks (for worker management)
# and high-frequency schedules just do not work with that
if num_jobs > min_interval:
raise RuntimeError(f'Number of schedules ({num_jobs}) is more than the shortest schedule interval ({min_interval} seconds).')
# evenly space out jobs over the base interval
for i, job in enumerate(self.jobs):
job.offset = (i * min_interval) // num_jobs
# internally times are all referenced relative to startup time, add grace period
self.global_start = time.time() + 2.0
def get_and_mark_pending(self):
relative_time = time.time() - self.global_start
to_run = []
for job in self.jobs:
if job.due_to_run(relative_time):
to_run.append(job)
logger.debug(f'scheduler found {job.name} to run, {relative_time - job.next_run} seconds after target')
job.mark_run(relative_time)
return to_run
def time_until_next_run(self):
relative_time = time.time() - self.global_start
next_job = min(self.jobs, key=lambda j: j.next_run)
delta = next_job.next_run - relative_time
if delta <= 0.1:
# careful not to give 0 or negative values to the select timeout, which has unclear interpretation
logger.warning(f'Scheduler next run of {next_job.name} is {-delta} seconds in the past')
return 0.1
elif delta > 20.0:
logger.warning(f'Scheduler next run unexpectedly over 20 seconds in future: {delta}')
return 20.0
logger.debug(f'Scheduler next run is {next_job.name} in {delta} seconds')
return delta
def debug(self, *args, **kwargs):
data = dict()
data['title'] = 'Scheduler status'
now = datetime.fromtimestamp(time.time()).strftime('%Y-%m-%d %H:%M:%S UTC')
start_time = datetime.fromtimestamp(self.global_start).strftime('%Y-%m-%d %H:%M:%S UTC')
relative_time = time.time() - self.global_start
data['started_time'] = start_time
data['current_time'] = now
data['current_time_relative'] = round(relative_time, 3)
data['total_schedules'] = len(self.jobs)
data['schedule_list'] = dict(
[
(
job.name,
dict(
last_run_seconds_ago=round(relative_time - job.last_run, 3) if job.last_run else None,
next_run_in_seconds=round(job.next_run - relative_time, 3),
offset_in_seconds=job.offset,
completed_runs=job.completed_runs,
missed_runs=job.missed_runs(relative_time),
),
)
for job in sorted(self.jobs, key=lambda job: job.interval)
]
)
return yaml.safe_dump(data, default_flow_style=False, sort_keys=False)
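A small sketch of the driving loop this Scheduler is designed for (schedule contents and task names are illustrative; the import path appears later in this changeset):

    import time
    from datetime import timedelta
    from awx.main.dispatch.periodic import Scheduler

    schedule = {
        'quick_job': {'task': 'example.quick_job', 'schedule': timedelta(seconds=5)},
        'slow_job': {'task': 'example.slow_job', 'schedule': timedelta(seconds=10)},
    }
    scheduler = Scheduler(schedule)

    for _ in range(3):
        time.sleep(scheduler.time_until_next_run())
        for job in scheduler.get_and_mark_pending():
            print(f"would dispatch {job.name} -> {job.data['task']}")
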

View File

@@ -339,6 +339,17 @@ class AutoscalePool(WorkerPool):
# but if the task takes longer than the time defined here, we will force it to stop here
self.task_manager_timeout = settings.TASK_MANAGER_TIMEOUT + settings.TASK_MANAGER_TIMEOUT_GRACE_PERIOD
# initialize some things for subsystem metrics periodic gathering
# the AutoscalePool class does not save these to redis directly, but reports via produce_subsystem_metrics
self.scale_up_ct = 0
self.worker_count_max = 0
def produce_subsystem_metrics(self, metrics_object):
metrics_object.set('dispatcher_pool_scale_up_events', self.scale_up_ct)
metrics_object.set('dispatcher_pool_active_task_count', sum(len(w.managed_tasks) for w in self.workers))
metrics_object.set('dispatcher_pool_max_worker_count', self.worker_count_max)
self.worker_count_max = len(self.workers)
@property
def should_grow(self):
if len(self.workers) < self.min_workers:
@@ -406,16 +417,16 @@ class AutoscalePool(WorkerPool):
# the task manager to never do more work
current_task = w.current_task
if current_task and isinstance(current_task, dict):
endings = ['tasks.task_manager', 'tasks.dependency_manager', 'tasks.workflow_manager']
endings = ('tasks.task_manager', 'tasks.dependency_manager', 'tasks.workflow_manager')
current_task_name = current_task.get('task', '')
if any(current_task_name.endswith(e) for e in endings):
if current_task_name.endswith(endings):
if 'started' not in current_task:
w.managed_tasks[current_task['uuid']]['started'] = time.time()
age = time.time() - current_task['started']
w.managed_tasks[current_task['uuid']]['age'] = age
if age > self.task_manager_timeout:
logger.error(f'{current_task_name} has held the advisory lock for {age}, sending SIGTERM to {w.pid}')
os.kill(w.pid, signal.SIGTERM)
logger.error(f'{current_task_name} has held the advisory lock for {age}, sending SIGUSR1 to {w.pid}')
os.kill(w.pid, signal.SIGUSR1)
for m in orphaned:
# if all the workers are dead, spawn at least one
@@ -443,7 +454,12 @@ class AutoscalePool(WorkerPool):
idx = random.choice(range(len(self.workers)))
return idx, self.workers[idx]
else:
return super(AutoscalePool, self).up()
self.scale_up_ct += 1
ret = super(AutoscalePool, self).up()
new_worker_ct = len(self.workers)
if new_worker_ct > self.worker_count_max:
self.worker_count_max = new_worker_ct
return ret
def write(self, preferred_queue, body):
if 'guid' in body:

View File

@@ -73,15 +73,15 @@ class task:
return cls.apply_async(args, kwargs)
@classmethod
def apply_async(cls, args=None, kwargs=None, queue=None, uuid=None, **kw):
def get_async_body(cls, args=None, kwargs=None, uuid=None, **kw):
"""
Get the python dict to become JSON data in the pg_notify message
This same message gets passed over the dispatcher IPC queue to workers
If a task is submitted to a multiprocessing pool, skipping pg_notify, this might be used directly
"""
task_id = uuid or str(uuid4())
args = args or []
kwargs = kwargs or {}
queue = queue or getattr(cls.queue, 'im_func', cls.queue)
if not queue:
msg = f'{cls.name}: Queue value required and may not be None'
logger.error(msg)
raise ValueError(msg)
obj = {'uuid': task_id, 'args': args, 'kwargs': kwargs, 'task': cls.name, 'time_pub': time.time()}
guid = get_guid()
if guid:
@@ -89,6 +89,16 @@ class task:
if bind_kwargs:
obj['bind_kwargs'] = bind_kwargs
obj.update(**kw)
return obj
@classmethod
def apply_async(cls, args=None, kwargs=None, queue=None, uuid=None, **kw):
queue = queue or getattr(cls.queue, 'im_func', cls.queue)
if not queue:
msg = f'{cls.name}: Queue value required and may not be None'
logger.error(msg)
raise ValueError(msg)
obj = cls.get_async_body(args=args, kwargs=kwargs, uuid=uuid, **kw)
if callable(queue):
queue = queue()
if not is_testing():
@@ -116,4 +126,5 @@ class task:
setattr(fn, 'name', cls.name)
setattr(fn, 'apply_async', cls.apply_async)
setattr(fn, 'delay', cls.delay)
setattr(fn, 'get_async_body', cls.get_async_body)
return fn
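A hedged sketch of how the new get_async_body() is meant to be used on a decorated task (the decorator import path and queue name are assumptions, not shown in this diff):

    from awx.main.dispatch.publish import task      # assumed module path for the decorator above

    @task(queue='tower_broadcast_all')               # assumed/illustrative queue name
    def example_cleanup(days=30):
        print(f'cleaning records older than {days} days')

    # Build the message dict without publishing it via pg_notify,
    # e.g. so the dispatcher can hand it straight to its worker pool:
    body = example_cleanup.get_async_body(kwargs={'days': 7})
    # body includes 'uuid', 'args', 'kwargs', 'task' and 'time_pub' keys
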

View File

@@ -7,18 +7,21 @@ import signal
import sys
import redis
import json
import psycopg2
import psycopg
import time
from uuid import UUID
from queue import Empty as QueueEmpty
from datetime import timedelta
from django import db
from django.conf import settings
from awx.main.dispatch.pool import WorkerPool
from awx.main.dispatch.periodic import Scheduler
from awx.main.dispatch import pg_bus_conn
from awx.main.utils.common import log_excess_runtime
from awx.main.utils.db import set_connection_name
import awx.main.analytics.subsystem_metrics as s_metrics
if 'run_callback_receiver' in sys.argv:
logger = logging.getLogger('awx.main.commands.run_callback_receiver')
@@ -63,10 +66,12 @@ class AWXConsumerBase(object):
def control(self, body):
logger.warning(f'Received control signal:\n{body}')
control = body.get('control')
if control in ('status', 'running', 'cancel'):
if control in ('status', 'schedule', 'running', 'cancel'):
reply_queue = body['reply_to']
if control == 'status':
msg = '\n'.join([self.listening_on, self.pool.debug()])
if control == 'schedule':
msg = self.scheduler.debug()
elif control == 'running':
msg = []
for worker in self.pool.workers:
@@ -84,24 +89,20 @@ class AWXConsumerBase(object):
if task_ids and not msg:
logger.info(f'Could not locate running tasks to cancel with ids={task_ids}')
with pg_bus_conn() as conn:
conn.notify(reply_queue, json.dumps(msg))
if reply_queue is not None:
with pg_bus_conn() as conn:
conn.notify(reply_queue, json.dumps(msg))
elif control == 'reload':
for worker in self.pool.workers:
worker.quit()
else:
logger.error('unrecognized control message: {}'.format(control))
def process_task(self, body):
def dispatch_task(self, body):
"""This will place the given body into a worker queue to run method decorated as a task"""
if isinstance(body, dict):
body['time_ack'] = time.time()
if 'control' in body:
try:
return self.control(body)
except Exception:
logger.exception(f"Exception handling control message: {body}")
return
if len(self.pool):
if "uuid" in body and body['uuid']:
try:
@@ -115,15 +116,24 @@ class AWXConsumerBase(object):
self.pool.write(queue, body)
self.total_messages += 1
def process_task(self, body):
"""Routes the task details in body as either a control task or a task-task"""
if 'control' in body:
try:
return self.control(body)
except Exception:
logger.exception(f"Exception handling control message: {body}")
return
self.dispatch_task(body)
@log_excess_runtime(logger)
def record_statistics(self):
if time.time() - self.last_stats > 1: # buffer stat recording to once per second
try:
self.redis.set(f'awx_{self.name}_statistics', self.pool.debug())
self.last_stats = time.time()
except Exception:
logger.exception(f"encountered an error communicating with redis to store {self.name} statistics")
self.last_stats = time.time()
self.last_stats = time.time()
def run(self, *args, **kwargs):
signal.signal(signal.SIGINT, self.stop)
@@ -142,29 +152,75 @@ class AWXConsumerRedis(AWXConsumerBase):
def run(self, *args, **kwargs):
super(AWXConsumerRedis, self).run(*args, **kwargs)
self.worker.on_start()
logger.info(f'Callback receiver started with pid={os.getpid()}')
db.connection.close() # logs use database, so close connection
while True:
logger.debug(f'{os.getpid()} is alive')
time.sleep(60)
class AWXConsumerPG(AWXConsumerBase):
def __init__(self, *args, **kwargs):
def __init__(self, *args, schedule=None, **kwargs):
super().__init__(*args, **kwargs)
self.pg_max_wait = settings.DISPATCHER_DB_DOWNTOWN_TOLLERANCE
self.pg_max_wait = getattr(settings, 'DISPATCHER_DB_DOWNTOWN_TOLLERANCE', settings.DISPATCHER_DB_DOWNTIME_TOLERANCE)
# if no successful loops have ran since startup, then we should fail right away
self.pg_is_down = True # set so that we fail if we get database errors on startup
self.pg_down_time = time.time() - self.pg_max_wait # allow no grace period
self.last_cleanup = time.time()
init_time = time.time()
self.pg_down_time = init_time - self.pg_max_wait # allow no grace period
self.last_cleanup = init_time
self.subsystem_metrics = s_metrics.DispatcherMetrics(auto_pipe_execute=False)
self.last_metrics_gather = init_time
self.listen_cumulative_time = 0.0
if schedule:
schedule = schedule.copy()
else:
schedule = {}
# add control tasks to be run at regular schedules
# NOTE: if we run out of database connections, it is important to still run cleanup
# so that we scale down workers and free up connections
schedule['pool_cleanup'] = {'control': self.pool.cleanup, 'schedule': timedelta(seconds=60)}
# record subsystem metrics for the dispatcher
schedule['metrics_gather'] = {'control': self.record_metrics, 'schedule': timedelta(seconds=20)}
self.scheduler = Scheduler(schedule)
def record_metrics(self):
current_time = time.time()
self.pool.produce_subsystem_metrics(self.subsystem_metrics)
self.subsystem_metrics.set('dispatcher_availability', self.listen_cumulative_time / (current_time - self.last_metrics_gather))
self.subsystem_metrics.pipe_execute()
self.listen_cumulative_time = 0.0
self.last_metrics_gather = current_time
def run_periodic_tasks(self):
self.record_statistics() # maintains time buffer in method
"""
Run general periodic logic, and return the maximum time in seconds before the next requested run.
This may be called more often than that when events are consumed,
so it should be efficient.
"""
try:
self.record_statistics() # maintains time buffer in method
except Exception as exc:
logger.warning(f'Failed to save dispatcher statistics {exc}')
if time.time() - self.last_cleanup > 60: # same as cluster_node_heartbeat
# NOTE: if we run out of database connections, it is important to still run cleanup
# so that we scale down workers and free up connections
self.pool.cleanup()
self.last_cleanup = time.time()
for job in self.scheduler.get_and_mark_pending():
if 'control' in job.data:
try:
job.data['control']()
except Exception:
logger.exception(f'Error running control task {job.data}')
elif 'task' in job.data:
body = self.worker.resolve_callable(job.data['task']).get_async_body()
# bypasses pg_notify for scheduled tasks
self.dispatch_task(body)
if self.pg_is_down:
logger.info('Dispatcher listener connection established')
self.pg_is_down = False
self.listen_start = time.time()
return self.scheduler.time_until_next_run()
def run(self, *args, **kwargs):
super(AWXConsumerPG, self).run(*args, **kwargs)
@@ -180,17 +236,21 @@ class AWXConsumerPG(AWXConsumerBase):
if init is False:
self.worker.on_start()
init = True
# run_periodic_tasks run scheduled actions and gives time until next scheduled action
# this is saved to the conn (PubSub) object in order to modify read timeout in-loop
conn.select_timeout = self.run_periodic_tasks()
# this is the main operational loop for awx-manage run_dispatcher
for e in conn.events(yield_timeouts=True):
self.listen_cumulative_time += time.time() - self.listen_start # for metrics
if e is not None:
self.process_task(json.loads(e.payload))
self.run_periodic_tasks()
self.pg_is_down = False
conn.select_timeout = self.run_periodic_tasks()
if self.should_stop:
return
except psycopg2.InterfaceError:
except psycopg.InterfaceError:
logger.warning("Stale Postgres message bus connection, reconnecting")
continue
except (db.DatabaseError, psycopg2.OperationalError):
except (db.DatabaseError, psycopg.OperationalError):
# If we have attained steady state operation, tolerate short-term database hiccups
if not self.pg_is_down:
logger.exception(f"Error consuming new events from postgres, will retry for {self.pg_max_wait} s")
@@ -199,6 +259,12 @@ class AWXConsumerPG(AWXConsumerBase):
current_downtime = time.time() - self.pg_down_time
if current_downtime > self.pg_max_wait:
logger.exception(f"Postgres event consumer has not recovered in {current_downtime} s, exiting")
# Sending QUIT to multiprocess queue to signal workers to exit
for worker in self.pool.workers:
try:
worker.quit()
except Exception:
logger.exception(f"Error sending QUIT to worker {worker}")
raise
# Wait for a second before next attempt, but still listen for any shutdown signals
for i in range(10):
@@ -210,6 +276,12 @@ class AWXConsumerPG(AWXConsumerBase):
except Exception:
# Log unanticipated exception in addition to writing to stderr to get timestamps and other metadata
logger.exception('Encountered unhandled error in dispatcher main loop')
# Sending QUIT to multiprocess queue to signal workers to exit
for worker in self.pool.workers:
try:
worker.quit()
except Exception:
logger.exception(f"Error sending QUIT to worker {worker}")
raise
@@ -232,8 +304,8 @@ class BaseWorker(object):
break
except QueueEmpty:
continue
except Exception as e:
logger.error("Exception on worker {}, restarting: ".format(idx) + str(e))
except Exception:
logger.exception("Exception on worker {}, reconnecting: ".format(idx))
continue
try:
for conn in db.connections.all():

View File

@@ -72,7 +72,7 @@ class CallbackBrokerWorker(BaseWorker):
def __init__(self):
self.buff = {}
self.redis = redis.Redis.from_url(settings.BROKER_URL)
self.subsystem_metrics = s_metrics.Metrics(auto_pipe_execute=False)
self.subsystem_metrics = s_metrics.CallbackReceiverMetrics(auto_pipe_execute=False)
self.queue_pop = 0
self.queue_name = settings.CALLBACK_QUEUE
self.prof = AWXProfiler("CallbackBrokerWorker")
@@ -191,7 +191,9 @@ class CallbackBrokerWorker(BaseWorker):
e._retry_count = retry_count
# special sanitization logic for postgres treatment of NUL 0x00 char
if (retry_count == 1) and isinstance(exc_indv, ValueError) and ("\x00" in e.stdout):
# This used to check the class of the exception, but after the psycopg3 upgrade it could appear
# as either DataError or ValueError, so now let's just check whether the NUL character is there.
if (retry_count == 1) and ("\x00" in e.stdout):
e.stdout = e.stdout.replace("\x00", "")
if retry_count >= self.INDIVIDUAL_EVENT_RETRIES:

View File

@@ -5,6 +5,7 @@
import copy
import json
import re
import sys
import urllib.parse
from jinja2 import sandbox, StrictUndefined
@@ -67,10 +68,60 @@ def __enum_validate__(validator, enums, instance, schema):
Draft4Validator.VALIDATORS['enum'] = __enum_validate__
import logging
logger = logging.getLogger('awx.main.fields')
class JSONBlob(JSONField):
# Cringe... a JSONField that is backed by a TextField.
# This field was a legacy custom field type that tl;dr; was a TextField
# Over the years, with Django upgrades, we were able to go to a JSONField instead of the custom field
# However, we didn't want to have large customers with millions of events to update from text to json during an upgrade
# So we keep this field type backed by a TextField.
def get_internal_type(self):
return "TextField"
# postgres uses a Jsonb field as the default backend
# with psycopg2 it was using a psycopg2._json.Json class internally
# with psycopg3 it uses a psycopg.types.json.Jsonb class internally
# The binary class was not compatible with a text field, so we are going to override these next two methods and ensure we are using a string
def from_db_value(self, value, expression, connection):
if value is None:
return value
if isinstance(value, str):
try:
return json.loads(value)
except Exception as e:
logger.error(f"Failed to load JSONField {self.name}: {e}")
return value
def get_db_prep_value(self, value, connection, prepared=False):
if not prepared:
value = self.get_prep_value(value)
try:
# Null characters are not allowed in text fields and JSONBlobs are JSON data but saved as text
# So we want to make sure we strip out any null characters also note, these "should" be escaped by the dumps process:
# >>> my_obj = { 'test': '\x00' }
# >>> import json
# >>> json.dumps(my_obj)
# '{"test": "\\u0000"}'
# But just to be safe, lets remove them if they are there. \x00 and \u0000 are the same:
# >>> string = "\x00"
# >>> "\u0000" in string
# True
dumped_value = json.dumps(value)
if "\x00" in dumped_value:
dumped_value = dumped_value.replace("\x00", '')
return dumped_value
except Exception as e:
logger.error(f"Failed to dump JSONField {self.name}: {e} value: {value}")
return value
# Based on AutoOneToOneField from django-annoying:
# https://bitbucket.org/offline/django-annoying/src/a0de8b294db3/annoying/fields.py
@@ -356,11 +407,13 @@ class SmartFilterField(models.TextField):
# https://docs.python.org/2/library/stdtypes.html#truth-value-testing
if not value:
return None
value = urllib.parse.unquote(value)
try:
SmartFilter().query_from_string(value)
except RuntimeError as e:
raise models.base.ValidationError(e)
# avoid doing too much during migrations
if 'migrate' not in sys.argv:
value = urllib.parse.unquote(value)
try:
SmartFilter().query_from_string(value)
except RuntimeError as e:
raise models.base.ValidationError(e)
return super(SmartFilterField, self).get_prep_value(value)
@@ -800,7 +853,7 @@ class CredentialTypeInjectorField(JSONSchemaField):
def validate_env_var_allowed(self, env_var):
if env_var.startswith('ANSIBLE_'):
raise django_exceptions.ValidationError(
_('Environment variable {} may affect Ansible configuration so its ' 'use is not allowed in credentials.').format(env_var),
_('Environment variable {} may affect Ansible configuration so its use is not allowed in credentials.').format(env_var),
code='invalid',
params={'value': env_var},
)

View File

@@ -0,0 +1,53 @@
# Copyright (c) 2015 Ansible, Inc.
# All Rights Reserved
from django.core.management.base import BaseCommand
from awx.main.models import Instance, ReceptorAddress
def add_address(**kwargs):
try:
instance = Instance.objects.get(hostname=kwargs.pop('instance'))
kwargs['instance'] = instance
if kwargs.get('canonical') and instance.receptor_addresses.filter(canonical=True).exclude(address=kwargs['address']).exists():
print(f"Instance {instance.hostname} already has a canonical address, skipping")
return False
# if ReceptorAddress already exists with address, just update
# otherwise, create new ReceptorAddress
addr, _ = ReceptorAddress.objects.update_or_create(address=kwargs.pop('address'), defaults=kwargs)
print(f"Successfully added receptor address {addr.get_full_address()}")
return True
except Exception as e:
print(f"Error adding receptor address: {e}")
return False
class Command(BaseCommand):
"""
Internal controller command.
Register receptor address to an already-registered instance.
"""
help = "Add receptor address to an instance."
def add_arguments(self, parser):
parser.add_argument('--instance', dest='instance', required=True, type=str, help="Instance hostname this address is added to")
parser.add_argument('--address', dest='address', required=True, type=str, help="Receptor address")
parser.add_argument('--port', dest='port', type=int, help="Receptor listener port")
parser.add_argument('--websocket_path', dest='websocket_path', type=str, default="", help="Path for websockets")
parser.add_argument('--is_internal', action='store_true', help="If true, address only resolvable within the Kubernetes cluster")
parser.add_argument('--protocol', type=str, default='tcp', choices=['tcp', 'ws', 'wss'], help="Protocol to use for the Receptor listener")
parser.add_argument('--canonical', action='store_true', help="If true, address is the canonical address for the instance")
parser.add_argument('--peers_from_control_nodes', action='store_true', help="If true, control nodes will peer to this address")
def handle(self, **options):
address_options = {
k: options[k]
for k in ('instance', 'address', 'port', 'websocket_path', 'is_internal', 'protocol', 'peers_from_control_nodes', 'canonical')
if options[k]
}
changed = add_address(**address_options)
if changed:
print("(changed: True)")

View File

@@ -23,7 +23,10 @@ class Command(BaseCommand):
def add_arguments(self, parser):
parser.add_argument('--days', dest='days', type=int, default=90, metavar='N', help='Remove activity stream events more than N days old')
parser.add_argument('--dry-run', dest='dry_run', action='store_true', default=False, help='Dry run mode (show items that would ' 'be removed)')
parser.add_argument('--dry-run', dest='dry_run', action='store_true', default=False, help='Dry run mode (show items that would be removed)')
parser.add_argument(
'--batch-size', dest='batch_size', type=int, default=500, metavar='X', help='Remove activity stream events in batch of X events. Defaults to 500.'
)
def init_logging(self):
log_levels = dict(enumerate([logging.ERROR, logging.INFO, logging.DEBUG, 0]))
@@ -48,7 +51,7 @@ class Command(BaseCommand):
else:
pks_to_delete.add(asobj.pk)
# Cleanup objects in batches instead of deleting each one individually.
if len(pks_to_delete) >= 500:
if len(pks_to_delete) >= self.batch_size:
ActivityStream.objects.filter(pk__in=pks_to_delete).delete()
n_deleted_items += len(pks_to_delete)
pks_to_delete.clear()
@@ -63,4 +66,5 @@ class Command(BaseCommand):
         self.days = int(options.get('days', 30))
         self.cutoff = now() - datetime.timedelta(days=self.days)
         self.dry_run = bool(options.get('dry_run', False))
+        self.batch_size = int(options.get('batch_size', 500))
         self.cleanup_activitystream()
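As a hedged example of the new --batch-size option (assuming the command is registered as cleanup_activitystream; the day count and batch size are illustrative), the dry-run flag previews what the batched delete would remove:

from django.core.management import call_command

# Preview, then perform, batched deletion of activity stream entries older than 60 days
call_command('cleanup_activitystream', days=60, batch_size=1000, dry_run=True)
call_command('cleanup_activitystream', days=60, batch_size=1000)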


@@ -1,22 +1,22 @@
-from awx.main.models import HostMetric
 from django.core.management.base import BaseCommand
 from django.conf import settings
+from awx.main.tasks.host_metrics import HostMetricTask


 class Command(BaseCommand):
     """
     Run soft-deleting of HostMetrics
     This command provides cleanup task for HostMetric model.
     There are two modes, which run in following order:
     - soft cleanup
     - - Perform soft-deletion of all host metrics last automated 12 months ago or before.
         This is the same as issuing a DELETE request to /api/v2/host_metrics/N/ for all host metrics that match the criteria.
     - - updates columns delete, deleted_counter and last_deleted
     - hard cleanup
     - - Permanently erase from the database all host metrics last automated 36 months ago or before.
         This operation happens after the soft deletion has finished.
     """

-    help = 'Run soft-deleting of HostMetrics'
-    def add_arguments(self, parser):
-        parser.add_argument('--months-ago', type=int, dest='months-ago', action='store', help='Threshold in months for soft-deleting')
+    help = 'Run soft and hard-deletion of HostMetrics'

     def handle(self, *args, **options):
-        months_ago = options.get('months-ago') or None
-        if not months_ago:
-            months_ago = getattr(settings, 'CLEANUP_HOST_METRICS_THRESHOLD', 12)
-        HostMetric.cleanup_task(months_ago)
+        HostMetricTask().cleanup(soft_threshold=settings.CLEANUP_HOST_METRICS_SOFT_THRESHOLD, hard_threshold=settings.CLEANUP_HOST_METRICS_HARD_THRESHOLD)
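Since the thresholds now come from CLEANUP_HOST_METRICS_SOFT_THRESHOLD and CLEANUP_HOST_METRICS_HARD_THRESHOLD rather than a --months-ago flag, an invocation sketch (assuming the command name cleanup_host_metrics) takes no arguments:

from django.core.management import call_command

# Soft-delete, then hard-delete, host metrics using the thresholds configured in settings
call_command('cleanup_host_metrics')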


@@ -9,6 +9,7 @@ import re
 # Django
+from django.apps import apps
 from django.core.management.base import BaseCommand, CommandError
 from django.db import transaction, connection
 from django.db.models import Min, Max
@@ -17,10 +18,7 @@ from django.utils.timezone import now
 # AWX
 from awx.main.models import Job, AdHocCommand, ProjectUpdate, InventoryUpdate, SystemJob, WorkflowJob, Notification

-def unified_job_class_to_event_table_name(job_class):
-    return f'main_{job_class().event_class.__name__.lower()}'
+from awx.main.utils import unified_job_class_to_event_table_name


 def partition_table_name(job_class, dt):
@@ -152,7 +150,10 @@ class Command(BaseCommand):
     def add_arguments(self, parser):
         parser.add_argument('--days', dest='days', type=int, default=90, metavar='N', help='Remove jobs/updates executed more than N days ago. Defaults to 90.')
-        parser.add_argument('--dry-run', dest='dry_run', action='store_true', default=False, help='Dry run mode (show items that would ' 'be removed)')
+        parser.add_argument('--dry-run', dest='dry_run', action='store_true', default=False, help='Dry run mode (show items that would be removed)')
+        parser.add_argument(
+            '--batch-size', dest='batch_size', type=int, default=100000, metavar='X', help='Remove jobs in batch of X jobs. Defaults to 100000.'
+        )
         parser.add_argument('--jobs', dest='only_jobs', action='store_true', default=False, help='Remove jobs')
         parser.add_argument('--ad-hoc-commands', dest='only_ad_hoc_commands', action='store_true', default=False, help='Remove ad hoc commands')
         parser.add_argument('--project-updates', dest='only_project_updates', action='store_true', default=False, help='Remove project updates')
@@ -198,18 +199,58 @@ class Command(BaseCommand):
             delete_meta.delete_jobs()
         return (delete_meta.jobs_no_delete_count, delete_meta.jobs_to_delete_count)

-    def _cascade_delete_job_events(self, model, pk_list):
+    def has_unpartitioned_table(self, model):
+        tblname = unified_job_class_to_event_table_name(model)
+        with connection.cursor() as cursor:
+            cursor.execute(f"SELECT 1 FROM pg_tables WHERE tablename = '_unpartitioned_{tblname}';")
+            row = cursor.fetchone()
+        if row is None:
+            return False
+        return True
+
+    def _delete_unpartitioned_table(self, model):
+        "If the unpartitioned table is no longer necessary, it will drop the table"
+        tblname = unified_job_class_to_event_table_name(model)
+        if not self.has_unpartitioned_table(model):
+            self.logger.debug(f'Table _unpartitioned_{tblname} does not exist, you are fully migrated.')
+            return
+        with connection.cursor() as cursor:
+            # same as UnpartitionedJobEvent.objects.aggregate(Max('created'))
+            cursor.execute(f'SELECT MAX("_unpartitioned_{tblname}"."created") FROM "_unpartitioned_{tblname}";')
+            row = cursor.fetchone()
+            last_created = row[0]
+        if last_created:
+            self.logger.info(f'Last event created in _unpartitioned_{tblname} was {last_created.isoformat()}')
+        else:
+            self.logger.info(f'Table _unpartitioned_{tblname} has no events in it')
+        if (last_created is None) or (last_created < self.cutoff):
+            self.logger.warning(
+                f'Dropping table _unpartitioned_{tblname} since no records are newer than {self.cutoff}\n'
+                'WARNING - this will happen in a separate transaction so a failure will not roll back prior cleanup'
+            )
+            with connection.cursor() as cursor:
+                cursor.execute(f'DROP TABLE _unpartitioned_{tblname};')
+
+    def _delete_unpartitioned_events(self, model, pk_list):
+        "If unpartitioned job events remain, it will cascade those from jobs in pk_list"
+        tblname = unified_job_class_to_event_table_name(model)
+        rel_name = model().event_parent_key
+        # Bail if the unpartitioned table does not exist anymore
+        if not self.has_unpartitioned_table(model):
+            return
+        # Table still exists, delete individual unpartitioned events
         if pk_list:
             with connection.cursor() as cursor:
-                tblname = unified_job_class_to_event_table_name(model)
+                self.logger.debug(f'Deleting {len(pk_list)} events from _unpartitioned_{tblname}, use a longer cleanup window to delete the table.')
                 pk_list_csv = ','.join(map(str, pk_list))
-                rel_name = model().event_parent_key
-                cursor.execute(f"DELETE FROM _unpartitioned_{tblname} WHERE {rel_name} IN ({pk_list_csv})")
+                cursor.execute(f"DELETE FROM _unpartitioned_{tblname} WHERE {rel_name} IN ({pk_list_csv});")

     def cleanup_jobs(self):
-        batch_size = 100000
         # Hack to avoid doing N+1 queries as each item in the Job query set does
         # an individual query to get the underlying UnifiedJob.
         Job.polymorphic_super_sub_accessors_replaced = True
@@ -224,13 +265,14 @@ class Command(BaseCommand):
         deleted = 0
         info = qs.aggregate(min=Min('id'), max=Max('id'))
         if info['min'] is not None:
-            for start in range(info['min'], info['max'] + 1, batch_size):
-                qs_batch = qs.filter(id__gte=start, id__lte=start + batch_size)
+            for start in range(info['min'], info['max'] + 1, self.batch_size):
+                qs_batch = qs.filter(id__gte=start, id__lte=start + self.batch_size)
                 pk_list = qs_batch.values_list('id', flat=True)
                 _, results = qs_batch.delete()
                 deleted += results['main.Job']
-                self._cascade_delete_job_events(Job, pk_list)
+                # Avoid dropping the job event table in case we have interacted with it already
+                self._delete_unpartitioned_events(Job, pk_list)

         return skipped, deleted
@@ -253,7 +295,7 @@ class Command(BaseCommand):
                 deleted += 1
         if not self.dry_run:
-            self._cascade_delete_job_events(AdHocCommand, pk_list)
+            self._delete_unpartitioned_events(AdHocCommand, pk_list)
         skipped += AdHocCommand.objects.filter(created__gte=self.cutoff).count()
         return skipped, deleted
@@ -281,7 +323,7 @@ class Command(BaseCommand):
                 deleted += 1
         if not self.dry_run:
-            self._cascade_delete_job_events(ProjectUpdate, pk_list)
+            self._delete_unpartitioned_events(ProjectUpdate, pk_list)
         skipped += ProjectUpdate.objects.filter(created__gte=self.cutoff).count()
         return skipped, deleted
@@ -309,7 +351,7 @@ class Command(BaseCommand):
                 deleted += 1
         if not self.dry_run:
-            self._cascade_delete_job_events(InventoryUpdate, pk_list)
+            self._delete_unpartitioned_events(InventoryUpdate, pk_list)
         skipped += InventoryUpdate.objects.filter(created__gte=self.cutoff).count()
         return skipped, deleted
@@ -333,7 +375,7 @@ class Command(BaseCommand):
                 deleted += 1
         if not self.dry_run:
-            self._cascade_delete_job_events(SystemJob, pk_list)
+            self._delete_unpartitioned_events(SystemJob, pk_list)
         skipped += SystemJob.objects.filter(created__gte=self.cutoff).count()
         return skipped, deleted
@@ -378,12 +420,12 @@ class Command(BaseCommand):
         skipped += Notification.objects.filter(created__gte=self.cutoff).count()
         return skipped, deleted

-    @transaction.atomic
     def handle(self, *args, **options):
         self.verbosity = int(options.get('verbosity', 1))
         self.init_logging()
         self.days = int(options.get('days', 90))
         self.dry_run = bool(options.get('dry_run', False))
+        self.batch_size = int(options.get('batch_size', 100000))
         try:
             self.cutoff = now() - datetime.timedelta(days=self.days)
         except OverflowError:
@@ -405,19 +447,29 @@ class Command(BaseCommand):
             del s.receivers[:]
             s.sender_receivers_cache.clear()

-        for m in model_names:
-            if m not in models_to_cleanup:
-                continue
+        with transaction.atomic():
+            for m in models_to_cleanup:
-            skipped, deleted = getattr(self, 'cleanup_%s' % m)()
+                skipped, deleted = getattr(self, 'cleanup_%s' % m)()
-            func = getattr(self, 'cleanup_%s_partition' % m, None)
-            if func:
-                skipped_partition, deleted_partition = func()
-                skipped += skipped_partition
-                deleted += deleted_partition
+                func = getattr(self, 'cleanup_%s_partition' % m, None)
+                if func:
+                    skipped_partition, deleted_partition = func()
+                    skipped += skipped_partition
+                    deleted += deleted_partition
-            if self.dry_run:
-                self.logger.log(99, '%s: %d would be deleted, %d would be skipped.', m.replace('_', ' '), deleted, skipped)
-            else:
-                self.logger.log(99, '%s: %d deleted, %d skipped.', m.replace('_', ' '), deleted, skipped)
+                if self.dry_run:
+                    self.logger.log(99, '%s: %d would be deleted, %d would be skipped.', m.replace('_', ' '), deleted, skipped)
+                else:
+                    self.logger.log(99, '%s: %d deleted, %d skipped.', m.replace('_', ' '), deleted, skipped)

+        # Deleting unpartitioned tables cannot be done in same transaction as updates to related tables
+        if not self.dry_run:
+            with transaction.atomic():
+                for m in models_to_cleanup:
+                    unified_job_class_name = m[:-1].title().replace('Management', 'System').replace('_', '')
+                    unified_job_class = apps.get_model('main', unified_job_class_name)
+                    try:
+                        unified_job_class().event_class
+                    except (NotImplementedError, AttributeError):
+                        continue  # no need to run this for models without events
+                    self._delete_unpartitioned_table(unified_job_class)
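A hedged usage sketch for the reworked command (assuming it is registered as cleanup_jobs; the day count and batch size are illustrative), combining the new --batch-size option with a jobs-only dry run:

from django.core.management import call_command

# Preview a jobs-only cleanup of records older than 30 days, deleting in batches of 50000
call_command('cleanup_jobs', days=30, batch_size=50000, only_jobs=True, dry_run=True)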


@@ -44,7 +44,7 @@ class Command(BaseCommand):
             '- To list all (now deprecated) custom virtual environments run:',
             'awx-manage list_custom_venvs',
             '',
-            '- To export the contents of a (deprecated) virtual environment, ' 'run the following command while supplying the path as an argument:',
+            '- To export the contents of a (deprecated) virtual environment, run the following command while supplying the path as an argument:',
             'awx-manage export_custom_venv /path/to/venv',
             '',
             '- Run these commands with `-q` to remove tool tips.',


@@ -13,7 +13,7 @@ class Command(BaseCommand):
     Deprovision a cluster node
     """

-    help = 'Remove instance from the database. ' 'Specify `--hostname` to use this command.'
+    help = 'Remove instance from the database. Specify `--hostname` to use this command.'

     def add_arguments(self, parser):
         parser.add_argument('--hostname', dest='hostname', type=str, help='Hostname used during provisioning')
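For illustration only (the deprovision command's registered name is not shown in this hunk; deprovision_instance is an assumption, as is the hostname):

from django.core.management import call_command

# Remove a previously provisioned node from the database by hostname
call_command('deprovision_instance', hostname='node3.example.org')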


@@ -0,0 +1,195 @@
import json
import os
import sys
import re
from typing import Any

from django.core.management.base import BaseCommand
from django.conf import settings

from awx.conf import settings_registry


class Command(BaseCommand):
    help = 'Dump the current auth configuration in django_ansible_base.authenticator format, currently supports LDAP and SAML'

    DAB_SAML_AUTHENTICATOR_KEYS = {
        "SP_ENTITY_ID": True,
        "SP_PUBLIC_CERT": True,
        "SP_PRIVATE_KEY": True,
        "ORG_INFO": True,
        "TECHNICAL_CONTACT": True,
        "SUPPORT_CONTACT": True,
        "SP_EXTRA": False,
        "SECURITY_CONFIG": False,
        "EXTRA_DATA": False,
        "ENABLED_IDPS": True,
        "CALLBACK_URL": False,
    }

    DAB_LDAP_AUTHENTICATOR_KEYS = {
        "SERVER_URI": True,
        "BIND_DN": False,
        "BIND_PASSWORD": False,
        "CONNECTION_OPTIONS": False,
        "GROUP_TYPE": True,
        "GROUP_TYPE_PARAMS": True,
        "GROUP_SEARCH": False,
        "START_TLS": False,
        "USER_DN_TEMPLATE": True,
        "USER_ATTR_MAP": True,
        "USER_SEARCH": False,
    }

    def is_enabled(self, settings, keys):
        missing_fields = []
        for key, required in keys.items():
            if required and not settings.get(key):
                missing_fields.append(key)
        if missing_fields:
            return False, missing_fields
        return True, None
    def get_awx_ldap_settings(self) -> dict[str, dict[str, Any]]:
        awx_ldap_settings = {}
        for awx_ldap_setting in settings_registry.get_registered_settings(category_slug='ldap'):
            key = awx_ldap_setting.removeprefix("AUTH_LDAP_")
            value = getattr(settings, awx_ldap_setting, None)
            awx_ldap_settings[key] = value

        grouped_settings = {}
        for key, value in awx_ldap_settings.items():
            match = re.search(r'(\d+)', key)
            index = int(match.group()) if match else 0
            new_key = re.sub(r'\d+_', '', key)

            if index not in grouped_settings:
                grouped_settings[index] = {}

            grouped_settings[index][new_key] = value
            if new_key == "GROUP_TYPE" and value:
                grouped_settings[index][new_key] = type(value).__name__
            if new_key == "SERVER_URI" and value:
                value = value.split(", ")
                grouped_settings[index][new_key] = value
            if type(value).__name__ == "LDAPSearch":
                data = []
                data.append(value.base_dn)
                data.append("SCOPE_SUBTREE")
                data.append(value.filterstr)
                grouped_settings[index][new_key] = data

        return grouped_settings

    def get_awx_saml_settings(self) -> dict[str, Any]:
        awx_saml_settings = {}
        for awx_saml_setting in settings_registry.get_registered_settings(category_slug='saml'):
            awx_saml_settings[awx_saml_setting.removeprefix("SOCIAL_AUTH_SAML_")] = getattr(settings, awx_saml_setting, None)

        return awx_saml_settings

    def format_config_data(self, enabled, awx_settings, type, keys, name):
        config = {
            "type": f"ansible_base.authentication.authenticator_plugins.{type}",
            "name": name,
            "enabled": enabled,
            "create_objects": True,
            "users_unique": False,
            "remove_users": True,
            "configuration": {},
        }
        for k in keys:
            v = awx_settings.get(k)
            config["configuration"].update({k: v})

        if type == "saml":
            idp_to_key_mapping = {
                "url": "IDP_URL",
                "x509cert": "IDP_X509_CERT",
                "entity_id": "IDP_ENTITY_ID",
                "attr_email": "IDP_ATTR_EMAIL",
                "attr_groups": "IDP_GROUPS",
                "attr_username": "IDP_ATTR_USERNAME",
                "attr_last_name": "IDP_ATTR_LAST_NAME",
                "attr_first_name": "IDP_ATTR_FIRST_NAME",
                "attr_user_permanent_id": "IDP_ATTR_USER_PERMANENT_ID",
            }
            for idp_name in awx_settings.get("ENABLED_IDPS", {}):
                for key in idp_to_key_mapping:
                    value = awx_settings["ENABLED_IDPS"][idp_name].get(key)
                    if value is not None:
                        config["name"] = idp_name
                        config["configuration"].update({idp_to_key_mapping[key]: value})

        return config
    def add_arguments(self, parser):
        parser.add_argument(
            "output_file",
            nargs="?",
            type=str,
            default=None,
            help="Output JSON file path",
        )

    def handle(self, *args, **options):
        try:
            data = []

            # dump SAML settings
            awx_saml_settings = self.get_awx_saml_settings()
            awx_saml_enabled, saml_missing_fields = self.is_enabled(awx_saml_settings, self.DAB_SAML_AUTHENTICATOR_KEYS)
            if awx_saml_enabled:
                awx_saml_name = awx_saml_settings["ENABLED_IDPS"]
                data.append(
                    self.format_config_data(
                        awx_saml_enabled,
                        awx_saml_settings,
                        "saml",
                        self.DAB_SAML_AUTHENTICATOR_KEYS,
                        awx_saml_name,
                    )
                )
            else:
                data.append({"SAML_missing_fields": saml_missing_fields})

            # dump LDAP settings
            awx_ldap_group_settings = self.get_awx_ldap_settings()
            for awx_ldap_name, awx_ldap_settings in awx_ldap_group_settings.items():
                awx_ldap_enabled, ldap_missing_fields = self.is_enabled(awx_ldap_settings, self.DAB_LDAP_AUTHENTICATOR_KEYS)
                if awx_ldap_enabled:
                    data.append(
                        self.format_config_data(
                            awx_ldap_enabled,
                            awx_ldap_settings,
                            "ldap",
                            self.DAB_LDAP_AUTHENTICATOR_KEYS,
                            f"LDAP_{awx_ldap_name}",
                        )
                    )
                else:
                    data.append({f"LDAP_{awx_ldap_name}_missing_fields": ldap_missing_fields})

            # write to file if requested
            if options["output_file"]:
                # Define the path for the output JSON file
                output_file = options["output_file"]

                # Ensure the directory exists
                os.makedirs(os.path.dirname(output_file), exist_ok=True)

                # Write data to the JSON file
                with open(output_file, "w") as f:
                    json.dump(data, f, indent=4)

                self.stdout.write(self.style.SUCCESS(f"Auth config data dumped to {output_file}"))
            else:
                self.stdout.write(json.dumps(data, indent=4))

        except Exception as e:
            self.stdout.write(self.style.ERROR(f"An error occurred: {str(e)}"))
            sys.exit(1)
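A usage sketch for the dump (assuming the command is registered as dump_auth_config; the output path is illustrative): with no argument it prints the converted authenticator config to stdout, with a path it writes JSON to that file.

from django.core.management import call_command

# Print the django_ansible_base authenticator config, then write it to a file
call_command('dump_auth_config')
call_command('dump_auth_config', '/tmp/dab_auth_config.json')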


@@ -0,0 +1,9 @@
from django.core.management.base import BaseCommand
from awx.main.tasks.host_metrics import HostMetricSummaryMonthlyTask


class Command(BaseCommand):
    help = 'Computing of HostMetricSummaryMonthly'

    def handle(self, *args, **options):
        HostMetricSummaryMonthlyTask().execute()
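A minimal invocation sketch (assuming the command is registered as host_metric_summary_monthly):

from django.core.management import call_command

# Recompute the HostMetricSummaryMonthly records via the wrapped task
call_command('host_metric_summary_monthly')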
