Compare commits

369 Commits

Author SHA1 Message Date
Seth Foster
99511efe81 bump pyasn1 (#16249)
Bump pyasn1 for CVE-2026-2349

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2026-02-05 14:12:59 -05:00
Rodrigo Toshiaki Horie
30bf910bd5 fix schema generator (#16265) 2026-02-04 20:54:28 -03:00
jessicamack
c9085e4b7f Update OpenAPI spec to improve descriptions and messages (#16260)
* Update OpenAPI spec

* lint fixes

* fix decorator for retrieve endpoints

* change decorator method

* fix import

* lint fix
2026-02-04 22:32:57 +00:00
Alan Rominger
5e93f60b9e AAP-41776 Enable new fancy asyncio metrics for dispatcherd (#16233)
* Enable new fancy asyncio metrics for dispatcherd

Remove old dispatcher metrics and patch in new data from local whatever

Update test fixture to new dispatcherd version

* Update dispatcherd again

* Handle node filter in URL, and catch more errors

* Add test for metric filter

* Split module for dispatcherd metrics
2026-02-04 15:28:34 -05:00
Rodrigo Toshiaki Horie
6a031158ce Fix OpenAPI schema enum values for CredentialType kind field (#16262)
The OpenAPI schema incorrectly showed all 12 credential type kinds as
valid for POST/PUT/PATCH operations, when only 'cloud' and 'net' are
allowed for custom credential types. This caused API clients and LLM
agents to receive HTTP 400 errors when attempting to create credential
types with invalid kind values.

Add postprocessing hook to filter CredentialTypeRequest and
PatchedCredentialTypeRequest schemas to only show 'cloud', 'net',
and null as valid enum values, matching the existing validation logic.

No API behavior changes - this is purely a documentation fix.
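As a rough illustration, a drf-spectacular postprocessing hook along these lines could do the filtering; the function name and registration below are assumptions, only the component names and allowed values come from this commit:

```
# Sketch of a drf-spectacular postprocessing hook (signature is the library's
# standard: result, generator, request, public). Assumes the enum is inlined
# on the 'kind' property rather than split into a separate component.
ALLOWED_KINDS = ['cloud', 'net', None]

def filter_credential_type_kind(result, generator, request, public):
    for name in ('CredentialTypeRequest', 'PatchedCredentialTypeRequest'):
        schema = result.get('components', {}).get('schemas', {}).get(name)
        if schema and 'kind' in schema.get('properties', {}):
            schema['properties']['kind']['enum'] = ALLOWED_KINDS
    return result

# Registered via SPECTACULAR_SETTINGS, e.g.:
# SPECTACULAR_SETTINGS = {'POSTPROCESSING_HOOKS': ['path.to.filter_credential_type_kind']}
```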

Co-authored-by: Claude <noreply@anthropic.com>
2026-02-04 16:39:03 -03:00
joeywashburn
749735b941 Standardize spelling of 'canceled' in wsrelay.py (#16178)
Changed two instances of 'cancelled' to 'canceled' in awx/main/wsrelay.py
to match AWX's standardized American English spelling convention.

- Updated log message in WebsocketRelayConnection.connect()
- Updated comment in WebSocketRelayManager.cleanup_offline_host()

Fixes #15177

Signed-off-by: Joey Washburn <joey@joeywashburn.com>
2026-02-04 12:29:45 -05:00
Chris Meyers
315f9c7eef Rename args var
* https://sonarcloud.io/project/issues?open=AZDmRbV12PiUXMD3dYmh&id=ansible_awx
2026-02-04 08:17:51 -05:00
Chris Meyers
00c0f7e8db add test 2026-02-03 16:12:22 -05:00
Chris Meyers
37ccbc28bd Harden log message output containing user input
* base64 encode user-inputted URL when logging so that newlines or other
  malicious payloads can't be injected into the log stream
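A minimal sketch of this hardening pattern (logger name and call site are hypothetical):

```
import base64
import logging

logger = logging.getLogger('awx.main')  # hypothetical call site

def log_requested_url(url: str) -> None:
    # Encode the user-supplied URL so embedded newlines or control
    # characters cannot forge extra lines in the log stream.
    encoded = base64.b64encode(url.encode('utf-8')).decode('ascii')
    logger.info("requested url (base64): %s", encoded)
```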
2026-02-03 16:12:22 -05:00
Chris Meyers
63fafec76f Remove init return value
* https://sonarcloud.io/project/issues?open=AZDmRaaJ2PiUXMD3dXly&id=ansible_awx
2026-02-03 16:12:00 -05:00
Chris Meyers
cba01339a1 Remove redundant role attribute
* https://sonarcloud.io/project/issues?open=AZDmRbYN2PiUXMD3dYo5&id=ansible_awx&tab=code
2026-02-03 16:12:00 -05:00
Chris Meyers
2622e9d295 Add alt text for awx logo
* https://sonarcloud.io/project/issues?open=AZDmRbYR2PiUXMD3dYo6&id=ansible_awx
2026-02-03 16:12:00 -05:00
Chris Meyers
a6afec6ebb Add generic font family
* https://sonarcloud.io/project/issues?open=AZDmRbX42PiUXMD3dYo1&id=ansible_awx
2026-02-03 16:12:00 -05:00
Chris Meyers
f406a377f7 Add generic font family
* https://sonarcloud.io/project/issues?open=AZDmRbX42PiUXMD3dYo0&id=ansible_awx
2026-02-03 16:12:00 -05:00
Chris Meyers
adc3e35978 Add generic font family
* remove quotes so that it's treated as a generic font family
* https://sonarcloud.io/project/issues?open=AZDmRbX42PiUXMD3dYoz&id=ansible_awx&tab=code
2026-02-03 16:12:00 -05:00
Chris Meyers
838e67005c Remove duplicate css property
* Last one wins so remove the first one.
* https://sonarcloud.io/project/issues?open=AZpWSq7yO74rjWmAOcwf&id=ansible_awx
2026-02-03 16:12:00 -05:00
Chris Meyers
e13fcfe29f Add alt text to 504 image
* https://sonarcloud.io/project/issues?open=AZDmRbXt2PiUXMD3dYor&id=ansible_awx
2026-02-03 16:12:00 -05:00
Chris Meyers
0f4e91419a Add lang english tag to 504 page
* https://sonarcloud.io/project/issues?open=AZDmRbXt2PiUXMD3dYop&id=ansible_awx
2026-02-03 16:12:00 -05:00
Chris Meyers
cca70b242a Add alt text to 502 image
* https://sonarcloud.io/project/issues?open=AZDmRbXx2PiUXMD3dYou&id=ansible_awx
2026-02-03 16:12:00 -05:00
Chris Meyers
edf459f8ec Add language english to 502
* https://sonarcloud.io/project/issues?open=AZDmRbXx2PiUXMD3dYos&id=ansible_awx
2026-02-03 16:12:00 -05:00
Chris Meyers
f4286216d6 Add doctype and lang
* https://sonarcloud.io/project/issues?open=AZDmRbX02PiUXMD3dYov&id=ansible_awx
* https://sonarcloud.io/project/issues?open=AZDmRbX02PiUXMD3dYow&id=ansible_awx
2026-02-03 16:12:00 -05:00
Chris Meyers
0ab1fea731 Use replaceAll() for global regex
* https://sonarcloud.io/project/issues?open=AZlyqQeaRtfhwxlTsfP0&id=ansible_awx
* https://sonarcloud.io/project/issues?open=AZlyqQeaRtfhwxlTsfP1&id=ansible_awx
2026-02-03 16:12:00 -05:00
Chris Meyers
e3ac581fdf Always use a tz aware timestamp
* https://sonarcloud.io/project/issues?open=AZDmRade2PiUXMD3dXnx&id=ansible_awx
2026-02-03 16:12:00 -05:00
Chris Meyers
5aa3e8cf3b Make tz aware
* Note, this doesn't change the logic or output, but maybe makes
  someone's life easier later.
* https://sonarcloud.io/project/issues?open=AZDmRade2PiUXMD3dXnx&id=ansible_awx
2026-02-03 16:12:00 -05:00
Chris Meyers
8289003c0d Remove unreachable code path
* https://sonarcloud.io/project/issues?open=AZDmRaXd2PiUXMD3dXkN&id=ansible_awx
2026-02-03 16:12:00 -05:00
Chris Meyers
125083538a Compare float to float
* https://sonarcloud.io/project/issues?open=AZDmRaXd2PiUXMD3dXkF&id=ansible_awx&tab=code
2026-02-03 16:12:00 -05:00
Chris Meyers
ed5ab8becd Remove unused variable
* https://sonarcloud.io/project/issues?open=AZL9AcQZ0bcYsK7qDrTo&id=ansible_awx
2026-02-03 16:12:00 -05:00
Chris Meyers
fc0087f1b2 Add language to api stdout to help translation
* https://sonarcloud.io/project/issues?open=AZDmRbV82PiUXMD3dYmx&id=ansible_awx&tab=code
2026-02-03 16:12:00 -05:00
Chris Meyers
cfc5ad9d91 Remove return value from __init__
* https://sonarcloud.io/project/issues?open=AZDmRaZX2PiUXMD3dXle&id=ansible_awx
2026-02-03 16:12:00 -05:00
Chris Meyers
d929b767b6 Rename kwargs
* https://sonarcloud.io/project/issues?open=AZDmRbVW2PiUXMD3dYmW&id=ansible_awx
2026-02-03 16:12:00 -05:00
Chris Meyers
5f434ac348 Rename exception args variable
* https://sonarcloud.io/project/issues?open=AZDmRbV12PiUXMD3dYmg&id=ansible_awx
* https://sonarcloud.io/project/issues?open=AZDmRaZX2PiUXMD3dXle&id=ansible_awx
2026-02-03 16:12:00 -05:00
Chris Meyers
4de9c8356b Use fromkeys for constant
* https://sonarcloud.io/project/issues?open=AZeD0GsJyrLLb-kZUOoF&id=ansible_awx
2026-02-03 16:12:00 -05:00
Chris Meyers
91118adbd3 Fix summary_dict None check
* https://sonarcloud.io/project/issues?open=AZDmRbWy2PiUXMD3dYoJ&id=ansible_awx
2026-02-03 16:12:00 -05:00
Chris Meyers
25f538277a Fix init return
* __init__ shouldn't return a value
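For context, returning anything but None from __init__ fails at instantiation; a minimal example:

```
class Widget:
    def __init__(self, name):
        self.name = name  # configure the instance in place; no return
        # returning a value here (e.g. `return self`) would raise
        # "TypeError: __init__() should return None" when Widget(...) is called
```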
2026-02-03 16:12:00 -05:00
joeywashburn
82cb52d648 Sanitize SSH key whitespace to prevent validation errors (#16179)
Strip leading and trailing whitespace from SSH keys in validate_ssh_private_key()
to handle common copy-paste scenarios where hidden newlines cause base64 decoding
failures.

Changes:
- Added data.strip() in validate_ssh_private_key() before calling validate_pem()
- Added test_ssh_key_with_whitespace() to verify keys with leading/trailing
  newlines are properly sanitized and validated

This prevents the confusing "HTTP 500: Internal Server Error" and
"binascii.Error: Incorrect padding" errors when users paste SSH keys with
accidental whitespace.
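A minimal sketch of the change, assuming the existing validate_pem() helper named above:

```
def validate_ssh_private_key(data: str):
    # Strip hidden leading/trailing whitespace from copy-paste before PEM
    # validation, so stray newlines don't cause base64 padding errors.
    data = data.strip()
    return validate_pem(data)
```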

Fixes #14219

Signed-off-by: Joey Washburn <joey@joeywashburn.com>
2026-02-02 11:16:28 -05:00
Peter Braun
f7958b93bd add deprecated fields to x-ai-description for credential post (#16255) 2026-01-29 18:17:31 +01:00
Alan Rominger
3d68ca848e Fix race condition of un-expired cache in local workers (#16256) 2026-01-29 11:31:06 -05:00
Adrià Sala
99dce79078 fix: add py311 to make version detection 2026-01-28 14:20:48 +01:00
Alan Rominger
271383d018 AAP-60470 Add dispatcherctl and dispatcherd commands as updated interface to dispatcherd lib (#16206)
* Add dispatcherctl command

* Add tests for dispatcherctl command

* Exit early if sqlite3

* Switch to dispatcherd mgmt cmd

* Move unwanted command options to run_dispatcher

* Add test for new stuff

* Update the SOS report status command

* make docs always reference new command

* Consistently error if given config file
2026-01-27 15:57:23 -05:00
Alan Rominger
1128ad5a57 AAP-64221 Fix broken cancel logic with dispatcherd (#16247)
* Fix broken cancel logic with dispatcherd

Update tests for UnifiedJob

Update test assertion

* Further simplify cancel path
2026-01-27 14:39:08 -05:00
Seth Foster
823b736afe Remove unused INSIGHTS_OIDC_ENDPOINT (#16235)
This setting is set in defaults.py, but
currently not being used. More technically,
project_update.yml is not passing this value to
the insights.py action plugin. Therefore, we
can safely remove references to it.

insights.py already has a default oidc endpoint
defined for authentication.

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2026-01-27 10:13:37 -05:00
Alan Rominger
f80bbc57d8 AAP-43117 Additional dispatcher removal simplifications and waiting reaper updates (#16243)
* Additional dispatcher removal simplifications and waiting reaper updates

* Fix double call and logging message

* Implement bugbot comment, should reap running on lost instances

* Add test case for new pending behavior
2026-01-26 13:55:37 -05:00
TVo
12a7229ee9 Publish open api spec on AWX for community use (#16221)
* Added link and ref to openAPI spec for community

* Update docs/docsite/rst/contributor/openapi_link.rst

Co-authored-by: Don Naro <dnaro@redhat.com>

* add sphinxcontrib-redoc to requirements

* sphinxcontrib.redoc configuration

* create openapi directory and files

* update download script for both schema files

* suppress warning for redoc

* update labels

* fix extra closing parenthesis

* update schema url

* exclude doc config and download script

The Sphinx configuration (conf.py) and schema download script
(download-json.py) are not application logic and are used only for building
documentation. Coverage requirements for these files are overkill.

* exclude only the sphinx config file

---------

Co-authored-by: Don Naro <dnaro@redhat.com>
2026-01-26 18:16:49 +00:00
Don Naro
ceed692354 change contact email address (#16245)
Use the ansible-community@redhat.com alias as the contact email address.
2026-01-26 17:28:00 +00:00
Jake Jackson
36a00ec46b AAP-58539 Move to dispatcherd (#16209)
* WIP First pass
* started removing feature flags and adjusting logic
* Add decorator
* moved to dispatcher decorator
* updated as many as I could find
* Keep callback receiver working
* remove any code that is not used by the call back receiver
* add back auto_max_workers
* added back get_auto_max_workers into common utils
* Remove control and hazmat (squash this not done)
* moved status out and deleted control as no longer needed
* removed unused imports
* adjusted test import to pull correct method
* fixed imports and addressed clusternode heartbeat test
* Update function comments
* Add back hazmat for config and remove baseworker
* added back hazmat per @alancoding feedback around config
* removed baseworker completely and refactored it into the callback
  worker
* Fix dispatcher run call and remove dispatch setting
* remove dispatcher mock publish setting
* Adjust heartbeat arg and more formatting
* fixed the call to cluster_node_heartbeat missing binder
* Fix attribute error in server logs
2026-01-23 20:49:32 +00:00
Alan Rominger
94d5769f32 Fix extremely flaky failure (#16161) 2026-01-23 10:05:44 -05:00
Alan Rominger
98430db58f Collect operator logs on timeout (#16239)
* Collect operator logs on timeout

* Set timeout back to prod value

* Add e to the bash with timeout block
2026-01-23 10:05:32 -05:00
Rodrigo Toshiaki Horie
acf8721a09 Enhance OpenAPI schema with AI descriptions and fix method names (#16228)
* Enhance OpenAPI schema with AI descriptions and fix method names

Add x-ai-description extensions to API endpoints for better AI agent
comprehension. Fix view method names to
ensure proper drf-spectacular schema generation.

* Enhance OpenAPI schema with AI descriptions and fix method names

Add x-ai-description extensions to API endpoints for better AI agent
comprehension. Fix view method names to
ensure proper drf-spectacular schema generation.
2026-01-21 16:53:19 -03:00
Hao Liu
a839ce8cb1 Update kubernetes python client to 35.0.0 from PyPI (#16237)
Remove transitive dependencies no longer needed by kubernetes 35.0.0

Removes google-auth and rsa which were transitive dependencies of the older
kubernetes client but are no longer required in v35.0.0.

Adds cachetools as a direct dependency since it's used by awx/conf/settings.py
for TTLCache (was previously a transitive dep of google-auth).
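For reference, the cachetools TTLCache usage pattern is along these lines (size and TTL values hypothetical):

```
from cachetools import TTLCache

# Entries expire after the TTL, bounding how stale a cached value can be.
cache = TTLCache(maxsize=2048, ttl=60)
cache['SOME_SETTING'] = 'value'
value = cache.get('SOME_SETTING')
```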
2026-01-20 17:31:18 -05:00
Hao Liu
543b2a66a3 Update kubernetes python client to 35.0.0 from PyPI (#16236)
- Move kubernetes from git-based install to PyPI (v35.0.0 now available)
- Remove urllib3 cap comment since kubernetes 35.0.0 no longer restricts it
- Update README.md upgrade blocker documentation
2026-01-20 16:20:41 -05:00
Alan Rominger
8c5cf49c23 Avoid errors installing with python 3.11 (#16231) 2026-01-20 10:24:41 -05:00
Peter Braun
80bb0c9862 remove artifacts from list endpoint (#16230) 2026-01-20 10:58:01 +01:00
Alan Rominger
b34ee01fb3 Slightly alter history to avoid having a Django 5 related migration (#16214)
* Slightly alter history to avoid having a Django 5 related migration

* Revert prior field states to be slightly more clear
2026-01-19 13:33:46 -05:00
Alan Rominger
dce5ac73c5 Apply new rules from black update (#16232) 2026-01-19 12:58:07 -05:00
PabloHiro
43a3a620e3 [AAP-43413] Removing hardcoded number of flags from feature flag test
Assisted-by: Claude
2026-01-19 09:37:20 +01:00
PascalKont
051357e573 fixed description for option notification_templates_approvals in module organizations (#16170)
fixed module organizations description for option notification_templates_approvals

Co-authored-by: Pascal Kontschan <pascal.kontschan.extern@atruvia.de>
2026-01-15 11:48:27 -05:00
John Barker
75aba0f62d docs: migrate RTD URLs to docs.ansible.com (#16189)
* docs: update readthedocs.io URLs to docs.ansible.com equivalents

🤖 Generated with Claude Code
https://claude.ai/code

Co-Authored-By: Claude <noreply@anthropic.com>

* Update Bullhorn newsletter link in communication docs

---------

Co-authored-by: Claude <noreply@anthropic.com>
2026-01-15 12:27:47 +00:00
Hao Liu
fee71b8917 Replace pytz with standard library timezone (#16197)
Refactored code to use Python's built-in datetime.timezone and zoneinfo instead of pytz for timezone handling. This modernizes the codebase and removes the dependency on pytz, aligning with current best practices for timezone-aware datetime objects.
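A hedged sketch of the replacement pattern this describes:

```
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

# Instead of pytz.utc.localize(datetime.utcnow()):
now_utc = datetime.now(timezone.utc)

# Instead of pytz.timezone('America/New_York') plus normalize():
now_eastern = now_utc.astimezone(ZoneInfo('America/New_York'))
```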
2026-01-09 16:05:08 -05:00
Hao Liu
dbe979b425 Add make targets for updating requirements (#16195)
Introduces new Makefile targets to update and upgrade requirements files using pip-compile, both directly and via docker-runner. These additions streamline dependency management for development and CI workflows.
2026-01-09 16:04:27 -05:00
Hao Liu
03d0ed882c Add kubernetes python client from git at release-34.0 (#16226)
Switch to git-based installation of kubernetes python client from
github.com/kubernetes-client/python at commit df31d90d6c910d6b5c883b98011c93421cac067d
(release-34.0 branch). This also allows removing the urllib3<2.4.0 upper bound
constraint that was previously required by kubernetes 34.1.0 from PyPI.
2026-01-09 20:16:34 +00:00
Hao Liu
b0debf93a0 Use dnf module for Node.js 18 instead of n version manager - damn it aarch64 (#16225)
Use dnf module for Node.js 18 instead of n version manager

The n version manager fails to extract Node.js archives due to very long
file paths in include/node/openssl/archs/ directories when running in
Docker BuildKit's overlay filesystem. This causes CI build failures with
tar "Cannot open: Invalid argument" errors.

Switch to installing Node.js 18 directly from CentOS Stream 9's module
stream which avoids the archive extraction issue entirely.
2026-01-09 18:37:34 +00:00
Hao Liu
d018096cae Fix devel awx, awx_devel, awx_kube_devel build (#16219)
* Fix ARM64 build failure by upgrading dev container Node.js to 18

Node.js 16.13.1 fails to extract on ARM64 in Docker BuildKit's
overlay filesystem during multi-arch builds. Upgrade to Node 18
which is already used by the UI builder stage and has proper
ARM64 support.

* Fix collectstatic failure by setting AWX_MODE=default

  AWX_MODE=defaults is an intentionally "invalid" environment name that:

  1. Loads only defaults.py - the base settings file without any environment-specific overrides (development_defaults.py, production_defaults.py, etc.)
  2. Bypasses production checks - since "production" not in "defaults", it skips the assertion that requires /etc/tower/settings.py to exist
  3. Bypasses development mode - since is_development_mode would be false

  This is perfect for collectstatic during container build because:
  - No database connection needed
  - No secret key needed (hence SKIP_SECRET_KEY_CHECK)
  - No PostgreSQL version check (hence SKIP_PG_VERSION_CHECK)
  - Just need minimal Django settings to collect static files
2026-01-09 12:07:23 -05:00
Alan Rominger
cfe0b367b5 Do not eat errors building images (#16216) 2026-01-09 02:48:34 +00:00
Alan Rominger
7d24bdbf13 Clear in-memory cache, suggested by bugbot (#16218)
* Clear in-memory cache, suggested by bugbot

* Clear the cache even harder than we were before

* Syntax bugbot
2026-01-08 16:03:29 -05:00
Alan Rominger
3cba5e1744 Cache juggling to help address test flake (#16217) 2026-01-08 14:23:01 -05:00
Hao Liu
10a2946f9f Fix requirement for python3.12 (#16215)
* Fix pip version constraint for Python 3.12 compatibility

Remove outdated pip<22.0 constraint that was a workaround for
pip-tools#1558. This issue was fixed in pip-tools 6.5.0+ and
the old constraint breaks Python 3.12 where pkgutil.ImpImporter
was removed.

* Update requirements.txt

* Fix license file inconsistencies with requirements

- Rename awx-plugins.interfaces.txt to awx-plugins-interfaces.txt
  to match the package name in requirements
- Remove backports-tarfile.txt and importlib-resources.txt as these
  packages are no longer in requirements

* Fix updater.sh for pip 25.3 normalized output format

Changes to requirements_git.txt:
- Update to PEP 440 format (name @ git+url) to match pip-compile output
- Normalize package names (hyphens instead of dots/underscores)
- Sort extras alphabetically with hyphens (e.g., jwt-consumer not jwt_consumer)
- Add documentation explaining format requirements

Changes to updater.sh:
- Escape BRE regex metacharacters in sed pattern to handle brackets in extras
- Change sed delimiter from ! to | to avoid conflict with comment text
- Add explicit return statements to functions
- Assign positional parameters to local variables
- Redirect error messages to stderr
- Replace backticks with $() for command substitution
- Pin pip to version 25.3

requirements.txt regenerated via updater.sh

* Normalize package names in requirements.in to match pip output

- prometheus_client -> prometheus-client
- setuptools_scm -> setuptools-scm
- dispatcherd[pg_notify] -> dispatcherd[pg-notify]

PEP 503 specifies that package names should use hyphens.

* Fix license files to match normalized package names

- Remove awx_plugins.interfaces.txt (duplicate of awx-plugins-interfaces.txt)
- Rename system-certifi.txt to certifi.txt to match package name
2026-01-08 14:21:11 -05:00
Hao Liu
049a4b6438 Remove graph_jobs management command and asciichartpy dependency (#16078)
Deleted the awx/main/management/commands/graph_jobs.py file and removed the asciichartpy package from requirements. This cleans up unused code and dependencies related to terminal job status graphing.
2026-01-07 20:53:30 -05:00
jessicamack
de86b93690 AAP-59874: Update to Python 3.12 (#16208)
* update to Python 3.12

* remove use of utcnow

* switch to timezone.utc

datetime.UTC is an alias of datetime.timezone.utc. If we're doing the double import for datetime, it's more straightforward to just import timezone as well and get it directly

* debug python env version issue

* change python version

* pin to SHA and remove debug portion
2026-01-07 11:57:24 -05:00
Alan Rominger
48c7534b57 AAP-60452 Remove the dynamic log level filter for the dispatcherd main process (#16200)
* Remove the dynamic filter on dispatcher startup

Configure the dynamic logging level only on startup

* Special case for log level on settings change

* Add unit test for new behavior

* Add test for initial config

* Mark test django DB

* Do necessary requirement bump

* Delete cache in live test fixture
2026-01-02 15:45:06 -05:00
Alan Rominger
40059512d8 Bump requirement because version was yanked from PyPI (#16212) 2026-01-02 11:14:00 -05:00
Bryan Havenstein
e2c1c5116d AAP-58457 Update UT for removed IPv6 feature flag 2026-01-02 09:39:04 -05:00
Chris Meyers
41f1ffc1dd AAP-45541 Add test to recreate jobs/4075584/job_events/children_summary/ error (#16163)
* Add test to recreate the error

* Also begin to add detection for empty event

* Remove breakpoint

* fix: ignore events with missing event types

* run linter and apply changes

---------

Co-authored-by: AlanCoding <arominge@redhat.com>
Co-authored-by: Peter Braun <pbraun@redhat.com>
2025-12-17 21:34:53 +01:00
Alan Rominger
d7eb714859 Remove custom docs endpoint in DAB now (#16204)
* Remove custom docs endpoint in DAB now
2025-12-16 13:57:27 -05:00
Alan Rominger
7a58377aff Update ENV pattern in Dockerfile (#16202) 2025-12-16 09:57:54 -05:00
Hao Liu
2fbfe4ca73 Fix __pycache__ directory removal in clean target (#16196)
Replaces the use of 'find -delete' with 'find -exec rm -rf {} +' to ensure all __pycache__ directories are properly removed during the clean process.
2025-12-16 09:00:38 -05:00
Hao Liu
04fadab253 Remove unused ANSIBLE_BASE_PERMISSION_MODEL setting (#16198)
Deleted the ANSIBLE_BASE_PERMISSION_MODEL configuration from defaults.py as it is no longer needed.
2025-12-16 08:57:14 -05:00
Alan Rominger
054f6032fd AAP-47956 Use pg_notify for cancel and debugging, abandon socket approach (#16199)
* Use pg_notify for cancel and debugging, abandon socket approach

* Bump dispatcherd for pg_notify chunking
2025-12-10 14:38:39 -05:00
Seth Foster
f935134a19 Unpin rsyslog in container (#16203)
docker buildx build fails with
"Error: Unable to find a match: rsyslog-8.2102.0-106.el9"

unpinning builds successfully for both arm64 and x86_64

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2025-12-10 14:32:01 -05:00
Hao Liu
b24156805a Upgrade to Django 5.2 LTS (#16185)
Upgrade to Django 5.2 LTS with compatibility fixes across fields, migrations, dispatch config, tests, and dev deps.

Dependencies:
- Upgrade django to 5.2.8 and relax requirements.in to >=5.2,<5.3.
- Bump django-debug-toolbar to >=6.0 for compatibility.

Backend:
- awx/conf/fields.py: switch URL TLD regex to use DomainNameValidator.ul in custom URLField.
- awx/main/management/commands/gather_analytics.py: use datetime.timezone.utc for naïve datetime handling.
- awx/main/dispatch/config.py: add mock_publish option; avoid DB access for test runs, set default max_workers, and support a noop broker.

Migrations (SQLite/Postgres compatibility):
- Add awx/main/migrations/_sqlite_helper.py with db-aware AlterIndexTogether/RenameIndex wrappers; consume in 0144_event_partitions.py and 0184_django_indexes.py.
- Update 0187_hop_nodes.py to use CheckConstraint(condition=...).
- Add 0205_alter_instance_peers_alter_job_hosts_and_more.py adjusting through_fields/relations on instance.peers, job.hosts, and role.ancestors.
- _dab_rbac.py: iterate roles with chunk_size=1000 for migration performance.

Tests:
Include hcp_terraform in default credential types in test_credential.py.
---------

Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Alan Rominger <arominge@redhat.com>
2025-12-03 14:22:52 -05:00
Elijah DeLee
711b018ae7 cache dashboard query (#16165)
This causes an expensive query, and the view is sometimes called
excessively by the UI. Memoize per unique user and params (time period) for 15s.
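A minimal sketch of that memoization using Django's cache (key scheme hypothetical):

```
from django.core.cache import cache

def cached_dashboard(user_id, period, compute):
    key = f'dashboard:{user_id}:{period}'
    data = cache.get(key)
    if data is None:
        data = compute()          # the expensive query
        cache.set(key, data, 15)  # memoize for 15 seconds
    return data
```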
2025-12-03 13:03:39 -05:00
Stevenson Michel
be30a75c4f Removal of Warning for Distro Deprecation (#16193)
remove warning for distro deprecation
2025-12-02 14:28:12 -05:00
Seth Foster
a20f299cd6 Add x-ai-description to schema (#16186)
Adding ansible_base.api_documentation
to the INSTALL_APPS which extends the schema
to include an LLM-friendly description
to each endpoint

---------

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
Co-authored-by: Peter Braun <pbraun@redhat.com>
2025-12-01 19:45:28 -05:00
Lila Yasin
4f41b50a09 AAP-57817 Add Redis connection retry using redis-py 7.0+ built-in (#16176)
* AAP-57817 Add Redis connection retry using redis-py 7.0+ built-in mechanism

* Refactor Redis client helpers to use settings and eliminate code duplication

* Create awx/main/utils/redis.py and move Redis client functions to avoid circular imports

* Fix subsystem_metrics to share Redis connection pool between
  client and pipeline

* Cache Redis clients in RelayConsumer and RelayWebsocketStatsManager to avoid creating new connection pools on every call

* Add cap and base config

* Add Redis retry logic with exponential backoff to handle connection failures during long-running operations

* Add REDIS_BACKOFF_CAP and REDIS_BACKOFF_BASE settings to allow
  adjustment of retry timing in worst-case scenarios without code changes

* Simplify Redis retry tests by removing unnecessary reload logic
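A hedged sketch of the redis-py built-in retry wiring the bullets above describe (the actual AWX helper code differs; values are illustrative):

```
import redis
from redis.backoff import ExponentialBackoff
from redis.retry import Retry

REDIS_BACKOFF_CAP = 10    # seconds; tunable via the settings added here
REDIS_BACKOFF_BASE = 0.5

client = redis.Redis(
    retry=Retry(ExponentialBackoff(cap=REDIS_BACKOFF_CAP, base=REDIS_BACKOFF_BASE), retries=5),
    retry_on_error=[redis.exceptions.ConnectionError, redis.exceptions.TimeoutError],
)
```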
2025-12-01 09:08:47 -05:00
Rodrigo Toshiaki Horie
0d86874d5d Organize S3 schema uploads by product (awx/tower) (#16190)
Update schema upload workflows to organize S3 files by product name:
- Upload schemas to s3://awx-public-ci-files/{product}/{branch}/schema.json
- Update Makefile to download from product-specific paths for schema diff
- Update feature branch deletion to clean up from correct product path

This separates AWX and Tower schemas into distinct S3 folders.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Claude <noreply@anthropic.com>
2025-11-27 10:43:38 -03:00
Fabricio Aguiar
2b2f2b73ac Move to Runtime Platform Flags (#16148)
* move to platform flags

Signed-off-by: Fabricio Aguiar <fabricio.aguiar@gmail.com>

rh-pre-commit.version: 2.3.2
rh-pre-commit.check-secrets: ENABLED

* SonarCloud analyzes files without coverage data.
2025-11-25 10:20:04 -05:00
Lila Yasin
e03beb4d54 Add hcp_terraform to list of expected cred types to fix failing api test CI Check (#16188)
* Add hcp_terraform to list of expected cred types to fix failing api test ci check
2025-11-24 13:09:04 -05:00
Chris Meyers
4db52e074b Fix collectstatic (#16162) 2025-11-21 15:19:48 -05:00
Lila Yasin
4e1911f7c4 Bump Django to 4.2.26 to agree with DAB changes (#16183) 2025-11-19 14:56:29 -05:00
Peter Braun
b02117979d AAP-29938 add force flag to refspec (#16173)
* add force flag to refspec

* Development of git --amend test

* Update awx/main/tests/live/tests/conftest.py

Co-authored-by: Alan Rominger <arominge@redhat.com>

---------

Co-authored-by: AlanCoding <arominge@redhat.com>
2025-11-13 14:51:23 +01:00
Seth Foster
2fa2cd8beb Add timeout and on duplicate to system tasks (#16169)
Modify the invocation of @task_awx to accept timeout and
on_duplicate keyword arguments. These arguments are
only used in the new dispatcher implementation.

Add decorator params:
- timeout
- on_duplicate

to tasks to ensure better recovery for
stuck or long-running processes.
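A sketch of the decorator usage (task body and argument values are hypothetical; the keywords are passed through to the new dispatcher):

```
@task_awx(timeout=600, on_duplicate='discard')  # illustrative values
def cleanup_stuck_processes():
    # recover stuck or long-running work; body hypothetical
    ...
```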

---------

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2025-11-12 23:18:57 -05:00
Rodrigo Toshiaki Horie
f81859510c Change Swagger UI endpoint from /api/swagger/ to /api/docs/ (#16172)
* Change Swagger UI endpoint from /api/swagger/ to /api/docs/

- Update URL pattern to use /docs/ instead of /swagger/
- Update API root response to show 'docs' key instead of 'swagger'
- Add authentication requirement for schema documentation endpoints
- Update contact email to controller-eng@redhat.com

The schema endpoints (/api/docs/, /api/schema/, /api/redoc/) now
require authentication to prevent unauthorized access to API
documentation.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

* Require authentication for all schema endpoints including /api/schema/

Create custom view classes that enforce authentication for all schema
endpoints to prevent inconsistent access control where UI views required
authentication but the raw schema endpoint remained publicly accessible.

This ensures all schema endpoints (/api/schema/, /api/docs/, /api/redoc/)
consistently require authentication.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

* Add unit tests for authenticated schema view classes

Add test coverage for the new AuthenticatedSpectacular* view classes
to ensure they properly require authentication.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

* remove unused import

---------

Co-authored-by: Claude <noreply@anthropic.com>
2025-11-12 09:14:54 -03:00
Rodrigo Toshiaki Horie
335a4bbbc6 AAP-45927 Add drf-spectacular (#16154)
* AAP-45927 Add drf-spectacular

- Remove drf-yasg
- Add drf-spectacular

* move SPECTACULAR_SETTINGS from development_defaults.py to defaults.py

* move SPECTACULAR_SETTINGS from development_defaults.py to defaults.py

* Fix swagger tests: enable schema endpoints in all modes

Schema endpoints were restricted to development mode, causing
test_swagger_generation.py to fail. Made schema URLs available in
all modes and fixed deprecated Django warning filters in pytest.ini.

* remove swagger from Makefile

* remove swagger from Makefile

* change docker-compose-build-swagger to docker-compose-build-schema

* remove MODE

* remove unused import

* Update genschema to use drf-spectacular with awx-link dependency

- Add awx-link as dependency for genschema targets to ensure package metadata exists
- Remove --validate --fail-on-warn flags (schema needs improvements first)
- Add genschema-yaml target for YAML output
- Add schema.yaml to .gitignore

* Fix detect-schema-change to not fail on schema differences

Add '-' prefix to diff command so Make ignores its exit status.
diff returns exit code 1 when files differ, which is expected behavior
for schema change detection, not an error.

* Truncate schema diff summary to stay under GitHub's 1MB limit

Limit schema diff output in job summary to first 1000 lines to avoid
exceeding GitHub's 1MB step summary size limit. Add message indicating
when diff is truncated and direct users to job logs or artifacts for
full output.

* readd MODE

* add drf-spectacular to requirements.in and the requirements.txt generated from the script

* Add drf-spectacular BSD license file

Required for test_python_licenses test to pass now that drf-spectacular
is in requirements.txt.

* add licenses

* Add comprehensive unit tests for CustomAutoSchema

Adds 15 unit tests for awx/api/schema.py to improve SonarCloud test
coverage. Tests cover all code paths in CustomAutoSchema including:
- get_tags() method with various scenarios (swagger_topic, serializer
  Meta.model, view.model, exception handling, fallbacks, warnings)
- is_deprecated() method with different view configurations
- Edge cases and priority ordering

All tests passing.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

* remove unused imports

---------

Co-authored-by: Claude <noreply@anthropic.com>
2025-11-10 12:35:22 -03:00
Lila Yasin
5ea2fe65b0 Fix failing Collection CI Checks (#16157)
* Fix CI: Use Python 3.13 for ansible-test compatibility

  ansible-test only supports Python 3.11, 3.12, and 3.13.
  Changed collection-integration jobs from '3.x' to '3.13'
  to avoid using Python 3.14 which is not supported.

* Fix ansible-test Python version for CI integration tests

  ansible-test only supports Python 3.11, 3.12, and 3.13.
  Added ANSIBLE_TEST_PYTHON_VERSION variable to explicitly pass
  --python 3.13 flag to ansible-test integration command.

  This prevents ansible-test from auto-detecting and using
  Python 3.14.0, which is not supported.

* Fix CI: Execute ansible-test with Python 3.13 to avoid
  unsupported Python 3.14

* Fix CI: Use Python 3.13 across all jobs to avoid Python
  3.14 compatibility issues

* Fix CI: Use 'python' and 'ansible.test' module for Python
   3.13 compatibility

* Fix CI: Use 'python' instead of 'python3' for Python 3.13
   compatibility

* Fix CI: Ensure ansible-test uses Python 3.13 environment
  explicitly

* Fix: Remove silent failure check for ansible-core in test suite

* Fix CI: Export PYTHONPATH to make awxkit available to ansible-test

* Fix CI: Use 'python' in run_awx_devel to maintain Python
  3.13 environment

* Fix CI: Remove setup-python from awx_devel_image that was resetting Python 3.13 to 3.14
2025-11-06 09:22:44 -05:00
Daniel Finca Martínez
f3f10ae9ce [AAP-42616] Bump receptor collection version to 2.0.6 (#16156)
Bump receptor collection version to 2.0.6
2025-11-05 15:49:43 -07:00
Jake Jackson
5be4462395 Update sonar and CI (#16153)
* actually upload PR coverage reports and inject PR number if report is
  generated from a PR
* upload general report of devel on merge and make things kinda pretty
2025-11-03 14:42:55 +00:00
Jake Jackson
d1d3a3471b Add new api schema check workflow (#16143)
* add new file to separate out the schema check so that it is no longer
  part of the CI check and won't cause the whole workflow to fail
* remove old API schema check from ci.yml
2025-10-27 11:59:40 -04:00
Alan Rominger
a53fdaddae Fix f-string in log that is broken (#16132) 2025-10-20 14:23:38 -04:00
Hao Liu
f72591195e Fix migration failure for 0200 (#16135)
Moved the AddField operation before the RunPython operations for 'rename_jts' and 'rename_projects' in migration 0200_template_name_constraint.py. This ensures the new 'org_unique' field exists before related data migrations are executed.

Fix
```
django.db.utils.ProgrammingError: column main_unifiedjobtemplate.org_unique does not exist
```

while applying migration 0200_template_name_constraint.py

when there's a job template or project with a duplicate name in the same org
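Illustrative sketch of the reordering (field definition and data-migration bodies are stand-ins; the names come from the commit message):

```
from django.db import migrations, models

def rename_jts(apps, schema_editor): ...       # data migration stubs
def rename_projects(apps, schema_editor): ...

class Migration(migrations.Migration):
    operations = [
        migrations.AddField(                   # column must exist first
            model_name='unifiedjobtemplate',
            name='org_unique',
            field=models.BooleanField(default=True),  # hypothetical definition
        ),
        migrations.RunPython(rename_jts),       # then the data migrations
        migrations.RunPython(rename_projects),  # that read the new column
    ]
```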
2025-10-20 08:50:23 -04:00
TVo
0d9483b54c Added Django and API requirements to AWX Contributor Docs for POC (#16093)
* Requirements POC docs from Claude Code eval

* Removed unnecessary reference.

* Excluded custom DRF configurations per @AlanCoding

* Implement review changes from @chrismeyersfsu

---------

Co-authored-by: Peter Braun <pbraun@redhat.com>
2025-10-16 10:38:37 -06:00
jessicamack
f3fd9945d6 Update dependencies (#16122)
* prometheus-client returns an additional value as of v0.22.0

* add license, remove outdated ones, add new embedded sources

* update requirements and UPGRADE BLOCKERs in README
2025-10-15 15:55:21 +00:00
Jake Jackson
72a42f23d5 Remove 'UI' from PR template component options (#16123)
Removed 'UI' from the list of component names in the PR template.
2025-10-13 10:23:56 -04:00
Jake Jackson
309724b12b Add SonarQube Coverage and report generation (#16112)
* added sonar config file and started cleaning up
* we do not place the report at the root of the repo
* limit scope to only the awx directory and its contents
* update exclusions for things in awx/ that we don't want covered
2025-10-10 10:44:35 -04:00
Seth Foster
300605ff73 Make subscriptions credentials mutually exclusive (#16126)
settings.SUBSCRIPTIONS_USERNAME and
settings.SUBSCRIPTIONS_CLIENT_ID

should be mutually exclusive. This is because
the POST to api/v2/config/attach/ accepts only
a subscription_id, and infers which credentials to
use based on settings. If both are set, it is ambiguous
and can lead to unexpected 400s when attempting
to attach a license.
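A minimal sketch of the exclusivity check (validation site hypothetical):

```
from django.conf import settings
from rest_framework import serializers

def validate_subscription_settings():
    if settings.SUBSCRIPTIONS_USERNAME and settings.SUBSCRIPTIONS_CLIENT_ID:
        raise serializers.ValidationError(
            'SUBSCRIPTIONS_USERNAME and SUBSCRIPTIONS_CLIENT_ID are mutually exclusive; set only one.'
        )
```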

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2025-10-09 16:57:58 -04:00
Jake Jackson
77fab1c534 Update dependabot to check python deps (#16127)
* add list item to check python deps daily
* move to weekly after we gain confidence
2025-10-09 19:40:34 +00:00
Chris Meyers
51b2524b25 Gracefully handle hostname change in metrics code
* Previously, we would error out because we assumed that any metrics
  payload we got from redis contained data and was for the current host.
* Now, we do not assume that a metrics payload is well formed and for
  the current hostname, because the hostname could have changed and we
  may not yet have collected metrics for the new host.
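A minimal sketch of the defensive check (payload shape hypothetical):

```
def accept_metrics_payload(payload, current_hostname):
    # Payload may be empty (no metrics collected yet for this host) or may
    # belong to a previous hostname after a rename; skip it in either case.
    if not payload or payload.get('hostname') != current_hostname:
        return None
    return payload
```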
2025-10-09 14:08:01 -04:00
Chris Coutinho
612e8e7688 Fix duplicate metrics in AWX subsystem_metrics (#15964)
Separate out operation subsystem metrics to fix duplicate error

Remove unnecessary comments

Revert to single subsystem_metrics_* metric with labels

Format via black
2025-10-09 10:28:55 +02:00
Andrea Restle-Lay
0d18308112 [AAP-46830]: Fix AWX CLI authentication with AAP Gateway environments (#16113)
* migrate pr #16081 to new fork

* add test coverage - add clearer error messaging

* update tests to use monkeypatch
2025-10-02 10:06:10 -04:00
Chris Meyers
f51af03424 Create system_administrator rbac role in migration
* We had race conditions with the system_administrator role being
  created just-in-time. Instead of fixing the race condition(s), dodge
  them by ensuring the role always exists
2025-10-02 08:25:40 -04:00
Alan Rominger
622f6ea166 AAP-53980 Disconnect logic to fill in role parents (#15462)
* Disconnect logic to fill in role parents

Get tests passing hopefully

Whatever SonarCloud

* remove role parents/children endpoints and related views

* remove duplicate get_queryset method from RoleTeamsList

---------

Co-authored-by: Peter Braun <pbraun@redhat.com>
2025-10-02 13:06:37 +02:00
Seth Foster
2729076f7f Add basic auth to subscription management API (#16103)
Allow users to do subscription management using
Red Hat username and password.

In basic auth case, the candlepin API
at subscriptions.rhsm.redhat.com will be used instead
of console.redhat.com.

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2025-10-02 01:06:47 -04:00
Alan Rominger
6db08bfa4e Rewrite the s3 upload step to fix breakage with new Ansible version (#16111)
* Rewrite the s3 upload step to fix breakage with new Ansible version

* Use commit hash for security

* Add the public read flag
2025-09-30 15:47:54 -04:00
Stevenson Michel
ceed41d352 Sharing Credentials Across Organizations (#16106)
* Added tests for cross org sharing of credentials

* added negative testing for sharing of credentials

* added conditions and tests for roleteamslist regarding cross org credentials

* removed redundant code

* made the error message more articulate and specific
2025-09-30 10:44:27 -04:00
jessicamack
98697a8ce7 Fix Grafana notification bug (#16104)
* accept empty string for dashboard and panel IDs

* update grafana tests and add new one
2025-09-29 10:04:58 -04:00
Alan Rominger
f1edbd8ef5 Add npm cache path to fix UI building (push images job) (#16097)
* Add npm cache path to fix UI building

* skip version switch
2025-09-23 09:16:41 -04:00
Alan Rominger
d0a99c37c1 Use action before schema logic, fix failure to upload schema (#16099)
Use action before schema logic
2025-09-23 09:16:07 -04:00
Andrea Restle-Lay
1f06d1bb9a [AAP-44277] License module now validates API responses for subscription IDs. (Moved from Tower) (#16096)
* resolve bug and add simple unit tests

* Update awx_collection/plugins/modules/license.py

Co-authored-by: Andrew Potozniak <tyraziel@gmail.com>

---------

Co-authored-by: Andrew Potozniak <tyraziel@gmail.com>
2025-09-22 15:27:47 -04:00
Alan Rominger
873f5c0ecc Remove some attached methods from User model (#15325)
Remove archaic monkey patches (#15338)

Remove some attached methods from User model

Test user-org sublist URLs we did not test before
2025-09-22 14:19:08 -04:00
AlanCoding
b31da105ad Merge remote-tracking branch 'awx/devel' into merge_26_2 2025-09-18 16:36:03 -04:00
Dirk Jülich
a285843cf2 AAP-35227 Extend role_check.py to delete orphaned InstanceLink objects as well (#7105) 2025-09-18 16:13:06 -04:00
AlanCoding
dd02d56de6 Prefer devel setup.cfg and TODO marks for expected awx-plugin 2025-09-18 15:57:51 -04:00
AlanCoding
b1944ba676 Remove code intended to be removed 2025-09-18 14:41:44 -04:00
AlanCoding
24818b510d Re-apply PR 15862 2025-09-18 09:41:41 -04:00
AlanCoding
55a7591f89 Resolve actions conflicts and delete unwanted files
Bump migrations and delete some files

Resolve remaining conflicts

Fix requirements

Flake8 fixes

Prefer devel changes for schema

Use correct versions

Remove sso connected stuff

Update to modern actions and collection fixes

Remove unwanted alias

Version problems in actions

Fix more versioning problems

Update warning string

Messed it up again

Shorten exception

More removals

Remove pbr license

Remove tests deleted in devel

Remove unexpected files

Remove some content missed in the rebase

Use sleep_task from devel

Restore devel live conftest file

Add in settings that got missed

Prefer devel version of collection test

Finish repairing .github path

Remove unintended test file duplication

Undo more unintended file additions
2025-09-17 10:23:19 -04:00
Alan Rominger
38f858303d Use DAB utility to sync RoleDefinition compat role create (#7104) 2025-09-17 10:23:19 -04:00
Stevenson Michel
0fa8135691 Fix Role Definition Reverse Sync (#7097)
* created manual sync for role definition

* made changes for only read role
2025-09-17 10:23:19 -04:00
TVo
e63eba247f AAP-37812 Added mention about setting correct env variable in cli usage (#16091)
Added mention about setting correct env variable in cli usage
2025-09-09 15:22:08 -06:00
AlanCoding
8fb6a3a633 Merge remote-tracking branch 'tower/test_stable-2.6' into merge_26_2 2025-09-04 23:06:53 -04:00
thedoubl3j
7dc4f149a7 Fix rebase merge conflicts
* had to rebase and accept both in some cases
* remove unused imports
2025-09-04 15:17:54 -04:00
John Westcott IV
2c96c48a5c feat: Add ORG_ADMINS_CAN_SEE_ALL_USERS and MANAGE_ORGANIZATION_AUTH to settings migration (#7075)
- Add ORG_ADMINS_CAN_SEE_ALL_USERS and MANAGE_ORGANIZATION_AUTH to the
  settings_to_migrate list in SettingsMigrator
- Create comprehensive unit tests for SettingsMigrator class with
  parameterized test cases
- Tests cover all migration scenarios including the new organizational
  settings
- Refactored tests use pytest.mark.parametrize for better maintainability
  and coverage

Co-authored-by: Claude <claude@anthropic.com>
2025-09-04 15:13:21 -04:00
Hao Liu
58dcd2f5dc Bump setuptools to 80.9.0 (#7076)
Updated setuptools version from 78.1.1 to 80.9.0 in Makefile, requirements.in, and requirements.txt to ensure compatibility and address any potential issues with older versions.
2025-09-04 15:13:21 -04:00
Peter Braun
25896a8772 Fix credential types no org (#7078)
* Allow creating galaxy credential types without an organization (#16077)

* remove requirement for galaxy credentials to belong to an organization

* remove organization check for galaxy credential type

* add functional test
2025-09-04 15:13:20 -04:00
Andrew Potozniak
d96727c3bd Remove 'Controller' from name (#7077) 2025-09-04 15:13:20 -04:00
Hao Liu
d8737435fa [stable-2.6] Bump dependency (#7070)
* Update Python dependencies

Relaxed or updated version constraints for several dependencies in requirements files and Makefile, including Cython, asciichartpy, msgpack, python-daemon, and pyyaml. These changes address build issues, remove unnecessary pins, and update to newer compatible versions.

* remove docutils license

* we no longer have this as a dep so we don't need to carry its license

* Update dependencies to address security vulnerabilities

Bumped versions of cryptography, protobuf, and idna in requirements to address CVE-2024-26130, CVE-2025-4565, and CVE-2024-3651. These updates improve security by resolving known vulnerabilities in the affected packages.

---------

Co-authored-by: thedoubl3j <jljacks93@gmail.com>
2025-09-04 15:13:20 -04:00
Madhu Kanoor
bb46268eec [AAP-52144] Remove AWX Prefix from the SAML migrator (#7072)
We were adding an AWX prefix to SAML migrator
2025-09-04 15:13:20 -04:00
Peter Braun
af2efec2b4 fix: do not create multiple mappers for lists of emails or usernames (#7063)
* fix: do not create multiple mappers for lists of emails or usernames

* fix: create multiple matchers, don't rely on matches_or

* fix tests

* truncate mapper names to a max of 128 chars

* better naming scheme for matchers
2025-09-04 15:13:20 -04:00
John Westcott IV
a7eb1ef763 [AAP-51531] Fix LDAP authentication mapping and bug in LDAP migration (#7061)
* Add LDAP support to gateway_mapping and expand test coverage

- Add new process_ldap_user_list function for LDAP group processing
- Add auth_type parameter to org_map_to_gateway_format and team_map_to_gateway_format
- Support both 'sso' and 'ldap' authentication types in mapping functions
- Fix syntax error and logic bug in existing code
- Add comprehensive unit tests for process_ldap_user_list function (13 test cases)
- Add unit tests for auth_type parameter functionality
- Update helper functions to support new auth_type parameter
- All tests pass and maintain backward compatibility

Technical changes:
- process_ldap_user_list handles None, boolean, string, and list inputs
- Proper type hints with mypy compatibility
- LDAP groups use 'has_or' trigger format vs SSO attribute matching
- Boolean True/False create Always/Never Allow triggers for LDAP
- Maintains proper ordering and mapper structure

Co-authored-by: Claude (Anthropic AI Assistant) <claude@anthropic.com>

* Fix empty list bug in process_ldap_user_list and add comprehensive tests

- Fix process_ldap_user_list to return empty list for empty input instead of creating invalid trigger
- Empty list [] now correctly returns no triggers instead of trigger with empty has_or array
- Add test case for empty list behavior in both LDAP and SSO functions
- Update existing test_empty_list to expect correct behavior (0 triggers)
- Maintain backward compatibility for all other input types
- Comprehensive testing confirms no regression in existing functionality

Bug Details:
- Before: process_ldap_user_list([]) returned [{'name': 'Match User Groups', 'trigger': {'groups': {'has_or': []}}}]
- After: process_ldap_user_list([]) returns [] (correct behavior)
- SSO function already handled this correctly

This prevents potential Gateway issues with empty has_or arrays and ensures logical consistency.

Co-authored-by: Claude (Anthropic AI Assistant) <claude@anthropic.com>

* Add comprehensive LDAP migrator tests and fix category handling

- Add comprehensive unit test suite for LDAPMigrator class (26 tests)
- Test LDAP configuration scenarios including multiple instances, mappings, and edge cases
- Add tests for mixed boolean/group mappings, special characters in org names, and empty configs
- Fix LDAP authenticator category to always be 'ldap' (not 'ldap<suffix>')
- Add auth_type='ldap' parameter to org_map_to_gateway_format and team_map_to_gateway_format calls
- Include AAP-51531 reference comments for specific test cases
- All tests passing (26/26)

Co-authored-by: Claude <claude@anthropic.com>

---------

Co-authored-by: Claude (Anthropic AI Assistant) <claude@anthropic.com>
2025-09-04 15:13:20 -04:00
Zack Kayyali
e9928ff513 Disable SAML Authenticator upon migrate (#7062) 2025-09-04 15:13:20 -04:00
John Westcott IV
df1c453c37 Fix type hints in gateway_mapping.py process_sso_user_list function (#7060)
- Added missing Pattern and Any imports from typing
- Fixed users parameter type hint to include Pattern[str]
- Simplified overly complex return type annotation to use Any
- Added proper type narrowing with isinstance() and cast()
- Resolved mypy errors about incompatible list item types

Co-authored-by: Claude (Anthropic AI Assistant) <claude@anthropic.com>
2025-09-04 15:13:19 -04:00
Madhu Kanoor
5a89d7bc29 fix: order of role and attr in saml user_flags (#7050)
https://issues.redhat.com/browse/AAP-51127

Co-authored-by: Peter Braun <pbraun@redhat.com>
2025-09-04 15:13:19 -04:00
John Westcott IV
505ec560c8 feat: comprehensive refactor of SSO org/team mapping for Gateway authentication export (#7047)
This commit completely refactors how SSO organization and team mappings are processed
and exported for Gateway authentication, moving from a group-based approach to a more
flexible attribute-based system.

Key Changes:
- Introduced new process_sso_user_list() function for centralized user processing
- Enhanced boolean handling to support both native booleans and string representations
- Added email detection and regex pattern support for flexible user matching
- Refactored trigger generation from groups-based to attributes-based system

Gateway Mapping Enhancements (awx/main/utils/gateway_mapping.py):
- Added email regex detection for automatic email vs username classification
- Added pattern_to_slash_format() for regex pattern conversion
- Enhanced process_sso_user_list() with support for:
  - Boolean values: True/False and ["true"]/["false"]
  - String usernames and email addresses with automatic detection
  - Regex patterns with both username and email matching
  - Custom email_attr and username_attr parameters
- Refactored team_map_to_gateway_format() to use new processing system
- Refactored org_map_to_gateway_format() to use new processing system
- Changed trigger structure from {"groups": {"has_or": [...]}} to attribute-based triggers
- Improved naming convention to include trigger type in mapping names

Comprehensive Test Coverage (awx/main/tests/unit/utils/test_auth_migration.py):
- Added complete TestProcessSSOUserList class with 8 comprehensive test methods
- Enhanced TestOrgMapToGatewayFormat with string boolean and new functionality tests
- Enhanced TestTeamMapToGatewayFormat with string boolean and new functionality tests
- Added tests for email detection, regex patterns, and custom attributes
- Verified backward compatibility and integration functionality
- All existing tests updated to work with new attribute-based trigger system

Breaking Changes:
- Trigger structure changed from group-based to attribute-based
- Mapping names now include trigger descriptions for better clarity
- Function signatures updated to include email_attr and username_attr parameters

Co-Authored with Claude-4 via Cursor

Co-authored-by: Peter Braun <pbraun@redhat.com>
2025-09-04 15:13:19 -04:00
Peter Braun
4f2d28db51 Aap 50951 (#7053)
* disable authenticators that require updating the redirect URL and add groups claim to AzureAD migrator

* update tests
2025-09-04 15:13:19 -04:00
Zack Kayyali
0b17007764 AAP-49910 - Delete legacy authenticator code 2025-09-04 15:13:19 -04:00
Stevenson Michel
dfad93cf4c Deprecate legacy OAuth2 Application feature (#7045)
* Marked APIs legacy OAuth applications as deprecated

* Readded deprecation

* Fixed linter

* Added more deprecated mark to Oauth2 Api apps

* Fixed deprecation errors

* Fix tests
2025-09-04 15:13:19 -04:00
Peter Braun
6bd7c3831f enable azure ad authenticator by default (#7043) 2025-09-04 15:13:19 -04:00
Seth Foster
7b56f23c0e Prevent remote sync if rbac sync is disabled (#7044)
Syncing from new rbac to old rbac locally calls the
disable_rbac_sync() context manager.

If rbac sync is disabled, we do not need to remote
sync, as we can assume the remote syncing already
occurred in the viewset.

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2025-09-04 15:13:18 -04:00
Peter Braun
3e1b9b2c88 Improve redirect override (#7042)
* handle login redirect for the oidc migrator

* handle updating login override redirect centrally in the settings migrator

* update unit tests
2025-09-04 15:13:18 -04:00
Bruno Cesar Rocha
cf0bc16cf7 fix: tacacs+ -> TACACSPLUS (#7039)
* fix: tacacs+ -> TACACSPLUS

Gateway doesn't allow `+` to be used in slug.

AAP-50774

* Fixed assertion

---------

Co-authored-by: Andrew Potozniak <potozniak@redhat.com>
2025-09-04 15:13:18 -04:00
Andrew Potozniak
a0b6083d4e Slightly better error handling for non 200 status codes from Gateway. (#7038)
* Slightly better error handling for non 200 status codes from Gateway.

* Apply suggestion from @chrismeyersfsu

Co-authored-by: Chris Meyers <chrismeyersfsu@users.noreply.github.com>

---------

Co-authored-by: Chris Meyers <chrismeyersfsu@users.noreply.github.com>
2025-09-04 15:13:18 -04:00
Andrew Potozniak
d452098123 [AAP-50446] Error handling enhancements and GATEWAY_BASE_URL override (#7037)
* Added better error handling and messaging when the service token authentication is broken.  Allowed for GATEWAY_BASE_URL to override the service token's base url if it is set in the environment variables.
Co-Authored-By: Cursor (claude-4-sonnet)

* Removed GATEWAY_BASE_URL override for service token auth.
2025-09-04 15:13:18 -04:00
Alan Rominger
c5fb0c351d AAP-47283 [2.6] Unified display of RBAC & synchronization (#7001)
* Working branch for testing DAB RBAC changes

* AAP-48392 Handle DAB RBAC either before or after new type model (for merge) (#16045)

* Handle DAB RBAC either before or after new type model

* Translate CT to DAB CT

* Fix for rearrangement of post_migration methods

* Directly include RBAC service URLs

* Add a run before remote permission additions

* Sync old rbac to remote rbac (#7025)

Signed-off-by: Seth Foster <fosterbseth@gmail.com>

* Set DAB requirement back to devel

---------

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
Co-authored-by: Seth Foster <fosterseth@users.noreply.github.com>
2025-09-04 15:13:10 -04:00
Peter Braun
a3f2401740 feat: exit code 1 if any migration fails (#7036)
* feat: exit code 1 if any migration fails

* update tests

* remove unused variables
2025-09-04 15:03:59 -04:00
Peter Braun
ad461a3aab fix: inconsistent return values in github migrator (#7035)
* fix: inconsistent return values in github migrator

* feat: check setting value before updating and report correct status

* fix linter issues
2025-09-04 15:03:59 -04:00
Peter Braun
44c53b02ae Aap 49709 - settings migration (#7023)
* migrate settings using the existing authenticator framework

* add method to get settings value to gateway client

* add transformer functions for settings

* Switched back to PUT for settings updates

* Started wiring in testing changes

* Added settings_* aggregation results.  Added skip-github option.  Added tests.

Assisted-by: Cursor

* Added --skip-all-authenticators command line argument.  Added GoogleOAuth testing.  Added tests for skipping all authenticators.

Assisted-by: Cursor

* wip: migrate other missing settings

* update login_redirect_override in google_oauth2

* implement login redirect for azuread

* implement login redirect for github

* implement login redirect for saml

* set LOGIN_REDIRECT_OVERRIDE even if no authenticator matched

* extract logic for login redirect override to base class

* use urlparse to compare valid redirect urls

* Preserve the original query parameters

* Fix flake8 issues

* Preserve the query parameter in sso_login_url

Gateway sets the sso_login_url to

/api/gateway/social/login/aap-saml-keycloak/?idp=IdP

The idp needs to be preserved when creating the redirect

* Update awx/main/utils/gateway_client.py

Co-authored-by: Chris Meyers <chrismeyersfsu@users.noreply.github.com>

* Update awx/main/management/commands/import_auth_config_to_gateway.py

Co-authored-by: Chris Meyers <chrismeyersfsu@users.noreply.github.com>

* list of settings updated

* Update awx/main/utils/gateway_client.py

Co-authored-by: Chris Meyers <chrismeyersfsu@users.noreply.github.com>

* Update awx/sso/utils/base_migrator.py

Co-authored-by: Chris Meyers <chrismeyersfsu@users.noreply.github.com>

* fix tests

---------

Co-authored-by: Andrew Potozniak <potozniak@redhat.com>
Co-authored-by: Madhu Kanoor <mkanoor@redhat.com>
Co-authored-by: Chris Meyers <chrismeyersfsu@users.noreply.github.com>
2025-09-04 15:03:59 -04:00
Bruno Rocha
58e237a09a fix: address reviewer comments 2025-09-04 15:03:59 -04:00
Bruno Cesar Rocha
052166df39 feat: OIDC Migrator (#7026)
AAP-48486

Add implementation to the existing oidc_migrator
2025-09-04 15:03:59 -04:00
Andrew Potozniak
b4ba7595c6 Unit Test Refactor for test_import_auth_config_to_gateway (#7024)
* Refactoring

Assisted-by: Cursor
2025-09-04 15:03:58 -04:00
Madhu Kanoor
c5211df9ca [AAP-48863] SAML Mapping migration (#7011)
This PR maps the

* SOCIAL_AUTH_SAML_TEAM_ATTR
* SOCIAL_AUTH_SAML_USER_FLAGS_BY_ATTR
* SOCIAL_AUTH_SAML_ORGANIZATION_ATTR

https://issues.redhat.com/browse/AAP-48863
2025-09-04 15:03:58 -04:00
Andrew Potozniak
5e0870a7ec AAP-48510 Enable Service Tokens with the Authentication Migration Management Command (#7017)
* Enabled Service Token Auth for Management Command import_auth_config_to_gateway

Co-authored-by: Peter Braun <pbraun@redhat.com>
Co-authored-by: Zack Kayyali <zkayyali@redhat.com>
Assisted-by: Cursor
2025-09-04 15:03:58 -04:00
jessicamack
0936b28f9b Migrate nested team memberships to direct team memberships (#7005)
* migrate team on team users

add setting to prevent team on team cases. remove tests that should fail now

* adjust tests for disallowing team on teams

* use RoleUserAssignment to retrieve users

* assign users with RoleUserAssignment instead

* fix broken test

* move methods out to utils file. add tests

* add missed positional arg

* test old rbac system also consolidates

* fix test
2025-09-04 15:03:58 -04:00
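A rough sketch of the consolidation above, assuming DAB's RoleUserAssignment model and the RoleDefinition.give_permission API (not the actual migration code):

```python
from ansible_base.rbac.models import RoleUserAssignment

def flatten_team_membership(parent_team, member_team, member_role):
    # For every user holding the member role on member_team, grant the same
    # role directly on parent_team, so no team-on-team membership is needed.
    for assignment in RoleUserAssignment.objects.filter(
        role_definition=member_role, object_id=member_team.id
    ):
        member_role.give_permission(assignment.user, parent_team)
```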
Fabricio Aguiar
8e58fee49c feat: Add migrator for Google OAuth2 authenticator (#7018)
Signed-off-by: Fabricio Aguiar <fabricio.aguiar@gmail.com>
2025-09-04 15:03:58 -04:00
Peter Braun
e746589019 Aap 49570 (#7022)
* consider global org and team maps for github authenticator

* consider global org and team maps for saml authenticator
2025-09-04 15:03:58 -04:00
Bruno Cesar Rocha
c4a6b28b87 feat: AAP-48499 TACACS+ authenticator migrator (#7014)
* feat: AAP-48499 TACACS+ authenticator migrator

Issue: AAP 48499

* enable by default
2025-09-04 15:03:58 -04:00
Bruno Cesar Rocha
abc4692231 feat: AAP-48498 RADIUS authenticator migrator (#7013)
* feat: AAP-48498 Radius authenticator migrator

Issue: AAP-48498

* fix: Naming, style, and tests

* enabled by default

* test: SECRET is now ignored unless --force is set
2025-09-04 15:03:57 -04:00
Peter Braun
ab9bde3698 add force flag to enforce updates even when authenticator already exists (#7015)
* add force flag to enforce updates even when authenticator already exists

* remove cleartext field

* update list of encrypted fields

* show updated and unchanged authenticators in report
2025-09-04 15:03:57 -04:00
Peter Braun
c5e55fe0f5 Aap 48489 (#7003)
* collect controller ldap configuration

* translate role mapping and submit ldap authenticator

* implement require and deny group mapping

* remove all references of awx in the naming

* fix linter issues

* address PR feedback

* update ldap authenticator naming

* update github authenticator naming

* assume that server_uri is always a string

* update order of evaluation for require and deny groups

* cleanup and move ldap related functions into the ldap migrator

* add skip option for saml

* update saml authenticator to new slug format

* update azuread authenticator to new slug format
2025-09-04 15:03:57 -04:00
John Westcott IV
6b2e9a66d5 Adding Azure AD export command (#7010) 2025-09-04 15:03:57 -04:00
Madhu Kanoor
512857c2a9 [AAP-48496] SAML Migration from Controller to Gateway (#6998)
This PR migrates the SAML configuration from the Controller
to the Gateway, it intentionally skips setting the CALLBACK_URL
so that the Gateway can fill in the appropriate URL.
2025-09-04 15:03:57 -04:00
Seth Foster
c2c0f2b828 [2.6] Remove controller specific role definitions (#7002)
Remove Controller specific roles

Removes
- Controller Organization Admin
- Controller Organization Member
- Controller Team Admin
- Controller Team Member
- Controller System Auditor

Going forward the platform role definitions
will be used, e.g. Organization Member

The migration will take care of any assignments
with those controller specific roles and use
the platform roles instead.

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2025-09-04 15:03:57 -04:00
Jake Jackson
534549139c bump DAB dep to devel from stable-2.5 (#6988)
* update dab dependency for 2.6 development
2025-09-04 15:03:57 -04:00
Peter Braun
d98118a108 compare authenticators and mappers before recreating them (#6989)
* compare authenticators and mappers before recreating them

* add unit tests

* fix linter errors

* refactor and improve: better implementation for get_authenticator_by_slug and removal of redundant code

* add submit_authenticator method to handle create vs. update in a generic way

* remove unused import
2025-09-04 15:03:56 -04:00
Peter Braun
e4758e8b4b Split up migrators (#6986)
* split up migration into classes for each authenticator

* remove unused import

* remove unused code

* remove unused class
2025-09-04 15:03:56 -04:00
Hao Liu
46710c4d86 AAP-48070 Remove ALLOW_LOCAL_RESOURCE_MANAGEMENT setting and enable local resource management (#16033) (#6985)
Remove ALLOW_LOCAL_RESOURCE_MANAGEMENT setting and enable local resource management

This commit removes the ALLOW_LOCAL_RESOURCE_MANAGEMENT setting and all associated
functionality, making the behavior as if the setting is always enabled.

Changes:
- Remove ALLOW_LOCAL_RESOURCE_MANAGEMENT setting from defaults.py
- Remove @immutablesharedfields decorator and all related logic
- Remove decorator applications from Organization, Team, and User API views
- Remove role assignment restrictions in UserRolesList and RoleUsersList
- Remove test file for immutablesharedfields functionality
- Clean up unused imports

Result: Organizations, Teams, and Users can now always be created, modified,
and deleted via the API without platform ingress restrictions.
2025-09-04 15:03:54 -04:00
Hao Liu
b70e884484 AAP-47495 Hide CSRF_TRUSTED_ORIGINS (#16035) (#6984)
Hide CSRF_TRUSTED_ORIGINS
2025-09-04 15:02:40 -04:00
Peter Braun
05b6f4fcb9 Aap 47760 - initial auth migration management command (#6981)
* wip: management command for authenticator export to GateWay

* wip: implement ldap auth config migration

* refactor: split concerns into gathering config and converting / recreating config

* refactor: dry run by default

* use the authenticator slug for idempotency

* move to correct utils path

* use env vars instead of flags, fix linter errors

* remove unused import
2025-09-04 15:02:38 -04:00
Peter Braun
243e27c7a9 Aap 49452 - support CONTROLLER_OPTIONAL_API_URLPATTERN_PREFIX in awxkit (#16085)
* fix: awxkit should honor CONTROLLER_OPTIONAL_API_URLPATTERN_PREFIX if defined

* add unit tests

* update tests
2025-09-03 15:22:38 -04:00
Dan Leehr
7fe525a533 Fix issue with some modules not honoring Controller API prefix (#16080)
* Fix issue where export module does not honor CONTROLLER_OPTIONAL_API_URLPATTERN_PREFIX

* Add unit test and handle leading/trailing slashes

* Reformat

* Refactor for clarity

* Remove unused import
2025-09-03 14:58:07 -04:00
Stevenson Michel
c36ce902db AAP-42929 : Retrieval of Projects of a Team and Teams of a Project (#7086)
* Fixed merge conflicts

* fix linters

* Added test for projectTeamsList
2025-09-03 14:05:17 -04:00
Lila Yasin
44e9dee9c7 [Bug Fix 4.6] AAP-49077 Task stdout escapes quotes twice only with Controller API api/v2/jobs/{id}/stdout/?format=txt (#7071)
* Move logic to unified job model instead of view

* Refine logic to only apply to double-escaped characters to prevent touching unicode chars

* Refine logic to only apply to stdout so that it does not impact webhook notifications

* Revise naming to reflect correction to escapes, not just escape quotes

* Update code comments to reflect fixing double escapes vs double escaped quotes specifically

* Add regex for 5 most common python escape chars to make fix more robust
2025-09-02 14:49:13 -04:00
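A hedged sketch of the double-escape correction; the exact escape set and helper name live in the AWX change, and the five escapes assumed here are \n, \t, \r, \" and \':

```python
import re

# Collapse double-escaped sequences (a literal \\n back to \n, and so on).
DOUBLE_ESCAPED = re.compile(r'\\\\(n|t|r|"|\')')

def fix_double_escapes(stdout: str) -> str:
    return DOUBLE_ESCAPED.sub(r'\\\1', stdout)

assert fix_double_escapes('line1\\\\nline2') == 'line1\\nline2'
```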
Dan Leehr
51eb109dbe Fix issue with some modules not honoring Controller API prefix (#16080)
* Fix issue where export module does not honor CONTROLLER_OPTIONAL_API_URLPATTERN_PREFIX

* Add unit test and handle leading/trailing slashes

* Reformat

* Refactor for clarity

* Remove unused import
2025-09-02 17:48:24 +02:00
Peter Braun
5ca76f3d64 Aap 49452 - support CONTROLLER_OPTIONAL_API_URLPATTERN_PREFIX in awxkit (#16085)
* fix: awxkit should honor CONTROLLER_OPTIONAL_API_URLPATTERN_PREFIX if defined

* add unit tests

* update tests
2025-09-02 14:47:32 +02:00
jessicamack
e3a9d9fbe8 [AAP-51443]CVE-2025-48432 (#7073)
* bump Django version to patch with additional hardening
2025-08-29 15:57:16 -04:00
Peter Braun
8b13c75f2e Allow creating galaxy credential types without an organization (#16077) (#7074)
* remove requirement for galaxy credentials to belong to an organization

* remove organization check for galaxy credential type
2025-08-28 15:15:36 +02:00
Jake Jackson
36ec5efc88 update work flow to actually fail (#7069)
* the workflow has been failing silently without catching a merge
  conflict. this removes the fail pretty logic previously implemented.
* just fail if a merge conflict is encountered
2025-08-21 18:49:54 +00:00
Lila Yasin
4e332ac2c7 AAP-45933 [2.5 Backport] AAP-4865 bug fact storage (#6945)
* Revise start_fact_cache and finish_fact_cache to use JSON file (#15970)

* Revise start_fact_cache and finish_fact_cache to use JSON file with host list inside it

* Revise artifacts path to be relative to the job private_data_dir

* Update calls to start_fact_cache and finish_fact_cache to agree with new reference to artifacts_dir

* Prevents unnecessary updates to ansible_facts_modified, fixing timestamp-related test failures.

* Import bulk_update_sorted_by_id

* Removed assert that calls ansible_facts_new which was removed in the backported pr

* Add import of Host back
2025-08-20 10:22:15 -04:00
Lila Yasin
b730bfa193 Continue work on collection ci (#16071)
* Fix some patterns in collection test playbooks

* Revert change to ansible.builtin.user

* Revert change to WFJT for dup label error

* Add error handling and fix references

* Add back lookup organization

* Fix all remaining failing syntax in workflow_job_template

* Allow creating galaxy credential types without an organization (#16077)

* remove requirement for galaxy credentials to belong to an organization

* remove organization check for galaxy credential type

---------

Co-authored-by: AlanCoding <arominge@redhat.com>
Co-authored-by: Peter Braun <pbraun@redhat.com>
2025-08-20 10:19:53 -04:00
Jake Jackson
8fe4223eac [AAP-47384] CVE 2025 47273 (#7054)
* Update requirements for setuptools

* first pass and need to commit

* update makefile and run updater script

* updated makefile per readme
* ran updater script

* Patch irc backend to avoid namespace collision w/ jaraco

When importing the IRC backend, jaraco resolves to
the version vendored inside setuptools:

1) importing irc backend…
irc_backend ERROR: ModuleNotFoundError("No module named 'jaraco.stream'")

2) sys.modules['jaraco'] after failure:
present: True
type: <class 'module'>
__file__: /var/lib/awx/venv/awx/lib64/python3.11/site-packages/setuptools/_vendor/jaraco/__init__.py
__path__: ['/var/lib/awx/venv/awx/lib64/python3.11/site-packages/setuptools/_vendor/jaraco']
__spec__: ModuleSpec(name='jaraco',
loader=<_frozen_importlib_external.SourceFileLoader object at 0x7f006a0eccd0>,
origin='/var/lib/awx/venv/awx/lib64/python3.11/site-packages/setuptools/_vendor/jaraco/__init__.py',
submodule_search_locations=['/var/lib/awx/venv/awx/lib64/python3.11/site-packages/setuptools/_vendor/jaraco'])

Since setuptools does not vendor jaraco.stream, it blew up. This patch ensures
jaraco.stream gets imported *before* attempting to import the irc modules.

* Revert "[4.6][dependency] CVE 2025 47273 (#7020)" (#7027)

This reverts commit e8b2920aec.

* reformatted irc backend with black

* ran black to fix linting issues

* Reapply "[4.6][dependency] CVE 2025 47273 (#7020)" (#7027)

This reverts commit 0c6df9b13398a93569fae7558e1a0e72cbe8fb6c.

* add flake8 ignore since jaraco.stream is needed

* jaraco.stream is not directly called in the file but is needed by irc
  so ignore the linter failure

---------

Co-authored-by: Shane McDonald <me@shanemcd.com>
2025-08-19 15:59:24 +00:00
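The workaround above boils down to loading the real jaraco.stream before anything imports irc, roughly:

```python
# Import the real jaraco.stream first so that `jaraco` in sys.modules points
# at the installed namespace package, not setuptools' vendored copy.
import jaraco.stream  # noqa: F401  (needed by irc, not referenced directly)
import irc.client  # now resolves jaraco.stream correctly
```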
Peter Braun
461678df08 Allow creating galaxy credential types without an organization (#16077)
* remove requirement for galaxy credentials to belong to an organization

* remove organization check for galaxy credential type
2025-08-18 14:21:24 +02:00
Peter Braun
e8c4b302ad remove requirement for galaxy credentials to belong to an organization (#16075) (#7066) 2025-08-15 16:27:22 -04:00
Chris Meyers
e82de50edb Fix controller_oauthtoken regression and more
* aap_token now functions like controller_oauthtoken
* lookup('awx.awx.controller_api', ...) fixed
2025-08-15 10:00:37 -04:00
Robin Bobbitt
11f31ef796 AAP-43883: clear cached LICENSE setting on change (#16065) (#7064)
* clear LICENSE from cache on change



* Adds tests for license cache clearing

Generated by Cursor (claude-4-sonnet)



* test fixes

Generated with Cursor (claude-4-sonnet)



---------

Signed-off-by: Robin Y Bobbitt <rbobbitt@redhat.com>
Co-authored-by: Jake Jackson <jljacks93@gmail.com>
2025-08-14 14:02:34 -04:00
Peter Braun
09b539bc34 remove requirement for galaxy credentials to belong to an organization (#16075) 2025-08-14 14:50:40 +00:00
Robin Bobbitt
9033e829fe fixes UnboundLocalError in POST /attach (#16062) (#7059)
* fixes UnboundLocalError in POST /attach
* bust cache for credentials before attaching subscription
---------

Signed-off-by: Robin Y Bobbitt <rbobbitt@redhat.com>
2025-08-14 09:56:25 -04:00
Elyézer Rezende
4757785016 Pin ansible-core for collection tests (#7030)
Signed-off-by: Elyézer Rezende <elyezermr@gmail.com>
2025-08-12 14:43:52 -04:00
Zack Kayyali
902f2634a6 AAP-49910 - Delete legacy authenticator code 2025-08-11 11:25:50 -04:00
Robin Bobbitt
793c85ef24 AAP-43883: clear cached LICENSE setting on change (#16065)
* clear LICENSE from cache on change

Signed-off-by: Robin Y Bobbitt <rbobbitt@redhat.com>

* Adds tests for license cache clearing

Generated by Cursor (claude-4-sonnet)

Signed-off-by: Robin Y Bobbitt <rbobbitt@redhat.com>

* test fixes

Generated with Cursor (claude-4-sonnet)

Signed-off-by: Robin Y Bobbitt <rbobbitt@redhat.com>

---------

Signed-off-by: Robin Y Bobbitt <rbobbitt@redhat.com>
Co-authored-by: Jake Jackson <jljacks93@gmail.com>
2025-08-07 03:00:41 +00:00
Robin Bobbitt
290dec8bf8 fixes UnboundLocalError in POST /attach (#16062)
* fixes UnboundLocalError in POST /attach

Signed-off-by: Robin Y Bobbitt <rbobbitt@redhat.com>

* bust cache for credentials before attaching subscription

Signed-off-by: Robin Y Bobbitt <rbobbitt@redhat.com>

---------

Signed-off-by: Robin Y Bobbitt <rbobbitt@redhat.com>
2025-08-06 21:24:57 +00:00
Lila Yasin
80f9f87181 Bug fix for AAP-47771 data migration update (#16058)
* Bug fix for AAP-47771: this data migration updates existing CredentialType entries
in the database and changes the kind from github_app to github_app_lookup

* Combine migration 0203 into 0202

* Add test to ensure reconciliation issue has been resolved
2025-08-06 15:17:53 -04:00
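For reference, a minimal sketch of such a data migration (the app label and dependency below are placeholders, not the real 0202 file):

```python
from django.db import migrations

def rename_kind(apps, schema_editor):
    CredentialType = apps.get_model('main', 'CredentialType')
    CredentialType.objects.filter(kind='github_app').update(kind='github_app_lookup')

class Migration(migrations.Migration):
    dependencies = [('main', '0201_placeholder')]
    operations = [migrations.RunPython(rename_kind, migrations.RunPython.noop)]
```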
Lila Yasin
cd12f4dcac Update Collections Syntax to get Collection related CI Checks Passing (#16061)
* Fix collection task breaking collection ci checks

* Patch ansible.module_utils.basic._ANSIBLE_PROFILE directly

* Conditionalize other sanity assertions

* Remove added blank lines and identifier from Fail if absent and no identifier set
2025-08-06 14:56:21 -04:00
Jake Jackson
3ccc5e5f2c add stable to release workflows
* we changed branch naming schema so adding in the new name
2025-07-24 15:54:19 -04:00
Jake Jackson
550ae51aec Revert "[4.6][dependency] CVE 2025 47273 (#7020)" (#7027)
This reverts commit e8b2920aec.
2025-07-23 13:22:25 -04:00
Jake Jackson
e8b2920aec [4.6][dependency] CVE 2025 47273 (#7020)
* Update requirements for setuptools

* first pass and need to commit

* update makefile and run updater script
2025-07-22 15:21:06 -04:00
Alan Rominger
7977e8639c Use full slug in DAB RBAC test (#16053) 2025-07-14 11:14:34 -04:00
Jake Jackson
03cd450669 [AAP-47877] Backport collection updates (#6992)
* Update collection args (#16025)

* update collection arguments

* Add integration testing for new param

* fix: sanity check failures

---------

Co-authored-by: Sean Sullivan <ssulliva@redhat.com>
Co-authored-by: Alan Rominger <arominge@redhat.com>

* update formatting for sanity testing

* fixing indentation for sanity suite

* adjust tests to use new token name

* update tests to use aap_token instead of controller_oauthtoken

* add back aliases for backward compat

* we have integration tests that still leverage the old token name
* while we can rename these, this tells me that customers might still
  have them in the wild and breaking them in a z stream is no bueno

* revert alias changes

---------

Co-authored-by: Peter Braun <pbraun@redhat.com>
Co-authored-by: Sean Sullivan <ssulliva@redhat.com>
Co-authored-by: Alan Rominger <arominge@redhat.com>
2025-07-10 10:14:40 -04:00
Jake Jackson
1d4b555a2c Update feature_branch_sync.yml (#7006)
fix typo in workflow title
2025-07-10 02:37:35 +00:00
Luis Villa
69df7d0e27 [AAP-48771]wfjt migration to catch renaming (#6991)
* wfjt migration to catch renaming

* Added rename_wfjt function to template constraint migration
* Add test to add duplicate names and verify that the duplicates are renamed

* move object creation

* add missing rename_wfjt operation

* fix linter issues

* fix tox issues

* test manually and move operation

* added back credential type validation code
2025-07-09 15:51:55 -04:00
Alan Rominger
bf0567ca41 AAP-48392 Handle DAB RBAC either before or after new type model (for merge) (#16045)
* Handle DAB RBAC either before or after new type model

* Translate CT to DAB CT

* Fixes for content type switch

* Use more compatible coding pattern

* Deeper purge of content_type_id

* revert, turns out that did not work

* More content type replacements

* Revert changes to serializer

* Revert another content_type change

* Fix for rearrangement of post_migration methods

* Remove thing I am not going to do

* Revert branch pin that was temporary
2025-07-02 14:28:43 -04:00
Jake Jackson
ec0732ce94 AAP-48139 add branch sync between release_4.6 and stable-2.6 (#6982)
* add branch sync between release_4.6 and stable-2.6

* add a new workflow to force push commits in release_4.6 to
  stable-2.6

* Update workflow to use matrix keyword


---------

Co-authored-by: Jake Jackson
2025-06-30 19:56:08 -04:00
Hao Liu
d6482d3898 AAP-48070 Remove ALLOW_LOCAL_RESOURCE_MANAGEMENT setting and enable local resource management (#16033)
Remove ALLOW_LOCAL_RESOURCE_MANAGEMENT setting and enable local resource management

This commit removes the ALLOW_LOCAL_RESOURCE_MANAGEMENT setting and all associated
functionality, making the behavior as if the setting is always enabled.

Changes:
- Remove ALLOW_LOCAL_RESOURCE_MANAGEMENT setting from defaults.py
- Remove @immutablesharedfields decorator and all related logic
- Remove decorator applications from Organization, Team, and User API views
- Remove role assignment restrictions in UserRolesList and RoleUsersList
- Remove test file for immutablesharedfields functionality
- Clean up unused imports

Result: Organizations, Teams, and Users can now always be created, modified,
and deleted via the API without platform ingress restrictions.
2025-06-30 10:15:26 -04:00
Hao Liu
20b203ea8e AAP-47495 Hide CSRF_TRUSTED_ORIGINS (#16035)
Hide CSRF_TRUSTED_ORIGINS
2025-06-30 09:58:19 -04:00
jessicamack
1afd23043d Remove api version from hardcoded inventory url (#16039) (#6980)
* update url endpoints

* reformat line for length
2025-06-25 22:53:03 +02:00
jessicamack
1330a1b353 Remove api version from hardcoded inventory url (#16039)
* update url endpoints

* reformat line for length
2025-06-25 21:54:21 +02:00
Matthew Sandoval
11a9a2b066 Pin receptorctl 1.5.7 (#6979) 2025-06-24 19:48:55 +00:00
Lila Yasin
5752c7a8e2 [2.5 Backport] AAP-46038 database deadlock (#6947)
Sort both bulk updates and add batch size to facts bulk update to resolve deadlock issue

Update tests to expect batch_size to agree with changes

Add utility method to bulk update and sort hosts and applied that to the appropriate locations

Update functional tests to use bulk_update_sorted_by_id since update_hosts has been deleted

Add comment NOSONAR to get rid of Sonarqube warning since this is just a test and it's not actually a security issue

Fix failing test test_finish_job_fact_cache_clear & test_finish_job_fact_cache_with_existing_data

---------

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
Co-authored-by: Alan Rominger <arominge@redhat.com>
Co-authored-by: Seth Foster <fosterbseth@gmail.com>
2025-06-16 15:32:55 -04:00
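A sketch of the deadlock-avoidance pattern described above; the utility name matches the commit message, but the signature here is an assumption:

```python
def bulk_update_sorted_by_id(model, objs, fields, batch_size=100):
    # Lock rows in a consistent (pk) order so that two concurrent bulk
    # updates cannot deadlock each other, and batch to keep transactions small.
    objs = sorted(objs, key=lambda o: o.id)
    model.objects.bulk_update(objs, fields, batch_size=batch_size)
```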
Alan Rominger
3d027bafd0 AAP-44233 Create credential types in new migration step (#6969)
* Update database to credential types in new migration file

* bump migration

* Add assertion

* Pre-delete credentials so we test recreation
2025-06-11 16:26:42 -04:00
Jake Jackson
ee19ee0c10 Update workflow to allow the workflow to write (#6975)
* Update the workflow to allow the action to write our branches from it.
* Also added username and email as git by default will want to know who
  is performing the action (edge case). Using github actions bot is
standard practice
2025-06-11 20:07:38 +00:00
Alan Rominger
f1e5cadce7 🧪 Delegate artifact merge and garbage collection to GH (#16019) (#6973)
* 🧪 Unpersist Git creds @ cov combine job

This is one of the things Zizmor [[1]] warns about.

[1]: https://docs.zizmor.sh

* 🧪 Download all coverage artifacts in one go

* 🧪 Delegate artifact garbage collection to GH

This is implemented by setting the retention days input to 1 on the
initial upload.

Co-authored-by: 🇺🇦 Sviatoslav Sydorenko (Святослав Сидоренко) <webknjaz@redhat.com>
2025-06-10 16:54:59 -04:00
Satoe Imaishi
a238c5dd09 Bump django to 4.2.21 (#6964) 2025-06-10 10:11:43 -04:00
Jake Jackson
d26c7fedb8 Add workflow to rebase release branches (#6968)
* Adds a workflow that rebases release_4.6 onto release_4.6-next
2025-06-10 10:11:43 -04:00
Jiří Jeřábek (Jiri Jerabek)
f4347d05a9 cherry-pick 222f387 to release_4.6 (#6971) 2025-06-10 10:11:42 -04:00
Hao Liu
4eefce622d Fixes pytest CI error (#6970)
```
  /var/lib/awx/venv/awx/lib64/python3.11/site-packages/_pytest/python.py:163:
  PytestReturnNotNoneWarning: Expected None, but
  awx/main/tests/unit/test_tasks.py::TestJobCredentials::test_custom_environment_injectors_with_boolean_extra_vars
  returned ['successful', 0], which will be an error in a future version
  of pytest.  Did you mean to use `assert` instead of `return`?
```

* Dug into the git blame for this one
  060585434a is the commit for any
  historians. It was wrongfully carried over from a mock pexpect
  implementation. Our new tests are nice. They don't go as far as trying
  to run the task so they do not need to mock pexpect. That is why it is
  safe to remove this code without finding it a new home.

Co-authored-by: Chris Meyers <chris.meyers.fsu@gmail.com>
2025-06-10 10:11:42 -04:00
TVo
57b8773613 [2.5/4.6 Backport] AAP-40782 Reduce queued stuck jobs (#6962)
* [2.5/4.6 Backport] AAP-40782 Reduce queued stuck jobs

* [2.5/4.6 Backport] AAP-40782 Reduce queued stuck jobs

* Incorporated review feedback from @AlanCoding

* Reformatted 4 files per CI-check for api-linters
2025-06-10 10:11:42 -04:00
Alan Rominger
d0776dabdf AAP-32143 Make the JT name uniqueness enforced at the database level (#15956) (#6958)
* Make the JT name uniqueness enforced at the database level

* Forgot demo project fixture

* New approach, done by adding a new field

* Update for linters and failures

* Fix logical error in migration test

* Revert some test changes based on review comment

* Do not rename first template, add test

* Avoid name-too-long rename errors

* Insert migration into place

* Move existing files with git

* Bump migrations of existing

* Update migration test

* Awkward bump

* Fix migration file link

* update test reference again
2025-06-10 10:11:42 -04:00
Peter Braun
2d730abb82 fix: allow unknown keyword arguments (#6972)
Co-authored-by: Peter Braun <pbranu@redhat.com>
2025-06-10 10:11:42 -04:00
Seth Foster
8896f75f9b Restore basic auth for subscriptions API (#6961)
When POSTing to console.redhat.com, fallback
to using basic auth method if OAUTH via
service accounts fails

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2025-06-10 10:11:42 -04:00
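A minimal sketch of the fallback, assuming requests and illustrative parameter names (not the AWX implementation):

```python
import requests

def fetch_subscriptions(url, token=None, username=None, password=None):
    # Prefer OAuth via a service-account token; fall back to basic auth
    # if the token is missing or rejected.
    if token:
        resp = requests.get(url, headers={'Authorization': f'Bearer {token}'})
        if resp.status_code != 401:
            return resp
    return requests.get(url, auth=(username, password))
```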
Mauricio Magnani Jr
bb6bf33b9e fix: ensure temp files are cleaned up after failed HCC (#6952) 2025-05-21 13:18:24 -04:00
Dirk Jülich
5cf3a09163 AAP-17690 Inventory variables sourced from git project are not getting deleted after being removed from source (#15928) (#6946)
* Delete existing all-group vars on inventory sync (with overwrite-vars=True) instead of merging them.

* Implementation of inv var handling with file as db.

* Improve serialization to file of inv vars for src update

* Include inventory-level variable editing into inventory source update handling

* Add group vars to inventory source update handling

* Add support for overwrite_vars to new inventory source handling

* Persist inventory var history in the database instead of a file.

* Remove logging which was needed during development.

* Remove further debugging code and improve comments

* Move special handling for user edits of variables into serializers

* Relate the inventory variable history model to its inventory

* Allow for inventory variables to have the value 'None'

* Fix KeyError in new inventory variable handling

* Add unique-together constraint for new model InventoryGroupVariablesWithHistory

* Use only one special invsrc_id for initial update and manual updates

* Fix internal server error when creating a new inventory

* Print the empty string for a variable with value 'None'

* Fix comment which incorrectly states old behaviour

* Fix inventory_group_variables_update tests which did not take the new handling of None into account

* Allow any type for Ansible-core variable values

* Refactor misleading method names

* Fix internal server error when saving vars from group form

* Remove superfluous json conversion in front of JSONField

* Call variable update from create/update instead from validate

* Use group_id instead of group_name in model InventoryGroupVariablesWithHistory

* Disable new variable update handling for all regular (non-'all') groups

* Add live test to verify AAP-17690 (inv var deleted from source)

* Add functional tests to verify inventory variables update logic

* Fix migration which was corrupted by a rebase

* Add a more complex live test and resolve linter complaints

* Force overwrite_vars=False for updates from source on all-group

* Change behavior with respect to overwrite_vars
2025-05-19 21:38:33 +02:00
Seth Foster
54db6c792b Revert "unpin sqlparse dependency (#6911)" (#6950)
This reverts commit 3e122778e4.
2025-05-16 22:53:54 +00:00
Peter Braun
3e122778e4 unpin sqlparse dependency (#6911)
* unpin sqlparse dependency

* remove sqlparse license
2025-05-15 15:51:34 -04:00
Elijah DeLee
f98b2e2455 introduce age for workers and mandatory retirement
Retire workers after a certain age, allowing them to finish their
current task if they are not idle.

This mitigates any issues like memory leaks in long running workers,
especially if systems stay busy for months at a time.

Introduce new optional setting WORKER_MAX_LIFETIME_SECONDS, defaulting to 4 hours
2025-05-15 14:15:50 -04:00
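In code terms, the retirement rule is roughly (helper name is hypothetical):

```python
import time

# Default from the commit message: 4 hours.
WORKER_MAX_LIFETIME_SECONDS = 4 * 60 * 60

def should_retire(started_at: float) -> bool:
    # Checked between tasks: a worker past its maximum lifetime finishes
    # its current task and exits instead of picking up new work.
    return (time.monotonic() - started_at) >= WORKER_MAX_LIFETIME_SECONDS
```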
Seth Foster
12dcc10416 [4.6] Update subscription API to use service accounts (#6927)
* Update subscription API to use service accounts

Update code to pull subscriptions from
console.redhat.com instead of
subscription.rhsm.redhat.com

Uses service account client ID and client secret
instead of username/password, which is being
deprecated in July 2025.

Additional changes:

- In awx.awx.subscriptions module, use new service
account params rather than old basic auth params

- Update awx.awx.license module to use subscription_id
instead of pool_id. This is due to using a different API,
which identifies unique subscriptions by subscriptionID
instead of pool ID.

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
Co-authored-by: Chris Meyers <chris.meyers.fsu@gmail.com>
Co-authored-by: Peter Braun <pbraun@redhat.com>

* fix token name

Signed-off-by: Seth Foster <fosterbseth@gmail.com>

* Fix Subscriptions credentials fallback

Ensure service account authentication is being used
when falling back to using SUBSCRIPTIONS_CLIENT_ID.

Additional change:
Subscription data can return two types of capacities:
Sockets and Nodes

For determining overall capacity
if capacity name is Nodes:
  capacity quantity x subscription quantity
if capacity name is Sockets:
  capacity quantity / 2 (minimum of 1) x subscription quantity

Signed-off-by: Seth Foster <fosterbseth@gmail.com>

---------

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
Co-authored-by: Chris Meyers <chris.meyers.fsu@gmail.com>
Co-authored-by: Peter Braun <pbraun@redhat.com>
2025-05-13 15:41:35 -04:00
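The capacity arithmetic above as a worked sketch (integer halving with a floor of 1 is an assumption about rounding):

```python
def licensed_capacity(capacity_name, capacity_quantity, subscription_quantity):
    # Nodes count as-is; Sockets count as quantity halved (minimum of 1).
    # Either way, multiply by the subscription quantity.
    if capacity_name == 'Sockets':
        return max(capacity_quantity // 2, 1) * subscription_quantity
    return capacity_quantity * subscription_quantity

# 4 Sockets x 10 subscriptions -> 20; 4 Nodes x 10 subscriptions -> 40
```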
Mauricio Magnani Jr
6bd39aea4b OPA server hostname with https or http results in connection errors (#6921)
Signed-off-by: Mauricio Magnani <magnani@redhat.com>
2025-05-13 14:41:29 -04:00
Hao Liu
b7a3c6b025 Delete UI test from CI (#6831)
In release_4.6 we no longer ship the UI in AAP 2.5, so there's no reason to waste CI time testing the UI
2025-05-13 14:40:07 -04:00
jessicamack
ba7ee23298 [backport][4.6] Update Azure Key Vault plugin to use Managed Identity (#6939)
* Bug on file name. Committing to remove it.

* Update azure_kv plugin to use ManagedIdentity. Add testing.
2025-05-13 10:05:24 -04:00
Jake Jackson
825a48bb32 [4.6] Insights Credential Help Text Update (#6937)
* Update help text for insights cred

* update help text for the insights cred per new mock ups
2025-05-08 10:45:14 -04:00
thedoubl3j
eb6aebff00 Update logic to not overwrite ec2 replace
* fix replace logic so that we don't overwrite and stay only at vmware
  when ec2 is selected
* add an env.json for functional testing
2025-05-08 15:06:58 +02:00
thedoubl3j
60114ab929 add logic to replace cred name when using esxi
* similar to aws, allow the use of the standard vmware cred
2025-05-08 15:06:58 +02:00
Peter Braun
0e28d2590a fix: keep processing events, even if previous event data cannot be pa… (#15965) (#6922)
* fix: keep processing events, even if previous event data cannot be parsed

* change log level to warning
2025-05-05 13:26:42 +02:00
jessicamack
2bc08b421d Bring WFJT job access to parity with UnifiedJobAccess (#15344) (#6935)
* Bring WFJT job access to parity with UnifiedJobAccess

* Run linters

Co-authored-by: Alan Rominger <arominge@redhat.com>
2025-04-30 13:43:47 -04:00
Matthew Sandoval
cae8a4e16c Pin receptorctl 1.5.5 (#6931) 2025-04-29 11:01:22 -07:00
Chris Meyers
41d3729501 Allow ipv6 address for TOWER_URL_BASE setting
* Django url validators support ipv6, our custom URLField
  allow_plain_hostname feature was messing with the hostname before it
  was passed to Django validators.
* This change dodges the allow_plain_hostname transformations if the
  value looks like it's an ipv6 one.
2025-04-29 09:55:16 -04:00
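A sketch of the guard, with a hypothetical helper name:

```python
import ipaddress

def looks_like_ipv6(host: str) -> bool:
    # Used before the allow_plain_hostname rewriting: IPv6 literals
    # (optionally bracketed) are passed straight to Django's validators.
    try:
        return isinstance(ipaddress.ip_address(host.strip('[]')), ipaddress.IPv6Address)
    except ValueError:
        return False
```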
TVo
a3303bb74b Pin ansible-runner 2.4.1 and receptorctl 1.5.4 (#6919)
* Updated pinned runner and receptorctl in controller.

* Bumped receptorctl to 1.5.4
2025-04-22 11:13:58 -07:00
Alan Rominger
6a10e0ea5c AAP-41139 [4.6][dependencies] Bump Django 2 minor versions (#6892)
* Initial requirement bump for Django CVE

* Run updater script
2025-04-16 14:41:05 -04:00
Bruno Rocha
6690d71357 [4.6][Backport][Feature] feat: Manage Django Settings with Dynaconf (#6910)
Dynaconf is being added from DAB factory to load Django Settings
2025-04-15 15:17:16 -04:00
Hao Liu
ae0a8a80eb [4.6] Fix Cert base authentication for OPA (#6909)
* Remove unused setting

* Fix mTLS auth to OPA server

- Workaround https://github.com/Turall/OPA-python-client/issues/32
- Add tests for `opa_cert_file` context manager
2025-04-15 11:07:33 -04:00
Hao Liu
4532c627e3 Update dependencies for DAB (#6914)
Co-authored-by: Satoe Imaishi <simaishi@redhat.com>
2025-04-15 09:55:45 -04:00
Lila Yasin
87cb6dc0b9 AAP-39365 facts are unintentionally deleted when the inventory is modified during a job execution (#15910) (#6905)
* Added test_jobs.py to the model unit test folder in order to show the undesired behaviour with fact cache



* Added test_jobs.py to the model unit test folder in order to show the undesired behaviour with fact cache



* Solved undesired behaviour with fact_cache



* Solved bug with slices



* Remove unused imports

Remove now unused line of code which was commented out by the contributor

Revert "Remove now unused line of code which was commented out by the contributor"

This reverts commit f1a056a2356d56bc7256957a18503bd14dcfd8aa.

* Add back line that had been commented out as this line makes hosts specific to the particular slice when applicable

Revise private_data_dir fixture to see if it improves code coverage

Checked out awx/main/tests/unit/models/test_jobs.py in devel to see if it resolves git diff issue

* Fix formatting in awx/main/tests/unit/models/test_jobs.py

Rename for loop from host in hosts to hosts in hosts_cached and remove unneeded continue

Revise finish_fact_cache to utilize inventory rather than hosts

Remove local var hosts that was assigned but unused

Revert change in start_fact_cache hosts_cached back to hosts

Revise the way we are handling hosts_cached and joining the file

Revert "Revise the way we are handling hosts_cached and joining the file"

This reverts commit e6e3d2f09c1b79a9bce3647a72e3dad97fe0aed8.

Reapply "Revise the way we are handling hosts_cached and joining the file"

This reverts commit a42b7ae69133fee24d3a5f1b456d9c343d111df9.

Revert some of my changes to get back to a better working state

Rename for loop to host in hosts_cached and remove unneeded continue

Remove job.get_hosts_for_fact_cache() from post run hook, fix if statement after continue block, and revise how we are calling hosts in finish for loop

Add test_invalid_host_facts to test_jobs to increase code coverage

Update method signature to use hosts_cached and updated other references to hosts in finish_facts_cached

Rename hosts iterator to hosts_cached to agree with naming elsewhere

Revise test_invalid_host_facts to get more code coverage

Revise test_invalid_host_facts to increase codecov

Revise test_pre_post_run_hook_facts_deleted_sliced to ensure we are hitting the assertionerror for code cov

Revise mock_inventory.hosts to hit assert failure

Add revision of hosts and facts to force failure to satisfy code cov

Fix failure in test_pre_post_run_hook_facts_deleted_sliced

Add back for loop to create failures and add assert to hit them

Remove hosts.iterator() from both start_fact_cache and finish_fact_cache

Remove unused import of Queryset to satisfy api-lint

Fix typo in docstring hasnot to has not

Move hosts_cached.append(host) to outer loop in start_fact_cache

Add class to help support cached hosts resolving host.name issue with hosts_cached

* Add live tests for ansible facts

Remove fixture needed for local work only maybe

Revert "Add class to help support cached hosts resolving host.name issue with hosts_cached"

This reverts commit 99d998cfb9960baafe887de80bd9b01af50513ec.

* Move hosts_cached.append(host) outside of try except

* Move hosts_cached.append(host) to the beginning of start_fact_cache

---------

Signed-off-by: onetti7 <davonebat@gmail.com>
Co-authored-by: onetti7 <davonebat@gmail.com>
Co-authored-by: Alan Rominger <arominge@redhat.com>
2025-04-11 11:08:26 -04:00
Lila Yasin
bbcdef18a7 [2.5][Backport] AAP 30045 incorrect deprecation warning for awx.awx.schedule rrule (#6903)
* Make lookup plugins return lists to fix failures (#15625)

* Make lookup plugins return lists to fix failures

* Update unit tests

* Use lookup for test failures, update docs

* Grammar fix from review

Co-authored-by: Sviatoslav Sydorenko (Святослав Сидоренко) <wk.cvs.github@sydorenko.org.ua>

---------

Co-authored-by: Sviatoslav Sydorenko (Святослав Сидоренко) <wk.cvs.github@sydorenko.org.ua>

* Fix cherry-pick merge issue, should resolve failing linter

---------

Co-authored-by: Alan Rominger <arominge@redhat.com>
Co-authored-by: Sviatoslav Sydorenko (Святослав Сидоренко) <wk.cvs.github@sydorenko.org.ua>
2025-04-11 11:02:45 -04:00
Jake Jackson
d35d7f62ec Esxi inventory plugin (#6895)
* Add in ESXI plugin as a choice

* added in vmware esxi as an inventory source
* made a migration that may not be needed but will need to circle back

* black formatting

* linter fixes that I missed in the first commit, squash

* Update esxi to use_fqcn to true

* added use_fqcn on the esxi cred to true to correctly lay down
  collection name

* add fqcn true

* updated vmware esxi to use true for fqcn

* Update defaults and re-order migrations

* updated defaults to add correct env var to get empty
* re-ordered migrations to be in line with others

* Add condition to replace vmware_esxi cred

* replace direct name match with vmware cred since source supports old
  cred

* add skeleton test

* quick pass, needs more
* squash this

* Add tests for creating inventory ESXI source

* add test case to test creating an inventory with different cred type
  to source name

* update test and linting

* added correct cred return since esxi uses same cred

* assert on status code

* assert that we received a 204

* Added new folder for vmware_exsi and empty json file.

* Corrected the misspelling of folder name to 'esxi'

* fixed misspelling for `vmware_`

---------

Co-authored-by: Thanhnguyet Vo <thavo@redhat.com>
2025-04-10 14:32:17 -04:00
Lila Yasin
9d9c125e47 [4.6][Backport][Feature] feat: 38589 GitHub App Authentication (#15807) (#6887)
* feat: 38589 GitHub App Authentication (#15807)

* feat: 38589 GitHub App Authentication

Allows both git@<personal-token> and x-access-token@<github-access-token> when authenticating using git.
This allows GitHub App tokens to work without interfering with existing authentication types.

---------

Co-authored-by: Jake Jackson <thedoubl3j@Jakes-MacBook-Pro.local>

* revert change made to allow UI to accept x-access-token, just use htt… (#15851)

revert change made to allow UI to accept x-access-token, just use https:// instead

* Add Github dep for new cred support if used (#15850)

* Add pygithub for new app token support

* fixed git requirements file with new
* added new github dep and relevant deps it needs

* add required licenses

* Add artifacts to satisfy license check

* Remove duplicated license

---------

Co-authored-by: Andrea Restle-Lay <arestlel@redhat.com>
Co-authored-by: Alan Rominger <arominge@redhat.com>

* Remove deps update; it came with the cherry-pick and is not needed in this version

Remove unneeded deps updates from requirements.in

Remove point to awx-plugins as it is not needed in tower

* Add a credential plugin that uses GitHub Apps to get tokens

* Add github app tests

* Ran requirements updater script

Ran black on github_app_test to fix formatting issue

Add scm_github_app to managed credentials

Ran updater script to reflect new deps

Added github app info to def build_passwords in jobs.py, cred now appears in credential types

Update ManagedCredentialType for GitHub App to match what we have in awx-plugins

Update inputs in ManagedCredentialType to github_app_inputs to communicate with awx/main/credential_plugins/github_app.py

Revert incorrect change in ManagedCredentialType, change github_app_lookup to call inputs instead of github_app_inputs

Updated namespace to github_app_lookup to agree with nomenclature used in the rest of the implementation and to resolve failing API test

Remove import pointing to awx plugins and update to point to credential_plugins

Remove references to gh_app_plugin_mod and change to github_app

Remove from awx_plugins.interfaces._temporary_private_django_api import (  # noqa: WPS436 to resolve failing test

Remove flake8 typing & typing references that do not exist in this version of Tower

Remove references in jobs.py and  __init__.py since this is an external cred type and registered it in setup.cfg instead

Remove blank line

Revise name in cfg from github_app_lookup to github_app to see if that fixes module not found error

Revise first declaration of github_app to agree with file name to see if that resolves issue

Rename line 174 to agree with what's in config

Fix reference to github_app_lookup to github_app

Linters complaining about the github_app in __all__ not being defined, renamed to see if that aligns them

Fix naming in test to correspond to naming of cred type

Update naming to be more specific and add blank line to setup.cfg

Remove __all__ from githubapp.py to satisfy linters

Revert formatting change since it is not needed in this repository

* Add blank line at the end of requirements.in

---------

Co-authored-by: Andrea Restle-Lay <andrearestlelay@gmail.com>
Co-authored-by: Jake Jackson <thedoubl3j@Jakes-MacBook-Pro.local>
Co-authored-by: Jake Jackson <jljacks93@gmail.com>
Co-authored-by: Andrea Restle-Lay <arestlel@redhat.com>
Co-authored-by: Alan Rominger <arominge@redhat.com>
2025-04-10 14:32:17 -04:00
Chris Meyers
5dd81a04ce Aap 41580 indirect host count wildcard query (#15893)
* Support <collection_namespace>.<collection_name>.* indirect host query
  to match ANY module in the <collection_namespace>.<collection_name>
* Add tests for new wildcard indirect host count
* error checking of ansible event name
* error checking of ansible event query
2025-04-10 14:32:15 -04:00
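A sketch of the wildcard semantics (fnmatch is a convenient stand-in; the real matcher in AWX may differ):

```python
import fnmatch

def query_matches(resolved_action: str, query_key: str) -> bool:
    # 'amazon.aws.*' matches ANY module in the amazon.aws collection,
    # while exact keys still match themselves.
    return fnmatch.fnmatchcase(resolved_action, query_key)

assert query_matches('amazon.aws.ec2_instance', 'amazon.aws.*')
assert not query_matches('community.general.foo', 'amazon.aws.*')
```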
Dirk Jülich
e060e44b05 AAP-37381 Apply Django password validators correctly. (#6902)
* Move call to django_validate_password to the correct method where the user object is available.
* Added tests for the Django password validation functionality.
2025-04-02 16:45:58 +02:00
Chris Meyers
db5b6d0019 Add changelog to awx collection 2025-03-25 13:41:53 -04:00
Chris Meyers
a2c8ecb4e6 Bump awx collection ansible required version 2025-03-25 13:41:53 -04:00
Chris Meyers
277bc581e7 Remove coarse grain unused import
* It would seem that fine-grain noqa pylint ignores do the job and are
  already in place. Prefer that over the coarse entire file ignore.
2025-03-25 13:41:53 -04:00
Chris Meyers
ef89c59a13 Fix ansible-lint empty lines in module docstrings 2025-03-25 13:41:53 -04:00
Chris Meyers
5872a88a57 Fix ansible-lint truthy in module docstrings 2025-03-25 13:41:53 -04:00
Chris Meyers
7fdd15f115 Fix ansible-lint indentation in module docstrings 2025-03-25 13:41:53 -04:00
Peter Braun
353f0adf36 [4.6][backport] do not count dark hosts as updated (#6877)
* feat: do not count dark hosts as updated (#15872)

* feat: do not count dark hosts as updated

* update functional tests

* Fix test flake due to host metric id enumeration (#15875)

---------

Co-authored-by: Alan Rominger <arominge@redhat.com>
2025-03-18 16:34:52 +01:00
Peter Braun
bdfd9dec74 update: use singular form ANSIBLE_COLLECTIONS_PATH (#15841) (#6876)
* update: use singular form ANSIBLE_COLLECTIONS_PATH

* update functional tests
2025-03-18 14:44:57 +01:00
Hao Liu
bad4e630ba Basic runtime enforcement of policy as code part 2 (#6875)
* Add `opa_query_path field` for Inventory, Organization and JobTemplate models (#6850)

Add `opa_query_path` model field to Inventory, Organization and JobTemplate. Add migration file and expose opa_query_path field in the related API serializers.

* Gather and evaluate `opa_query_path` fields and raise violation exceptions (#6864)

Gather and evaluate all OPA queries related to a job execution during the policy evaluation phase

* Add OPA_AUTH_CUSTOM_HEADERS support (#6863)

* Extend policy input data serializers (#6890)

* Extend policy input data serializers

* Update help text for PaC related fields (#6891)

* Remove encrypted from OPA_AUTH_CUSTOM_HEADERS

Unable to encrypt a dict field

---------

Co-authored-by: Jiří Jeřábek (Jiri Jerabek) <Jerabekjirka@email.cz>
Co-authored-by: Alexander Saprykin <cutwatercore@gmail.com>
Co-authored-by: Tina Tien <98424339+tiyiprh@users.noreply.github.com>
2025-03-18 02:39:26 +00:00
Seth Foster
e9f2a14ebd Fix root container path for project updates in K8s
Modifies to_container_path to accept an optional
container_root parameter.

Normally this defaults to /runner, but in K8S
environments, project updates run from the
private_data_dir, e.g. /tmp/awx_1_123abc, not
/runner.

In that situation, we just pass in private_data_dir
as the container_root.

---------

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
Co-authored-by: TVo <thavo@redhat.com>
2025-03-13 13:34:12 -04:00
Alan Rominger
01fae57de2 Fix indirect host counting task test race condition (#15871) 2025-03-12 13:50:19 -04:00
Alan Rominger
c7ac45717b Pin drf-yasg to make api-test pass (#15887)
Pin drf-yasg to make api-test pass
2025-03-12 13:50:19 -04:00
Alan Rominger
c7b6b43913 [4.6] Attempt to fix ui-lint check by clearing cache forcefully (main fork) (#6885)
* Try to make ui-lint check pass in 4.6

* Implement solution suggested by Kia
2025-03-11 11:52:30 -04:00
Alan Rominger
1e6a7c0749 Prevent system auditor from downloading install bundle (#6805) 2025-03-11 10:54:02 -04:00
Alan Rominger
b5bc85e639 AAP-41692 [4.6] Update jinja2 for CVE (#6881)
* Initial bump of jinja2 lib

* Run updater script
2025-03-11 10:45:37 -04:00
Jake Jackson
f04bf5ccf0 update receptor (#6869)
* bumped receptor version to latest
2025-03-04 15:16:58 -05:00
Peter Braun
28712a4c6e fix: audit record name should not be the hostname (#15864) (#6866)
* fix: audit record name should not be the hostname

* fix: update tests
2025-03-03 16:49:42 +01:00
Hao Liu
698d769a7a [Policy as Code] Monkey patch opa_client.base.BaseClient (#6865)
Workaround bug described in https://github.com/Turall/OPA-python-client/issues/29
2025-02-26 21:09:51 +00:00
Alan Rominger
529ee73fcd [4.6] Backport the "live" tests (#6859)
* Create a new pytest folder for live system testing with normal services (#15688)

* PoC for running dev env tests

* Replace in github actions

* Move folder to better location

* Further streamlining of new test folders

* Consolidate fixture, add writeup docs

* Use star import

* Push the wait-for-job to the conftest

Fix misused project cache identifier (#15690)

Fix project cache identifiers for new updates

Finish test and discover viable solution

Add comment on related task code

AAP-37989 Tests for exclude list with multiple jobs (#15722)

* Tests for exclude list with multiple jobs

Create test for using manual & file projects (#15754)

* Create test for using a manual project

* Change default project factory to git, remove project files monkeypatch

* skip update of factory project

* Initial file scaffolding for feature

* Fill in galaxy and names

* Add README, describe project folders and dependencies

Add ee cleanup tests

* Adds cleanup tests to the live test.

Fix rsyslog permission error in github ubuntu tests from apparmor (#15717)

* Add test to detect rsyslog config problems

* Get dmesg output

* Disable apparmor for rsyslogd

Make awx/main/tests/live dramatically faster (#15780)

* Make awx/main/tests/live dramatically faster

* Add new setting to exclude list

* Fix rebase issues

* Did not want to backport this
2025-02-25 15:22:38 -05:00
jessicamack
ba053dfb51 Ship analytics data using service account token (#15812) (#6856)
Use oidc client to ship analytics data
2025-02-25 14:42:17 -05:00
Alan Rominger
b351dfb102 Undo temporary DAB change for requirements generation (#6862) 2025-02-25 09:30:15 -05:00
Alan Rominger
b502a9444a [4.6 backport] Feature indirect host counting (#15802) (#6858)
* Feature indirect host counting (#15802)

* AAP-37282 Add parse JQ data and test it for a `job` object in isolation (#15774)

* Add jq dependency

* Add file in progress

* Add license for jq

* Write test and get it passing

* Successfully test collection of `event_query.yml` data (#15761)

* Callback plugin method from cmeyers adapted to global collection list

Get tests passing

Mild rebranding

Put behind feature flag, flip true in dev

Add noqa flag

* Add missing wait_for_events

* feat: try grabbing query files from artifacts directory (#15776)

* Contract changes for the event_query collection callback plugin (#15785)

* Minor import changes to collection processing in callback plugin

* Move agreed location of event_query file

* feat: remaining schema changes for indirect host audits (#15787)

* Re-organize test file and move artifacts processing logic to callback (#15784)

* Rename the indirect host counting test file

* Combine artifacts saving logic

* Connect host audit model to jq logic via new task

* Add unit tests for indirect host counting (#15792)

* Do not get django flags from database (#15794)

* Document, implement, and test remaining indirect host audit fields (#15796)

* Document, implement, and test remaining indirect host audit fields

* Fix hashing

* AAP-39559 Wait for all event processing to finish, add fallback task (#15798)

* Wait for all event processing to finish, add fallback task

* Add flag check to periodic task

* feat: cleanup of old indirect host audit records (#15800)

* By default, do not count indirect hosts (#15801)

* By default, do not count indirect hosts

* Fix copy paste goof

* Fix linter issue from base branch

* prevent multiple tasks from processing the same job events, prevent p… (#15805)

prevent multiple tasks from processing the same job events, prevent periodic task from spawning another task per job

* Fix typos and other bugs found by Pablo review

* fix: rely on resolved_action instead of task, adapt to proposed query… (#15815)

* fix: rely on resolved_action instead of task, adapt to proposed query structure

* tests: update indirect host tests

* update remaining queries to new format

* update live test

* Remove polling loop for job finishing event processing (#15811)

* Remove polling loop for job finishing event processing

* Make awx/main/tests/live dramatically faster (#15780)

* AAP-37282 Add parse JQ data and test it for a `job` object in isolation (#15774)

* Add jq dependency

* Add file in progress

* Add license for jq

* Write test and get it passing

* Successfully test collection of `event_query.yml` data (#15761)

* Callback plugin method from cmeyers adapted to global collection list

Get tests passing

Mild rebranding

Put behind feature flag, flip true in dev

Add noqa flag

* Add missing wait_for_events

* feat: try grabbing query files from artifacts directory (#15776)

* Contract changes for the event_query collection callback plugin (#15785)

* Minor import changes to collection processing in callback plugin

* Move agreed location of event_query file

* feat: remaining schema changes for indirect host audits (#15787)

* Re-organize test file and move artifacts processing logic to callback (#15784)

* Rename the indirect host counting test file

* Combine artifacts saving logic

* Connect host audit model to jq logic via new task

* Document, implement, and test remaining indirect host audit fields (#15796)

* AAP-39559 Wait for all event processing to finish, add fallback task (#15798)

* Wait for all event processing to finish, add fallback task

* Add flag check to periodic task

* feat: cleanup of old indirect host audit records (#15800)

* prevent multiple tasks from processing the same job events, prevent p… (#15805)

prevent multiple tasks from processing the same job events, prevent periodic task from spawning another task per job

* Remove polling loop for job finishing event processing (#15811)

* Make awx/main/tests/live dramatically faster (#15780)

* reorder migrations to allow indirect instances backport

* cleanup for rebase and merge into devel

---------

Co-authored-by: Peter Braun <pbraun@redhat.com>
Co-authored-by: jessicamack <jmack@redhat.com>
Co-authored-by: Peter Braun <pbranu@redhat.com>
2025-02-24 21:55:44 +00:00
Hao Liu
2d648d1225 [Feature][release_4.6] Policy as Code MVP part 1 (#6848) 2025-02-24 15:58:57 -05:00
Hao Liu
b8a1e90b06 [CI][release_4.6] Fix schema upload (#6840) 2025-02-18 09:19:50 -05:00
Hao Liu
c0b9d3f428 Switch to ssh for private git requirements (#6838) 2025-02-17 22:44:29 -05:00
Hao Liu
376a791052 [CI][release_4.6] backport push development image base on repo name (#6837)
* Publish image base on git repo name instead of hard coded to AWX (#15828)

* Fix git credential for devel_image build (#15834)

* Continue if pre-warm cache fail in container build (#15835)

* Use correct devel image for docker-compose (#15836)
2025-02-17 16:26:19 -05:00
TVo
cb2df43580 [4.6_Backport] Added helper method for fetching serviceaccount token (#6823) 2025-02-17 15:31:33 -05:00
Hao Liu
ccb6360a96 AAP-39778[Backport][release_4.6] Add DAB Feature Flag common API (#6833)
* [AAP-39138] - Add DAB Feature Flag common API (#15786)
* Update django-ansible-base reference to ansible-automation-platform/django-ansible-base@stable-2.5

---------

Co-authored-by: Zack Kayyali <zkayyali@redhat.com>
2025-02-12 15:47:06 -05:00
Hao Liu
397fb297bf Add ability to provide token for private repo for requirements_git in container build (#15831) (#6830)
Add ability to provide auth to private repo for requirements_git
2025-02-12 20:00:37 +00:00
Alan Rominger
63bb4d66ef Do not get django flags from database (#15794) (#6820) 2025-02-10 10:46:22 -05:00
Alan Rominger
7017c28706 Limit to python 3.12 for 4.6 branch (#6817) 2025-02-05 10:48:07 -05:00
Hao Liu
48ee5b05ee Set feature flag base on setting (#15808) (#6811) 2025-02-05 09:30:32 +01:00
Seth Foster
386f85c59f Fix rrule fast forwarding across DST boundaries (#6815)
Fixes an issue where schedules were not running at
the correct time.

Details:

DST is Daylights Saving Time

If the rrule dtstart is "in" a DST period (i.e., March to November)
and the current date is outside of the DST, then the fast forwarding
is not correct.

This is because datetime timedeltas do not honor DST boundaries

The Fix:

Convert the rrule dtstart to UTC before doing operations. Then,
convert back to the original timezone at the end.

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2025-02-04 15:27:21 -05:00
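The UTC round-trip described above, as a sketch using python-dateutil (the actual fast-forward logic is elided):

```python
from datetime import timezone
from dateutil.rrule import rrulestr

def next_run_dst_safe(rule_text, dtstart, now):
    # Do the rrule math in UTC, where timedeltas behave across DST
    # boundaries, then convert the result back to the original timezone.
    original_tz = dtstart.tzinfo
    rule = rrulestr(rule_text, dtstart=dtstart.astimezone(timezone.utc))
    nxt = rule.after(now.astimezone(timezone.utc))
    return nxt.astimezone(original_tz) if nxt else None
```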
Lila Yasin
b7b15584af Remove Docker Desktop if statement (#15778) (#6797)
Remove docker desktop if statement
2025-02-03 13:35:19 -05:00
Adrià Sala
148f28f448 fix: azure credential awxkit client_id collision 2025-02-03 17:22:05 +01:00
Peter Braun
26b6eac849 fix: compatibility with black v25+ (#15789) (#6803) 2025-02-03 16:02:20 +00:00
Peter Braun
18ea5cc561 Use upload artifact v4 (#6807)
unique-ify name

psh, who needs loops

Folder management

Extracts into current path

Co-authored-by: Alan Rominger <arominge@redhat.com>
2025-02-03 15:10:55 +00:00
Lila Yasin
99b67f1e37 Add client_secret and client_id to credential_input_fields (#15734) (#6795) 2025-01-29 18:43:06 -05:00
Chris Meyers
cdd9e7263d Add insights service account support to collection 2025-01-24 16:29:18 -05:00
Seth Foster
edba126193 Add service account support to Insights credential
Adds fields client_id and client_secret which
will result in authentication via service account
on console.redhat.com

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2025-01-24 14:42:01 -05:00
Adrià Sala
22ecb2030c feat: support insights service account credentials for project update (AAP-37464) 2025-01-22 16:33:11 +01:00
Rodrigo Toshiaki Horie
2e8114394b [4.6][dependency] update django for CVE-2024-56374 (#6784) 2025-01-20 18:58:30 -03:00
Peter Braun
3268c9b5fe Add input_inventories to ordered_associations (#15710) (#6782)
Co-authored-by: rev3r4nt <rev3r4nt@gmail.com>
2025-01-15 13:46:38 +01:00
Alan Rominger
f7cda7696c Bugfix: adjust incorrectly passed keywords with exclude-strings argument (#15721) (#6777)
* Fix incorrectly passed keywords with exclude-strings arg to ansible-runner worker cleanup command



* Keep the quotes for each arg and adjust test_receptor

---------

Signed-off-by: Sasa Jovicic <jovicic.sasa@hotmail.com>
Co-authored-by: Sasa Jovicic <jovicic.sasa@hotmail.com>
2025-01-08 13:44:10 -05:00
Jake Jackson
a209751f22 Fix CVE-2024-56201 update jinja2 (#6778) 2025-01-08 13:42:42 -05:00
Alan Rominger
5944d041e6 AAP-36536 Send job_lifecycle logs to external loggers (#15701) (#6776)
* Send job_lifecycle logs to external loggers

* Include structured data in message

* Attach the organization_id of the job
2025-01-07 15:23:52 -05:00
TVo
9c732d2406 AAP-36522 Opened PR to test and fix deprecation error in tower 4.6 checks (#6772)
* Opened PR to test AAP-36522.

* Test pin specific ansible-core version in the run_awx_devel actions.

* Fixed syntax for ansible-core version.

* updated makefile to match syntax for ansible version

* Reverts makefile changes; fixed syntax for upgrade ansible-core action.
2025-01-03 12:01:21 -05:00
Lila Yasin
b215699586 🧪 Run sanity tests w/ ansible-test-gh-action (#15539) (#6773)
* 🧪 Run sanity tests w/ `ansible-test-gh-action`

* 🧪 Upload sanity results to unified dashboard

Co-authored-by: Sviatoslav Sydorenko (Святослав Сидоренко) <wk@sydorenko.org.ua>
2025-01-02 13:10:51 -05:00
Andrea Restle-Lay
38f72ac7ea host_metrics date fix to make summary dates (datetime.datetime) comparable to month: datetime.date (#15704) (#6770)
* host_metrics date fix

* AAP-36839 Remove excess comments

* fix extra date() conversion

* actual fix

* datetime is a library, use datetime.datetime

---------

Co-authored-by: Andrea Restle-Lay <arestlel@arestlel-thinkpadx1carbongen9.rht.csb>
2024-12-17 11:14:59 -05:00
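For context on the class of error this fixes: Python refuses to compare the two types directly, even though `datetime.datetime` subclasses `datetime.date`. A minimal illustration:

```python
import datetime

summary_ts = datetime.datetime(2024, 12, 1, 9, 30)  # summary date (datetime.datetime)
month = datetime.date(2024, 12, 1)                  # month (datetime.date)

try:
    summary_ts < month
except TypeError:
    # datetime.datetime and datetime.date do not compare directly
    pass

assert summary_ts.date() >= month  # normalizing with .date() makes them comparable
```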
Pablo H.
b361aef0fb chore: addressing CVE 2024-53908 (#6768) 2024-12-16 14:16:00 -05:00
Seth Foster
df79fa4ae1 bump grpcio CVE-2024-11407 (#6766)
Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2024-12-16 13:23:24 -05:00
Seth Foster
9c2de6b535 Do not fast forward rrule if count is set (#6764)
Fixes a bug where a schedule that was created
to run only once will continue to run repeatedly.

e.g. an rrule with
dtstart 20240730; count 1; freq MINUTELY

This job will run on 20240730, and should never
run again.

However, the next time the schedule
update_computed_fields runs, the dtstart
will fast forward to today's date, and
next_run will be computed from that. This will trigger
the job to run again, which is not intended.

If count is set, we just should not fast forward the
rrule and always calculate next_run based on original
dtstart.

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2024-12-10 16:14:46 -05:00
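A minimal sketch of that rule, assuming the RRULE is available as text; the helper is illustrative, not AWX's actual implementation:

```python
from datetime import datetime, timezone
from dateutil import rrule

def should_fast_forward(rrule_text: str) -> bool:
    # COUNT-limited rules must keep their original dtstart: fast forwarding
    # re-anchors the rule and makes a run-once schedule fire again.
    return "COUNT=" not in rrule_text.upper()

rule_text = "DTSTART:20240730T000000Z\nRRULE:FREQ=MINUTELY;COUNT=1"
assert not should_fast_forward(rule_text)

# With the original dtstart preserved, next_run is simply the next
# occurrence after now -- None here, because the single run already happened.
rule = rrule.rrulestr(rule_text)
now = datetime(2024, 12, 10, tzinfo=timezone.utc)
assert rule.after(now) is None
```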
Peter Braun
0ce0023561 Fix: invalid response type on post request (#15609) (#6763) 2024-12-10 16:12:49 -05:00
Alan Rominger
f2ae68f302 Fix project cache identifiers for new updates (#6762)
Finish test and discover viable solution

Add comment on related task code
2024-12-10 15:23:54 -05:00
Alan Rominger
b3542c226d Fix error creating partition due to uncaught exception (#6761)
The primary fix is to simply add an exception class
  to those caught in the except block.

This also adds live tests for the general scenario,
  although this does not hit the new exception type
2024-12-10 15:22:09 -05:00
Peter Braun
56d3933154 feat: enable django flags support (#15660) (#6755)
* feat: enable django flags support

* add django flags license

* re-run updater script
2024-12-09 09:40:28 +01:00
Peter Braun
a1ec28aeb9 fix: reset state before evaluating named urls (#15683) (#6756) 2024-12-09 09:40:13 +01:00
Peter Braun
148afce455 deps: receptorctl v1.5.1 (#6760) 2024-12-06 16:12:58 +01:00
Hao Liu
f3b86b5193 Update defaults.py receptor typo (#15682) (#6754)
Fixing typo for RECEPTOR_KEEP_WORK_ON_ERROR

Co-authored-by: linuxonfire <jaimescampositzel@gmail.com>
2024-12-05 09:35:21 -05:00
Hao Liu
82c967a66e Fix receptor work unit release after completion (#15679) (#6751)
Fix bug introduced by https://github.com/ansible/awx/pull/15392 that caused work units to NOT be auto-released after the job completes
2024-12-03 12:28:55 -05:00
Peter Braun
8174a28716 update receptorctl to v1.5.0 (#6749) 2024-11-25 15:37:01 +01:00
Seth Foster
9c556db4c0 Make rrule fast forwarding stable (#15601) (#6744)
By stable, we mean future occurrences of the rrule
should be the same before and after the fast forward
operation.

The problem before was that we were fast forwarding to
7 days ago. For some rrules, this does not retain the old
occurrences. Thus, jobs would launch at unexpected times.

This change makes sure we fast forward in increments of
the rrule INTERVAL, so the new dtstart should be in the
occurrence list of the old rrule.

Additionally, the code is updated to fast forward
EXRULE (exclusion rules) in addition to RRULE.

---------

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
Co-authored-by: 🇺🇦 Sviatoslav Sydorenko (Святослав Сидоренко) <wk.cvs.github@sydorenko.org.ua>
2024-11-24 11:31:30 -05:00
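Under the simplifying assumption of a fixed-length interval, the stability property can be sketched like this (illustrative only, not the actual AWX code):

```python
from datetime import datetime, timedelta, timezone

def stable_fast_forward(dtstart: datetime, interval: timedelta, now: datetime) -> datetime:
    if now <= dtstart:
        return dtstart
    # Advance by a whole number of intervals, so the new dtstart stays on
    # the old rule's occurrence grid.
    steps = (now - dtstart) // interval
    return dtstart + steps * interval

dtstart = datetime(2024, 1, 1, tzinfo=timezone.utc)
now = datetime(2024, 11, 20, tzinfo=timezone.utc)
new_start = stable_fast_forward(dtstart, timedelta(weeks=1), now)
assert (new_start - dtstart) % timedelta(weeks=1) == timedelta(0)
assert new_start <= now
```

Calendar-based frequencies (e.g. MONTHLY) need occurrence-based stepping rather than timedelta math, but the invariant is the same: the new dtstart must remain an occurrence of the old rule.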
Justin Downie
1d12f0c837 Back port for awx podAntiAffinity spec (#6733) 2024-11-22 16:15:26 -05:00
Satoe Imaishi
71a18c0d61 Bump uwsgi to 2.0.28 (#6736) 2024-11-22 10:54:52 -05:00
Hao Liu
c55fb369fa Update receptorctl to 1.4.11 (#6746) 2024-11-21 16:31:09 -05:00
Jake Jackson
2c3b4ff5d7 [4.6][dependency] update aiohttp to address vuln CVE-2024-52304 (#6740)
* update aiohttp to address vuln CVE-2024-52304

* add licenses for new deps
2024-11-21 16:21:34 -05:00
TVo
943964e14f 4.6_Backport changes made to aim.py from AWX. (#6739) 2024-11-13 12:16:25 -07:00
Alan Rominger
b97240417a Fix bug where unrelated jobs were linked as dependencies (#6735) 2024-11-06 14:41:57 -05:00
Kersom
6d959daca1 Revert "[4.6] Bump dependency (#6726)" (#6731)
This reverts commit 8fbe0c2b1f.
2024-10-31 11:36:22 -04:00
Kersom
8fbe0c2b1f [4.6] Bump dependency (#6726)
* Bump dependency

* Delete mini-create-react-context.txt

---------

Co-authored-by: Hao Liu <44379968+TheRealHaoLiu@users.noreply.github.com>
2024-10-30 17:15:59 +00:00
David Newswanger
23528b7fef Add service backed SSO stage to saml pipeline (#6722) 2024-10-21 16:17:08 -04:00
Hao Liu
784ff3193d Pin DAB to 2024.10.17 (#6721) 2024-10-21 19:25:05 +00:00
Hao Liu
7972486594 Update receptorctl to 1.4.9 (#6718) 2024-10-17 11:27:21 -04:00
Chris Meyers
dbdbc7635a Redirect user to platform supported collection
* AAP 2.5 Controller 4.6 Org, User, and Team endpoints are restricted.
  When the user performs a restricted operation via the Controller
  collection, kindly notify them that they should be using the platform
  collection instead.
2024-10-17 11:02:19 -04:00
Rick Elrod
4820b084c1 Prettier DRF pages when using trusted proxy (#15579) (#6717)
This is rather hacky, but it fixes the DRF pages when going through a
trusted proxy.

Notably: This is meant to primarily fix the DRF pages on downstream
builds while leaving the upstream to function as-is.

When using a trusted proxy, the DRF login and logout endpoints now
redirect to the Platform login page (which respects ?next) and logout
endpoint respectively.

The CSS and JS are inlined because the trusted proxy might only proxy
to /api/ and not /static/, which is a harder problem to solve.

Signed-off-by: Rick Elrod <rick@elrod.me>
2024-10-16 13:58:01 -04:00
Hao Liu
d5388b3c56 Remove ui_next static file dir (#6715)
UI_NEXT is no longer being served in 4.6

Removing static file dir for ui-next fixes

```
WARNINGS:
?: (staticfiles.W004) The directory '/var/lib/awx/venv/awx/lib64/python3.11/site-packages/awx/ui_next/build' in the STATICFILES_DIRS setting does not exist.
CommandError: Inventory with id = 1038181458411569545 cannot be found
```
2024-10-15 19:58:33 +00:00
Hao Liu
433974aea6 [4.6][CI]Fix CI for newer debian image (#15583) (#6716)
* Fix CI for newer debian image (#15583)

* Fix CI for newer debian image

Signed-off-by: Rick Elrod <rick@elrod.me>

* Missed one

Signed-off-by: Rick Elrod <rick@elrod.me>

---------

Signed-off-by: Rick Elrod <rick@elrod.me>

* Update ci.yml

---------

Signed-off-by: Rick Elrod <rick@elrod.me>
Co-authored-by: Rick Elrod <rick@elrod.me>
2024-10-15 15:33:55 -04:00
Tomas Z
d1c85dae4d Upgrade django and sqlparse to pickup CVE fixes (#6709) 2024-10-04 15:51:12 -04:00
Alan Rominger
534b0209f4 Fix 500 error due to None data in DAB response (#15562) (#6714)
* Fix 500 error due to None data in DAB response

* NOQA for flake8 failures
2024-10-02 16:12:30 -04:00
Hao Liu
46becf15e9 Switch DAB back to devel to (#6713)
Enable event 2 development
2024-10-01 20:11:04 +00:00
Hao Liu
02795c9ed9 Filter out ANSIBLE_BASE from job env var (#6710) 2024-09-27 19:42:44 +00:00
Hao Liu
6574cfe3a9 Pin dependencies to prepare for release_4.6 release tag (#6707)
* Pin deps to release prep
- ansible-runner@2.4.0
- receptorctl@1.4.8
- django-ansible-base@c8fbc1e345d4908cc97eaae20771238a5dd35aad
2024-09-19 16:22:18 +00:00
Jake Jackson
fafed924e3 rebase and merge conflict resolution (#6692) 2024-09-17 16:46:12 +00:00
Peter Braun
d2f3c02945 fix: maintain order of insertions into m2m relationship tables (#15536) (#6703) 2024-09-17 18:29:36 +02:00
Jake Jackson
eb4f3c2864 update urllib to fix CVE-2024-37891 (#6700) 2024-09-17 12:14:28 -04:00
Jake Jackson
bcd18e161c fix CVE-2024-21520 (#6687) 2024-09-16 16:04:11 -04:00
Chris Meyers
b62d0ff8e6 Make analytics job ts settings hidden
* There isn't a great reason to allow the UI to edit these meta-data
  fields that denote the last time an analytics job ran.
* The only reason I hesitate to mark them uneditable in the API is that
  they are useful to change in order to influence when the jobs run.
Mostly for debug purposes or one-offs.
2024-09-16 14:14:01 -04:00
Alan Rominger
c33947af7f [AAP 2.5] DAB 9.4, enable service redirect auth (#6672)
* Update settings from DAB features

* Delay implementation of the reverse sync setting

* Move auth class to actual end of pipeline
2024-09-16 13:41:26 -04:00
Elijah DeLee
30b005aa9d catch harakiri graceful signal in middleware and log debug info (#6673)
Middleware is from django_ansible_base
2024-09-16 16:25:03 +00:00
Peter Braun
a1e3919b1f update remaining urls for new UI (#15529) (#6699) 2024-09-15 10:19:51 -04:00
Peter Braun
0a8e92cab7 fix workflow job url (#15522) (#6697) 2024-09-15 00:05:18 +02:00
Seth Foster
30e2c3a8cd Validate org-user membership from gateway (#15508) (#6698)
Adding credential and execution environment roles
validates that the user belongs to the same org
as the credential or EE.

In some situations, the user-org membership has not
yet been synced from gateway to controller.

In this case, controller will make a request to
gateway to check if the user is part of the org.

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2024-09-14 16:01:04 +00:00
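The fallback reads roughly like the sketch below; the endpoint path, response shape, and client helper are hypothetical stand-ins, since the actual gateway API is not shown here:

```python
def user_in_org(user, org, gateway_client):
    # Fast path: membership already synced into the local database.
    if org.member_role.members.filter(pk=user.pk).exists():
        return True
    # Sync-lag fallback: ask the gateway directly whether the user belongs
    # to the org. URL and response shape are illustrative assumptions.
    resp = gateway_client.get(
        "/api/gateway/v1/organizations/{}/users/".format(org.pk),
        params={"user_id": user.pk},
    )
    return resp.ok and resp.json().get("count", 0) > 0
```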
Hao Liu
aef3d8750b [4.6] Fix additional UI URL generated by API (#15517) (#15518) (#6695)
* Fix: change to url in platform ui (#15518)

* Fix instance UI URL generated by API (#15517)

---------

Co-authored-by: Peter Braun <pbraun@redhat.com>
2024-09-13 21:12:08 -04:00
Peter Braun
f799376b3d Add OPTIONAL_UI_URL_PREFIX (#15506) (#6694)
# Add a prefix to the UI URL patterns for UI URLs generated by the API
# For example, if set to '', the UI URL generated by the API for jobs would be $TOWER_URL/jobs
# For example, if set to 'execution', the UI URL generated by the API for jobs would be $TOWER_URL/execution/jobs

Co-authored-by: Hao Liu <44379968+TheRealHaoLiu@users.noreply.github.com>
2024-09-13 20:06:40 +02:00
Peter Braun
96ec709e90 fix: avoid race conditions when removing multiple instance (#15495) (#6693)
* fix: avoid race conditions when removing multiple instance groups at once

* remove unused imports
2024-09-13 17:21:47 +02:00
Hao Liu
70f7ac72d4 Removes collection of unpartitioned_events table (#15501) (#6690)
Fixes: https://issues.redhat.com/browse/AAP-30995

Co-authored-by: Ladislav Smola <lsmola@redhat.com>
2024-09-11 18:39:41 +00:00
Seth Foster
4c9c22fea2 Translate new RBAC to old RBAC (#15490) (#6678)
User and Team assignments using the DAB
RBAC system will be translated back to the old
Role system.

This ensures better backward compatibility and
addresses some inconsistences in the UI that were
relying on older RBAC endpoints.

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
Co-authored-by: Alan Rominger <arominge@redhat.com>
2024-09-10 18:26:43 -04:00
Peter Braun
9914229a5a Hide AUTOMATION_ANALYTICS_LAST_GATHER (#15497) (#6684) 2024-09-10 07:44:21 -04:00
Hao Liu
ce0d176508 Fix analytic ship (#6679)
REDHAT_USERNAME and REDHAT_PASSWORD default to empty strings instead of None
2024-09-10 09:46:53 +02:00
Elijah DeLee
059f52f314 Unpin django-ansible-base for now (#6681) 2024-09-09 21:51:20 +00:00
Hao Liu
446046c4bf Remove OpenSSL pin (#6683) 2024-09-09 17:27:09 -04:00
Seth Foster
17e01e0eb0 Rename System Auditor to Controller System Auditor (#15470) (#6677)
This is to emphasize that this role is specific
to the controller component. That is, it is not an auditor
for the entire AAP platform.

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2024-09-09 17:07:14 -04:00
Rick Elrod
5a4b789488 Don't reverse sync preload script data (#6644)
Signed-off-by: Rick Elrod <rick@elrod.me>
2024-09-09 13:04:01 -04:00
Peter Braun
6dfe2e3a9f fix: avoid calling undefined method for anonymous users (#15440) (#6676) 2024-09-06 17:58:10 +02:00
Hao Liu
01ea091e8a Fix subscription username password setting name (#6675)
used in analytics
2024-09-06 08:58:48 -04:00
Seth Foster
effbd0e416 Fix SAMLAuth backend to correctly return social auth pipeline results (#15457) (#6669)
Co-authored-by: David Newswanger <gamma.dave@gmail.com>
2024-09-03 10:55:05 -04:00
Seth Foster
2334211ba0 Only refresh session if updating own password (#15426) (#6653)
Fixes a bug where creating a new user would
request a new awx_sessionid cookie, invalidating
the previous session.

Do not refresh session if updating or
creating a password for a different user.

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2024-09-03 10:54:13 -04:00
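Django's standard helper expresses the fix concisely; a sketch of the guard, with the surrounding serializer code omitted:

```python
from django.contrib.auth import update_session_auth_hash

def maybe_refresh_session(request, changed_user):
    # Only rotate the session auth hash when users change their *own*
    # password; rotating it while creating or editing another user would
    # invalidate the requester's awx_sessionid cookie.
    if request.user.pk == changed_user.pk:
        update_session_auth_hash(request, changed_user)
```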
Hao Liu
15e28371eb Prevent automountServiceAccountToken (#6638)
* Prevent job pod from mounting serviceaccount token

* Add serializer validation for cg pod_spec_override

Prevent automountServiceAccountToken from being set to true and provide an error message when it is set to true
2024-09-03 09:51:17 -04:00
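A minimal sketch of that serializer check, assuming the pod spec override arrives as a dict (the helper name is hypothetical):

```python
from rest_framework import serializers

def validate_pod_spec_override(pod_spec: dict) -> dict:
    if pod_spec.get("automountServiceAccountToken") is True:
        raise serializers.ValidationError(
            "automountServiceAccountToken may not be set to true: job pods "
            "must not mount the service account token."
        )
    return pod_spec
```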
Peter Braun
64d2e10dc2 Fallback to use subscription cred for analytic upload (#15479) (#6668)
* Fallback to use subscription creds for analytics

Fall back to using SUBSCRIPTION_USERNAME/PASSWORD to upload analytics if REDHAT_USERNAME/PASSWORD are not set

* Improve error message

* Guard against request with no query or data

* Add test for _send_to_analytics

Focus on credentials

* Suppress SonarCloud warning about password

* Add test for analytic ship

Co-authored-by: Hao Liu <44379968+TheRealHaoLiu@users.noreply.github.com>
2024-08-30 14:18:42 +02:00
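The fallback chain is simple enough to sketch directly; the related commit above notes that REDHAT_USERNAME/PASSWORD default to empty strings, which is what makes the `or` fallback work (the helper name is illustrative):

```python
def analytics_credentials(settings):
    # Prefer the REDHAT_* pair; fall back to SUBSCRIPTION_* when unset.
    # Both default to '' rather than None, so `or` covers either case.
    username = settings.REDHAT_USERNAME or settings.SUBSCRIPTION_USERNAME
    password = settings.REDHAT_PASSWORD or settings.SUBSCRIPTION_PASSWORD
    if not (username and password):
        raise RuntimeError("No credentials configured for analytics upload")
    return username, password
```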
Seth Foster
85bd7c3ca0 [4.6] Make controller specific team and org roles (#6662)
Adds the following managed Role Definitions

Controller Team Admin
Controller Team Member
Controller Organization Admin
Controller Organization Member

These have the same permission set as the
platform roles (without the Controller prefix)

Adding members to teams and orgs via the legacy RBAC system
will use these role definitions.

Other changes:
- Bump DAB to 2024.08.22
- Set ALLOW_LOCAL_ASSIGNING_JWT_ROLES to False in defaults.py.
This setting prevents assignments to the platform roles (e.g. Team Member).

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2024-08-26 16:31:42 -04:00
Alan Rominger
77e999f7c8 Rewrite more access logic in terms of permissions instead of roles (#15453) (#6661)
* Rewrite more access logic in terms of permissions instead of roles

* Cut down supported logic because that would not work anyway

* Remove methods not needed anymore

* Create managed roles in test before delegating permissions
2024-08-26 14:37:52 -04:00
Elijah DeLee
01aa760510 Guard around race condition (#15452) (#6658)
I had the luck of running into this race condition, which broke my deployment. No instance was ever able to register because, when running "awx-manage" for some check of a setting, it would end up failing here with

```
  File "/var/lib/awx/venv/awx/lib64/python3.11/site-packages/awx/conf/license.py", line 10, in _get_validated_license_data
    return get_licenser().validate()
           ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/var/lib/awx/venv/awx/lib64/python3.11/site-packages/awx/main/utils/licensing.py", line 453, in validate
    automated_since = int(Instance.objects.order_by('id').first().created.timestamp())
                          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AttributeError: 'NoneType' object has no attribute 'created'
```
2024-08-20 20:51:33 +00:00
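The shape of the guard is straightforward; a sketch against the traceback above (Instance is AWX's real model, the surrounding code is illustrative):

```python
from awx.main.models import Instance

first_instance = Instance.objects.order_by('id').first()
if first_instance is None:
    # No instance has registered yet (the race in question): skip the
    # computation instead of raising AttributeError on None.
    automated_since = None
else:
    automated_since = int(first_instance.created.timestamp())
```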
Rick Elrod
16a4c66c73 Fix a test in preparation for syncing description
Refs ansible/django-ansible-base#447

Signed-off-by: Rick Elrod <rick@elrod.me>
2024-08-19 07:37:01 -05:00
Rick Elrod
9fa5be015c Bump DAB to 2024.8.19
Signed-off-by: Rick Elrod <rick@elrod.me>
2024-08-19 07:37:01 -05:00
Jake Jackson
8b293e7046 update django to 4.2.15 to address multiple CVEs (#6636) 2024-08-15 13:32:26 -04:00
Jake Jackson
467024bc54 fix CVE-2024-33663 and bring in updates for social-auth-app-django (#6634) 2024-08-15 13:32:09 -04:00
jessicamack
bdf3f81016 Unpin channels-redis (#15329) (#6647)
* unpin channels-redis

The bug that initially caused the upgrade block has been resolved https://github.com/django/channels_redis/issues/332

* replace aioredis Exception with a redis Exception

Version 4.0.0 of channels-redis migrated the underlying Redis library from aioredis to redis-py. The exception has been changed to an equivalent one

* remove unused license

* remove UPGRADE BLOCKER in README

* remove hiredis

it was an indirect dependency of aioredis, which was removed

* remove unused license

* add back hiredis

it potentially provides a performance boost; install it explicitly as part of redis and upgrade to a more recent version

* remove UPGRADE BLOCKER for hiredis

it was also addressed as a part of this PR
2024-08-12 15:03:46 -04:00
jessicamack
4a5cfdc11d Remove 'AWX' from setting endpoint (#15432) (#6646)
* Remove AWX from display text

Co-authored-by: Hao Liu <44379968+TheRealHaoLiu@users.noreply.github.com>
2024-08-12 14:54:17 -04:00
419 changed files with 9495 additions and 5919 deletions

View File

@@ -1,3 +1,3 @@
# Community Code of Conduct
Please see the official [Ansible Community Code of Conduct](https://docs.ansible.com/ansible/latest/community/code_of_conduct.html).
Please see the official [Ansible Community Code of Conduct](https://docs.ansible.com/projects/ansible/latest/community/code_of_conduct.html).

View File

@@ -13,7 +13,7 @@ body:
attributes:
label: Please confirm the following
options:
- label: I agree to follow this project's [code of conduct](https://docs.ansible.com/ansible/latest/community/code_of_conduct.html).
- label: I agree to follow this project's [code of conduct](https://docs.ansible.com/projects/ansible/latest/community/code_of_conduct.html).
required: true
- label: I have checked the [current issues](https://github.com/ansible/awx/issues) for duplicates.
required: true

View File

@@ -5,7 +5,7 @@ contact_links:
url: https://github.com/ansible/awx#get-involved
about: For general debugging or technical support please see the Get Involved section of our readme.
- name: 📝 Ansible Code of Conduct
url: https://docs.ansible.com/ansible/latest/community/code_of_conduct.html?utm_medium=github&utm_source=issue_template_chooser
url: https://docs.ansible.com/projects/ansible/latest/community/code_of_conduct.html?utm_medium=github&utm_source=issue_template_chooser
about: AWX uses the Ansible Code of Conduct; ❤ Be nice to other members of the community. ☮ Behave.
- name: 💼 For Enterprise
url: https://www.ansible.com/products/engine?utm_medium=github&utm_source=issue_template_chooser

View File

@@ -13,7 +13,7 @@ body:
attributes:
label: Please confirm the following
options:
- label: I agree to follow this project's [code of conduct](https://docs.ansible.com/ansible/latest/community/code_of_conduct.html).
- label: I agree to follow this project's [code of conduct](https://docs.ansible.com/projects/ansible/latest/community/code_of_conduct.html).
required: true
- label: I have checked the [current issues](https://github.com/ansible/awx/issues) for duplicates.
required: true

View File

@@ -17,7 +17,6 @@ in as the first entry for your PR title.
##### COMPONENT NAME
<!--- Name of the module/plugin/module/task -->
- API
- UI
- Collection
- CLI
- Docs

View File

@@ -11,8 +11,6 @@ inputs:
runs:
using: composite
steps:
- uses: ./.github/actions/setup-python
- name: Set lower case owner name
shell: bash
run: echo "OWNER_LC=${OWNER,,}" >> $GITHUB_ENV

View File

@@ -36,7 +36,7 @@ runs:
- name: Upgrade ansible-core
shell: bash
run: python3 -m pip install --upgrade ansible-core
run: python -m pip install --upgrade ansible-core
- name: Install system deps
shell: bash

View File

@@ -8,3 +8,10 @@ updates:
labels:
- "docs"
- "dependencies"
- package-ecosystem: "pip"
directory: "requirements/"
schedule:
interval: "daily" #run daily until we trust it, then back this off to weekly
open-pull-requests-limit: 2
labels:
- "dependencies"

View File

@@ -70,10 +70,10 @@ Thank you for your submission and for supporting AWX!
- Hello, we'd love to help, but we need a little more information about the problem you're having. Screenshots, log outputs, or any reproducers would be very helpful.
### Code of Conduct
- Hello. Please keep in mind that Ansible adheres to a Code of Conduct in its community spaces. The spirit of the code of conduct is to be kind, and this is your friendly reminder to be so. Please see the full code of conduct here if you have questions: https://docs.ansible.com/ansible/latest/community/code_of_conduct.html
- Hello. Please keep in mind that Ansible adheres to a Code of Conduct in its community spaces. The spirit of the code of conduct is to be kind, and this is your friendly reminder to be so. Please see the full code of conduct here if you have questions: https://docs.ansible.com/projects/ansible/latest/community/code_of_conduct.html
### EE Contents / Community General
- Hello. The awx-ee contains the collections and dependencies needed for supported AWX features to function. Anything beyond that (like the community.general package) will require you to build your own EE. For information on how to do that, see https://ansible-builder.readthedocs.io/en/stable/ \
- Hello. The awx-ee contains the collections and dependencies needed for supported AWX features to function. Anything beyond that (like the community.general package) will require you to build your own EE. For information on how to do that, see https://docs.ansible.com/projects/builder/en/stable/ \
\
The Ansible Community is looking at building an EE that corresponds to all of the collections inside the ansible package. That may help you if and when it happens; see https://github.com/ansible-community/community-topics/issues/31 for details.
@@ -88,7 +88,7 @@ The Ansible Community is looking at building an EE that corresponds to all of th
- Hello, we think your idea is good! Please consider contributing a PR for this following our contributing guidelines: https://github.com/ansible/awx/blob/devel/CONTRIBUTING.md
### Receptor
- You can find the receptor docs here: https://receptor.readthedocs.io/en/latest/
- You can find the receptor docs here: https://docs.ansible.com/projects/receptor/en/latest/
- Hello, your issue seems related to receptor. Could you please open an issue in the receptor repository? https://github.com/ansible/receptor. Thanks!
### Ansible Engine not AWX

72
.github/workflows/api_schema_check.yml vendored Normal file
View File

@@ -0,0 +1,72 @@
---
name: API Schema Change Detection
env:
LC_ALL: "C.UTF-8" # prevent ERROR: Ansible could not initialize the preferred locale: unsupported locale setting
CI_GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
DEV_DOCKER_OWNER: ${{ github.repository_owner }}
COMPOSE_TAG: ${{ github.base_ref || 'devel' }}
UPSTREAM_REPOSITORY_ID: 91594105
on:
pull_request:
branches:
- devel
- release_**
- feature_**
- stable-**
jobs:
api-schema-detection:
name: Detect API Schema Changes
runs-on: ubuntu-latest
timeout-minutes: 30
permissions:
packages: write
contents: read
steps:
- uses: actions/checkout@v4
with:
show-progress: false
fetch-depth: 0
- name: Build awx_devel image for schema check
uses: ./.github/actions/awx_devel_image
with:
github-token: ${{ secrets.GITHUB_TOKEN }}
private-github-key: ${{ secrets.PRIVATE_GITHUB_KEY }}
- name: Detect API schema changes
id: schema-check
continue-on-error: true
run: |
AWX_DOCKER_ARGS='-e GITHUB_ACTIONS' \
AWX_DOCKER_CMD='make detect-schema-change SCHEMA_DIFF_BASE_BRANCH=${{ github.event.pull_request.base.ref }}' \
make docker-runner 2>&1 | tee schema-diff.txt
exit ${PIPESTATUS[0]}
- name: Add schema diff to job summary
if: always()
# Show the diff text; if for some reason it can't be generated, say so.
run: |
echo "## API Schema Change Detection Results" >> $GITHUB_STEP_SUMMARY
echo "" >> $GITHUB_STEP_SUMMARY
if [ -f schema-diff.txt ]; then
if grep -q "^+" schema-diff.txt || grep -q "^-" schema-diff.txt; then
echo "### Schema changes detected" >> $GITHUB_STEP_SUMMARY
echo "" >> $GITHUB_STEP_SUMMARY
# Truncate to first 1000 lines to stay under GitHub's 1MB summary limit
TOTAL_LINES=$(wc -l < schema-diff.txt)
if [ $TOTAL_LINES -gt 1000 ]; then
echo "_Showing first 1000 of ${TOTAL_LINES} lines. See job logs or download artifact for full diff._" >> $GITHUB_STEP_SUMMARY
echo "" >> $GITHUB_STEP_SUMMARY
fi
echo '```diff' >> $GITHUB_STEP_SUMMARY
head -n 1000 schema-diff.txt >> $GITHUB_STEP_SUMMARY
echo '```' >> $GITHUB_STEP_SUMMARY
else
echo "### No schema changes detected" >> $GITHUB_STEP_SUMMARY
fi
else
echo "### Unable to generate schema diff" >> $GITHUB_STEP_SUMMARY
fi

View File

@@ -32,18 +32,9 @@ jobs:
- name: api-lint
command: /var/lib/awx/venv/awx/bin/tox -e linters
coverage-upload-name: ""
- name: api-swagger
command: /start_tests.sh swagger
coverage-upload-name: ""
- name: awx-collection
command: /start_tests.sh test_collection_all
coverage-upload-name: "awx-collection"
- name: api-schema
command: >-
/start_tests.sh detect-schema-change SCHEMA_DIFF_BASE_BRANCH=${{
github.event.pull_request.base.ref || github.ref_name
}}
coverage-upload-name: ""
steps:
- uses: actions/checkout@v4
@@ -63,6 +54,17 @@ jobs:
AWX_DOCKER_CMD='${{ matrix.tests.command }}'
make docker-runner
- name: Inject PR number into coverage.xml
if: >-
!cancelled()
&& github.event_name == 'pull_request'
&& steps.make-run.outputs.cov-report-files != ''
run: |
if [ -f "reports/coverage.xml" ]; then
sed -i '2i<!-- PR ${{ github.event.pull_request.number }} -->' reports/coverage.xml
echo "Injected PR number ${{ github.event.pull_request.number }} into coverage.xml"
fi
- name: Upload test coverage to Codecov
if: >-
!cancelled()
@@ -102,6 +104,14 @@ jobs:
}}
token: ${{ secrets.CODECOV_TOKEN }}
- name: Upload test artifacts
if: always()
uses: actions/upload-artifact@v4
with:
name: ${{ matrix.tests.name }}-artifacts
path: reports/coverage.xml
retention-days: 5
- name: Upload awx jUnit test reports
if: >-
!cancelled()
@@ -132,7 +142,7 @@ jobs:
- uses: ./.github/actions/setup-python
with:
python-version: '3.x'
python-version: '3.13'
- uses: ./.github/actions/run_awx_devel
id: awx
@@ -172,14 +182,20 @@ jobs:
repository: ansible/awx-operator
path: awx-operator
- uses: ./awx/.github/actions/setup-python
- name: Setup python, referencing action at awx relative path
uses: actions/setup-python@a26af69be951a213d495a4c3e4e4022e16d87065
with:
working-directory: awx
python-version: '3.12'
- name: Install playbook dependencies
run: |
python3 -m pip install docker
python -m pip install docker
- name: Check Python version
working-directory: awx
run: |
make print-PYTHON
- name: Build AWX image
working-directory: awx
run: |
@@ -191,27 +207,59 @@ jobs:
- name: Run test deployment with awx-operator
working-directory: awx-operator
id: awx_operator_test
timeout-minutes: 60
continue-on-error: true
run: |
python3 -m pip install -r molecule/requirements.txt
python3 -m pip install PyYAML # for awx/tools/scripts/rewrite-awx-operator-requirements.py
$(realpath ../awx/tools/scripts/rewrite-awx-operator-requirements.py) molecule/requirements.yml $(realpath ../awx)
ansible-galaxy collection install -r molecule/requirements.yml
sudo rm -f $(which kustomize)
make kustomize
KUSTOMIZE_PATH=$(readlink -f bin/kustomize) molecule -v test -s kind -- --skip-tags=replicas
set +e
timeout 15m bash -elc '
python -m pip install -r molecule/requirements.txt
python -m pip install PyYAML # for awx/tools/scripts/rewrite-awx-operator-requirements.py
$(realpath ../awx/tools/scripts/rewrite-awx-operator-requirements.py) molecule/requirements.yml $(realpath ../awx)
ansible-galaxy collection install -r molecule/requirements.yml
sudo rm -f $(which kustomize)
make kustomize
KUSTOMIZE_PATH=$(readlink -f bin/kustomize) molecule -v test -s kind -- --skip-tags=replicas
'
rc=$?
if [ $rc -eq 124 ]; then
echo "timed_out=true" >> "$GITHUB_OUTPUT"
fi
exit $rc
env:
AWX_TEST_IMAGE: local/awx
AWX_TEST_VERSION: ci
AWX_EE_TEST_IMAGE: quay.io/ansible/awx-ee:latest
STORE_DEBUG_OUTPUT: true
- name: Collect awx-operator logs on timeout
# Only run on timeout; normal failures should use molecule's built-in log collection.
if: steps.awx_operator_test.outputs.timed_out == 'true'
run: |
mkdir -p "$DEBUG_OUTPUT_DIR"
if command -v kind >/dev/null 2>&1; then
for cluster in $(kind get clusters 2>/dev/null); do
kind export logs "$DEBUG_OUTPUT_DIR/$cluster" --name "$cluster" || true
done
fi
if command -v kubectl >/dev/null 2>&1; then
kubectl get all -A -o wide > "$DEBUG_OUTPUT_DIR/kubectl-get-all.txt" || true
kubectl get pods -A -o wide > "$DEBUG_OUTPUT_DIR/kubectl-get-pods.txt" || true
kubectl describe pods -A > "$DEBUG_OUTPUT_DIR/kubectl-describe-pods.txt" || true
fi
docker ps -a > "$DEBUG_OUTPUT_DIR/docker-ps.txt" || true
- name: Upload debug output
if: failure()
if: always()
uses: actions/upload-artifact@v4
with:
name: awx-operator-debug-output
path: ${{ env.DEBUG_OUTPUT_DIR }}
- name: Fail awx-operator check if test deployment failed
if: steps.awx_operator_test.outcome != 'success'
run: exit 1
collection-sanity:
name: awx_collection sanity
runs-on: ubuntu-latest
@@ -280,7 +328,11 @@ jobs:
- uses: ./.github/actions/setup-python
with:
python-version: '3.x'
python-version: '3.13'
- name: Remove system ansible to avoid conflicts
run: |
python -m pip uninstall -y ansible ansible-core || true
- uses: ./.github/actions/run_awx_devel
id: awx
@@ -291,8 +343,9 @@ jobs:
- name: Install dependencies for running tests
run: |
python3 -m pip install -e ./awxkit/
python3 -m pip install -r awx_collection/requirements.txt
python -m pip install -e ./awxkit/
python -m pip install -r awx_collection/requirements.txt
hash -r # Rehash to pick up newly installed scripts
- name: Run integration tests
id: make-run
@@ -304,6 +357,7 @@ jobs:
echo 'password = password' >> ~/.tower_cli.cfg
echo 'verify_ssl = false' >> ~/.tower_cli.cfg
TARGETS="$(ls awx_collection/tests/integration/targets | grep '${{ matrix.target-regex.regex }}' | tr '\n' ' ')"
export PYTHONPATH="$(python -c 'import site; print(":".join(site.getsitepackages()))')${PYTHONPATH:+:$PYTHONPATH}"
make COLLECTION_VERSION=100.100.100-git COLLECTION_TEST_TARGET="--requirements $TARGETS" test_collection_integration
env:
ANSIBLE_TEST_PREFER_PODMAN: 1
@@ -358,10 +412,14 @@ jobs:
- uses: ./.github/actions/setup-python
with:
python-version: '3.x'
python-version: '3.13'
- name: Remove system ansible to avoid conflicts
run: |
python -m pip uninstall -y ansible ansible-core || true
- name: Upgrade ansible-core
run: python3 -m pip install --upgrade ansible-core
run: python -m pip install --upgrade ansible-core
- name: Download coverage artifacts
uses: actions/download-artifact@v4
@@ -376,11 +434,12 @@ jobs:
mkdir -p ~/.ansible/collections/ansible_collections/awx/awx/tests/output/coverage
cp -rv coverage/* ~/.ansible/collections/ansible_collections/awx/awx/tests/output/coverage/
cd ~/.ansible/collections/ansible_collections/awx/awx
ansible-test coverage combine --requirements
ansible-test coverage html
hash -r # Rehash to pick up newly installed scripts
PATH="$(python -c 'import sys; import os; print(os.path.dirname(sys.executable))'):$PATH" ansible-test coverage combine --requirements
PATH="$(python -c 'import sys; import os; print(os.path.dirname(sys.executable))'):$PATH" ansible-test coverage html
echo '## AWX Collection Integration Coverage' >> $GITHUB_STEP_SUMMARY
echo '```' >> $GITHUB_STEP_SUMMARY
ansible-test coverage report >> $GITHUB_STEP_SUMMARY
PATH="$(python -c 'import sys; import os; print(os.path.dirname(sys.executable))'):$PATH" ansible-test coverage report >> $GITHUB_STEP_SUMMARY
echo '```' >> $GITHUB_STEP_SUMMARY
echo >> $GITHUB_STEP_SUMMARY
echo '## AWX Collection Integration Coverage HTML' >> $GITHUB_STEP_SUMMARY

View File

@@ -10,6 +10,7 @@ on:
- devel
- release_*
- feature_*
- stable-*
jobs:
push-development-images:
runs-on: ubuntu-latest

View File

@@ -20,4 +20,4 @@ jobs:
run: |
ansible localhost -c local, -m command -a "{{ ansible_python_interpreter + ' -m pip install boto3'}}"
ansible localhost -c local -m aws_s3 \
-a "bucket=awx-public-ci-files object=${GITHUB_REF##*/}/schema.json mode=delobj permission=public-read"
-a "bucket=awx-public-ci-files object=${{ github.event.repository.name }}/${GITHUB_REF##*/}/schema.json mode=delobj permission=public-read"

248
.github/workflows/sonarcloud_pr.yml vendored Normal file
View File

@@ -0,0 +1,248 @@
# SonarCloud Analysis Workflow for awx
#
# This workflow runs SonarCloud analysis triggered by CI workflow completion.
# It is split into two separate jobs for clarity and maintainability:
#
# FLOW: CI completes → workflow_run triggers this workflow → appropriate job runs
#
# JOB 1: sonar-pr-analysis (for PRs)
# - Triggered by: workflow_run (CI on pull_request)
# - Steps: Download coverage → Get PR info → Get changed files → Run SonarCloud PR analysis
# - Scans: All changed files in the PR (Python, YAML, JSON, etc.)
# - Quality gate: Focuses on new/changed code in PR only
#
# JOB 2: sonar-branch-analysis (for long-lived branches)
# - Triggered by: workflow_run (CI on push to devel)
# - Steps: Download coverage → Run SonarCloud branch analysis
# - Scans: Full codebase
# - Quality gate: Focuses on overall project health
#
# This ensures coverage data is always available from CI before analysis runs.
#
# What files are scanned:
# - All files in the repository that SonarCloud can analyze
# - Excludes: tests, scripts, dev environments, external collections (see sonar-project.properties)
# With much help from:
# https://community.sonarsource.com/t/how-to-use-sonarcloud-with-a-forked-repository-on-github/7363/30
# https://community.sonarsource.com/t/how-to-use-sonarcloud-with-a-forked-repository-on-github/7363/32
name: SonarCloud
on:
workflow_run: # This is triggered by CI being completed.
workflows:
- CI
types:
- completed
permissions: read-all
jobs:
sonar-pr-analysis:
name: SonarCloud PR Analysis
runs-on: ubuntu-latest
if: |
github.event.workflow_run.conclusion == 'success' &&
github.event.workflow_run.event == 'pull_request' &&
github.repository == 'ansible/awx'
steps:
- uses: actions/checkout@v4
# Download all individual coverage artifacts from CI workflow
- name: Download coverage artifacts
uses: dawidd6/action-download-artifact@246dbf436b23d7c49e21a7ab8204ca9ecd1fe615
with:
github_token: ${{ secrets.GITHUB_TOKEN }}
workflow: CI
run_id: ${{ github.event.workflow_run.id }}
pattern: api-test-artifacts
# Extract PR metadata from workflow_run event
- name: Set PR metadata and prepare files for analysis
env:
COMMIT_SHA: ${{ github.event.workflow_run.head_sha }}
REPO_NAME: ${{ github.event.repository.full_name }}
HEAD_BRANCH: ${{ github.event.workflow_run.head_branch }}
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
run: |
# Find all downloaded coverage XML files
coverage_files=$(find . -name "coverage.xml" -type f | tr '\n' ',' | sed 's/,$//')
echo "Found coverage files: $coverage_files"
echo "COVERAGE_PATHS=$coverage_files" >> $GITHUB_ENV
# Extract PR number from first coverage.xml file found
first_coverage=$(find . -name "coverage.xml" -type f | head -1)
if [ -f "$first_coverage" ]; then
PR_NUMBER=$(grep -m 1 '<!-- PR' "$first_coverage" | awk '{print $3}' || echo "")
else
PR_NUMBER=""
fi
echo "🔍 SonarCloud Analysis Decision Summary"
echo "========================================"
echo "├── CI Event: ✅ Pull Request"
echo "├── PR Number from coverage.xml: #${PR_NUMBER:-<not found>}"
if [ -z "$PR_NUMBER" ]; then
echo "##[error]❌ FATAL: PR number not found in coverage.xml"
echo "##[error]This job requires a PR number to run PR analysis."
echo "##[error]The ci workflow should have injected the PR number into coverage.xml."
exit 1
fi
# Get PR metadata from GitHub API
PR_DATA=$(gh api "repos/$REPO_NAME/pulls/$PR_NUMBER")
PR_BASE=$(echo "$PR_DATA" | jq -r '.base.ref')
PR_HEAD=$(echo "$PR_DATA" | jq -r '.head.ref')
# Print summary
echo "🔍 SonarCloud Analysis Decision Summary"
echo "========================================"
echo "├── CI Event: ✅ Pull Request"
echo "├── PR Number: #$PR_NUMBER"
echo "├── Base Branch: $PR_BASE"
echo "├── Head Branch: $PR_HEAD"
echo "├── Repo: $REPO_NAME"
# Export to GitHub env for later steps
echo "PR_NUMBER=$PR_NUMBER" >> $GITHUB_ENV
echo "PR_BASE=$PR_BASE" >> $GITHUB_ENV
echo "PR_HEAD=$PR_HEAD" >> $GITHUB_ENV
echo "COMMIT_SHA=$COMMIT_SHA" >> $GITHUB_ENV
echo "REPO_NAME=$REPO_NAME" >> $GITHUB_ENV
# Get all changed files from PR (with error handling)
files=""
if [ -n "$PR_NUMBER" ]; then
if gh api repos/$REPO_NAME/pulls/$PR_NUMBER/files --jq '.[].filename' > /tmp/pr_files.txt 2>/tmp/pr_error.txt; then
files=$(cat /tmp/pr_files.txt)
else
echo "├── Changed Files: ⚠️ Could not fetch (likely test repo or PR not found)"
if [ -f coverage.xml ] && [ -s coverage.xml ]; then
echo "├── Coverage Data: ✅ Available"
else
echo "├── Coverage Data: ⚠️ Not available"
fi
echo "└── Result: ✅ Running SonarCloud analysis (full scan)"
# No files = no inclusions filter = full scan
exit 0
fi
else
echo "├── PR Number: ⚠️ Not available"
if [ -f coverage.xml ] && [ -s coverage.xml ]; then
echo "├── Coverage Data: ✅ Available"
else
echo "├── Coverage Data: ⚠️ Not available"
fi
echo "└── Result: ✅ Running SonarCloud analysis (full scan)"
exit 0
fi
# Get file extensions and count for summary
extensions=$(echo "$files" | sed 's/.*\.//' | sort | uniq | tr '\n' ',' | sed 's/,$//')
file_count=$(echo "$files" | wc -l)
echo "├── Changed Files: $file_count file(s) (.${extensions})"
# Check if coverage.xml exists and has content
if [ -f coverage.xml ] && [ -s coverage.xml ]; then
echo "├── Coverage Data: ✅ Available"
else
echo "├── Coverage Data: ⚠️ Not available (analysis will proceed without coverage)"
fi
# Prepare file list for Sonar
echo "All changed files in PR:"
echo "$files"
# Filter out files that are excluded by .coveragerc to avoid coverage conflicts
# This prevents SonarCloud from analyzing files that have no coverage data
if [ -n "$files" ]; then
# Filter out files matching .coveragerc omit patterns
filtered_files=$(echo "$files" | grep -v "settings/.*_defaults\.py$" | grep -v "settings/defaults\.py$" | grep -v "main/migrations/")
# Show which files were filtered out for transparency
excluded_files=$(echo "$files" | grep -E "(settings/.*_defaults\.py$|settings/defaults\.py$|main/migrations/)" || true)
if [ -n "$excluded_files" ]; then
echo "├── Filtered out (coverage-excluded): $(echo "$excluded_files" | wc -l) file(s)"
echo "$excluded_files" | sed 's/^/│ - /'
fi
if [ -n "$filtered_files" ]; then
inclusions=$(echo "$filtered_files" | tr '\n' ',' | sed 's/,$//')
echo "SONAR_INCLUSIONS=$inclusions" >> $GITHUB_ENV
echo "└── Result: ✅ Will scan these files (excluding coverage-omitted files): $inclusions"
else
echo "└── Result: ✅ All changed files are excluded by coverage config, running full SonarCloud analysis"
# Don't set SONAR_INCLUSIONS, let it scan everything per sonar-project.properties
fi
else
echo "└── Result: ✅ Running SonarCloud analysis"
fi
- name: Add base branch
if: env.PR_NUMBER != ''
run: |
gh pr checkout ${{ env.PR_NUMBER }}
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
- name: SonarCloud Scan
uses: SonarSource/sonarqube-scan-action@fd88b7d7ccbaefd23d8f36f73b59db7a3d246602 # v6
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
SONAR_TOKEN: ${{ secrets.CICD_ORG_SONAR_TOKEN_CICD_BOT }}
with:
args: >
-Dsonar.scm.revision=${{ env.COMMIT_SHA }}
-Dsonar.pullrequest.key=${{ env.PR_NUMBER }}
-Dsonar.pullrequest.branch=${{ env.PR_HEAD }}
-Dsonar.pullrequest.base=${{ env.PR_BASE }}
-Dsonar.python.coverage.reportPaths=${{ env.COVERAGE_PATHS }}
${{ env.SONAR_INCLUSIONS && format('-Dsonar.inclusions={0}', env.SONAR_INCLUSIONS) || '' }}
sonar-branch-analysis:
name: SonarCloud Branch Analysis
runs-on: ubuntu-latest
if: |
github.event_name == 'workflow_run' &&
github.event.workflow_run.conclusion == 'success' &&
github.event.workflow_run.event == 'push' &&
github.repository == 'ansible/awx'
steps:
- uses: actions/checkout@v4
# Download all individual coverage artifacts from CI workflow (optional for branch pushes)
- name: Download coverage artifacts
continue-on-error: true
uses: dawidd6/action-download-artifact@246dbf436b23d7c49e21a7ab8204ca9ecd1fe615
with:
github_token: ${{ secrets.GITHUB_TOKEN }}
workflow: CI
run_id: ${{ github.event.workflow_run.id }}
pattern: api-test-artifacts
- name: Print SonarCloud Analysis Summary
env:
BRANCH_NAME: ${{ github.event.workflow_run.head_branch }}
run: |
# Find all downloaded coverage XML files
coverage_files=$(find . -name "coverage.xml" -type f | tr '\n' ',' | sed 's/,$//')
echo "Found coverage files: $coverage_files"
echo "COVERAGE_PATHS=$coverage_files" >> $GITHUB_ENV
echo "🔍 SonarCloud Analysis Summary"
echo "=============================="
echo "├── CI Event: ✅ Push (via workflow_run)"
echo "├── Branch: $BRANCH_NAME"
echo "├── Coverage Files: ${coverage_files:-none}"
echo "├── Python Changes: N/A (Full codebase scan)"
echo "└── Result: ✅ Proceed - \"Running SonarCloud analysis\""
- name: SonarCloud Scan
uses: SonarSource/sonarqube-scan-action@fd88b7d7ccbaefd23d8f36f73b59db7a3d246602 # v6
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
SONAR_TOKEN: ${{ secrets.CICD_ORG_SONAR_TOKEN_CICD_BOT }}
with:
args: >
-Dsonar.scm.revision=${{ github.event.workflow_run.head_sha }}
-Dsonar.branch.name=${{ github.event.workflow_run.head_branch }}
${{ env.COVERAGE_PATHS && format('-Dsonar.python.coverage.reportPaths={0}', env.COVERAGE_PATHS) || '' }}

View File

@@ -85,9 +85,11 @@ jobs:
cp ../awx-logos/awx/ui/client/assets/* awx/ui/public/static/media/
- name: Setup node and npm for new UI build
uses: actions/setup-node@v2
uses: actions/setup-node@v4
with:
node-version: '18'
cache: 'npm'
cache-dependency-path: awx/awx/ui/**/package-lock.json
- name: Prebuild new UI for awx image (to speed up build process)
working-directory: awx

View File

@@ -11,6 +11,7 @@ on:
- devel
- release_**
- feature_**
- stable-**
jobs:
push:
runs-on: ubuntu-latest
@@ -23,35 +24,26 @@ jobs:
with:
show-progress: false
- uses: ./.github/actions/setup-python
- name: Log in to registry
run: |
echo "${{ secrets.GITHUB_TOKEN }}" | docker login ghcr.io -u ${{ github.actor }} --password-stdin
- uses: ./.github/actions/setup-ssh-agent
- name: Build awx_devel image to use for schema gen
uses: ./.github/actions/awx_devel_image
with:
ssh-private-key: ${{ secrets.PRIVATE_GITHUB_KEY }}
- name: Pre-pull image to warm build cache
run: |
docker pull -q ghcr.io/${{ github.repository_owner }}/awx_devel:${GITHUB_REF##*/} || :
- name: Build image
run: |
DEV_DOCKER_TAG_BASE=ghcr.io/${{ github.repository_owner }} COMPOSE_TAG=${GITHUB_REF##*/} make docker-compose-build
github-token: ${{ secrets.GITHUB_TOKEN }}
private-github-key: ${{ secrets.PRIVATE_GITHUB_KEY }}
- name: Generate API Schema
run: |
DEV_DOCKER_TAG_BASE=ghcr.io/${OWNER_LC} \
COMPOSE_TAG=${{ github.base_ref || github.ref_name }} \
docker run -u $(id -u) --rm -v ${{ github.workspace }}:/awx_devel/:Z \
--workdir=/awx_devel ghcr.io/${{ github.repository_owner }}/awx_devel:${GITHUB_REF##*/} /start_tests.sh genschema
--workdir=/awx_devel `make print-DEVEL_IMAGE_NAME` /start_tests.sh genschema
- name: Upload API Schema
env:
AWS_ACCESS_KEY: ${{ secrets.AWS_ACCESS_KEY }}
AWS_SECRET_KEY: ${{ secrets.AWS_SECRET_KEY }}
AWS_REGION: 'us-east-1'
run: |
ansible localhost -c local, -m command -a "{{ ansible_python_interpreter + ' -m pip install boto3'}}"
ansible localhost -c local -m aws_s3 \
-a "src=${{ github.workspace }}/schema.json bucket=awx-public-ci-files object=${GITHUB_REF##*/}/schema.json mode=put permission=public-read"
uses: keithweaver/aws-s3-github-action@4dd5a7b81d54abaa23bbac92b27e85d7f405ae53
with:
command: cp
source: ${{ github.workspace }}/schema.json
destination: s3://awx-public-ci-files/${{ github.event.repository.name }}/${{ github.ref_name }}/schema.json
aws_access_key_id: ${{ secrets.AWS_ACCESS_KEY }}
aws_secret_access_key: ${{ secrets.AWS_SECRET_KEY }}
aws_region: us-east-1
flags: --acl public-read --only-show-errors

1
.gitignore vendored
View File

@@ -1,6 +1,7 @@
# Ignore generated schema
swagger.json
schema.json
schema.yaml
reference-schema.json
# Tags

View File

@@ -7,7 +7,7 @@ build:
os: ubuntu-22.04
tools:
python: >-
3.11
3.12
commands:
- pip install --user tox
- python3 -m tox -e docs --notest -v

View File

@@ -31,7 +31,7 @@ Have questions about this document or anything not covered here? Create a topic
- Take care to make sure no merge commits are in the submission, and use `git rebase` vs `git merge` for this reason.
- If collaborating with someone else on the same branch, consider using `--force-with-lease` instead of `--force`. This will prevent you from accidentally overwriting commits pushed by someone else. For more information, see [git push docs](https://git-scm.com/docs/git-push#git-push---force-with-leaseltrefnamegt).
- If submitting a large code change, it's a good idea to create a [forum topic tagged with 'awx'](https://forum.ansible.com/tag/awx), and talk about what you would like to do or add first. This not only helps everyone know what's going on, it also helps save time and effort, if the community decides some changes are needed.
- We ask all of our community members and contributors to adhere to the [Ansible code of conduct](http://docs.ansible.com/ansible/latest/community/code_of_conduct.html). If you have questions, or need assistance, please reach out to our community team at [codeofconduct@ansible.com](mailto:codeofconduct@ansible.com)
- We ask all of our community members and contributors to adhere to the [Ansible code of conduct](https://docs.ansible.com/projects/ansible/latest/community/code_of_conduct.html). If you have questions, or need assistance, please reach out to our community team at [codeofconduct@ansible.com](mailto:codeofconduct@ansible.com)
## Setting up your development environment

View File

@@ -1,6 +1,6 @@
-include awx/ui/Makefile
PYTHON := $(notdir $(shell for i in python3.11 python3; do command -v $$i; done|sed 1q))
PYTHON := $(notdir $(shell for i in python3.12 python3.11 python3; do command -v $$i; done|sed 1q))
SHELL := bash
DOCKER_COMPOSE ?= docker compose
OFFICIAL ?= no
@@ -27,6 +27,8 @@ TEST_DIRS ?= awx/main/tests/unit awx/main/tests/functional awx/conf/tests
PARALLEL_TESTS ?= -n auto
# collection integration test directories (defaults to all)
COLLECTION_TEST_TARGET ?=
# Python version for ansible-test (must be 3.11, 3.12, or 3.13)
ANSIBLE_TEST_PYTHON_VERSION ?= 3.13
# args for collection install
COLLECTION_PACKAGE ?= awx
COLLECTION_NAMESPACE ?= awx
@@ -77,7 +79,7 @@ RECEPTOR_IMAGE ?= quay.io/ansible/receptor:devel
SRC_ONLY_PKGS ?= cffi,pycparser,psycopg,twilio
# These should be upgraded in the AWX and Ansible venv before attempting
# to install the actual requirements
VENV_BOOTSTRAP ?= pip==21.2.4 setuptools==70.3.0 setuptools_scm[toml]==8.1.0 wheel==0.45.1 cython==3.0.11
VENV_BOOTSTRAP ?= pip==25.3 setuptools==80.9.0 setuptools_scm[toml]==9.2.2 wheel==0.45.1 cython==3.1.3
NAME ?= awx
@@ -105,6 +107,8 @@ else
endif
.PHONY: awx-link clean clean-tmp clean-venv requirements requirements_dev \
update_requirements upgrade_requirements update_requirements_dev \
docker_update_requirements docker_upgrade_requirements docker_update_requirements_dev \
develop refresh adduser migrate dbchange \
receiver test test_unit test_coverage coverage_html \
sdist \
@@ -144,7 +148,7 @@ clean-api:
rm -rf build $(NAME)-$(VERSION) *.egg-info
rm -rf .tox
find . -type f -regex ".*\.py[co]$$" -delete
find . -type d -name "__pycache__" -delete
find . -type d -name "__pycache__" -exec rm -rf {} +
rm -f awx/awx_test.sqlite3*
rm -rf requirements/vendor
rm -rf awx/projects
@@ -194,6 +198,36 @@ requirements_dev: requirements_awx requirements_awx_dev
requirements_test: requirements
## Update requirements files using pip-compile (run inside container)
update_requirements:
cd requirements && ./updater.sh run
## Upgrade all requirements to latest versions (run inside container)
upgrade_requirements:
cd requirements && ./updater.sh upgrade
## Update development requirements (run inside container)
update_requirements_dev:
cd requirements && ./updater.sh dev
## Update requirements using docker-runner
docker_update_requirements:
@echo "Running requirements updater..."
AWX_DOCKER_CMD='make update_requirements' $(MAKE) docker-runner
@echo "Requirements update complete!"
## Upgrade requirements using docker-runner
docker_upgrade_requirements:
@echo "Running requirements upgrader..."
AWX_DOCKER_CMD='make upgrade_requirements' $(MAKE) docker-runner
@echo "Requirements upgrade complete!"
## Update dev requirements using docker-runner
docker_update_requirements_dev:
@echo "Running dev requirements updater..."
AWX_DOCKER_CMD='make update_requirements_dev' $(MAKE) docker-runner
@echo "Dev requirements update complete!"
## "Install" awx package in development mode.
develop:
@if [ "$(VIRTUAL_ENV)" ]; then \
@@ -255,7 +289,7 @@ dispatcher:
@if [ "$(VENV_BASE)" ]; then \
. $(VENV_BASE)/awx/bin/activate; \
fi; \
$(PYTHON) manage.py run_dispatcher
$(PYTHON) manage.py dispatcherd
## Run to start the zeromq callback receiver
receiver:
@@ -314,20 +348,17 @@ black: reports
@echo "fi" >> .git/hooks/pre-commit
@chmod +x .git/hooks/pre-commit
genschema: reports
$(MAKE) swagger PYTEST_ADDOPTS="--genschema --create-db "
mv swagger.json schema.json
swagger: reports
genschema: awx-link reports
@if [ "$(VENV_BASE)" ]; then \
. $(VENV_BASE)/awx/bin/activate; \
fi; \
(set -o pipefail && py.test $(COVERAGE_ARGS) $(PARALLEL_TESTS) awx/conf/tests/functional awx/main/tests/functional/api awx/main/tests/docs | tee reports/$@.report)
@if [ "${GITHUB_ACTIONS}" = "true" ]; \
then \
echo 'cov-report-files=reports/coverage.xml' >> "${GITHUB_OUTPUT}"; \
echo 'test-result-files=reports/junit.xml' >> "${GITHUB_OUTPUT}"; \
fi
$(MANAGEMENT_COMMAND) spectacular --format openapi-json --file schema.json
genschema-yaml: awx-link reports
@if [ "$(VENV_BASE)" ]; then \
. $(VENV_BASE)/awx/bin/activate; \
fi; \
$(MANAGEMENT_COMMAND) spectacular --format openapi --file schema.yaml
check: black
@@ -431,8 +462,8 @@ test_collection_sanity:
test_collection_integration: install_collection
cd $(COLLECTION_INSTALL) && \
ansible-test integration --coverage -vvv $(COLLECTION_TEST_TARGET) && \
ansible-test coverage xml --requirements --group-by command --group-by version
PATH="$$($(PYTHON) -c 'import sys; import os; print(os.path.dirname(sys.executable))'):$$PATH" ansible-test integration --python $(ANSIBLE_TEST_PYTHON_VERSION) --coverage -vvv $(COLLECTION_TEST_TARGET) && \
PATH="$$($(PYTHON) -c 'import sys; import os; print(os.path.dirname(sys.executable))'):$$PATH" ansible-test coverage xml --requirements --group-by command --group-by version
@if [ "${GITHUB_ACTIONS}" = "true" ]; \
then \
echo cov-report-files="$$(find "$(COLLECTION_INSTALL)/tests/output/reports/" -type f -name 'coverage=integration*.xml' -print0 | tr '\0' ',' | sed 's#,$$##')" >> "${GITHUB_OUTPUT}"; \
@@ -537,14 +568,16 @@ docker-compose-test: awx/projects docker-compose-sources
docker-compose-runtest: awx/projects docker-compose-sources
$(DOCKER_COMPOSE) -f tools/docker-compose/_sources/docker-compose.yml run --rm --service-ports awx_1 /start_tests.sh
docker-compose-build-swagger: awx/projects docker-compose-sources
$(DOCKER_COMPOSE) -f tools/docker-compose/_sources/docker-compose.yml run --rm --service-ports --no-deps awx_1 /start_tests.sh swagger
docker-compose-build-schema: awx/projects docker-compose-sources
$(DOCKER_COMPOSE) -f tools/docker-compose/_sources/docker-compose.yml run --rm --service-ports --no-deps awx_1 make genschema
SCHEMA_DIFF_BASE_FOLDER ?= awx
SCHEMA_DIFF_BASE_BRANCH ?= devel
detect-schema-change: genschema
curl https://s3.amazonaws.com/awx-public-ci-files/$(SCHEMA_DIFF_BASE_BRANCH)/schema.json -o reference-schema.json
curl https://s3.amazonaws.com/awx-public-ci-files/$(SCHEMA_DIFF_BASE_FOLDER)/$(SCHEMA_DIFF_BASE_BRANCH)/schema.json -o reference-schema.json
# Ignore differences in whitespace with -b
diff -u -b reference-schema.json schema.json
# diff exits with 1 when files differ - capture but don't fail
-diff -u -b reference-schema.json schema.json
docker-compose-clean: awx/projects
$(DOCKER_COMPOSE) -f tools/docker-compose/_sources/docker-compose.yml rm -sf
@@ -577,7 +610,7 @@ docker-compose-build: Dockerfile.dev
docker-compose-buildx: Dockerfile.dev
- docker buildx create --name docker-compose-buildx
docker buildx use docker-compose-buildx
- docker buildx build \
docker buildx build \
--ssh default=$(SSH_AUTH_SOCK) \
--push \
--build-arg BUILDKIT_INLINE_CACHE=1 \
@@ -637,7 +670,7 @@ awx-kube-build: Dockerfile
awx-kube-buildx: Dockerfile
- docker buildx create --name awx-kube-buildx
docker buildx use awx-kube-buildx
- docker buildx build \
docker buildx build \
--ssh default=$(SSH_AUTH_SOCK) \
--push \
--build-arg VERSION=$(VERSION) \
@@ -671,7 +704,7 @@ awx-kube-dev-build: Dockerfile.kube-dev
awx-kube-dev-buildx: Dockerfile.kube-dev
- docker buildx create --name awx-kube-dev-buildx
docker buildx use awx-kube-dev-buildx
- docker buildx build \
docker buildx build \
--ssh default=$(SSH_AUTH_SOCK) \
--push \
--build-arg BUILDKIT_INLINE_CACHE=1 \

View File

@@ -1,4 +1,4 @@
[![CI](https://github.com/ansible/awx/actions/workflows/ci.yml/badge.svg?branch=devel)](https://github.com/ansible/awx/actions/workflows/ci.yml) [![codecov](https://codecov.io/github/ansible/awx/graph/badge.svg?token=4L4GSP9IAR)](https://codecov.io/github/ansible/awx) [![Code of Conduct](https://img.shields.io/badge/code%20of%20conduct-Ansible-yellow.svg)](https://docs.ansible.com/ansible/latest/community/code_of_conduct.html) [![Apache v2 License](https://img.shields.io/badge/license-Apache%202.0-brightgreen.svg)](https://github.com/ansible/awx/blob/devel/LICENSE.md) [![AWX on the Ansible Forum](https://img.shields.io/badge/mailing%20list-AWX-orange.svg)](https://forum.ansible.com/tag/awx)
[![CI](https://github.com/ansible/awx/actions/workflows/ci.yml/badge.svg?branch=devel)](https://github.com/ansible/awx/actions/workflows/ci.yml) [![codecov](https://codecov.io/github/ansible/awx/graph/badge.svg?token=4L4GSP9IAR)](https://codecov.io/github/ansible/awx) [![Code of Conduct](https://img.shields.io/badge/code%20of%20conduct-Ansible-yellow.svg)](https://docs.ansible.com/projects/ansible/latest/community/code_of_conduct.html) [![Apache v2 License](https://img.shields.io/badge/license-Apache%202.0-brightgreen.svg)](https://github.com/ansible/awx/blob/devel/LICENSE.md) [![AWX on the Ansible Forum](https://img.shields.io/badge/mailing%20list-AWX-orange.svg)](https://forum.ansible.com/tag/awx)
[![Ansible Matrix](https://img.shields.io/badge/matrix-Ansible%20Community-blueviolet.svg?logo=matrix)](https://chat.ansible.im/#/welcome) [![Ansible Discourse](https://img.shields.io/badge/discourse-Ansible%20Community-yellowgreen.svg?logo=discourse)](https://forum.ansible.com)
<img src="https://raw.githubusercontent.com/ansible/awx-logos/master/awx/ui/client/assets/logo-login.svg?sanitize=true" width=200 alt="AWX" />
@@ -18,7 +18,7 @@ AWX provides a web-based user interface, REST API, and task engine built on top
To install AWX, please view the [Install guide](./INSTALL.md).
To learn more about using AWX, view the [AWX docs site](https://ansible.readthedocs.io/projects/awx/en/latest/).
To learn more about using AWX, view the [AWX docs site](https://docs.ansible.com/projects/awx/en/latest/).
The AWX Project Frequently Asked Questions can be found [here](https://www.ansible.com/awx-project-faq).
@@ -41,11 +41,11 @@ If you're experiencing a problem that you feel is a bug in AWX or have ideas for
Code of Conduct
---------------
We require all of our community members and contributors to adhere to the [Ansible code of conduct](http://docs.ansible.com/ansible/latest/community/code_of_conduct.html). If you have questions or need assistance, please reach out to our community team at [codeofconduct@ansible.com](mailto:codeofconduct@ansible.com)
We require all of our community members and contributors to adhere to the [Ansible code of conduct](https://docs.ansible.com/projects/ansible/latest/community/code_of_conduct.html). If you have questions or need assistance, please reach out to our community team at [codeofconduct@ansible.com](mailto:codeofconduct@ansible.com)
Get Involved
------------
We welcome your feedback and ideas via the [Ansible Forum](https://forum.ansible.com/tag/awx).
For a full list of all the ways to talk with the Ansible Community, see the [AWX Communication guide](https://ansible.readthedocs.io/projects/awx/en/latest/contributor/communication.html).
For a full list of all the ways to talk with the Ansible Community, see the [AWX Communication guide](https://docs.ansible.com/projects/awx/en/latest/contributor/communication.html).

View File

@@ -7,7 +7,6 @@ from rest_framework import serializers
# AWX
from awx.conf import fields, register, register_validate
register(
'SESSION_COOKIE_AGE',
field_class=fields.IntegerField,

View File

@@ -21,7 +21,7 @@ class NullFieldMixin(object):
"""
def validate_empty_values(self, data):
(is_empty_value, data) = super(NullFieldMixin, self).validate_empty_values(data)
is_empty_value, data = super(NullFieldMixin, self).validate_empty_values(data)
if is_empty_value and data is None:
return (False, data)
return (is_empty_value, data)
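In effect, the mixin un-marks None as an "empty" value, so DRF's run_validation continues instead of short-circuiting. A minimal illustrative sketch (the NullCharField subclass is hypothetical, not part of this diff):
from rest_framework import serializers
# Hypothetical subclass for illustration only.
class NullCharField(NullFieldMixin, serializers.CharField):
    pass
# With allow_null=True, stock DRF flags None as "empty" and skips further
# validation; the mixin returns (False, None) so validators still see None.
field = NullCharField(allow_null=True, required=False)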

View File

@@ -161,16 +161,14 @@ def get_view_description(view, html=False):
def get_default_schema():
if settings.DYNACONF.is_development_mode:
from awx.api.swagger import schema_view
return schema_view
else:
return views.APIView.schema
# drf-spectacular is configured via REST_FRAMEWORK['DEFAULT_SCHEMA_CLASS']
# Just use the DRF default, which will pick up our CustomAutoSchema
return views.APIView.schema
class APIView(views.APIView):
schema = get_default_schema()
# Schema is inherited from DRF's APIView, which uses DEFAULT_SCHEMA_CLASS
# No need to override it here - drf-spectacular will handle it
versioning_class = URLPathVersioning
def initialize_request(self, request, *args, **kwargs):
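The comments above assume drf-spectacular is wired up through DRF settings; a sketch of what that entry would look like (the dotted path is an assumption, since the settings module is not part of this diff):
REST_FRAMEWORK = {
    # Assumed dotted path to the CustomAutoSchema added in awx/api/schema.py.
    'DEFAULT_SCHEMA_CLASS': 'awx.api.schema.CustomAutoSchema',
}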
@@ -766,7 +764,7 @@ class SubListCreateAttachDetachAPIView(SubListCreateAPIView):
return Response(status=status.HTTP_204_NO_CONTENT)
def unattach(self, request, *args, **kwargs):
(sub_id, res) = self.unattach_validate(request)
sub_id, res = self.unattach_validate(request)
if res:
return res
return self.unattach_by_id(request, sub_id)
@@ -844,7 +842,7 @@ class ResourceAccessList(ParentMixin, ListAPIView):
if settings.ANSIBLE_BASE_ROLE_SYSTEM_ACTIVATED:
ancestors = set(RoleEvaluation.objects.filter(content_type_id=content_type.id, object_id=obj.id).values_list('role_id', flat=True))
qs = User.objects.filter(has_roles__in=ancestors) | User.objects.filter(is_superuser=True)
auditor_role = RoleDefinition.objects.filter(name="Controller System Auditor").first()
auditor_role = RoleDefinition.objects.filter(name="Platform Auditor").first()
if auditor_role:
qs |= User.objects.filter(role_assignments__role_definition=auditor_role)
return qs.distinct()
@@ -1025,6 +1023,9 @@ class GenericCancelView(RetrieveAPIView):
# In subclass set model, serializer_class
obj_permission_type = 'cancel'
def get(self, request, *args, **kwargs):
return super(GenericCancelView, self).get(request, *args, **kwargs)
@transaction.non_atomic_requests
def dispatch(self, *args, **kwargs):
return super(GenericCancelView, self).dispatch(*args, **kwargs)

View File

@@ -5,7 +5,6 @@ from django.urls import re_path
from awx.api.views import MetricsView
urls = [re_path(r'^$', MetricsView.as_view(), name='metrics_view')]
__all__ = ['urls']

View File

@@ -111,7 +111,7 @@ class UnifiedJobEventPagination(Pagination):
def __init__(self, *args, **kwargs):
self.use_limit_paginator = False
self.limit_pagination = LimitPagination()
return super().__init__(*args, **kwargs)
super().__init__(*args, **kwargs)
def paginate_queryset(self, queryset, request, view=None):
if 'limit' in request.query_params:

View File

@@ -10,7 +10,7 @@ from rest_framework import permissions
# AWX
from awx.main.access import check_user_access
from awx.main.models import Inventory, UnifiedJob
from awx.main.models import Inventory, UnifiedJob, Organization
from awx.main.utils import get_object_or_400
logger = logging.getLogger('awx.api.permissions')
@@ -228,12 +228,19 @@ class InventoryInventorySourcesUpdatePermission(ModelAccessPermission):
class UserPermission(ModelAccessPermission):
def check_post_permissions(self, request, view, obj=None):
if not request.data:
return request.user.admin_of_organizations.exists()
return Organization.access_qs(request.user, 'change').exists()
elif request.user.is_superuser:
return True
raise PermissionDenied()
class IsSystemAdmin(permissions.BasePermission):
def has_permission(self, request, view):
if not (request.user and request.user.is_authenticated):
return False
return request.user.is_superuser
class IsSystemAdminOrAuditor(permissions.BasePermission):
"""
Allows write access only to system admin users.

awx/api/schema.py (new file, 119 lines)
View File

@@ -0,0 +1,119 @@
import warnings
from rest_framework.permissions import IsAuthenticated
from drf_spectacular.openapi import AutoSchema
from drf_spectacular.views import (
SpectacularAPIView,
SpectacularSwaggerView,
SpectacularRedocView,
)
def filter_credential_type_schema(
result,
generator, # NOSONAR
request, # NOSONAR
public, # NOSONAR
):
"""
Postprocessing hook to filter CredentialType kind enum values.
For CredentialTypeRequest and PatchedCredentialTypeRequest schemas (POST/PUT/PATCH),
filter the 'kind' enum to only show 'cloud' and 'net' values.
This ensures the OpenAPI schema accurately reflects that only 'cloud' and 'net'
credential types can be created or modified via the API, matching the validation
in CredentialTypeSerializer.validate().
Args:
result: The OpenAPI schema dict to be modified
generator, request, public: Required by drf-spectacular interface (unused)
Returns:
The modified OpenAPI schema dict
"""
schemas = result.get('components', {}).get('schemas', {})
# Filter CredentialTypeRequest (POST/PUT) - field is required
if 'CredentialTypeRequest' in schemas:
kind_prop = schemas['CredentialTypeRequest'].get('properties', {}).get('kind', {})
if 'enum' in kind_prop:
# Filter to only cloud and net (no None - field is required)
kind_prop['enum'] = ['cloud', 'net']
kind_prop['description'] = "* `cloud` - Cloud\n* `net` - Network"
# Filter PatchedCredentialTypeRequest (PATCH) - field is optional
if 'PatchedCredentialTypeRequest' in schemas:
kind_prop = schemas['PatchedCredentialTypeRequest'].get('properties', {}).get('kind', {})
if 'enum' in kind_prop:
# Filter to only cloud and net (None allowed - field can be omitted in PATCH)
kind_prop['enum'] = ['cloud', 'net', None]
kind_prop['description'] = "* `cloud` - Cloud\n* `net` - Network"
return result
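For the hook to run, drf-spectacular expects it to be listed under SPECTACULAR_SETTINGS; a minimal sketch, assuming the hook's dotted path from this new module:
SPECTACULAR_SETTINGS = {
    # Postprocessing hooks run on the fully assembled schema dict.
    'POSTPROCESSING_HOOKS': [
        'awx.api.schema.filter_credential_type_schema',
    ],
}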
class CustomAutoSchema(AutoSchema):
"""Custom AutoSchema to add swagger_topic to tags and handle deprecated endpoints."""
def get_tags(self):
tags = []
try:
if hasattr(self.view, 'get_serializer'):
serializer = self.view.get_serializer()
else:
serializer = None
except Exception:
serializer = None
warnings.warn(
'{}.get_serializer() raised an exception during '
'schema generation. Serializer fields will not be '
'generated for this view.'.format(self.view.__class__.__name__)
)
if hasattr(self.view, 'swagger_topic'):
tags.append(str(self.view.swagger_topic).title())
elif serializer and hasattr(serializer, 'Meta') and hasattr(serializer.Meta, 'model'):
tags.append(str(serializer.Meta.model._meta.verbose_name_plural).title())
elif hasattr(self.view, 'model'):
tags.append(str(self.view.model._meta.verbose_name_plural).title())
else:
tags = super().get_tags() # Use default drf-spectacular behavior
if not tags:
warnings.warn(f'Could not determine tags for {self.view.__class__.__name__}')
tags = ['api'] # Fallback to default value
return tags
def is_deprecated(self):
"""Return `True` if this operation is to be marked as deprecated."""
return getattr(self.view, 'deprecated', False)
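A small sketch of how a view opts into this tagging (the view class here is hypothetical; the attribute names come from the code above):
class PingView(APIView):
    swagger_topic = 'System Configuration'  # becomes the operation's OpenAPI tag
    deprecated = True                       # picked up by is_deprecated() above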
class AuthenticatedSpectacularAPIView(SpectacularAPIView):
"""SpectacularAPIView that requires authentication."""
permission_classes = [IsAuthenticated]
class AuthenticatedSpectacularSwaggerView(SpectacularSwaggerView):
"""SpectacularSwaggerView that requires authentication."""
permission_classes = [IsAuthenticated]
class AuthenticatedSpectacularRedocView(SpectacularRedocView):
"""SpectacularRedocView that requires authentication."""
permission_classes = [IsAuthenticated]
# Schema view (returns OpenAPI schema JSON/YAML)
schema_view = AuthenticatedSpectacularAPIView.as_view()
# Swagger UI view
swagger_ui_view = AuthenticatedSpectacularSwaggerView.as_view(url_name='api:schema-json')
# ReDoc UI view
redoc_view = AuthenticatedSpectacularRedocView.as_view(url_name='api:schema-json')
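One possible wiring for these views, sketched here only for orientation: the url_name='api:schema-json' above implies a route with that name exists, and the urls.py change later in this diff notes the docs/schema endpoints are now exposed by the DAB api_documentation app rather than listed locally.
from django.urls import re_path
urlpatterns = [
    re_path(r'^schema/$', schema_view, name='schema-json'),
    re_path(r'^docs/$', swagger_ui_view, name='schema-swagger-ui'),
    re_path(r'^redoc/$', redoc_view, name='schema-redoc'),
]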

View File

@@ -963,13 +963,13 @@ class UnifiedJobSerializer(BaseSerializer):
class UnifiedJobListSerializer(UnifiedJobSerializer):
class Meta:
fields = ('*', '-job_args', '-job_cwd', '-job_env', '-result_traceback', '-event_processing_finished')
fields = ('*', '-job_args', '-job_cwd', '-job_env', '-result_traceback', '-event_processing_finished', '-artifacts')
def get_field_names(self, declared_fields, info):
field_names = super(UnifiedJobListSerializer, self).get_field_names(declared_fields, info)
# Meta multiple inheritance and -field_name options don't seem to be
# taking effect above, so remove the undesired fields here.
return tuple(x for x in field_names if x not in ('job_args', 'job_cwd', 'job_env', 'result_traceback', 'event_processing_finished'))
return tuple(x for x in field_names if x not in ('job_args', 'job_cwd', 'job_env', 'result_traceback', 'event_processing_finished', 'artifacts'))
def get_types(self):
if type(self) is UnifiedJobListSerializer:
@@ -1230,7 +1230,7 @@ class OrganizationSerializer(BaseSerializer, OpaQueryPathMixin):
# to a team. This provides a hint to the ui so it can know to not
# display these roles for team role selection.
for key in ('admin_role', 'member_role'):
if key in summary_dict.get('object_roles', {}):
if summary_dict and key in summary_dict.get('object_roles', {}):
summary_dict['object_roles'][key]['user_only'] = True
return summary_dict
@@ -2165,13 +2165,13 @@ class BulkHostDeleteSerializer(serializers.Serializer):
attrs['hosts_data'] = attrs['host_qs'].values()
if len(attrs['host_qs']) == 0:
error_hosts = {host: "Hosts do not exist or you lack permission to delete it" for host in attrs['hosts']}
error_hosts = dict.fromkeys(attrs['hosts'], "Hosts do not exist or you lack permission to delete it")
raise serializers.ValidationError({'hosts': error_hosts})
if len(attrs['host_qs']) < len(attrs['hosts']):
hosts_exists = [host['id'] for host in attrs['hosts_data']]
failed_hosts = list(set(attrs['hosts']).difference(hosts_exists))
error_hosts = {host: "Hosts do not exist or you lack permission to delete it" for host in failed_hosts}
error_hosts = dict.fromkeys(failed_hosts, "Hosts do not exist or you lack permission to delete it")
raise serializers.ValidationError({'hosts': error_hosts})
# Getting all inventories that the hosts can be in
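The dict.fromkeys change is behavior-preserving for an immutable value like this message string; a quick equivalence check in plain Python:
msg = "Hosts do not exist or you lack permission to delete it"
assert dict.fromkeys([1, 2], msg) == {h: msg for h in [1, 2]}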
@@ -2839,7 +2839,7 @@ class ResourceAccessListElementSerializer(UserSerializer):
{
"role": {
"id": None,
"name": _("Controller System Auditor"),
"name": _("Platform Auditor"),
"description": _("Can view all aspects of the system"),
"user_capabilities": {"unattach": False},
},
@@ -3027,11 +3027,6 @@ class CredentialSerializer(BaseSerializer):
ret.remove(field)
return ret
def validate_organization(self, org):
if self.instance and (not self.instance.managed) and self.instance.credential_type.kind == 'galaxy' and org is None:
raise serializers.ValidationError(_("Galaxy credentials must be owned by an Organization."))
return org
def validate_credential_type(self, credential_type):
if self.instance and credential_type.pk != self.instance.credential_type.pk:
for related_objects in (
@@ -3107,9 +3102,6 @@ class CredentialSerializerCreate(CredentialSerializer):
if attrs.get('team'):
attrs['organization'] = attrs['team'].organization
if 'credential_type' in attrs and attrs['credential_type'].kind == 'galaxy' and list(owner_fields) != ['organization']:
raise serializers.ValidationError({"organization": _("Galaxy credentials must be owned by an Organization.")})
return super(CredentialSerializerCreate, self).validate(attrs)
def create(self, validated_data):
@@ -3535,7 +3527,7 @@ class JobRelaunchSerializer(BaseSerializer):
choices=NEW_JOB_TYPE_CHOICES,
write_only=True,
)
credential_passwords = VerbatimField(required=True, write_only=True)
credential_passwords = VerbatimField(required=False, write_only=True)
class Meta:
model = Job
@@ -6006,7 +5998,7 @@ class InstanceGroupSerializer(BaseSerializer):
if self.instance and not self.instance.is_container_group:
raise serializers.ValidationError(_('pod_spec_override is only valid for container groups'))
pod_spec_override_json = None
pod_spec_override_json = {}
# detect if the value is yaml or json; if yaml, convert to json
try:
# convert yaml to json

View File

@@ -1,55 +0,0 @@
import warnings
from rest_framework.permissions import AllowAny
from drf_yasg import openapi
from drf_yasg.inspectors import SwaggerAutoSchema
from drf_yasg.views import get_schema_view
class CustomSwaggerAutoSchema(SwaggerAutoSchema):
"""Custom SwaggerAutoSchema to add swagger_topic to tags."""
def get_tags(self, operation_keys=None):
tags = []
try:
if hasattr(self.view, 'get_serializer'):
serializer = self.view.get_serializer()
else:
serializer = None
except Exception:
serializer = None
warnings.warn(
'{}.get_serializer() raised an exception during '
'schema generation. Serializer fields will not be '
'generated for {}.'.format(self.view.__class__.__name__, operation_keys)
)
if hasattr(self.view, 'swagger_topic'):
tags.append(str(self.view.swagger_topic).title())
elif serializer and hasattr(serializer, 'Meta'):
tags.append(str(serializer.Meta.model._meta.verbose_name_plural).title())
elif hasattr(self.view, 'model'):
tags.append(str(self.view.model._meta.verbose_name_plural).title())
else:
tags = ['api'] # Fallback to default value
if not tags:
warnings.warn(f'Could not determine tags for {self.view.__class__.__name__}')
return tags
def is_deprecated(self):
"""Return `True` if this operation is to be marked as deprecated."""
return getattr(self.view, 'deprecated', False)
schema_view = get_schema_view(
openapi.Info(
title='AWX API',
default_version='v2',
description='AWX API Documentation',
terms_of_service='https://www.google.com/policies/terms/',
contact=openapi.Contact(email='contact@snippets.local'),
license=openapi.License(name='Apache License'),
),
public=True,
permission_classes=[AllowAny],
)

View File

@@ -1,6 +1,6 @@
{% if content_only %}<div class="nocode ansi_fore ansi_back{% if dark %} ansi_dark{% endif %}">{% else %}
<!DOCTYPE HTML>
<html>
<html lang="en">
<head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8">
<title>{{ title }}</title>

View File

@@ -1,4 +1,4 @@
---
collections:
- name: ansible.receptor
version: 2.0.3
version: 2.0.6

View File

@@ -5,7 +5,6 @@ from django.urls import re_path
from awx.api.views import ActivityStreamList, ActivityStreamDetail
urls = [
re_path(r'^$', ActivityStreamList.as_view(), name='activity_stream_list'),
re_path(r'^(?P<pk>[0-9]+)/$', ActivityStreamDetail.as_view(), name='activity_stream_detail'),

View File

@@ -14,7 +14,6 @@ from awx.api.views import (
AdHocCommandStdout,
)
urls = [
re_path(r'^$', AdHocCommandList.as_view(), name='ad_hoc_command_list'),
re_path(r'^(?P<pk>[0-9]+)/$', AdHocCommandDetail.as_view(), name='ad_hoc_command_detail'),

View File

@@ -5,7 +5,6 @@ from django.urls import re_path
from awx.api.views import AdHocCommandEventDetail
urls = [
re_path(r'^(?P<pk>[0-9]+)/$', AdHocCommandEventDetail.as_view(), name='ad_hoc_command_event_detail'),
]

View File

@@ -5,7 +5,6 @@ from django.urls import re_path
import awx.api.views.analytics as analytics
urls = [
re_path(r'^$', analytics.AnalyticsRootView.as_view(), name='analytics_root_view'),
re_path(r'^authorized/$', analytics.AnalyticsAuthorizedView.as_view(), name='analytics_authorized'),

View File

@@ -16,7 +16,6 @@ from awx.api.views import (
CredentialExternalTest,
)
urls = [
re_path(r'^$', CredentialList.as_view(), name='credential_list'),
re_path(r'^(?P<pk>[0-9]+)/activity_stream/$', CredentialActivityStreamList.as_view(), name='credential_activity_stream_list'),

View File

@@ -5,7 +5,6 @@ from django.urls import re_path
from awx.api.views import CredentialInputSourceDetail, CredentialInputSourceList
urls = [
re_path(r'^$', CredentialInputSourceList.as_view(), name='credential_input_source_list'),
re_path(r'^(?P<pk>[0-9]+)/$', CredentialInputSourceDetail.as_view(), name='credential_input_source_detail'),

View File

@@ -5,7 +5,6 @@ from django.urls import re_path
from awx.api.views import CredentialTypeList, CredentialTypeDetail, CredentialTypeCredentialList, CredentialTypeActivityStreamList, CredentialTypeExternalTest
urls = [
re_path(r'^$', CredentialTypeList.as_view(), name='credential_type_list'),
re_path(r'^(?P<pk>[0-9]+)/$', CredentialTypeDetail.as_view(), name='credential_type_detail'),

View File

@@ -8,7 +8,6 @@ from awx.api.views import (
ExecutionEnvironmentActivityStreamList,
)
urls = [
re_path(r'^$', ExecutionEnvironmentList.as_view(), name='execution_environment_list'),
re_path(r'^(?P<pk>[0-9]+)/$', ExecutionEnvironmentDetail.as_view(), name='execution_environment_detail'),

View File

@@ -18,7 +18,6 @@ from awx.api.views import (
GroupAdHocCommandsList,
)
urls = [
re_path(r'^$', GroupList.as_view(), name='group_list'),
re_path(r'^(?P<pk>[0-9]+)/$', GroupDetail.as_view(), name='group_detail'),

View File

@@ -18,7 +18,6 @@ from awx.api.views import (
HostAdHocCommandEventsList,
)
urls = [
re_path(r'^$', HostList.as_view(), name='host_list'),
re_path(r'^(?P<pk>[0-9]+)/$', HostDetail.as_view(), name='host_detail'),

View File

@@ -14,7 +14,6 @@ from awx.api.views import (
)
from awx.api.views.instance_install_bundle import InstanceInstallBundle
urls = [
re_path(r'^$', InstanceList.as_view(), name='instance_list'),
re_path(r'^(?P<pk>[0-9]+)/$', InstanceDetail.as_view(), name='instance_detail'),

View File

@@ -12,7 +12,6 @@ from awx.api.views import (
InstanceGroupObjectRolesList,
)
urls = [
re_path(r'^$', InstanceGroupList.as_view(), name='instance_group_list'),
re_path(r'^(?P<pk>[0-9]+)/$', InstanceGroupDetail.as_view(), name='instance_group_detail'),

View File

@@ -29,7 +29,6 @@ from awx.api.views import (
InventoryVariableData,
)
urls = [
re_path(r'^$', InventoryList.as_view(), name='inventory_list'),
re_path(r'^(?P<pk>[0-9]+)/$', InventoryDetail.as_view(), name='inventory_detail'),

View File

@@ -18,7 +18,6 @@ from awx.api.views import (
InventorySourceNotificationTemplatesSuccessList,
)
urls = [
re_path(r'^$', InventorySourceList.as_view(), name='inventory_source_list'),
re_path(r'^(?P<pk>[0-9]+)/$', InventorySourceDetail.as_view(), name='inventory_source_detail'),

View File

@@ -15,7 +15,6 @@ from awx.api.views import (
InventoryUpdateCredentialsList,
)
urls = [
re_path(r'^$', InventoryUpdateList.as_view(), name='inventory_update_list'),
re_path(r'^(?P<pk>[0-9]+)/$', InventoryUpdateDetail.as_view(), name='inventory_update_detail'),

View File

@@ -19,7 +19,6 @@ from awx.api.views import (
JobHostSummaryDetail,
)
urls = [
re_path(r'^$', JobList.as_view(), name='job_list'),
re_path(r'^(?P<pk>[0-9]+)/$', JobDetail.as_view(), name='job_detail'),

View File

@@ -5,7 +5,6 @@ from django.urls import re_path
from awx.api.views import JobHostSummaryDetail
urls = [re_path(r'^(?P<pk>[0-9]+)/$', JobHostSummaryDetail.as_view(), name='job_host_summary_detail')]
__all__ = ['urls']

View File

@@ -23,7 +23,6 @@ from awx.api.views import (
JobTemplateCopy,
)
urls = [
re_path(r'^$', JobTemplateList.as_view(), name='job_template_list'),
re_path(r'^(?P<pk>[0-9]+)/$', JobTemplateDetail.as_view(), name='job_template_detail'),

View File

@@ -5,7 +5,6 @@ from django.urls import re_path
from awx.api.views.labels import LabelList, LabelDetail
urls = [re_path(r'^$', LabelList.as_view(), name='label_list'), re_path(r'^(?P<pk>[0-9]+)/$', LabelDetail.as_view(), name='label_detail')]
__all__ = ['urls']

View File

@@ -5,7 +5,6 @@ from django.urls import re_path
from awx.api.views import NotificationList, NotificationDetail
urls = [
re_path(r'^$', NotificationList.as_view(), name='notification_list'),
re_path(r'^(?P<pk>[0-9]+)/$', NotificationDetail.as_view(), name='notification_detail'),

View File

@@ -11,7 +11,6 @@ from awx.api.views import (
NotificationTemplateCopy,
)
urls = [
re_path(r'^$', NotificationTemplateList.as_view(), name='notification_template_list'),
re_path(r'^(?P<pk>[0-9]+)/$', NotificationTemplateDetail.as_view(), name='notification_template_detail'),

View File

@@ -27,7 +27,6 @@ from awx.api.views.organization import (
)
from awx.api.views import OrganizationCredentialList
urls = [
re_path(r'^$', OrganizationList.as_view(), name='organization_list'),
re_path(r'^(?P<pk>[0-9]+)/$', OrganizationDetail.as_view(), name='organization_detail'),

View File

@@ -22,7 +22,6 @@ from awx.api.views import (
ProjectCopy,
)
urls = [
re_path(r'^$', ProjectList.as_view(), name='project_list'),
re_path(r'^(?P<pk>[0-9]+)/$', ProjectDetail.as_view(), name='project_detail'),

View File

@@ -13,7 +13,6 @@ from awx.api.views import (
ProjectUpdateEventsList,
)
urls = [
re_path(r'^$', ProjectUpdateList.as_view(), name='project_update_list'),
re_path(r'^(?P<pk>[0-9]+)/$', ProjectUpdateDetail.as_view(), name='project_update_detail'),

View File

@@ -8,7 +8,6 @@ from awx.api.views import (
ReceptorAddressDetail,
)
urls = [
re_path(r'^$', ReceptorAddressesList.as_view(), name='receptor_addresses_list'),
re_path(r'^(?P<pk>[0-9]+)/$', ReceptorAddressDetail.as_view(), name='receptor_address_detail'),

View File

@@ -3,16 +3,13 @@
from django.urls import re_path
from awx.api.views import RoleList, RoleDetail, RoleUsersList, RoleTeamsList, RoleParentsList, RoleChildrenList
from awx.api.views import RoleList, RoleDetail, RoleUsersList, RoleTeamsList
urls = [
re_path(r'^$', RoleList.as_view(), name='role_list'),
re_path(r'^(?P<pk>[0-9]+)/$', RoleDetail.as_view(), name='role_detail'),
re_path(r'^(?P<pk>[0-9]+)/users/$', RoleUsersList.as_view(), name='role_users_list'),
re_path(r'^(?P<pk>[0-9]+)/teams/$', RoleTeamsList.as_view(), name='role_teams_list'),
re_path(r'^(?P<pk>[0-9]+)/parents/$', RoleParentsList.as_view(), name='role_parents_list'),
re_path(r'^(?P<pk>[0-9]+)/children/$', RoleChildrenList.as_view(), name='role_children_list'),
]
__all__ = ['urls']

View File

@@ -5,7 +5,6 @@ from django.urls import re_path
from awx.api.views import ScheduleList, ScheduleDetail, ScheduleUnifiedJobsList, ScheduleCredentialsList, ScheduleLabelsList, ScheduleInstanceGroupList
urls = [
re_path(r'^$', ScheduleList.as_view(), name='schedule_list'),
re_path(r'^(?P<pk>[0-9]+)/$', ScheduleDetail.as_view(), name='schedule_detail'),

View File

@@ -5,7 +5,6 @@ from django.urls import re_path
from awx.api.views import SystemJobList, SystemJobDetail, SystemJobCancel, SystemJobNotificationsList, SystemJobEventsList
urls = [
re_path(r'^$', SystemJobList.as_view(), name='system_job_list'),
re_path(r'^(?P<pk>[0-9]+)/$', SystemJobDetail.as_view(), name='system_job_detail'),

View File

@@ -14,7 +14,6 @@ from awx.api.views import (
SystemJobTemplateNotificationTemplatesSuccessList,
)
urls = [
re_path(r'^$', SystemJobTemplateList.as_view(), name='system_job_template_list'),
re_path(r'^(?P<pk>[0-9]+)/$', SystemJobTemplateDetail.as_view(), name='system_job_template_detail'),

View File

@@ -15,7 +15,6 @@ from awx.api.views import (
TeamAccessList,
)
urls = [
re_path(r'^$', TeamList.as_view(), name='team_list'),
re_path(r'^(?P<pk>[0-9]+)/$', TeamDetail.as_view(), name='team_detail'),

View File

@@ -4,7 +4,6 @@
from __future__ import absolute_import, unicode_literals
from django.urls import include, re_path
from awx import MODE
from awx.api.generics import LoggedLoginView, LoggedLogoutView
from awx.api.views.root import (
ApiRootView,
@@ -148,21 +147,15 @@ v2_urls = [
app_name = 'api'
urlpatterns = [
re_path(r'^$', ApiRootView.as_view(), name='api_root_view'),
re_path(r'^(?P<version>(v2))/', include(v2_urls)),
re_path(r'^login/$', LoggedLoginView.as_view(template_name='rest_framework/login.html', extra_context={'inside_login_context': True}), name='login'),
re_path(r'^logout/$', LoggedLogoutView.as_view(next_page='/api/', redirect_field_name='next'), name='logout'),
# the docs/, schema-related endpoints used to be listed here but now exposed by DAB api_documentation app
]
if MODE == 'development':
# Only include these if we are in the development environment
from awx.api.swagger import schema_view
from awx.api.urls.debug import urls as debug_urls
from awx.api.urls.debug import urls as debug_urls
urlpatterns += [re_path(r'^debug/', include(debug_urls))]
urlpatterns += [
re_path(r'^swagger(?P<format>\.json|\.yaml)/$', schema_view.without_ui(cache_timeout=0), name='schema-json'),
re_path(r'^swagger/$', schema_view.with_ui('swagger', cache_timeout=0), name='schema-swagger-ui'),
re_path(r'^redoc/$', schema_view.with_ui('redoc', cache_timeout=0), name='schema-redoc'),
]
urlpatterns += [re_path(r'^debug/', include(debug_urls))]

View File

@@ -2,7 +2,6 @@ from django.urls import re_path
from awx.api.views.webhooks import WebhookKeyView, GithubWebhookReceiver, GitlabWebhookReceiver, BitbucketDcWebhookReceiver
urlpatterns = [
re_path(r'^webhook_key/$', WebhookKeyView.as_view(), name='webhook_key'),
re_path(r'^github/$', GithubWebhookReceiver.as_view(), name='webhook_receiver_github'),

View File

@@ -5,7 +5,6 @@ from django.urls import re_path
from awx.api.views import WorkflowApprovalList, WorkflowApprovalDetail, WorkflowApprovalApprove, WorkflowApprovalDeny
urls = [
re_path(r'^$', WorkflowApprovalList.as_view(), name='workflow_approval_list'),
re_path(r'^(?P<pk>[0-9]+)/$', WorkflowApprovalDetail.as_view(), name='workflow_approval_detail'),

View File

@@ -5,7 +5,6 @@ from django.urls import re_path
from awx.api.views import WorkflowApprovalTemplateDetail, WorkflowApprovalTemplateJobsList
urls = [
re_path(r'^(?P<pk>[0-9]+)/$', WorkflowApprovalTemplateDetail.as_view(), name='workflow_approval_template_detail'),
re_path(r'^(?P<pk>[0-9]+)/approvals/$', WorkflowApprovalTemplateJobsList.as_view(), name='workflow_approval_template_jobs_list'),

View File

@@ -14,7 +14,6 @@ from awx.api.views import (
WorkflowJobActivityStreamList,
)
urls = [
re_path(r'^$', WorkflowJobList.as_view(), name='workflow_job_list'),
re_path(r'^(?P<pk>[0-9]+)/$', WorkflowJobDetail.as_view(), name='workflow_job_detail'),

View File

@@ -14,7 +14,6 @@ from awx.api.views import (
WorkflowJobNodeInstanceGroupsList,
)
urls = [
re_path(r'^$', WorkflowJobNodeList.as_view(), name='workflow_job_node_list'),
re_path(r'^(?P<pk>[0-9]+)/$', WorkflowJobNodeDetail.as_view(), name='workflow_job_node_detail'),

View File

@@ -22,7 +22,6 @@ from awx.api.views import (
WorkflowJobTemplateLabelList,
)
urls = [
re_path(r'^$', WorkflowJobTemplateList.as_view(), name='workflow_job_template_list'),
re_path(r'^(?P<pk>[0-9]+)/$', WorkflowJobTemplateDetail.as_view(), name='workflow_job_template_detail'),

View File

@@ -15,7 +15,6 @@ from awx.api.views import (
WorkflowJobTemplateNodeInstanceGroupsList,
)
urls = [
re_path(r'^$', WorkflowJobTemplateNodeList.as_view(), name='workflow_job_template_node_list'),
re_path(r'^(?P<pk>[0-9]+)/$', WorkflowJobTemplateNodeDetail.as_view(), name='workflow_job_template_node_detail'),

File diff suppressed because it is too large

View File

@@ -15,6 +15,8 @@ from rest_framework import status
from collections import OrderedDict
from ansible_base.lib.utils.schema import extend_schema_if_available
AUTOMATION_ANALYTICS_API_URL_PATH = "/api/tower-analytics/v1"
AWX_ANALYTICS_API_PREFIX = 'analytics'
@@ -38,6 +40,8 @@ class MissingSettings(Exception):
class GetNotAllowedMixin(object):
skip_ai_description = True
def get(self, request, format=None):
return Response(status=status.HTTP_405_METHOD_NOT_ALLOWED)
@@ -46,7 +50,9 @@ class AnalyticsRootView(APIView):
permission_classes = (AnalyticsPermission,)
name = _('Automation Analytics')
swagger_topic = 'Automation Analytics'
resource_purpose = 'automation analytics endpoints'
@extend_schema_if_available(extensions={"x-ai-description": "A list of additional API endpoints related to analytics"})
def get(self, request, format=None):
data = OrderedDict()
data['authorized'] = reverse('api:analytics_authorized', request=request)
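extend_schema_if_available comes from ansible_base (DAB) and its body is not shown in this diff; presumably it applies drf-spectacular's extend_schema when the library is installed and degrades to a no-op otherwise. A hedged sketch of that shape, not the real implementation:
def extend_schema_if_available(**kwargs):
    try:
        from drf_spectacular.utils import extend_schema
    except ImportError:
        return lambda func: func  # no-op decorator without drf-spectacular
    return extend_schema(**kwargs)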
@@ -99,6 +105,8 @@ class AnalyticsGenericView(APIView):
return Response(response.json(), status=response.status_code)
"""
resource_purpose = 'base view for analytics api proxy'
permission_classes = (AnalyticsPermission,)
@staticmethod
@@ -257,67 +265,91 @@ class AnalyticsGenericView(APIView):
class AnalyticsGenericListView(AnalyticsGenericView):
resource_purpose = 'analytics api proxy list view'
@extend_schema_if_available(extensions={"x-ai-description": "Get analytics data from Red Hat Insights"})
def get(self, request, format=None):
return self._send_to_analytics(request, method="GET")
@extend_schema_if_available(extensions={"x-ai-description": "Post query to Red Hat Insights analytics"})
def post(self, request, format=None):
return self._send_to_analytics(request, method="POST")
@extend_schema_if_available(extensions={"x-ai-description": "Get analytics endpoint options"})
def options(self, request, format=None):
return self._send_to_analytics(request, method="OPTIONS")
class AnalyticsGenericDetailView(AnalyticsGenericView):
resource_purpose = 'analytics api proxy detail view'
@extend_schema_if_available(extensions={"x-ai-description": "Get specific analytics resource from Red Hat Insights"})
def get(self, request, slug, format=None):
return self._send_to_analytics(request, method="GET")
@extend_schema_if_available(extensions={"x-ai-description": "Post query for specific analytics resource to Red Hat Insights"})
def post(self, request, slug, format=None):
return self._send_to_analytics(request, method="POST")
@extend_schema_if_available(extensions={"x-ai-description": "Get options for specific analytics resource"})
def options(self, request, slug, format=None):
return self._send_to_analytics(request, method="OPTIONS")
@extend_schema_if_available(
extensions={'x-ai-description': 'Check if the user has access to Red Hat Insights'},
)
class AnalyticsAuthorizedView(AnalyticsGenericListView):
name = _("Authorized")
resource_purpose = 'red hat insights authorization status'
class AnalyticsReportsList(GetNotAllowedMixin, AnalyticsGenericListView):
name = _("Reports")
swagger_topic = "Automation Analytics"
resource_purpose = 'automation analytics reports'
class AnalyticsReportDetail(AnalyticsGenericDetailView):
name = _("Report")
resource_purpose = 'automation analytics report detail'
class AnalyticsReportOptionsList(AnalyticsGenericListView):
name = _("Report Options")
resource_purpose = 'automation analytics report options'
class AnalyticsAdoptionRateList(GetNotAllowedMixin, AnalyticsGenericListView):
name = _("Adoption Rate")
resource_purpose = 'automation analytics adoption rate data'
class AnalyticsEventExplorerList(GetNotAllowedMixin, AnalyticsGenericListView):
name = _("Event Explorer")
resource_purpose = 'automation analytics event explorer data'
class AnalyticsHostExplorerList(GetNotAllowedMixin, AnalyticsGenericListView):
name = _("Host Explorer")
resource_purpose = 'automation analytics host explorer data'
class AnalyticsJobExplorerList(GetNotAllowedMixin, AnalyticsGenericListView):
name = _("Job Explorer")
resource_purpose = 'automation analytics job explorer data'
class AnalyticsProbeTemplatesList(GetNotAllowedMixin, AnalyticsGenericListView):
name = _("Probe Templates")
resource_purpose = 'automation analytics probe templates'
class AnalyticsProbeTemplateForHostsList(GetNotAllowedMixin, AnalyticsGenericListView):
name = _("Probe Template For Hosts")
resource_purpose = 'automation analytics probe templates for hosts'
class AnalyticsRoiTemplatesList(GetNotAllowedMixin, AnalyticsGenericListView):
name = _("ROI Templates")
resource_purpose = 'automation analytics roi templates'

View File

@@ -1,5 +1,7 @@
from collections import OrderedDict
from ansible_base.lib.utils.schema import extend_schema_if_available
from django.utils.translation import gettext_lazy as _
from rest_framework.permissions import IsAuthenticated
@@ -30,6 +32,7 @@ class BulkView(APIView):
]
allowed_methods = ['GET', 'OPTIONS']
@extend_schema_if_available(extensions={"x-ai-description": "Retrieves a list of available bulk actions"})
def get(self, request, format=None):
'''List top level resources'''
data = OrderedDict()
@@ -45,11 +48,13 @@ class BulkJobLaunchView(GenericAPIView):
serializer_class = serializers.BulkJobLaunchSerializer
allowed_methods = ['GET', 'POST', 'OPTIONS']
@extend_schema_if_available(extensions={"x-ai-description": "Get information about bulk job launch endpoint"})
def get(self, request):
data = OrderedDict()
data['detail'] = "Specify a list of unified job templates to launch alongside their launchtime parameters"
return Response(data, status=status.HTTP_200_OK)
@extend_schema_if_available(extensions={"x-ai-description": "Bulk launch job templates"})
def post(self, request):
bulkjob_serializer = serializers.BulkJobLaunchSerializer(data=request.data, context={'request': request})
if bulkjob_serializer.is_valid():
@@ -64,9 +69,11 @@ class BulkHostCreateView(GenericAPIView):
serializer_class = serializers.BulkHostCreateSerializer
allowed_methods = ['GET', 'POST', 'OPTIONS']
@extend_schema_if_available(extensions={"x-ai-description": "Get information about bulk host create endpoint"})
def get(self, request):
return Response({"detail": "Bulk create hosts with this endpoint"}, status=status.HTTP_200_OK)
@extend_schema_if_available(extensions={"x-ai-description": "Bulk create hosts"})
def post(self, request):
serializer = serializers.BulkHostCreateSerializer(data=request.data, context={'request': request})
if serializer.is_valid():
@@ -81,9 +88,11 @@ class BulkHostDeleteView(GenericAPIView):
serializer_class = serializers.BulkHostDeleteSerializer
allowed_methods = ['GET', 'POST', 'OPTIONS']
@extend_schema_if_available(extensions={"x-ai-description": "Get information about bulk host delete endpoint"})
def get(self, request):
return Response({"detail": "Bulk delete hosts with this endpoint"}, status=status.HTTP_200_OK)
@extend_schema_if_available(extensions={"x-ai-description": "Bulk delete hosts"})
def post(self, request):
serializer = serializers.BulkHostDeleteSerializer(data=request.data, context={'request': request})
if serializer.is_valid():
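A hedged usage sketch of the bulk host-create endpoint these descriptions document (the route and payload shape are assumptions inferred from the view and serializer names, not confirmed by this diff):
import requests
# Bulk-create two hosts in inventory 1 (assumed route and payload shape).
requests.post(
    "https://awx.example.com/api/v2/bulk/host_create/",
    auth=("admin", "password"),
    json={"inventory": 1, "hosts": [{"name": "node1"}, {"name": "node2"}]},
)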

View File

@@ -5,6 +5,7 @@ from django.conf import settings
from rest_framework.permissions import AllowAny
from rest_framework.response import Response
from awx.api.generics import APIView
from ansible_base.lib.utils.schema import extend_schema_if_available
from awx.main.scheduler import TaskManager, DependencyManager, WorkflowManager
@@ -14,7 +15,9 @@ class TaskManagerDebugView(APIView):
exclude_from_schema = True
permission_classes = [AllowAny]
prefix = 'Task'
resource_purpose = 'debug task manager'
@extend_schema_if_available(extensions={"x-ai-description": "Trigger task manager scheduling"})
def get(self, request):
TaskManager().schedule()
if not settings.AWX_DISABLE_TASK_MANAGERS:
@@ -29,7 +32,9 @@ class DependencyManagerDebugView(APIView):
exclude_from_schema = True
permission_classes = [AllowAny]
prefix = 'Dependency'
resource_purpose = 'debug dependency manager'
@extend_schema_if_available(extensions={"x-ai-description": "Trigger dependency manager scheduling"})
def get(self, request):
DependencyManager().schedule()
if not settings.AWX_DISABLE_TASK_MANAGERS:
@@ -44,7 +49,9 @@ class WorkflowManagerDebugView(APIView):
exclude_from_schema = True
permission_classes = [AllowAny]
prefix = 'Workflow'
resource_purpose = 'debug workflow manager'
@extend_schema_if_available(extensions={"x-ai-description": "Trigger workflow manager scheduling"})
def get(self, request):
WorkflowManager().schedule()
if not settings.AWX_DISABLE_TASK_MANAGERS:
@@ -58,7 +65,9 @@ class DebugRootView(APIView):
_ignore_model_permissions = True
exclude_from_schema = True
permission_classes = [AllowAny]
resource_purpose = 'debug endpoints root'
@extend_schema_if_available(extensions={"x-ai-description": "List available debug endpoints"})
def get(self, request, format=None):
'''List of available debug urls'''
data = OrderedDict()

View File

@@ -10,9 +10,10 @@ import time
import re
import asn1
from ansible_base.lib.utils.schema import extend_schema_if_available
from awx.api import serializers
from awx.api.generics import GenericAPIView, Response
from awx.api.permissions import IsSystemAdminOrAuditor
from awx.api.permissions import IsSystemAdmin
from awx.main import models
from cryptography import x509
from cryptography.hazmat.primitives import hashes, serialization
@@ -48,8 +49,10 @@ class InstanceInstallBundle(GenericAPIView):
name = _('Install Bundle')
model = models.Instance
serializer_class = serializers.InstanceSerializer
permission_classes = (IsSystemAdminOrAuditor,)
permission_classes = (IsSystemAdmin,)
resource_purpose = 'install bundle'
@extend_schema_if_available(extensions={"x-ai-description": "Generate and download install bundle for an instance"})
def get(self, request, *args, **kwargs):
instance_obj = self.get_object()
@@ -195,8 +198,8 @@ def generate_receptor_tls(instance_obj):
.issuer_name(ca_cert.issuer)
.public_key(csr.public_key())
.serial_number(x509.random_serial_number())
.not_valid_before(datetime.datetime.utcnow())
.not_valid_after(datetime.datetime.utcnow() + datetime.timedelta(days=3650))
.not_valid_before(datetime.datetime.now(datetime.UTC))
.not_valid_after(datetime.datetime.now(datetime.UTC) + datetime.timedelta(days=3650))
.add_extension(
csr.extensions.get_extension_for_class(x509.SubjectAlternativeName).value,
critical=csr.extensions.get_extension_for_class(x509.SubjectAlternativeName).critical,
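The utcnow() calls are swapped out because they return naive datetimes and are deprecated as of Python 3.12; datetime.UTC (an alias of timezone.utc added in 3.11) yields an aware timestamp. A quick illustration:
import datetime
aware = datetime.datetime.now(datetime.UTC)  # tz-aware, replaces utcnow()
assert aware.tzinfo is not None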

View File

@@ -19,6 +19,8 @@ from rest_framework import serializers
# AWX
from awx.main.models import ActivityStream, Inventory, JobTemplate, Role, User, InstanceGroup, InventoryUpdateEvent, InventoryUpdate
from ansible_base.lib.utils.schema import extend_schema_if_available
from awx.api.generics import (
ListCreateAPIView,
RetrieveUpdateDestroyAPIView,
@@ -43,7 +45,6 @@ from awx.api.views.mixin import RelatedJobsPreventDeleteMixin
from awx.api.pagination import UnifiedJobEventPagination
logger = logging.getLogger('awx.api.views.organization')
@@ -55,6 +56,7 @@ class InventoryUpdateEventsList(SubListAPIView):
name = _('Inventory Update Events List')
search_fields = ('stdout',)
pagination_class = UnifiedJobEventPagination
resource_purpose = 'events of an inventory update'
def get_queryset(self):
iu = self.get_parent_object()
@@ -69,11 +71,17 @@ class InventoryUpdateEventsList(SubListAPIView):
class InventoryList(ListCreateAPIView):
model = Inventory
serializer_class = InventorySerializer
resource_purpose = 'inventories'
@extend_schema_if_available(extensions={"x-ai-description": "A list of inventories."})
def get(self, request, *args, **kwargs):
return super().get(request, *args, **kwargs)
class InventoryDetail(RelatedJobsPreventDeleteMixin, RetrieveUpdateDestroyAPIView):
model = Inventory
serializer_class = InventorySerializer
resource_purpose = 'inventory detail'
def update(self, request, *args, **kwargs):
obj = self.get_object()
@@ -100,33 +108,39 @@ class InventoryDetail(RelatedJobsPreventDeleteMixin, RetrieveUpdateDestroyAPIVie
class ConstructedInventoryDetail(InventoryDetail):
serializer_class = ConstructedInventorySerializer
resource_purpose = 'constructed inventory detail'
class ConstructedInventoryList(InventoryList):
serializer_class = ConstructedInventorySerializer
resource_purpose = 'constructed inventories'
def get_queryset(self):
r = super().get_queryset()
return r.filter(kind='constructed')
@extend_schema_if_available(extensions={"x-ai-description": "Get or create input inventory inventory"})
class InventoryInputInventoriesList(SubListAttachDetachAPIView):
model = Inventory
serializer_class = InventorySerializer
parent_model = Inventory
relationship = 'input_inventories'
resource_purpose = 'input inventories of a constructed inventory'
def is_valid_relation(self, parent, sub, created=False):
if sub.kind == 'constructed':
raise serializers.ValidationError({'error': 'You cannot add a constructed inventory to another constructed inventory.'})
@extend_schema_if_available(extensions={"x-ai-description": "Get activity stream for an inventory"})
class InventoryActivityStreamList(SubListAPIView):
model = ActivityStream
serializer_class = ActivityStreamSerializer
parent_model = Inventory
relationship = 'activitystream_set'
search_fields = ('changes',)
resource_purpose = 'activity stream for an inventory'
def get_queryset(self):
parent = self.get_parent_object()
@@ -140,11 +154,13 @@ class InventoryInstanceGroupsList(SubListAttachDetachAPIView):
serializer_class = InstanceGroupSerializer
parent_model = Inventory
relationship = 'instance_groups'
resource_purpose = 'instance groups of an inventory'
class InventoryAccessList(ResourceAccessList):
model = User # needs to be User for AccessLists's
parent_model = Inventory
resource_purpose = 'users who can access the inventory'
class InventoryObjectRolesList(SubListAPIView):
@@ -153,6 +169,7 @@ class InventoryObjectRolesList(SubListAPIView):
parent_model = Inventory
search_fields = ('role_field', 'content_type__model')
deprecated = True
resource_purpose = 'roles of an inventory'
def get_queryset(self):
po = self.get_parent_object()
@@ -165,6 +182,7 @@ class InventoryJobTemplateList(SubListAPIView):
serializer_class = JobTemplateSerializer
parent_model = Inventory
relationship = 'jobtemplates'
resource_purpose = 'job templates using an inventory'
def get_queryset(self):
parent = self.get_parent_object()
@@ -175,8 +193,10 @@ class InventoryJobTemplateList(SubListAPIView):
class InventoryLabelList(LabelSubListCreateAttachDetachView):
parent_model = Inventory
resource_purpose = 'labels of an inventory'
class InventoryCopy(CopyAPIView):
model = Inventory
copy_return_serializer_class = InventorySerializer
resource_purpose = 'copy of an inventory'

View File

@@ -2,6 +2,7 @@
from awx.api.generics import SubListCreateAttachDetachAPIView, RetrieveUpdateAPIView, ListCreateAPIView
from awx.main.models import Label
from awx.api.serializers import LabelSerializer
from ansible_base.lib.utils.schema import extend_schema_if_available
# Django
from django.utils.translation import gettext_lazy as _
@@ -24,9 +25,10 @@ class LabelSubListCreateAttachDetachView(SubListCreateAttachDetachAPIView):
model = Label
serializer_class = LabelSerializer
relationship = 'labels'
resource_purpose = 'labels of a resource'
def unattach(self, request, *args, **kwargs):
(sub_id, res) = super().unattach_validate(request)
sub_id, res = super().unattach_validate(request)
if res:
return res
@@ -39,6 +41,7 @@ class LabelSubListCreateAttachDetachView(SubListCreateAttachDetachAPIView):
return res
@extend_schema_if_available(extensions={"x-ai-description": "Create or attach a label to a resource"})
def post(self, request, *args, **kwargs):
# If a label already exists in the database, attach it instead of erroring out
# that it already exists
@@ -61,9 +64,11 @@ class LabelSubListCreateAttachDetachView(SubListCreateAttachDetachAPIView):
class LabelDetail(RetrieveUpdateAPIView):
model = Label
serializer_class = LabelSerializer
resource_purpose = 'label detail'
class LabelList(ListCreateAPIView):
name = _("Labels")
model = Label
serializer_class = LabelSerializer
resource_purpose = 'labels'

View File

@@ -2,6 +2,7 @@
# All Rights Reserved.
from django.utils.translation import gettext_lazy as _
from ansible_base.lib.utils.schema import extend_schema_if_available
from awx.api.generics import APIView, Response
from awx.api.permissions import IsSystemAdminOrAuditor
@@ -13,7 +14,9 @@ class MeshVisualizer(APIView):
name = _("Mesh Visualizer")
permission_classes = (IsSystemAdminOrAuditor,)
swagger_topic = "System Configuration"
resource_purpose = 'mesh network topology visualization data'
@extend_schema_if_available(extensions={"x-ai-description": "Get mesh network topology visualization data"})
def get(self, request, format=None):
data = {
'nodes': InstanceNodeSerializer(Instance.objects.all(), many=True).data,

View File

@@ -7,13 +7,13 @@ import logging
# Django
from django.conf import settings
from django.utils.translation import gettext_lazy as _
from ansible_base.lib.utils.schema import extend_schema_if_available
# Django REST Framework
from rest_framework.permissions import AllowAny
from rest_framework.response import Response
from rest_framework.exceptions import PermissionDenied
# AWX
# from awx.main.analytics import collectors
import awx.main.analytics.subsystem_metrics as s_metrics
@@ -22,13 +22,13 @@ from awx.api import renderers
from awx.api.generics import APIView
logger = logging.getLogger('awx.analytics')
class MetricsView(APIView):
name = _('Metrics')
swagger_topic = 'Metrics'
resource_purpose = 'prometheus metrics data'
renderer_classes = [renderers.PlainTextRenderer, renderers.PrometheusJSONRenderer, renderers.BrowsableAPIRenderer]
@@ -37,6 +37,7 @@ class MetricsView(APIView):
self.permission_classes = (AllowAny,)
return super(APIView, self).initialize_request(request, *args, **kwargs)
@extend_schema_if_available(extensions={"x-ai-description": "Get Prometheus metrics data"})
def get(self, request):
'''Show Metrics Details'''
if settings.ALLOW_METRICS_FOR_ANONYMOUS_USERS or request.user.is_superuser or request.user.is_system_auditor:

View File

@@ -53,21 +53,20 @@ from awx.api.serializers import (
CredentialSerializer,
)
from awx.api.views.mixin import RelatedJobsPreventDeleteMixin, OrganizationCountsMixin, OrganizationInstanceGroupMembershipMixin
from awx.api.views import immutablesharedfields
logger = logging.getLogger('awx.api.views.organization')
@immutablesharedfields
class OrganizationList(OrganizationCountsMixin, ListCreateAPIView):
model = Organization
serializer_class = OrganizationSerializer
resource_purpose = 'organizations'
@immutablesharedfields
class OrganizationDetail(RelatedJobsPreventDeleteMixin, RetrieveUpdateDestroyAPIView):
model = Organization
serializer_class = OrganizationSerializer
resource_purpose = 'organization detail'
def get_serializer_context(self, *args, **kwargs):
full_context = super(OrganizationDetail, self).get_serializer_context(*args, **kwargs)
@@ -105,24 +104,25 @@ class OrganizationInventoriesList(SubListAPIView):
serializer_class = InventorySerializer
parent_model = Organization
relationship = 'inventories'
resource_purpose = 'inventories of an organization'
@immutablesharedfields
class OrganizationUsersList(BaseUsersList):
model = User
serializer_class = UserSerializer
parent_model = Organization
relationship = 'member_role.members'
ordering = ('username',)
resource_purpose = 'users of an organization'
@immutablesharedfields
class OrganizationAdminsList(BaseUsersList):
model = User
serializer_class = UserSerializer
parent_model = Organization
relationship = 'admin_role.members'
ordering = ('username',)
resource_purpose = 'administrators of an organization'
class OrganizationProjectsList(SubListCreateAPIView):
@@ -130,6 +130,7 @@ class OrganizationProjectsList(SubListCreateAPIView):
serializer_class = ProjectSerializer
parent_model = Organization
parent_key = 'organization'
resource_purpose = 'projects of an organization'
class OrganizationExecutionEnvironmentsList(SubListCreateAttachDetachAPIView):
@@ -139,6 +140,7 @@ class OrganizationExecutionEnvironmentsList(SubListCreateAttachDetachAPIView):
relationship = 'executionenvironments'
parent_key = 'organization'
swagger_topic = "Execution Environments"
resource_purpose = 'execution environments of an organization'
class OrganizationJobTemplatesList(SubListCreateAPIView):
@@ -146,6 +148,7 @@ class OrganizationJobTemplatesList(SubListCreateAPIView):
serializer_class = JobTemplateSerializer
parent_model = Organization
parent_key = 'organization'
resource_purpose = 'job templates of an organization'
class OrganizationWorkflowJobTemplatesList(SubListCreateAPIView):
@@ -153,15 +156,16 @@ class OrganizationWorkflowJobTemplatesList(SubListCreateAPIView):
serializer_class = WorkflowJobTemplateSerializer
parent_model = Organization
parent_key = 'organization'
resource_purpose = 'workflow job templates of an organization'
@immutablesharedfields
class OrganizationTeamsList(SubListCreateAttachDetachAPIView):
model = Team
serializer_class = TeamSerializer
parent_model = Organization
relationship = 'teams'
parent_key = 'organization'
resource_purpose = 'teams of an organization'
class OrganizationActivityStreamList(SubListAPIView):
@@ -170,6 +174,7 @@ class OrganizationActivityStreamList(SubListAPIView):
parent_model = Organization
relationship = 'activitystream_set'
search_fields = ('changes',)
resource_purpose = 'activity stream for an organization'
class OrganizationNotificationTemplatesList(SubListCreateAttachDetachAPIView):
@@ -178,28 +183,34 @@ class OrganizationNotificationTemplatesList(SubListCreateAttachDetachAPIView):
parent_model = Organization
relationship = 'notification_templates'
parent_key = 'organization'
resource_purpose = 'notification templates of an organization'
class OrganizationNotificationTemplatesAnyList(SubListCreateAttachDetachAPIView):
model = NotificationTemplate
serializer_class = NotificationTemplateSerializer
parent_model = Organization
resource_purpose = 'base view for notification templates of an organization'
class OrganizationNotificationTemplatesStartedList(OrganizationNotificationTemplatesAnyList):
relationship = 'notification_templates_started'
resource_purpose = 'notification templates for job started events of an organization'
class OrganizationNotificationTemplatesErrorList(OrganizationNotificationTemplatesAnyList):
relationship = 'notification_templates_error'
resource_purpose = 'notification templates for job error events of an organization'
class OrganizationNotificationTemplatesSuccessList(OrganizationNotificationTemplatesAnyList):
relationship = 'notification_templates_success'
resource_purpose = 'notification templates for job success events of an organization'
class OrganizationNotificationTemplatesApprovalList(OrganizationNotificationTemplatesAnyList):
relationship = 'notification_templates_approvals'
resource_purpose = 'notification templates for workflow approval events of an organization'
class OrganizationInstanceGroupsList(OrganizationInstanceGroupMembershipMixin, SubListAttachDetachAPIView):
@@ -208,6 +219,7 @@ class OrganizationInstanceGroupsList(OrganizationInstanceGroupMembershipMixin, S
parent_model = Organization
relationship = 'instance_groups'
filter_read_permission = False
resource_purpose = 'instance groups of an organization'
class OrganizationGalaxyCredentialsList(SubListAttachDetachAPIView):
@@ -216,6 +228,7 @@ class OrganizationGalaxyCredentialsList(SubListAttachDetachAPIView):
parent_model = Organization
relationship = 'galaxy_credentials'
filter_read_permission = False
resource_purpose = 'galaxy credentials of an organization'
def is_valid_relation(self, parent, sub, created=False):
if sub.kind != 'galaxy_api_token':
@@ -225,6 +238,7 @@ class OrganizationGalaxyCredentialsList(SubListAttachDetachAPIView):
class OrganizationAccessList(ResourceAccessList):
model = User # needs to be User for AccessLists's
parent_model = Organization
resource_purpose = 'users who can access the organization'
class OrganizationObjectRolesList(SubListAPIView):
@@ -233,6 +247,7 @@ class OrganizationObjectRolesList(SubListAPIView):
parent_model = Organization
search_fields = ('role_field', 'content_type__model')
deprecated = True
resource_purpose = 'roles of an organization'
def get_queryset(self):
po = self.get_parent_object()

View File

@@ -8,6 +8,8 @@ import operator
from collections import OrderedDict
from django.conf import settings
from django.core.cache import cache
from django.db import connection
from django.utils.encoding import smart_str
from django.utils.decorators import method_decorator
from django.views.decorators.csrf import ensure_csrf_cookie
@@ -21,11 +23,14 @@ from rest_framework import status
import requests
from ansible_base.lib.utils.schema import extend_schema_if_available
from awx import MODE
from awx.api.generics import APIView
from awx.conf.registry import settings_registry
from awx.main.analytics import all_collectors
from awx.main.ha import is_ha_environment
from awx.main.tasks.system import clear_setting_cache
from awx.main.utils import get_awx_version, get_custom_venv_choices
from awx.main.utils.licensing import validate_entitlement_manifest
from awx.api.versioning import URLPathVersioning, reverse, drf_reverse
@@ -43,8 +48,10 @@ class ApiRootView(APIView):
name = _('REST API')
versioning_class = URLPathVersioning
swagger_topic = 'Versioning'
resource_purpose = 'api root and version information'
@method_decorator(ensure_csrf_cookie)
@extend_schema_if_available(extensions={"x-ai-description": "List supported API versions"})
def get(self, request, format=None):
'''List supported API versions'''
v2 = reverse('api:api_v2_root_view', request=request, kwargs={'version': 'v2'})
@@ -56,14 +63,16 @@ class ApiRootView(APIView):
data['custom_login_info'] = settings.CUSTOM_LOGIN_INFO
data['login_redirect_override'] = settings.LOGIN_REDIRECT_OVERRIDE
if MODE == 'development':
data['swagger'] = drf_reverse('api:schema-swagger-ui')
data['docs'] = drf_reverse('api:schema-swagger-ui')
return Response(data)
class ApiVersionRootView(APIView):
permission_classes = (AllowAny,)
swagger_topic = 'Versioning'
resource_purpose = 'api top-level resources'
@extend_schema_if_available(extensions={"x-ai-description": "List top-level API resources"})
def get(self, request, format=None):
'''List top level resources'''
data = OrderedDict()
@@ -123,6 +132,7 @@ class ApiVersionRootView(APIView):
class ApiV2RootView(ApiVersionRootView):
name = _('Version 2')
resource_purpose = 'api v2 root'
class ApiV2PingView(APIView):
@@ -134,7 +144,11 @@ class ApiV2PingView(APIView):
authentication_classes = ()
name = _('Ping')
swagger_topic = 'System Configuration'
resource_purpose = 'basic instance information'
@extend_schema_if_available(
extensions={'x-ai-description': 'Return basic information about this instance'},
)
def get(self, request, format=None):
"""Return some basic information about this instance
@@ -169,24 +183,59 @@ class ApiV2SubscriptionView(APIView):
permission_classes = (IsAuthenticated,)
name = _('Subscriptions')
swagger_topic = 'System Configuration'
resource_purpose = 'aap subscription validation'
def check_permissions(self, request):
super(ApiV2SubscriptionView, self).check_permissions(request)
if not request.user.is_superuser and request.method.lower() not in {'options', 'head'}:
self.permission_denied(request) # Raises PermissionDenied exception.
@extend_schema_if_available(
extensions={'x-ai-description': 'List valid AAP subscriptions'},
)
def post(self, request):
data = request.data.copy()
if data.get('subscriptions_client_secret') == '$encrypted$':
data['subscriptions_client_secret'] = settings.SUBSCRIPTIONS_CLIENT_SECRET
try:
user, pw = data.get('subscriptions_client_id'), data.get('subscriptions_client_secret')
user = None
pw = None
basic_auth = False
# determine if the credentials are for basic auth or not
if data.get('subscriptions_client_id'):
user, pw = data.get('subscriptions_client_id'), data.get('subscriptions_client_secret')
if pw == '$encrypted$':
pw = settings.SUBSCRIPTIONS_CLIENT_SECRET
elif data.get('subscriptions_username'):
user, pw = data.get('subscriptions_username'), data.get('subscriptions_password')
if pw == '$encrypted$':
pw = settings.SUBSCRIPTIONS_PASSWORD
basic_auth = True
if not user or not pw:
return Response({"error": _("Missing subscription credentials")}, status=status.HTTP_400_BAD_REQUEST)
with set_environ(**settings.AWX_TASK_ENV):
validated = get_licenser().validate_rh(user, pw)
if user:
settings.SUBSCRIPTIONS_CLIENT_ID = data['subscriptions_client_id']
if pw:
settings.SUBSCRIPTIONS_CLIENT_SECRET = data['subscriptions_client_secret']
validated = get_licenser().validate_rh(user, pw, basic_auth)
# update settings if the credentials were valid
if basic_auth:
if user:
settings.SUBSCRIPTIONS_USERNAME = user
if pw:
settings.SUBSCRIPTIONS_PASSWORD = pw
# mutual exclusion for basic auth and service account
# only one should be set at a given time so that
# config/attach/ knows which credentials to use
settings.SUBSCRIPTIONS_CLIENT_ID = ""
settings.SUBSCRIPTIONS_CLIENT_SECRET = ""
else:
if user:
settings.SUBSCRIPTIONS_CLIENT_ID = user
if pw:
settings.SUBSCRIPTIONS_CLIENT_SECRET = pw
# mutual exclusion for basic auth and service account
settings.SUBSCRIPTIONS_USERNAME = ""
settings.SUBSCRIPTIONS_PASSWORD = ""
except Exception as exc:
msg = _("Invalid Subscription")
if isinstance(exc, TokenError) or (
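The two credential flows are mutually exclusive: whichever pair is submitted, the other pair is blanked so config/attach/ knows which credentials to use. A hedged sketch of the two request shapes (the route is assumed from the view's role; all values are placeholders):
import requests
base = "https://awx.example.com/api/v2/config/subscriptions/"  # assumed route
auth = ("admin", "password")
# Service-account flow: client id/secret; basic_auth stays False server-side.
requests.post(base, auth=auth, json={
    "subscriptions_client_id": "my-client-id",
    "subscriptions_client_secret": "my-client-secret",
})
# Basic-auth flow: username/password; the client id/secret pair gets cleared.
requests.post(base, auth=auth, json={
    "subscriptions_username": "rhn-user",
    "subscriptions_password": "rhn-pass",
})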
@@ -210,24 +259,37 @@ class ApiV2AttachView(APIView):
permission_classes = (IsAuthenticated,)
name = _('Attach Subscription')
swagger_topic = 'System Configuration'
resource_purpose = 'subscription attachment'
def check_permissions(self, request):
super(ApiV2AttachView, self).check_permissions(request)
if not request.user.is_superuser and request.method.lower() not in {'options', 'head'}:
self.permission_denied(request) # Raises PermissionDenied exception.
@extend_schema_if_available(
extensions={'x-ai-description': 'Attach a subscription'},
)
def post(self, request):
data = request.data.copy()
subscription_id = data.get('subscription_id', None)
if not subscription_id:
return Response({"error": _("No subscription ID provided.")}, status=status.HTTP_400_BAD_REQUEST)
# Ensure we always use the latest subscription credentials
cache.delete_many(['SUBSCRIPTIONS_CLIENT_ID', 'SUBSCRIPTIONS_CLIENT_SECRET', 'SUBSCRIPTIONS_USERNAME', 'SUBSCRIPTIONS_PASSWORD'])
user = getattr(settings, 'SUBSCRIPTIONS_CLIENT_ID', None)
pw = getattr(settings, 'SUBSCRIPTIONS_CLIENT_SECRET', None)
basic_auth = False
if not (user and pw):
user = getattr(settings, 'SUBSCRIPTIONS_USERNAME', None)
pw = getattr(settings, 'SUBSCRIPTIONS_PASSWORD', None)
basic_auth = True
if not (user and pw):
return Response({"error": _("Missing subscription credentials")}, status=status.HTTP_400_BAD_REQUEST)
if subscription_id and user and pw:
data = request.data.copy()
try:
with set_environ(**settings.AWX_TASK_ENV):
validated = get_licenser().validate_rh(user, pw)
validated = get_licenser().validate_rh(user, pw, basic_auth)
except Exception as exc:
msg = _("Invalid Subscription")
if isinstance(exc, requests.exceptions.HTTPError) and getattr(getattr(exc, 'response', None), 'status_code', None) == 401:
@@ -241,10 +303,12 @@ class ApiV2AttachView(APIView):
else:
logger.exception(smart_str(u"Invalid subscription submitted."), extra=dict(actor=request.user.username))
return Response({"error": msg}, status=status.HTTP_400_BAD_REQUEST)
for sub in validated:
if sub['subscription_id'] == subscription_id:
sub['valid_key'] = True
settings.LICENSE = sub
connection.on_commit(lambda: clear_setting_cache.delay(['LICENSE']))
return Response(sub)
return Response({"error": _("Error processing subscription metadata.")}, status=status.HTTP_400_BAD_REQUEST)
@@ -254,17 +318,20 @@ class ApiV2ConfigView(APIView):
permission_classes = (IsAuthenticated,)
name = _('Configuration')
swagger_topic = 'System Configuration'
resource_purpose = 'system configuration and license management'
def check_permissions(self, request):
super(ApiV2ConfigView, self).check_permissions(request)
if not request.user.is_superuser and request.method.lower() not in {'options', 'head', 'get'}:
self.permission_denied(request) # Raises PermissionDenied exception.
@extend_schema_if_available(
extensions={'x-ai-description': 'Return various configuration settings'},
)
def get(self, request, format=None):
'''Return various sitewide configuration settings'''
license_data = get_licenser().validate()
if not license_data.get('valid_key', False):
license_data = {}
@@ -299,6 +366,7 @@ class ApiV2ConfigView(APIView):
return Response(data)
@extend_schema_if_available(extensions={"x-ai-description": "Add or update a subscription manifest license"})
def post(self, request):
if not isinstance(request.data, dict):
return Response({"error": _("Invalid subscription data")}, status=status.HTTP_400_BAD_REQUEST)
@@ -328,6 +396,7 @@ class ApiV2ConfigView(APIView):
try:
license_data_validated = get_licenser().license_from_manifest(license_data)
connection.on_commit(lambda: clear_setting_cache.delay(['LICENSE']))
except Exception:
logger.warning(smart_str(u"Invalid subscription submitted."), extra=dict(actor=request.user.username))
return Response({"error": _("Invalid License")}, status=status.HTTP_400_BAD_REQUEST)
@@ -343,9 +412,13 @@ class ApiV2ConfigView(APIView):
logger.warning(smart_str(u"Invalid subscription submitted."), extra=dict(actor=request.user.username))
return Response({"error": _("Invalid subscription")}, status=status.HTTP_400_BAD_REQUEST)
@extend_schema_if_available(
extensions={'x-ai-description': 'Remove the current subscription'},
)
def delete(self, request):
try:
settings.LICENSE = {}
connection.on_commit(lambda: clear_setting_cache.delay(['LICENSE']))
return Response(status=status.HTTP_204_NO_CONTENT)
except Exception:
# FIX: Log
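
A minimal sketch of the credential precedence implemented in ApiV2SubscriptionView.post above: service-account client ID/secret wins, basic-auth username/password is the fallback, and only the winning pair is persisted while the other is blanked so config/attach/ knows which credentials to use. select_subscription_credentials is a hypothetical helper written for illustration, not code from this diff:

    def select_subscription_credentials(data, stored_client_secret, stored_password):
        """Mirror of the branching in ApiV2SubscriptionView.post (illustrative only)."""
        user = pw = None
        basic_auth = False
        if data.get('subscriptions_client_id'):
            # service-account credentials take precedence
            user, pw = data['subscriptions_client_id'], data.get('subscriptions_client_secret')
            if pw == '$encrypted$':
                pw = stored_client_secret
        elif data.get('subscriptions_username'):
            user, pw = data['subscriptions_username'], data.get('subscriptions_password')
            if pw == '$encrypted$':
                pw = stored_password
            basic_auth = True
        return user, pw, basic_auth

validate_rh(user, pw, basic_auth) then receives the flag so it can pick the matching Red Hat authentication style.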


@@ -11,6 +11,7 @@ from rest_framework import status
from rest_framework.exceptions import PermissionDenied
from rest_framework.permissions import AllowAny
from rest_framework.response import Response
from ansible_base.lib.utils.schema import extend_schema_if_available
from awx.api import serializers
from awx.api.generics import APIView, GenericAPIView
@@ -24,6 +25,7 @@ logger = logging.getLogger('awx.api.views.webhooks')
class WebhookKeyView(GenericAPIView):
serializer_class = serializers.EmptySerializer
permission_classes = (WebhookKeyPermission,)
resource_purpose = 'webhook key management'
def get_queryset(self):
qs_models = {'job_templates': JobTemplate, 'workflow_job_templates': WorkflowJobTemplate}
@@ -31,11 +33,13 @@ class WebhookKeyView(GenericAPIView):
return super().get_queryset()
@extend_schema_if_available(extensions={"x-ai-description": "Get the webhook key for a template"})
def get(self, request, *args, **kwargs):
obj = self.get_object()
return Response({'webhook_key': obj.webhook_key})
@extend_schema_if_available(extensions={"x-ai-description": "Rotate the webhook key for a template"})
def post(self, request, *args, **kwargs):
obj = self.get_object()
obj.rotate_webhook_key()
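
Every endpoint in this changeset gains the same x-ai-description annotation. A sketch of the pattern follows; judging by its name, extend_schema_if_available presumably degrades to a no-op when drf-spectacular is not installed, but that behavior is an assumption here, not something the diff confirms:

    from rest_framework.response import Response

    from ansible_base.lib.utils.schema import extend_schema_if_available
    from awx.api.generics import APIView

    class ExampleView(APIView):
        resource_purpose = 'illustrative example only'

        @extend_schema_if_available(extensions={'x-ai-description': 'One-line purpose surfaced to API clients'})
        def get(self, request, format=None):
            return Response({'ok': True})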
@@ -52,6 +56,7 @@ class WebhookReceiverBase(APIView):
authentication_classes = ()
ref_keys = {}
resource_purpose = 'webhook receiver for triggering jobs'
def get_queryset(self):
qs_models = {'job_templates': JobTemplate, 'workflow_job_templates': WorkflowJobTemplate}
@@ -127,7 +132,8 @@ class WebhookReceiverBase(APIView):
raise PermissionDenied
@csrf_exempt
def post(self, request, *args, **kwargs):
@extend_schema_if_available(extensions={"x-ai-description": "Receive a webhook event and trigger a job"})
def post(self, request, *args, **kwargs_in):
# Ensure that the full contents of the request are captured for multiple uses.
request.body
@@ -175,6 +181,7 @@ class WebhookReceiverBase(APIView):
class GithubWebhookReceiver(WebhookReceiverBase):
service = 'github'
resource_purpose = 'github webhook receiver'
ref_keys = {
'pull_request': 'pull_request.head.sha',
@@ -212,6 +219,7 @@ class GithubWebhookReceiver(WebhookReceiverBase):
class GitlabWebhookReceiver(WebhookReceiverBase):
service = 'gitlab'
resource_purpose = 'gitlab webhook receiver'
ref_keys = {'Push Hook': 'checkout_sha', 'Tag Push Hook': 'checkout_sha', 'Merge Request Hook': 'object_attributes.last_commit.id'}
@@ -250,6 +258,7 @@ class GitlabWebhookReceiver(WebhookReceiverBase):
class BitbucketDcWebhookReceiver(WebhookReceiverBase):
service = 'bitbucket_dc'
resource_purpose = 'bitbucket data center webhook receiver'
ref_keys = {
'repo:refs_changed': 'changes.0.toHash',


@@ -6,7 +6,7 @@ import urllib.parse as urlparse
from collections import OrderedDict
# Django
from django.core.validators import URLValidator, _lazy_re_compile
from django.core.validators import URLValidator, DomainNameValidator, _lazy_re_compile
from django.utils.translation import gettext_lazy as _
# Django REST Framework
@@ -160,10 +160,11 @@ class StringListIsolatedPathField(StringListField):
class URLField(CharField):
# these lines set up a custom regex that allow numbers in the
# top-level domain
tld_re = (
r'\.' # dot
r'(?!-)' # can't start with a dash
r'(?:[a-z' + URLValidator.ul + r'0-9' + '-]{2,63}' # domain label, this line was changed from the original URLValidator
r'(?:[a-z' + DomainNameValidator.ul + r'0-9' + '-]{2,63}' # domain label, this line was changed from the original URLValidator
r'|xn--[a-z0-9]{1,59})' # or punycode label
r'(?<!-)' # can't end with a dash
r'\.?' # may have a trailing dot
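
The only change is the source of the unicode-letter range (URLValidator.ul becomes DomainNameValidator.ul, the latter shipping in newer Django releases); the assembled fragment still accepts digits in the top-level label. A sketch exercising just this fragment, assuming DomainNameValidator.ul is the same '\u00a1-\uffff' range; the real URLField regex has more parts than shown here:

    import re

    from django.core.validators import DomainNameValidator

    tld_re = (
        r'\.'  # dot
        r'(?!-)'  # can't start with a dash
        r'(?:[a-z' + DomainNameValidator.ul + r'0-9' + '-]{2,63}'
        r'|xn--[a-z0-9]{1,59})'  # or punycode label
        r'(?<!-)'  # can't end with a dash
        r'\.?'  # may have a trailing dot
    )
    assert re.search(tld_re + '$', 'example.f5', re.IGNORECASE)  # digit in the TLD is accepted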


@@ -5,7 +5,6 @@ from django.urls import re_path
from awx.conf.views import SettingCategoryList, SettingSingletonDetail, SettingLoggingTest
urlpatterns = [
re_path(r'^$', SettingCategoryList.as_view(), name='setting_category_list'),
re_path(r'^(?P<category_slug>[a-z0-9-]+)/$', SettingSingletonDetail.as_view(), name='setting_singleton_detail'),


@@ -31,7 +31,7 @@ from awx.conf.models import Setting
from awx.conf.serializers import SettingCategorySerializer, SettingSingletonSerializer
from awx.conf import settings_registry
from awx.main.utils.external_logging import reconfigure_rsyslog
from ansible_base.lib.utils.schema import extend_schema_if_available
SettingCategory = collections.namedtuple('SettingCategory', ('url', 'slug', 'name'))
@@ -42,6 +42,10 @@ class SettingCategoryList(ListAPIView):
filter_backends = []
name = _('Setting Categories')
@extend_schema_if_available(extensions={"x-ai-description": "A list of additional API endpoints related to settings."})
def get(self, request, *args, **kwargs):
return super().get(request, *args, **kwargs)
def get_queryset(self):
setting_categories = []
categories = settings_registry.get_registered_categories()
@@ -63,6 +67,10 @@ class SettingSingletonDetail(RetrieveUpdateDestroyAPIView):
filter_backends = []
name = _('Setting Detail')
@extend_schema_if_available(extensions={"x-ai-description": "Update system settings."})
def patch(self, request, *args, **kwargs):
return super().patch(request, *args, **kwargs)
def get_queryset(self):
self.category_slug = self.kwargs.get('category_slug', 'all')
all_category_slugs = list(settings_registry.get_registered_categories().keys())


@@ -639,7 +639,9 @@ class UserAccess(BaseAccess):
prefetch_related = ('resource',)
def filtered_queryset(self):
if settings.ORG_ADMINS_CAN_SEE_ALL_USERS and (self.user.admin_of_organizations.exists() or self.user.auditor_of_organizations.exists()):
if settings.ORG_ADMINS_CAN_SEE_ALL_USERS and (
Organization.access_qs(self.user, 'change').exists() or Organization.access_qs(self.user, 'audit').exists()
):
qs = User.objects.all()
else:
qs = (
@@ -1224,7 +1226,9 @@ class TeamAccess(BaseAccess):
)
def filtered_queryset(self):
if settings.ORG_ADMINS_CAN_SEE_ALL_USERS and (self.user.admin_of_organizations.exists() or self.user.auditor_of_organizations.exists()):
if settings.ORG_ADMINS_CAN_SEE_ALL_USERS and (
Organization.access_qs(self.user, 'change').exists() or Organization.access_qs(self.user, 'audit').exists()
):
return self.model.objects.all()
return self.model.objects.filter(
Q(organization__in=Organization.accessible_pk_qs(self.user, 'member_role')) | Q(pk__in=self.model.accessible_pk_qs(self.user, 'read_role'))
@@ -2564,7 +2568,7 @@ class NotificationTemplateAccess(BaseAccess):
if settings.ANSIBLE_BASE_ROLE_SYSTEM_ACTIVATED:
return self.model.access_qs(self.user, 'view')
return self.model.objects.filter(
Q(organization__in=Organization.access_qs(self.user, 'add_notificationtemplate')) | Q(organization__in=self.user.auditor_of_organizations)
Q(organization__in=Organization.access_qs(self.user, 'add_notificationtemplate')) | Q(organization__in=Organization.access_qs(self.user, 'audit'))
).distinct()
@check_superuser
@@ -2599,7 +2603,7 @@ class NotificationAccess(BaseAccess):
def filtered_queryset(self):
return self.model.objects.filter(
Q(notification_template__organization__in=Organization.access_qs(self.user, 'add_notificationtemplate'))
| Q(notification_template__organization__in=self.user.auditor_of_organizations)
| Q(notification_template__organization__in=Organization.access_qs(self.user, 'audit'))
).distinct()
def can_delete(self, obj):


@@ -1,15 +1,17 @@
# Python
import logging
# Dispatcherd
from dispatcherd.publish import task
# AWX
from awx.main.analytics.subsystem_metrics import DispatcherMetrics, CallbackReceiverMetrics
from awx.main.dispatch.publish import task as task_awx
from awx.main.dispatch import get_task_queuename
logger = logging.getLogger('awx.main.scheduler')
@task_awx(queue=get_task_queuename)
@task(queue=get_task_queuename, timeout=300, on_duplicate='discard')
def send_subsystem_metrics():
DispatcherMetrics().send_metrics()
CallbackReceiverMetrics().send_metrics()


@@ -1,8 +1,6 @@
import datetime
import asyncio
import logging
import redis
import redis.asyncio
import re
from prometheus_client import (
@@ -15,7 +13,7 @@ from prometheus_client import (
)
from django.conf import settings
from awx.main.utils.redis import get_redis_client, get_redis_client_async
BROADCAST_WEBSOCKET_REDIS_KEY_NAME = 'broadcast_websocket_stats'
@@ -66,6 +64,8 @@ class FixedSlidingWindow:
class RelayWebsocketStatsManager:
_redis_client = None # Cached Redis client for get_stats_sync()
def __init__(self, local_hostname):
self._local_hostname = local_hostname
self._stats = dict()
@@ -80,7 +80,7 @@ class RelayWebsocketStatsManager:
async def run_loop(self):
try:
redis_conn = await redis.asyncio.Redis.from_url(settings.BROKER_URL)
redis_conn = get_redis_client_async()
while True:
stats_data_str = ''.join(stat.serialize() for stat in self._stats.values())
await redis_conn.set(self._redis_key, stats_data_str)
@@ -103,8 +103,10 @@ class RelayWebsocketStatsManager:
"""
Stringified version of all the stats
"""
redis_conn = redis.Redis.from_url(settings.BROKER_URL)
stats_str = redis_conn.get(BROADCAST_WEBSOCKET_REDIS_KEY_NAME) or b''
# Reuse cached Redis client to avoid creating new connection pools on every call
if cls._redis_client is None:
cls._redis_client = get_redis_client()
stats_str = cls._redis_client.get(BROADCAST_WEBSOCKET_REDIS_KEY_NAME) or b''
return parser.text_string_to_metric_families(stats_str.decode('UTF-8'))
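
The class-level _redis_client cache is the point of the get_stats_sync change: one connection pool per process instead of a new one per call. The same pattern in isolation (StatsReader is an illustrative name; get_redis_client is the AWX helper imported above):

    class StatsReader:
        _redis_client = None  # shared per process, created lazily

        @classmethod
        def read(cls, key):
            if cls._redis_client is None:
                cls._redis_client = get_redis_client()  # builds the pool exactly once
            return cls._redis_client.get(key) or b''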


@@ -487,9 +487,7 @@ def unified_jobs_table(since, full_path, until, **kwargs):
OR (main_unifiedjob.finished > '{0}' AND main_unifiedjob.finished <= '{1}'))
AND main_unifiedjob.launch_type != 'sync'
ORDER BY main_unifiedjob.id ASC) TO STDOUT WITH CSV HEADER
'''.format(
since.isoformat(), until.isoformat()
)
'''.format(since.isoformat(), until.isoformat())
return _copy_table(table='unified_jobs', query=unified_job_query, path=full_path)
@@ -550,9 +548,7 @@ def workflow_job_node_table(since, full_path, until, **kwargs):
) always_nodes ON main_workflowjobnode.id = always_nodes.from_workflowjobnode_id
WHERE (main_workflowjobnode.modified > '{}' AND main_workflowjobnode.modified <= '{}')
ORDER BY main_workflowjobnode.id ASC) TO STDOUT WITH CSV HEADER
'''.format(
since.isoformat(), until.isoformat()
)
'''.format(since.isoformat(), until.isoformat())
return _copy_table(table='workflow_job_node', query=workflow_job_node_query, path=full_path)


@@ -0,0 +1,41 @@
import http.client
import socket
import urllib.error
import urllib.request
import logging
from django.conf import settings
logger = logging.getLogger(__name__)
def get_dispatcherd_metrics(request):
metrics_cfg = settings.METRICS_SUBSYSTEM_CONFIG.get('server', {}).get(settings.METRICS_SERVICE_DISPATCHER, {})
host = metrics_cfg.get('host', 'localhost')
port = metrics_cfg.get('port', 8015)
metrics_filter = []
if request is not None and hasattr(request, "query_params"):
try:
nodes_filter = request.query_params.getlist("node")
except Exception:
nodes_filter = []
if nodes_filter and settings.CLUSTER_HOST_ID not in nodes_filter:
return ''
try:
metrics_filter = request.query_params.getlist("metric")
except Exception:
metrics_filter = []
if metrics_filter:
# Right now we have no way of filtering the dispatcherd metrics
# so just avoid getting in the way if another metric is filtered for
return ''
url = f"http://{host}:{port}/metrics"
try:
with urllib.request.urlopen(url, timeout=1.0) as response:
payload = response.read()
if not payload:
return ''
return payload.decode('utf-8')
except (urllib.error.URLError, UnicodeError, socket.timeout, TimeoutError, http.client.HTTPException) as exc:
logger.debug(f"Failed to collect dispatcherd metrics from {url}: {exc}")
return ''
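
Both filters short-circuit before any network call. Assuming the standard /api/v2/metrics/ endpoint and a CLUSTER_HOST_ID of 'awx-1', the behavior sketches out as:

    # GET /api/v2/metrics/?node=awx-2   -> '' (this host filtered out; dispatcherd never contacted)
    # GET /api/v2/metrics/?metric=xyz   -> '' (dispatcherd output cannot be filtered per-metric yet)
    # GET /api/v2/metrics/              -> raw Prometheus text proxied from
    #                                      http://localhost:8015/metrics, or '' on any
    #                                      timeout/connection error (logged at debug level)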


@@ -14,6 +14,8 @@ from rest_framework.request import Request
from awx.main.consumers import emit_channel_notification
from awx.main.utils import is_testing
from awx.main.utils.redis import get_redis_client
from .dispatcherd_metrics import get_dispatcherd_metrics
root_key = settings.SUBSYSTEM_METRICS_REDIS_KEY_PREFIX
logger = logging.getLogger('awx.main.analytics')
@@ -44,11 +46,12 @@ class MetricsServer(MetricsServerSettings):
class BaseM:
def __init__(self, field, help_text):
def __init__(self, field, help_text, labels=None):
self.field = field
self.help_text = help_text
self.current_value = 0
self.metric_has_changed = False
self.labels = labels or {}
def reset_value(self, conn):
conn.hset(root_key, self.field, 0)
@@ -69,12 +72,16 @@ class BaseM:
value = conn.hget(root_key, self.field)
return self.decode_value(value)
def to_prometheus(self, instance_data):
def to_prometheus(self, instance_data, namespace=None):
output_text = f"# HELP {self.field} {self.help_text}\n# TYPE {self.field} gauge\n"
for instance in instance_data:
if self.field in instance_data[instance]:
# Build label string
labels = f'node="{instance}"'
if namespace:
labels += f',subsystem="{namespace}"'
# on upgrade, if there are stale instances, we can end up with issues where new metrics are not present
output_text += f'{self.field}{{node="{instance}"}} {instance_data[instance][self.field]}\n'
output_text += f'{self.field}{{{labels}}} {instance_data[instance][self.field]}\n'
return output_text
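
With a namespace supplied, each series gains a subsystem label next to node; generate_metrics below only passes a namespace for the three operational subsystem_metrics_* fields. Sample gauge output under that assumption (help text illustrative):

    # HELP subsystem_metrics_pipe_execute_calls Number of calls to pipe_execute
    # TYPE subsystem_metrics_pipe_execute_calls gauge
    subsystem_metrics_pipe_execute_calls{node="awx-1",subsystem="dispatcher"} 42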
@@ -167,14 +174,17 @@ class HistogramM(BaseM):
self.sum.store_value(conn)
self.inf.store_value(conn)
def to_prometheus(self, instance_data):
def to_prometheus(self, instance_data, namespace=None):
output_text = f"# HELP {self.field} {self.help_text}\n# TYPE {self.field} histogram\n"
for instance in instance_data:
# Build label string
node_label = f'node="{instance}"'
subsystem_label = f',subsystem="{namespace}"' if namespace else ''
for i, b in enumerate(self.buckets):
output_text += f'{self.field}_bucket{{le="{b}",node="{instance}"}} {sum(instance_data[instance][self.field]["counts"][0:i+1])}\n'
output_text += f'{self.field}_bucket{{le="+Inf",node="{instance}"}} {instance_data[instance][self.field]["inf"]}\n'
output_text += f'{self.field}_count{{node="{instance}"}} {instance_data[instance][self.field]["inf"]}\n'
output_text += f'{self.field}_sum{{node="{instance}"}} {instance_data[instance][self.field]["sum"]}\n'
output_text += f'{self.field}_bucket{{le="{b}",{node_label}{subsystem_label}}} {sum(instance_data[instance][self.field]["counts"][0:i+1])}\n'
output_text += f'{self.field}_bucket{{le="+Inf",{node_label}{subsystem_label}}} {instance_data[instance][self.field]["inf"]}\n'
output_text += f'{self.field}_count{{{node_label}{subsystem_label}}} {instance_data[instance][self.field]["inf"]}\n'
output_text += f'{self.field}_sum{{{node_label}{subsystem_label}}} {instance_data[instance][self.field]["sum"]}\n'
return output_text
@@ -190,8 +200,8 @@ class Metrics(MetricsNamespace):
def __init__(self, namespace, auto_pipe_execute=False, instance_name=None, metrics_have_changed=True, **kwargs):
MetricsNamespace.__init__(self, namespace)
self.pipe = redis.Redis.from_url(settings.BROKER_URL).pipeline()
self.conn = redis.Redis.from_url(settings.BROKER_URL)
self.conn = get_redis_client()
self.pipe = self.conn.pipeline()
self.last_pipe_execute = time.time()
# track if metrics have been modified since last saved to redis
# start with True so that we get an initial save to redis
@@ -273,20 +283,22 @@ class Metrics(MetricsNamespace):
def pipe_execute(self):
if self.metrics_have_changed is True:
duration_to_save = time.perf_counter()
duration_pipe_exec = time.perf_counter()
for m in self.METRICS:
self.METRICS[m].store_value(self.pipe)
self.pipe.execute()
self.last_pipe_execute = time.time()
self.metrics_have_changed = False
duration_to_save = time.perf_counter() - duration_to_save
self.METRICS['subsystem_metrics_pipe_execute_seconds'].inc(duration_to_save)
self.METRICS['subsystem_metrics_pipe_execute_calls'].inc(1)
duration_pipe_exec = time.perf_counter() - duration_pipe_exec
duration_to_save = time.perf_counter()
duration_send_metrics = time.perf_counter()
self.send_metrics()
duration_to_save = time.perf_counter() - duration_to_save
self.METRICS['subsystem_metrics_send_metrics_seconds'].inc(duration_to_save)
duration_send_metrics = time.perf_counter() - duration_send_metrics
# Increment operational metrics
self.METRICS['subsystem_metrics_pipe_execute_seconds'].inc(duration_pipe_exec)
self.METRICS['subsystem_metrics_pipe_execute_calls'].inc(1)
self.METRICS['subsystem_metrics_send_metrics_seconds'].inc(duration_send_metrics)
def send_metrics(self):
# more than one thread could be calling this at the same time, so should
@@ -352,7 +364,13 @@ class Metrics(MetricsNamespace):
if instance_data:
for field in self.METRICS:
if len(metrics_filter) == 0 or field in metrics_filter:
output_text += self.METRICS[field].to_prometheus(instance_data)
# Add subsystem label only for operational metrics
namespace = (
self._namespace
if field in ['subsystem_metrics_pipe_execute_seconds', 'subsystem_metrics_pipe_execute_calls', 'subsystem_metrics_send_metrics_seconds']
else None
)
output_text += self.METRICS[field].to_prometheus(instance_data, namespace)
return output_text
@@ -381,11 +399,6 @@ class DispatcherMetrics(Metrics):
SetFloatM('workflow_manager_recorded_timestamp', 'Unix timestamp when metrics were last recorded'),
SetFloatM('workflow_manager_spawn_workflow_graph_jobs_seconds', 'Time spent spawning workflow tasks'),
SetFloatM('workflow_manager_get_tasks_seconds', 'Time spent loading workflow tasks from db'),
# dispatcher subsystem metrics
SetIntM('dispatcher_pool_scale_up_events', 'Number of times local dispatcher scaled up a worker since startup'),
SetIntM('dispatcher_pool_active_task_count', 'Number of active tasks in the worker pool when last task was submitted'),
SetIntM('dispatcher_pool_max_worker_count', 'Highest number of workers in worker pool in last collection interval, about 20s'),
SetFloatM('dispatcher_availability', 'Fraction of time (in last collection interval) dispatcher was able to receive messages'),
]
def __init__(self, *args, **kwargs):
@@ -413,8 +426,12 @@ class CallbackReceiverMetrics(Metrics):
def metrics(request):
output_text = ''
for m in [DispatcherMetrics(), CallbackReceiverMetrics()]:
output_text += m.generate_metrics(request)
output_text += DispatcherMetrics().generate_metrics(request)
output_text += CallbackReceiverMetrics().generate_metrics(request)
dispatcherd_metrics = get_dispatcherd_metrics(request)
if dispatcherd_metrics:
output_text += dispatcherd_metrics
return output_text
@@ -440,7 +457,10 @@ class CustomToPrometheusMetricsCollector(prometheus_client.registry.Collector):
logger.debug(f"No metric data not found in redis for metric namespace '{self._metrics._namespace}'")
return None
host_metrics = instance_data.get(my_hostname)
if not (host_metrics := instance_data.get(my_hostname)):
logger.debug(f"Metric data for this node '{my_hostname}' not found in redis for metric namespace '{self._metrics._namespace}'")
return None
for _, metric in self._metrics.METRICS.items():
entry = host_metrics.get(metric.field)
if not entry:
@@ -461,13 +481,6 @@ class CallbackReceiverMetricsServer(MetricsServer):
super().__init__(settings.METRICS_SERVICE_CALLBACK_RECEIVER, registry)
class DispatcherMetricsServer(MetricsServer):
def __init__(self):
registry = CollectorRegistry(auto_describe=True)
registry.register(CustomToPrometheusMetricsCollector(DispatcherMetrics(metrics_have_changed=False)))
super().__init__(settings.METRICS_SERVICE_DISPATCHER, registry)
class WebsocketsMetricsServer(MetricsServer):
def __init__(self):
registry = CollectorRegistry(auto_describe=True)


@@ -82,7 +82,7 @@ class MainConfig(AppConfig):
def configure_dispatcherd(self):
"""This implements the default configuration for dispatcherd
If running the tasking service like awx-manage run_dispatcher,
If running the tasking service like awx-manage dispatcherd,
some additional config will be applied on top of this.
This configuration provides the minimum such that code can submit
tasks to pg_notify to run those tasks.


@@ -105,6 +105,7 @@ register(
),
category=_('System'),
category_slug='system',
hidden=True,
)
register(
@@ -143,6 +144,35 @@ register(
category_slug='system',
)
register(
'SUBSCRIPTIONS_USERNAME',
field_class=fields.CharField,
default='',
allow_blank=True,
encrypted=False,
read_only=False,
label=_('Red Hat Username for Subscriptions'),
help_text=_('Username used to retrieve subscription and content information'), # noqa
category=_('System'),
category_slug='system',
hidden=True,
)
register(
'SUBSCRIPTIONS_PASSWORD',
field_class=fields.CharField,
default='',
allow_blank=True,
encrypted=True,
read_only=False,
label=_('Red Hat Password for Subscriptions'),
help_text=_('Password used to retrieve subscription and content information'), # noqa
category=_('System'),
category_slug='system',
hidden=True,
)
register(
'SUBSCRIPTIONS_CLIENT_ID',
field_class=fields.CharField,
@@ -154,6 +184,7 @@ register(
help_text=_('Client ID used to retrieve subscription and content information'), # noqa
category=_('System'),
category_slug='system',
hidden=True,
)
register(
@@ -167,6 +198,7 @@ register(
help_text=_('Client secret used to retrieve subscription and content information'), # noqa
category=_('System'),
category_slug='system',
hidden=True,
)
register(
@@ -1093,3 +1125,13 @@ register(
category=('PolicyAsCode'),
category_slug='policyascode',
)
def policy_as_code_validate(serializer, attrs):
opa_host = attrs.get('OPA_HOST', '')
if opa_host and (opa_host.startswith('http://') or opa_host.startswith('https://')):
raise serializers.ValidationError({'OPA_HOST': _("OPA_HOST should not include 'http://' or 'https://' prefixes. Please enter only the hostname.")})
return attrs
register_validate('policyascode', policy_as_code_validate)
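
The new validator only rejects scheme prefixes; ports and paths pass through untouched. Illustratively (the serializer argument is unused by this check):

    policy_as_code_validate(None, {'OPA_HOST': 'opa.example.com'})  # returns attrs unchanged
    policy_as_code_validate(None, {'OPA_HOST': 'https://opa.example.com'})
    # -> serializers.ValidationError: OPA_HOST should not include 'http://' or 'https://' prefixes...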


@@ -3,7 +3,6 @@ import logging
import time
import hmac
import asyncio
import redis
from django.core.serializers.json import DjangoJSONEncoder
from django.conf import settings
@@ -14,6 +13,8 @@ from channels.generic.websocket import AsyncJsonWebsocketConsumer
from channels.layers import get_channel_layer
from channels.db import database_sync_to_async
from awx.main.utils.redis import get_redis_client_async
logger = logging.getLogger('awx.main.consumers')
XRF_KEY = '_auth_user_xrf'
@@ -40,10 +41,10 @@ class WebsocketSecretAuthHelper:
@classmethod
def verify_secret(cls, s, nonce_tolerance=300):
try:
(prefix, payload) = s.split(' ')
prefix, payload = s.split(' ')
if prefix != 'HMAC-SHA256':
raise ValueError('Unsupported encryption algorithm')
(nonce_parsed, secret_parsed) = payload.split(':')
nonce_parsed, secret_parsed = payload.split(':')
except Exception:
raise ValueError("Failed to parse secret")
@@ -94,6 +95,9 @@ class RelayConsumer(AsyncJsonWebsocketConsumer):
await self.channel_layer.group_add(settings.BROADCAST_WEBSOCKET_GROUP_NAME, self.channel_name)
logger.info(f"client '{self.channel_name}' joined the broadcast group.")
# Initialize Redis client once for reuse across all message handling
self._redis_conn = get_redis_client_async()
async def disconnect(self, code):
logger.info(f"client '{self.channel_name}' disconnected from the broadcast group.")
await self.channel_layer.group_discard(settings.BROADCAST_WEBSOCKET_GROUP_NAME, self.channel_name)
@@ -102,11 +106,12 @@ class RelayConsumer(AsyncJsonWebsocketConsumer):
await self.send(event['text'])
async def receive_json(self, data):
(group, message) = unwrap_broadcast_msg(data)
group, message = unwrap_broadcast_msg(data)
if group == "metrics":
message = json.loads(message['text'])
conn = redis.Redis.from_url(settings.BROKER_URL)
conn.set(settings.SUBSYSTEM_METRICS_REDIS_KEY_PREFIX + "-" + message['metrics_namespace'] + "_instance_" + message['instance'], message['metrics'])
await self._redis_conn.set(
settings.SUBSYSTEM_METRICS_REDIS_KEY_PREFIX + "-" + message['metrics_namespace'] + "_instance_" + message['instance'], message['metrics']
)
else:
await self.channel_layer.group_send(group, message)


@@ -77,14 +77,13 @@ class PubSub(object):
n = psycopg.connection.Notify(pgn.relname.decode(enc), pgn.extra.decode(enc), pgn.be_pid)
yield n
def events(self, yield_timeouts=False):
def events(self):
if not self.conn.autocommit:
raise RuntimeError('Listening for events can only be done in autocommit mode')
while True:
if select.select([self.conn], [], [], self.select_timeout) == NOT_READY:
if yield_timeouts:
yield None
yield None
else:
notification_generator = self.current_notifies(self.conn)
for notification in notification_generator:
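
With yield_timeouts removed, every select timeout now yields None unconditionally, so callers must treat None as a timeout tick rather than opting in. A sketch of a consuming loop under that contract, using the pg_bus_conn context manager seen elsewhere in this diff:

    from awx.main.dispatch import pg_bus_conn

    with pg_bus_conn(select_timeout=5) as conn:
        conn.listen('my_channel')
        for event in conn.events():
            if event is None:
                continue  # select timed out; a chance for periodic housekeeping
            print(event.channel, event.payload)  # psycopg Notify namedtuple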


@@ -2,7 +2,7 @@ from django.conf import settings
from ansible_base.lib.utils.db import get_pg_notify_params
from awx.main.dispatch import get_task_queuename
from awx.main.dispatch.pool import get_auto_max_workers
from awx.main.utils.common import get_auto_max_workers
def get_dispatcherd_config(for_service: bool = False, mock_publish: bool = False) -> dict:
@@ -11,28 +11,35 @@ def get_dispatcherd_config(for_service: bool = False, mock_publish: bool = False
Parameters:
for_service: if True, include dynamic options needed for running the dispatcher service
this will require database access, you should delay evaluation until after app setup
mock_publish: if True, use mock values that don't require database access
this is used during tests to avoid database queries during app initialization
"""
# When mock_publish=True (e.g., during tests), use a default value to avoid
# database access in get_auto_max_workers() which queries settings.IS_K8S
if mock_publish:
max_workers = 20 # Reasonable default for tests
else:
max_workers = get_auto_max_workers()
config = {
"version": 2,
"service": {
"pool_kwargs": {
"min_workers": settings.JOB_EVENT_WORKERS,
"max_workers": get_auto_max_workers(),
"max_workers": max_workers,
},
"main_kwargs": {"node_id": settings.CLUSTER_HOST_ID},
"process_manager_cls": "ForkServerManager",
"process_manager_kwargs": {"preload_modules": ['awx.main.dispatch.hazmat']},
"process_manager_kwargs": {"preload_modules": ['awx.main.dispatch.prefork']},
},
"brokers": {
"socket": {"socket_path": settings.DISPATCHERD_DEBUGGING_SOCKFILE},
},
"publish": {"default_control_broker": "socket"},
"brokers": {},
"publish": {},
"worker": {"worker_cls": "awx.main.dispatch.worker.dispatcherd.AWXTaskWorker"},
}
if mock_publish:
config["brokers"]["noop"] = {}
config["publish"]["default_broker"] = "noop"
config["brokers"]["dispatcherd.testing.brokers.noop"] = {}
config["publish"]["default_broker"] = "dispatcherd.testing.brokers.noop"
else:
config["brokers"]["pg_notify"] = {
"config": get_pg_notify_params(),
@@ -49,5 +56,11 @@ def get_dispatcherd_config(for_service: bool = False, mock_publish: bool = False
}
config["brokers"]["pg_notify"]["channels"] = ['tower_broadcast_all', 'tower_settings_change', get_task_queuename()]
metrics_cfg = settings.METRICS_SUBSYSTEM_CONFIG.get('server', {}).get(settings.METRICS_SERVICE_DISPATCHER)
if metrics_cfg:
config["service"]["metrics_kwargs"] = {
"host": metrics_cfg.get("host", "localhost"),
"port": metrics_cfg.get("port", 8015),
}
return config
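
The two modes now differ cleanly: mock_publish wires dispatcherd's testing noop broker and a fixed worker ceiling with no database access, while the real path adds pg_notify channels and, when METRICS_SUBSYSTEM_CONFIG defines a dispatcher server, a metrics listener. A quick check of the test mode, with values taken from the function above:

    cfg = get_dispatcherd_config(mock_publish=True)
    assert cfg["service"]["pool_kwargs"]["max_workers"] == 20
    assert cfg["publish"]["default_broker"] == "dispatcherd.testing.brokers.noop"
    assert "pg_notify" not in cfg["brokers"]  # nothing here touches the database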


@@ -1,78 +0,0 @@
import logging
import uuid
import json
from django.conf import settings
from django.db import connection
import redis
from awx.main.dispatch import get_task_queuename
from . import pg_bus_conn
logger = logging.getLogger('awx.main.dispatch')
class Control(object):
services = ('dispatcher', 'callback_receiver')
result = None
def __init__(self, service, host=None):
if service not in self.services:
raise RuntimeError('{} must be in {}'.format(service, self.services))
self.service = service
self.queuename = host or get_task_queuename()
def status(self, *args, **kwargs):
r = redis.Redis.from_url(settings.BROKER_URL)
if self.service == 'dispatcher':
stats = r.get(f'awx_{self.service}_statistics') or b''
return stats.decode('utf-8')
else:
workers = []
for key in r.keys('awx_callback_receiver_statistics_*'):
workers.append(r.get(key).decode('utf-8'))
return '\n'.join(workers)
def running(self, *args, **kwargs):
return self.control_with_reply('running', *args, **kwargs)
def cancel(self, task_ids, with_reply=True):
if with_reply:
return self.control_with_reply('cancel', extra_data={'task_ids': task_ids})
else:
self.control({'control': 'cancel', 'task_ids': task_ids, 'reply_to': None}, extra_data={'task_ids': task_ids})
def schedule(self, *args, **kwargs):
return self.control_with_reply('schedule', *args, **kwargs)
@classmethod
def generate_reply_queue_name(cls):
return f"reply_to_{str(uuid.uuid4()).replace('-','_')}"
def control_with_reply(self, command, timeout=5, extra_data=None):
logger.warning('checking {} {} for {}'.format(self.service, command, self.queuename))
reply_queue = Control.generate_reply_queue_name()
self.result = None
if not connection.get_autocommit():
raise RuntimeError('Control-with-reply messages can only be done in autocommit mode')
with pg_bus_conn(select_timeout=timeout) as conn:
conn.listen(reply_queue)
send_data = {'control': command, 'reply_to': reply_queue}
if extra_data:
send_data.update(extra_data)
conn.notify(self.queuename, json.dumps(send_data))
for reply in conn.events(yield_timeouts=True):
if reply is None:
logger.error(f'{self.service} did not reply within {timeout}s')
raise RuntimeError(f"{self.service} did not reply within {timeout}s")
break
return json.loads(reply.payload)
def control(self, msg, **kwargs):
with pg_bus_conn() as conn:
conn.notify(self.queuename, json.dumps(msg))


@@ -1,147 +0,0 @@
import logging
import time
import yaml
from datetime import datetime
logger = logging.getLogger('awx.main.dispatch.periodic')
class ScheduledTask:
"""
Class representing schedules, very loosely modeled after python schedule library Job
the idea of this class is to:
- only deal in relative times (time since the scheduler global start)
- only deal in integer math for target runtimes, but float for current relative time
Missed schedule policy:
Invariant target times are maintained, meaning that if interval=10s offset=0
and it runs at t=7s, then it calls for next run in 3s.
However, if a complete interval has passed, that is counted as a missed run,
and missed runs are abandoned (no catch-up runs).
"""
def __init__(self, name: str, data: dict):
# parameters needed for schedule computation
self.interval = int(data['schedule'].total_seconds())
self.offset = 0 # offset relative to start time this schedule begins
self.index = 0 # number of periods of the schedule that have passed
# parameters that do not affect scheduling logic
self.last_run = None # time of last run, only used for debug
self.completed_runs = 0 # number of times schedule is known to run
self.name = name
self.data = data # used by caller to know what to run
@property
def next_run(self):
"Time until the next run with t=0 being the global_start of the scheduler class"
return (self.index + 1) * self.interval + self.offset
def due_to_run(self, relative_time):
return bool(self.next_run <= relative_time)
def expected_runs(self, relative_time):
return int((relative_time - self.offset) / self.interval)
def mark_run(self, relative_time):
self.last_run = relative_time
self.completed_runs += 1
new_index = self.expected_runs(relative_time)
if new_index > self.index + 1:
logger.warning(f'Missed {new_index - self.index - 1} schedules of {self.name}')
self.index = new_index
def missed_runs(self, relative_time):
"Number of times job was supposed to ran but failed to, only used for debug"
missed_ct = self.expected_runs(relative_time) - self.completed_runs
# if this is currently due to run do not count that as a missed run
if missed_ct and self.due_to_run(relative_time):
missed_ct -= 1
return missed_ct
class Scheduler:
def __init__(self, schedule):
"""
Expects schedule in the form of a dictionary like
{
'job1': {'schedule': timedelta(seconds=50), 'other': 'stuff'}
}
Only the schedule nearest-second value is used for scheduling,
the rest of the data is for use by the caller to know what to run.
"""
self.jobs = [ScheduledTask(name, data) for name, data in schedule.items()]
min_interval = min(job.interval for job in self.jobs)
num_jobs = len(self.jobs)
# this is intentionally opinionated against spammy schedules
# a core goal is to spread out the scheduled tasks (for worker management)
# and high-frequency schedules just do not work with that
if num_jobs > min_interval:
raise RuntimeError(f'Number of schedules ({num_jobs}) is more than the shortest schedule interval ({min_interval} seconds).')
# evenly space out jobs over the base interval
for i, job in enumerate(self.jobs):
job.offset = (i * min_interval) // num_jobs
# internally times are all referenced relative to startup time, add grace period
self.global_start = time.time() + 2.0
def get_and_mark_pending(self, reftime=None):
if reftime is None:
reftime = time.time() # mostly for tests
relative_time = reftime - self.global_start
to_run = []
for job in self.jobs:
if job.due_to_run(relative_time):
to_run.append(job)
logger.debug(f'scheduler found {job.name} to run, {relative_time - job.next_run} seconds after target')
job.mark_run(relative_time)
return to_run
def time_until_next_run(self, reftime=None):
if reftime is None:
reftime = time.time() # mostly for tests
relative_time = reftime - self.global_start
next_job = min(self.jobs, key=lambda j: j.next_run)
delta = next_job.next_run - relative_time
if delta <= 0.1:
# careful not to give 0 or negative values to the select timeout, which has unclear interpretation
logger.warning(f'Scheduler next run of {next_job.name} is {-delta} seconds in the past')
return 0.1
elif delta > 20.0:
logger.warning(f'Scheduler next run unexpectedly over 20 seconds in future: {delta}')
return 20.0
logger.debug(f'Scheduler next run is {next_job.name} in {delta} seconds')
return delta
def debug(self, *args, **kwargs):
data = dict()
data['title'] = 'Scheduler status'
reftime = time.time()
now = datetime.fromtimestamp(reftime).strftime('%Y-%m-%d %H:%M:%S UTC')
start_time = datetime.fromtimestamp(self.global_start).strftime('%Y-%m-%d %H:%M:%S UTC')
relative_time = reftime - self.global_start
data['started_time'] = start_time
data['current_time'] = now
data['current_time_relative'] = round(relative_time, 3)
data['total_schedules'] = len(self.jobs)
data['schedule_list'] = dict(
[
(
job.name,
dict(
last_run_seconds_ago=round(relative_time - job.last_run, 3) if job.last_run else None,
next_run_in_seconds=round(job.next_run - relative_time, 3),
offset_in_seconds=job.offset,
completed_runs=job.completed_runs,
missed_runs=job.missed_runs(relative_time),
),
)
for job in sorted(self.jobs, key=lambda job: job.interval)
]
)
return yaml.safe_dump(data, default_flow_style=False, sort_keys=False)


@@ -1,583 +1,54 @@
import logging
import os
import random
import signal
import sys
import time
import traceback
from datetime import datetime
from uuid import uuid4
import json
import collections
from multiprocessing import Process
from multiprocessing import Queue as MPQueue
from queue import Full as QueueFull, Empty as QueueEmpty
from django.conf import settings
from django.db import connection as django_connection, connections
from django.db import connection as django_connection
from django.core.cache import cache as django_cache
from django.utils.timezone import now as tz_now
from django_guid import set_guid
from jinja2 import Template
import psutil
from ansible_base.lib.logging.runtime import log_excess_runtime
from awx.main.models import UnifiedJob
from awx.main.dispatch import reaper
from awx.main.utils.common import get_mem_effective_capacity, get_corrected_memory, get_corrected_cpu, get_cpu_effective_capacity
# ansible-runner
from ansible_runner.utils.capacity import get_mem_in_bytes, get_cpu_count
if 'run_callback_receiver' in sys.argv:
logger = logging.getLogger('awx.main.commands.run_callback_receiver')
else:
logger = logging.getLogger('awx.main.dispatch')
RETIRED_SENTINEL_TASK = "[retired]"
class NoOpResultQueue(object):
def put(self, item):
pass
logger = logging.getLogger('awx.main.commands.run_callback_receiver')
class PoolWorker(object):
"""
Used to track a worker child process and its pending and finished messages.
A simple wrapper around a multiprocessing.Process that tracks a worker child process.
This class makes use of two distinct multiprocessing.Queues to track state:
- self.queue: this is a queue which represents pending messages that should
be handled by this worker process; as new AMQP messages come
in, a pool will put() them into this queue; the child
process that is forked will get() from this queue and handle
received messages in an endless loop
- self.finished: this is a queue which the worker process uses to signal
that it has finished processing a message
When a message is put() onto this worker, it is tracked in
self.managed_tasks.
Periodically, the worker will call .calculate_managed_tasks(), which will
cause messages in self.finished to be removed from self.managed_tasks.
In this way, self.managed_tasks represents a view of the messages assigned
to a specific process. The message at [0] is the least-recently inserted
message, and it represents what the worker is running _right now_
(self.current_task).
A worker is "busy" when it has at least one message in self.managed_tasks.
It is "idle" when self.managed_tasks is empty.
The worker process runs the provided target function.
"""
track_managed_tasks = False
def __init__(self, queue_size, target, args, **kwargs):
self.messages_sent = 0
self.messages_finished = 0
self.managed_tasks = collections.OrderedDict()
self.finished = MPQueue(queue_size) if self.track_managed_tasks else NoOpResultQueue()
self.queue = MPQueue(queue_size)
self.process = Process(target=target, args=(self.queue, self.finished) + args)
def __init__(self, target, args):
self.process = Process(target=target, args=args)
self.process.daemon = True
self.creation_time = time.monotonic()
self.retiring = False
def start(self):
self.process.start()
def put(self, body):
if self.retiring:
uuid = body.get('uuid', 'N/A') if isinstance(body, dict) else 'N/A'
logger.info(f"Worker pid:{self.pid} is retiring. Refusing new task {uuid}.")
raise QueueFull("Worker is retiring and not accepting new tasks") # AutoscalePool.write handles QueueFull
uuid = '?'
if isinstance(body, dict):
if not body.get('uuid'):
body['uuid'] = str(uuid4())
uuid = body['uuid']
if self.track_managed_tasks:
self.managed_tasks[uuid] = body
self.queue.put(body, block=True, timeout=5)
self.messages_sent += 1
self.calculate_managed_tasks()
def quit(self):
"""
Send a special control message to the worker that tells it to exit
gracefully.
"""
self.queue.put('QUIT')
@property
def age(self):
"""Returns the current age of the worker in seconds."""
return time.monotonic() - self.creation_time
@property
def pid(self):
return self.process.pid
@property
def qsize(self):
return self.queue.qsize()
@property
def alive(self):
return self.process.is_alive()
@property
def mb(self):
if self.alive:
return '{:0.3f}'.format(psutil.Process(self.pid).memory_info().rss / 1024.0 / 1024.0)
return '0'
@property
def exitcode(self):
return str(self.process.exitcode)
def calculate_managed_tasks(self):
if not self.track_managed_tasks:
return
# look to see if any tasks were finished
finished = []
for _ in range(self.finished.qsize()):
try:
finished.append(self.finished.get(block=False))
except QueueEmpty:
break # qsize is not always _totally_ up to date
# if any tasks were finished, removed them from the managed tasks for
# this worker
for uuid in finished:
try:
del self.managed_tasks[uuid]
self.messages_finished += 1
except KeyError:
# ansible _sometimes_ appears to send events w/ duplicate UUIDs;
# UUIDs for ansible events are *not* actually globally unique
# when this occurs, it's _fine_ to ignore this KeyError because
# the purpose of self.managed_tasks is to just track internal
# state of which events are *currently* being processed.
logger.warning('Event UUID {} appears to have been duplicated.'.format(uuid))
if self.retiring:
self.managed_tasks[RETIRED_SENTINEL_TASK] = {'task': RETIRED_SENTINEL_TASK}
@property
def current_task(self):
if not self.track_managed_tasks:
return None
self.calculate_managed_tasks()
# the task at [0] is the one that's running right now (or is about to
# be running)
if len(self.managed_tasks):
return self.managed_tasks[list(self.managed_tasks.keys())[0]]
return None
@property
def orphaned_tasks(self):
if not self.track_managed_tasks:
return []
orphaned = []
if not self.alive:
# if this process had a running task that never finished,
# requeue its error callbacks
current_task = self.current_task
if isinstance(current_task, dict):
orphaned.extend(current_task.get('errbacks', []))
# if this process has any pending messages requeue them
for _ in range(self.qsize):
try:
message = self.queue.get(block=False)
if message != 'QUIT':
orphaned.append(message)
except QueueEmpty:
break # qsize is not always _totally_ up to date
if len(orphaned):
logger.error('requeuing {} messages from gone worker pid:{}'.format(len(orphaned), self.pid))
return orphaned
@property
def busy(self):
self.calculate_managed_tasks()
return len(self.managed_tasks) > 0
@property
def idle(self):
return not self.busy
class StatefulPoolWorker(PoolWorker):
track_managed_tasks = True
class WorkerPool(object):
"""
Creates a pool of forked PoolWorkers.
As WorkerPool.write(...) is called (generally, by a kombu consumer
implementation when it receives an AMQP message), messages are passed to
one of the multiprocessing Queues where some work can be done on them.
Each worker process runs the provided target function in an isolated process.
The pool manages spawning, tracking, and stopping worker processes.
class MessagePrinter(awx.main.dispatch.worker.BaseWorker):
def perform_work(self, body):
print(body)
pool = WorkerPool(min_workers=4) # spawn four worker processes
pool.init_workers(MessagePrinter().work_loop)
pool.write(
0, # preferred worker 0
'Hello, World!'
)
Example:
pool = WorkerPool(workers_num=4) # spawn four worker processes
"""
pool_cls = PoolWorker
debug_meta = ''
def __init__(self, workers_num=None):
self.workers_num = workers_num or settings.JOB_EVENT_WORKERS
def __init__(self, min_workers=None, queue_size=None):
self.name = settings.CLUSTER_HOST_ID
self.pid = os.getpid()
self.min_workers = min_workers or settings.JOB_EVENT_WORKERS
self.queue_size = queue_size or settings.JOB_EVENT_MAX_QUEUE_SIZE
self.workers = []
def __len__(self):
return len(self.workers)
def init_workers(self, target, *target_args):
self.target = target
self.target_args = target_args
for idx in range(self.min_workers):
self.up()
def up(self):
idx = len(self.workers)
# It's important to close these because we're _about_ to fork, and we
# don't want the forked processes to inherit the open sockets
# for the DB and cache connections (that way lies race conditions)
django_connection.close()
django_cache.close()
worker = self.pool_cls(self.queue_size, self.target, (idx,) + self.target_args)
self.workers.append(worker)
try:
worker.start()
except Exception:
logger.exception('could not fork')
else:
logger.debug('scaling up worker pid:{}'.format(worker.pid))
return idx, worker
def debug(self, *args, **kwargs):
tmpl = Template(
'Recorded at: {{ dt }} \n'
'{{ pool.name }}[pid:{{ pool.pid }}] workers total={{ workers|length }} {{ meta }} \n'
'{% for w in workers %}'
'. worker[pid:{{ w.pid }}]{% if not w.alive %} GONE exit={{ w.exitcode }}{% endif %}'
' sent={{ w.messages_sent }}'
' age={{ "%.0f"|format(w.age) }}s'
' retiring={{ w.retiring }}'
'{% if w.messages_finished %} finished={{ w.messages_finished }}{% endif %}'
' qsize={{ w.managed_tasks|length }}'
' rss={{ w.mb }}MB'
'{% for task in w.managed_tasks.values() %}'
'\n - {% if loop.index0 == 0 %}running {% if "age" in task %}for: {{ "%.1f" % task["age"] }}s {% endif %}{% else %}queued {% endif %}'
'{{ task["uuid"] }} '
'{% if "task" in task %}'
'{{ task["task"].rsplit(".", 1)[-1] }}'
# don't print kwargs, they often contain launch-time secrets
'(*{{ task.get("args", []) }})'
'{% endif %}'
'{% endfor %}'
'{% if not w.managed_tasks|length %}'
' [IDLE]'
'{% endif %}'
'\n'
'{% endfor %}'
)
now = datetime.utcnow().strftime('%Y-%m-%d %H:%M:%S UTC')
return tmpl.render(pool=self, workers=self.workers, meta=self.debug_meta, dt=now)
def write(self, preferred_queue, body):
queue_order = sorted(range(len(self.workers)), key=lambda x: -1 if x == preferred_queue else x)
write_attempt_order = []
for queue_actual in queue_order:
def init_workers(self, target):
for idx in range(self.workers_num):
# It's important to close these because we're _about_ to fork, and we
# don't want the forked processes to inherit the open sockets
# for the DB and cache connections (that way lies race conditions)
django_connection.close()
django_cache.close()
worker = PoolWorker(target, (idx,))
try:
self.workers[queue_actual].put(body)
return queue_actual
except QueueFull:
pass
worker.start()
except Exception:
tb = traceback.format_exc()
logger.warning("could not write to queue %s" % preferred_queue)
logger.warning("detail: {}".format(tb))
write_attempt_order.append(preferred_queue)
logger.error("could not write payload to any queue, attempted order: {}".format(write_attempt_order))
return None
def stop(self, signum):
try:
for worker in self.workers:
os.kill(worker.pid, signum)
except Exception:
logger.exception('could not kill {}'.format(worker.pid))
def get_auto_max_workers():
"""Method we normally rely on to get max_workers
Uses almost same logic as Instance.local_health_check
The important thing is to be MORE than Instance.capacity
so that the task-manager does not over-schedule this node
Ideally we would just use the capacity from the database plus reserve workers,
but this poses some bootstrap problems where OCP task containers
register themselves after startup
"""
# Get memory from ansible-runner
total_memory_gb = get_mem_in_bytes()
# This may replace memory calculation with a user override
corrected_memory = get_corrected_memory(total_memory_gb)
# Get same number as max forks based on memory, this function takes memory as bytes
mem_capacity = get_mem_effective_capacity(corrected_memory, is_control_node=True)
# Follow same process for CPU capacity constraint
cpu_count = get_cpu_count()
corrected_cpu = get_corrected_cpu(cpu_count)
cpu_capacity = get_cpu_effective_capacity(corrected_cpu, is_control_node=True)
# Here is what is different from health checks:
auto_max = max(mem_capacity, cpu_capacity)
# add magic number of extra workers to ensure
# we have a few extra workers to run the heartbeat
auto_max += 7
return auto_max
class AutoscalePool(WorkerPool):
"""
An extended pool implementation that automatically scales workers up and
down based on demand
"""
pool_cls = StatefulPoolWorker
def __init__(self, *args, **kwargs):
self.max_workers = kwargs.pop('max_workers', None)
self.max_worker_lifetime_seconds = kwargs.pop(
'max_worker_lifetime_seconds', getattr(settings, 'WORKER_MAX_LIFETIME_SECONDS', 14400)
) # Default to 4 hours
super(AutoscalePool, self).__init__(*args, **kwargs)
if self.max_workers is None:
self.max_workers = get_auto_max_workers()
# max workers can't be less than min_workers
self.max_workers = max(self.min_workers, self.max_workers)
# the task manager enforces settings.TASK_MANAGER_TIMEOUT on its own
# but if the task takes longer than the time defined here, we will force it to stop here
self.task_manager_timeout = settings.TASK_MANAGER_TIMEOUT + settings.TASK_MANAGER_TIMEOUT_GRACE_PERIOD
# initialize some things for subsystem metrics periodic gathering
# the AutoscalePool class does not save these to redis directly, but reports via produce_subsystem_metrics
self.scale_up_ct = 0
self.worker_count_max = 0
# last time we wrote current tasks, to avoid too much log spam
self.last_task_list_log = time.monotonic()
def produce_subsystem_metrics(self, metrics_object):
metrics_object.set('dispatcher_pool_scale_up_events', self.scale_up_ct)
metrics_object.set('dispatcher_pool_active_task_count', sum(len(w.managed_tasks) for w in self.workers))
metrics_object.set('dispatcher_pool_max_worker_count', self.worker_count_max)
self.worker_count_max = len(self.workers)
@property
def should_grow(self):
if len(self.workers) < self.min_workers:
# If we don't have at least min_workers, add more
return True
# If every worker is busy doing something, add more
return all([w.busy for w in self.workers])
@property
def full(self):
return len(self.workers) == self.max_workers
@property
def debug_meta(self):
return 'min={} max={}'.format(self.min_workers, self.max_workers)
@log_excess_runtime(logger, debug_cutoff=0.05, cutoff=0.2)
def cleanup(self):
"""
Perform some internal account and cleanup. This is run on
every cluster node heartbeat:
1. Discover worker processes that exited, and recover messages they
were handling.
2. Clean up unnecessary, idle workers.
IMPORTANT: this function is one of the few places in the dispatcher
(aside from setting lookups) where we talk to the database. As such,
if there's an outage, this method _can_ throw various
django.db.utils.Error exceptions. Act accordingly.
"""
orphaned = []
for w in self.workers[::]:
is_retirement_age = self.max_worker_lifetime_seconds is not None and w.age > self.max_worker_lifetime_seconds
if not w.alive:
# the worker process has exited
# 1. take the task it was running and enqueue the error
# callbacks
# 2. take any pending tasks delivered to its queue and
# send them to another worker
logger.error('worker pid:{} is gone (exit={})'.format(w.pid, w.exitcode))
if w.current_task:
if w.current_task == {'task': RETIRED_SENTINEL_TASK}:
logger.debug('scaling down worker pid:{} due to worker age: {}'.format(w.pid, w.age))
self.workers.remove(w)
continue
if w.current_task != 'QUIT':
try:
for j in UnifiedJob.objects.filter(celery_task_id=w.current_task['uuid']):
reaper.reap_job(j, 'failed')
except Exception:
logger.exception('failed to reap job UUID {}'.format(w.current_task['uuid']))
else:
logger.warning(f'Worker was told to quit but has not, pid={w.pid}')
orphaned.extend(w.orphaned_tasks)
self.workers.remove(w)
elif w.idle and len(self.workers) > self.min_workers:
# the process has an empty queue (it's idle) and we have
# more processes in the pool than we need (> min)
# send this process a message so it will exit gracefully
# at the next opportunity
logger.debug('scaling down worker pid:{}'.format(w.pid))
w.quit()
self.workers.remove(w)
elif w.idle and is_retirement_age:
logger.debug('scaling down worker pid:{} due to worker age: {}'.format(w.pid, w.age))
w.quit()
self.workers.remove(w)
elif is_retirement_age and not w.retiring and not w.idle:
logger.info(
f"Worker pid:{w.pid} (age: {w.age:.0f}s) exceeded max lifetime ({self.max_worker_lifetime_seconds:.0f}s). "
"Signaling for graceful retirement."
)
# Send QUIT signal; worker will finish current task then exit.
w.quit()
# mark as retiring to reject any future tasks that might be assigned in meantime
w.retiring = True
if w.alive:
# if we discover a task manager invocation that's been running
# too long, reap it (because otherwise it'll just hold the postgres
# advisory lock forever); the goal of this code is to discover
# deadlocks or other serious issues in the task manager that cause
# the task manager to never do more work
current_task = w.current_task
if current_task and isinstance(current_task, dict):
endings = ('tasks.task_manager', 'tasks.dependency_manager', 'tasks.workflow_manager')
current_task_name = current_task.get('task', '')
if current_task_name.endswith(endings):
if 'started' not in current_task:
w.managed_tasks[current_task['uuid']]['started'] = time.time()
age = time.time() - current_task['started']
w.managed_tasks[current_task['uuid']]['age'] = age
if age > self.task_manager_timeout:
logger.error(f'{current_task_name} has held the advisory lock for {age}, sending SIGUSR1 to {w.pid}')
os.kill(w.pid, signal.SIGUSR1)
for m in orphaned:
# if all the workers are dead, spawn at least one
if not len(self.workers):
self.up()
idx = random.choice(range(len(self.workers)))
self.write(idx, m)
def add_bind_kwargs(self, body):
bind_kwargs = body.pop('bind_kwargs', [])
body.setdefault('kwargs', {})
if 'dispatch_time' in bind_kwargs:
body['kwargs']['dispatch_time'] = tz_now().isoformat()
if 'worker_tasks' in bind_kwargs:
worker_tasks = {}
for worker in self.workers:
worker.calculate_managed_tasks()
worker_tasks[worker.pid] = list(worker.managed_tasks.keys())
body['kwargs']['worker_tasks'] = worker_tasks
def up(self):
if self.full:
# if we can't spawn more workers, just toss this message into a
# random worker's backlog
idx = random.choice(range(len(self.workers)))
return idx, self.workers[idx]
else:
self.scale_up_ct += 1
ret = super(AutoscalePool, self).up()
new_worker_ct = len(self.workers)
if new_worker_ct > self.worker_count_max:
self.worker_count_max = new_worker_ct
return ret
@staticmethod
def fast_task_serialization(current_task):
try:
return str(current_task.get('task')) + ' - ' + str(sorted(current_task.get('args', []))) + ' - ' + str(sorted(current_task.get('kwargs', {})))
except Exception:
# just make sure this does not make things worse
return str(current_task)
def write(self, preferred_queue, body):
if 'guid' in body:
set_guid(body['guid'])
try:
if isinstance(body, dict) and body.get('bind_kwargs'):
self.add_bind_kwargs(body)
if self.should_grow:
self.up()
# we don't care about "preferred queue" round robin distribution, just
# find the first non-busy worker and claim it
workers = self.workers[:]
random.shuffle(workers)
for w in workers:
if not w.busy:
w.put(body)
break
logger.exception('could not fork')
else:
task_name = 'unknown'
if isinstance(body, dict):
task_name = body.get('task')
logger.warning(f'Workers maxed, queuing {task_name}, load: {sum(len(w.managed_tasks) for w in self.workers)} / {len(self.workers)}')
# Once every 10 seconds write out task list for debugging
if time.monotonic() - self.last_task_list_log >= 10.0:
task_counts = {}
for worker in self.workers:
task_slug = self.fast_task_serialization(worker.current_task)
task_counts.setdefault(task_slug, 0)
task_counts[task_slug] += 1
logger.info(f'Running tasks by count:\n{json.dumps(task_counts, indent=2)}')
self.last_task_list_log = time.monotonic()
return super(AutoscalePool, self).write(preferred_queue, body)
except Exception:
for conn in connections.all():
# If the database connection has a hiccup, re-establish a new
# connection
conn.close_if_unusable_or_obsolete()
logger.exception('failed to write inbound message')
logger.debug('scaling up worker pid:{}'.format(worker.process.pid))
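
After this rewrite the pool is a fixed-size fork manager: no queues, no autoscaling, no task bookkeeping (all of that now lives in dispatcherd). A usage sketch against the new signatures shown above; work_loop is a placeholder, and each worker receives its index as the only argument:

    def work_loop(idx):
        # runs inside the forked worker process until the process is stopped
        ...

    pool = WorkerPool(workers_num=4)  # defaults to settings.JOB_EVENT_WORKERS
    pool.init_workers(work_loop)      # closes DB/cache sockets, then forks 4 daemon workers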


@@ -10,7 +10,6 @@ from awx import prepare_env
from dispatcherd.utils import resolve_callable
prepare_env()
django.setup() # noqa
@@ -18,9 +17,8 @@ django.setup() # noqa
from django.conf import settings
# Preload all periodic tasks so their imports will be in shared memory
for name, options in settings.CELERYBEAT_SCHEDULE.items():
for name, options in settings.DISPATCHER_SCHEDULE.items():
resolve_callable(options['task'])
@@ -31,6 +29,5 @@ from awx.main.scheduler.kubernetes import PodManager # noqa
from django.core.cache import cache as django_cache
from django.db import connection
connection.close()
django_cache.close()

Some files were not shown because too many files have changed in this diff.