Compare commits

...

112 Commits

Author SHA1 Message Date
Rodrigo Toshiaki Horie
30bf910bd5 fix schema generator (#16265) 2026-02-04 20:54:28 -03:00
jessicamack
c9085e4b7f Update OpenAPI spec to improve descriptions and messages (#16260)
* Update OpenAPI spec

* lint fixes

* fix decorator for retrieve endpoints

* change decorator method

* fix import

* lint fix
2026-02-04 22:32:57 +00:00
Alan Rominger
5e93f60b9e AAP-41776 Enable new fancy asyncio metrics for dispatcherd (#16233)
* Enable new fancy asyncio metrics for dispatcherd

Remove old dispatcher metrics and patch in new data from local whatever

Update test fixture to new dispatcherd version

* Update dispatcherd again

* Handle node filter in URL, and catch more errors

* Add test for metric filter

* Split module for dispatcherd metrics
2026-02-04 15:28:34 -05:00
Rodrigo Toshiaki Horie
6a031158ce Fix OpenAPI schema enum values for CredentialType kind field (#16262)
The OpenAPI schema incorrectly showed all 12 credential type kinds as
valid for POST/PUT/PATCH operations, when only 'cloud' and 'net' are
allowed for custom credential types. This caused API clients and LLM
agents to receive HTTP 400 errors when attempting to create credential
types with invalid kind values.

Add postprocessing hook to filter CredentialTypeRequest and
PatchedCredentialTypeRequest schemas to only show 'cloud', 'net',
and null as valid enum values, matching the existing validation logic.

No API behavior changes - this is purely a documentation fix.

Co-authored-by: Claude <noreply@anthropic.com>
2026-02-04 16:39:03 -03:00
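A minimal sketch of the postprocessing hook described in the commit above, assuming drf-spectacular's documented POSTPROCESSING_HOOKS interface; the actual hook in the PR may be structured differently.

```python
ALLOWED_KINDS = {'cloud', 'net', None}  # matches the existing validation logic


def filter_credential_type_kind_enum(result, generator, request, public):
    """Restrict the 'kind' enum on writable CredentialType request schemas."""
    schemas = result.get('components', {}).get('schemas', {})
    for name in ('CredentialTypeRequest', 'PatchedCredentialTypeRequest'):
        kind = schemas.get(name, {}).get('properties', {}).get('kind', {})
        if 'enum' in kind:
            kind['enum'] = [v for v in kind['enum'] if v in ALLOWED_KINDS]
    return result


# Registered via settings, for example:
# SPECTACULAR_SETTINGS = {'POSTPROCESSING_HOOKS': ['awx.api.schema.filter_credential_type_kind_enum']}
```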
joeywashburn
749735b941 Standardize spelling of 'canceled' in wsrelay.py (#16178)
Changed two instances of 'cancelled' to 'canceled' in awx/main/wsrelay.py
to match AWX's standardized American English spelling convention.

- Updated log message in WebsocketRelayConnection.connect()
- Updated comment in WebSocketRelayManager.cleanup_offline_host()

Fixes #15177

Signed-off-by: Joey Washburn <joey@joeywashburn.com>
2026-02-04 12:29:45 -05:00
Chris Meyers
315f9c7eef Rename args var
* https://sonarcloud.io/project/issues?open=AZDmRbV12PiUXMD3dYmh&id=ansible_awx
2026-02-04 08:17:51 -05:00
Chris Meyers
00c0f7e8db add test 2026-02-03 16:12:22 -05:00
Chris Meyers
37ccbc28bd Harden log message output containing user input
* base64 encode the user-supplied url when logging so that newlines or other
  malicious payloads can't be injected into the log stream
2026-02-03 16:12:22 -05:00
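A sketch of the hardening described above, with an illustrative logger name and message; the point is that the user-supplied URL is base64-encoded before it reaches the log stream.

```python
import base64
import logging

logger = logging.getLogger('awx.main')  # illustrative logger name


def log_requested_url(url: str) -> None:
    # Encode user input so embedded newlines or control characters cannot
    # forge extra log lines (log injection).
    encoded = base64.b64encode(url.encode('utf-8')).decode('ascii')
    logger.info("project update requested for url (base64): %s", encoded)
```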
Chris Meyers
63fafec76f Remove init return value
* https://sonarcloud.io/project/issues?open=AZDmRaaJ2PiUXMD3dXly&id=ansible_awx
2026-02-03 16:12:00 -05:00
Chris Meyers
cba01339a1 Remove redundant role attribute
* https://sonarcloud.io/project/issues?open=AZDmRbYN2PiUXMD3dYo5&id=ansible_awx&tab=code
2026-02-03 16:12:00 -05:00
Chris Meyers
2622e9d295 Add alt text for awx logo
* https://sonarcloud.io/project/issues?open=AZDmRbYR2PiUXMD3dYo6&id=ansible_awx
2026-02-03 16:12:00 -05:00
Chris Meyers
a6afec6ebb Add generic font family
* https://sonarcloud.io/project/issues?open=AZDmRbX42PiUXMD3dYo1&id=ansible_awx
2026-02-03 16:12:00 -05:00
Chris Meyers
f406a377f7 Add generic font family
* https://sonarcloud.io/project/issues?open=AZDmRbX42PiUXMD3dYo0&id=ansible_awx
2026-02-03 16:12:00 -05:00
Chris Meyers
adc3e35978 Add generic font family
* remove quotes so that it's treated as a generic font family
* https://sonarcloud.io/project/issues?open=AZDmRbX42PiUXMD3dYoz&id=ansible_awx&tab=code
2026-02-03 16:12:00 -05:00
Chris Meyers
838e67005c Remove duplicate css property
* Last one wins so remove the first one.
* https://sonarcloud.io/project/issues?open=AZpWSq7yO74rjWmAOcwf&id=ansible_awx
2026-02-03 16:12:00 -05:00
Chris Meyers
e13fcfe29f Add alt text to 504 image
* https://sonarcloud.io/project/issues?open=AZDmRbXt2PiUXMD3dYor&id=ansible_awx
2026-02-03 16:12:00 -05:00
Chris Meyers
0f4e91419a Add lang english tag to 504 page
* https://sonarcloud.io/project/issues?open=AZDmRbXt2PiUXMD3dYop&id=ansible_awx
2026-02-03 16:12:00 -05:00
Chris Meyers
cca70b242a Add alt text to 502 image
* https://sonarcloud.io/project/issues?open=AZDmRbXx2PiUXMD3dYou&id=ansible_awx
2026-02-03 16:12:00 -05:00
Chris Meyers
edf459f8ec Add English language tag to 502 page
* https://sonarcloud.io/project/issues?open=AZDmRbXx2PiUXMD3dYos&id=ansible_awx
2026-02-03 16:12:00 -05:00
Chris Meyers
f4286216d6 Add doctype and lang
* https://sonarcloud.io/project/issues?open=AZDmRbX02PiUXMD3dYov&id=ansible_awx
* https://sonarcloud.io/project/issues?open=AZDmRbX02PiUXMD3dYow&id=ansible_awx
2026-02-03 16:12:00 -05:00
Chris Meyers
0ab1fea731 Use replaceAll() for global regex
* https://sonarcloud.io/project/issues?open=AZlyqQeaRtfhwxlTsfP0&id=ansible_awx
* https://sonarcloud.io/project/issues?open=AZlyqQeaRtfhwxlTsfP1&id=ansible_awx
2026-02-03 16:12:00 -05:00
Chris Meyers
e3ac581fdf Always use a tz aware timestamp
* https://sonarcloud.io/project/issues?open=AZDmRade2PiUXMD3dXnx&id=ansible_awx
2026-02-03 16:12:00 -05:00
Chris Meyers
5aa3e8cf3b Make tz aware
* Note, this doesn't change the logic or output, but maybe makes
  someone's life easier later.
* https://sonarcloud.io/project/issues?open=AZDmRade2PiUXMD3dXnx&id=ansible_awx
2026-02-03 16:12:00 -05:00
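Both timestamp commits above come down to the same stdlib pattern; a minimal illustration:

```python
from datetime import datetime, timezone

naive = datetime.now()               # no tzinfo attached; ambiguous across zones
aware = datetime.now(timezone.utc)   # tz-aware UTC timestamp, as the rule expects

assert aware.tzinfo is not None
```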
Chris Meyers
8289003c0d Remove unreachable code path
* https://sonarcloud.io/project/issues?open=AZDmRaXd2PiUXMD3dXkN&id=ansible_awx
2026-02-03 16:12:00 -05:00
Chris Meyers
125083538a Compare float to float
* https://sonarcloud.io/project/issues?open=AZDmRaXd2PiUXMD3dXkF&id=ansible_awx&tab=code
2026-02-03 16:12:00 -05:00
Chris Meyers
ed5ab8becd Remove unused variable
* https://sonarcloud.io/project/issues?open=AZL9AcQZ0bcYsK7qDrTo&id=ansible_awx
2026-02-03 16:12:00 -05:00
Chris Meyers
fc0087f1b2 Add language to api stdout to help translation
* https://sonarcloud.io/project/issues?open=AZDmRbV82PiUXMD3dYmx&id=ansible_awx&tab=code
2026-02-03 16:12:00 -05:00
Chris Meyers
cfc5ad9d91 Remove return value from __init__
* https://sonarcloud.io/project/issues?open=AZDmRaZX2PiUXMD3dXle&id=ansible_awx
2026-02-03 16:12:00 -05:00
Chris Meyers
d929b767b6 Rename kwargs
* https://sonarcloud.io/project/issues?open=AZDmRbVW2PiUXMD3dYmW&id=ansible_awx
2026-02-03 16:12:00 -05:00
Chris Meyers
5f434ac348 Rename exception args variable
* https://sonarcloud.io/project/issues?open=AZDmRbV12PiUXMD3dYmg&id=ansible_awx
* https://sonarcloud.io/project/issues?open=AZDmRaZX2PiUXMD3dXle&id=ansible_awx
2026-02-03 16:12:00 -05:00
Chris Meyers
4de9c8356b Use fromkeys for constant
* https://sonarcloud.io/project/issues?open=AZeD0GsJyrLLb-kZUOoF&id=ansible_awx
2026-02-03 16:12:00 -05:00
Chris Meyers
91118adbd3 Fix summary_dict None check
* https://sonarcloud.io/project/issues?open=AZDmRbWy2PiUXMD3dYoJ&id=ansible_awx
2026-02-03 16:12:00 -05:00
Chris Meyers
25f538277a Fix init return
* __init__ shouldn't return
2026-02-03 16:12:00 -05:00
joeywashburn
82cb52d648 Sanitize SSH key whitespace to prevent validation errors (#16179)
Strip leading and trailing whitespace from SSH keys in validate_ssh_private_key()
to handle common copy-paste scenarios where hidden newlines cause base64 decoding
failures.

Changes:
- Added data.strip() in validate_ssh_private_key() before calling validate_pem()
- Added test_ssh_key_with_whitespace() to verify keys with leading/trailing
  newlines are properly sanitized and validated

This prevents the confusing "HTTP 500: Internal Server Error" and
"binascii.Error: Incorrect padding" errors when users paste SSH keys with
accidental whitespace.

Fixes #14219

Signed-off-by: Joey Washburn <joey@joeywashburn.com>
2026-02-02 11:16:28 -05:00
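A sketch of the validator change described above; the validate_pem import path and call are assumptions based on the commit text, not the literal patch.

```python
from awx.main.validators import validate_pem  # assumed existing PEM validator


def validate_ssh_private_key(data):
    # Strip hidden leading/trailing whitespace from copy-pasted keys so PEM
    # parsing doesn't fail with "binascii.Error: Incorrect padding".
    data = data.strip()
    return validate_pem(data, min_keys=1)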
Peter Braun
f7958b93bd add deprecated fields to x-ai-description for credential post (#16255) 2026-01-29 18:17:31 +01:00
Alan Rominger
3d68ca848e Fix race condition of un-expired cache in local workers (#16256) 2026-01-29 11:31:06 -05:00
Adrià Sala
99dce79078 fix: add py311 to make version detection 2026-01-28 14:20:48 +01:00
Alan Rominger
271383d018 AAP-60470 Add dispatcherctl and dispatcherd commands as updated interface to dispatcherd lib (#16206)
* Add dispatcherctl command

* Add tests for dispatcherctl command

* Exit early if sqlite3

* Switch to dispatcherd mgmt cmd

* Move unwanted command options to run_dispatcher

* Add test for new stuff

* Update the SOS report status command

* make docs always reference new command

* Consistently error if given config file
2026-01-27 15:57:23 -05:00
Alan Rominger
1128ad5a57 AAP-64221 Fix broken cancel logic with dispatcherd (#16247)
* Fix broken cancel logic with dispatcherd

Update tests for UnifiedJob

Update test assertion

* Further simplify cancel path
2026-01-27 14:39:08 -05:00
Seth Foster
823b736afe Remove unused INSIGHTS_OIDC_ENDPOINT (#16235)
This setting is set in defaults.py, but
currently not being used. More technically,
project_update.yml is not passing this value to
the insights.py action plugin. Therefore, we
can safely remove references to it.

insights.py already has a default oidc endpoint
defined for authentication.

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2026-01-27 10:13:37 -05:00
Alan Rominger
f80bbc57d8 AAP-43117 Additional dispatcher removal simplifications and waiting reaper updates (#16243)
* Additional dispatcher removal simplifications and waiting reaper updates

* Fix double call and logging message

* Implement bugbot comment, should reap running on lost instances

* Add test case for new pending behavior
2026-01-26 13:55:37 -05:00
TVo
12a7229ee9 Publish open api spec on AWX for community use (#16221)
* Added link and ref to openAPI spec for community

* Update docs/docsite/rst/contributor/openapi_link.rst

Co-authored-by: Don Naro <dnaro@redhat.com>

* add sphinxcontrib-redoc to requirements

* sphinxcontrib.redoc configuration

* create openapi directory and files

* update download script for both schema files

* suppress warning for redoc

* update labels

* fix extra closing parenthesis

* update schema url

* exclude doc config and download script

The Sphinx configuration (conf.py) and schema download script
(download-json.py) are not application logic and used only for building
documentation. Coverage requirements for these files are overkill.

* exclude only the sphinx config file

---------

Co-authored-by: Don Naro <dnaro@redhat.com>
2026-01-26 18:16:49 +00:00
Don Naro
ceed692354 change contact email address (#16245)
Use the ansible-community@redhat.com alias as the contact email address.
2026-01-26 17:28:00 +00:00
Jake Jackson
36a00ec46b AAP-58539 Move to dispatcherd (#16209)
* WIP First pass
* started removing feature flags and adjusting logic
* Add decorator
* moved to dispatcher decorator
* updated as many as I could find
* Keep callback receiver working
* remove any code that is not used by the call back receiver
* add back auto_max_workers
* added back get_auto_max_workers into common utils
* Remove control and hazmat (squash this not done)
* moved status out and deleted control as no longer needed
* removed unused imports
* adjusted test import to pull correct method
* fixed imports and addressed clusternode heartbeat test
* Update function comments
* Add back hazmat for config and remove baseworker
* added back hazmat per @alancoding feedback around config
* removed baseworker completely and refactored it into the callback
  worker
* Fix dispatcher run call and remove dispatch setting
* remove dispatcher mock publish setting
* Adjust heartbeat arg and more formatting
* fixed the call to cluster_node_heartbeat missing binder
* Fix attribute error in server logs
2026-01-23 20:49:32 +00:00
Alan Rominger
94d5769f32 Fix extremely flaky failure (#16161) 2026-01-23 10:05:44 -05:00
Alan Rominger
98430db58f Collect operator logs on timeout (#16239)
* Collect operator logs on timeout

* Set timeout back to prod value

* Add e to the bash with timeout block
2026-01-23 10:05:32 -05:00
Rodrigo Toshiaki Horie
acf8721a09 Enhance OpenAPI schema with AI descriptions and fix method names (#16228)
* Enhance OpenAPI schema with AI descriptions and fix method names

Add x-ai-description extensions to API endpoints for better AI agent
comprehension. Fix view method names to
ensure proper drf-spectacular schema generation.

* Enhance OpenAPI schema with AI descriptions and fix method names

Add x-ai-description extensions to API endpoints for better AI agent
comprehension. Fix view method names to
ensure proper drf-spectacular schema generation.
2026-01-21 16:53:19 -03:00
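One way such x-ai-description extensions can be attached is drf-spectacular's extend_schema `extensions` argument; the view and description text below are illustrative, not necessarily how this PR wires it.

```python
from drf_spectacular.utils import extend_schema
from rest_framework.views import APIView


class JobTemplateLaunchView(APIView):  # hypothetical view for illustration
    @extend_schema(
        extensions={'x-ai-description': 'Launch a job from this job template.'},
    )
    def post(self, request, *args, **kwargs):
        ...
```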
Hao Liu
a839ce8cb1 Update kubernetes python client to 35.0.0 from PyPI (#16237)
Remove transitive dependencies no longer needed by kubernetes 35.0.0

Removes google-auth and rsa which were transitive dependencies of the older
kubernetes client but are no longer required in v35.0.0.

Adds cachetools as a direct dependency since it's used by awx/conf/settings.py
for TTLCache (was previously a transitive dep of google-auth).
2026-01-20 17:31:18 -05:00
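For context on the cachetools dependency, a minimal example of the TTLCache class that awx/conf/settings.py relies on (sizes and keys illustrative):

```python
from cachetools import TTLCache

cache = TTLCache(maxsize=2048, ttl=60)   # entries expire 60 seconds after insertion
cache['SOME_SETTING'] = 'value'
print(cache.get('SOME_SETTING'))         # 'value' until the TTL elapses, then None
```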
Hao Liu
543b2a66a3 Update kubernetes python client to 35.0.0 from PyPI (#16236)
- Move kubernetes from git-based install to PyPI (v35.0.0 now available)
- Remove urllib3 cap comment since kubernetes 35.0.0 no longer restricts it
- Update README.md upgrade blocker documentation
2026-01-20 16:20:41 -05:00
Alan Rominger
8c5cf49c23 Avoid errors installing with python 3.11 (#16231) 2026-01-20 10:24:41 -05:00
Peter Braun
80bb0c9862 remove artifacts from list endpoint (#16230) 2026-01-20 10:58:01 +01:00
Alan Rominger
b34ee01fb3 Slightly alter history to avoid having a Django 5 related migration (#16214)
* Slightly alter history to avoid having a Django 5 related migration

* Revert prior field states to be slightly more clear
2026-01-19 13:33:46 -05:00
Alan Rominger
dce5ac73c5 Apply new rules from black update (#16232) 2026-01-19 12:58:07 -05:00
PabloHiro
43a3a620e3 [AAP-43413] Removing hardcoded number of flags from feature flag test
Assisted-by: Claude
2026-01-19 09:37:20 +01:00
PascalKont
051357e573 fixed description for option notification_templates_approvals in module organizations (#16170)
fixed module organizations description for option notification_templates_approvals

Co-authored-by: Pascal Kontschan <pascal.kontschan.extern@atruvia.de>
2026-01-15 11:48:27 -05:00
John Barker
75aba0f62d docs: migrate RTD URLs to docs.ansible.com (#16189)
* docs: update readthedocs.io URLs to docs.ansible.com equivalents

🤖 Generated with Claude Code
https://claude.ai/code

Co-Authored-By: Claude <noreply@anthropic.com>

* Update Bullhorn newsletter link in communication docs

---------

Co-authored-by: Claude <noreply@anthropic.com>
2026-01-15 12:27:47 +00:00
Hao Liu
fee71b8917 Replace pytz with standard library timezone (#16197)
Refactored code to use Python's built-in datetime.timezone and zoneinfo instead of pytz for timezone handling. This modernizes the codebase and removes the dependency on pytz, aligning with current best practices for timezone-aware datetime objects.
2026-01-09 16:05:08 -05:00
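The stdlib equivalents of the common pytz patterns look roughly like this (exact call sites in AWX vary):

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

now_utc = datetime.now(timezone.utc)                      # replaces pytz.utc.localize(datetime.utcnow())
local = now_utc.astimezone(ZoneInfo("America/New_York"))  # replaces pytz.timezone("America/New_York")
```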
Hao Liu
dbe979b425 Add make targets for updating requirements (#16195)
Introduces new Makefile targets to update and upgrade requirements files using pip-compile, both directly and via docker-runner. These additions streamline dependency management for development and CI workflows.
2026-01-09 16:04:27 -05:00
Hao Liu
03d0ed882c Add kubernetes python client from git at release-34.0 (#16226)
Switch to git-based installation of kubernetes python client from
github.com/kubernetes-client/python at commit df31d90d6c910d6b5c883b98011c93421cac067d
(release-34.0 branch). This also allows removing the urllib3<2.4.0 upper bound
constraint that was previously required by kubernetes 34.1.0 from PyPI.
2026-01-09 20:16:34 +00:00
Hao Liu
b0debf93a0 Use dnf module for Node.js 18 instead of n version manager - damn it aarch64 (#16225)
Use dnf module for Node.js 18 instead of n version manager

The n version manager fails to extract Node.js archives due to very long
file paths in include/node/openssl/archs/ directories when running in
Docker BuildKit's overlay filesystem. This causes CI build failures with
tar "Cannot open: Invalid argument" errors.

Switch to installing Node.js 18 directly from CentOS Stream 9's module
stream which avoids the archive extraction issue entirely.
2026-01-09 18:37:34 +00:00
Hao Liu
d018096cae Fix devel awx, awx_devel, awx_kube_devel build (#16219)
* Fix ARM64 build failure by upgrading dev container Node.js to 18

Node.js 16.13.1 fails to extract on ARM64 in Docker BuildKit's
overlay filesystem during multi-arch builds. Upgrade to Node 18
which is already used by the UI builder stage and has proper
ARM64 support.

* Fix collectstatic failure by setting AWX_MODE=default

  AWX_MODE=defaults is an intentionally "invalid" environment name that:

  1. Loads only defaults.py - the base settings file without any environment-specific overrides (development_defaults.py, production_defaults.py, etc.)
  2. Bypasses production checks - since "production" not in "defaults", it skips the assertion that requires /etc/tower/settings.py to exist
  3. Bypasses development mode - since is_development_mode would be false

  This is perfect for collectstatic during container build because:
  - No database connection needed
  - No secret key needed (hence SKIP_SECRET_KEY_CHECK)
  - No PostgreSQL version check (hence SKIP_PG_VERSION_CHECK)
  - Just need minimal Django settings to collect static files
2026-01-09 12:07:23 -05:00
Alan Rominger
cfe0b367b5 Do not eat errors building images (#16216) 2026-01-09 02:48:34 +00:00
Alan Rominger
7d24bdbf13 Clear in-memory cache, suggested by bugbot (#16218)
* Clear in-memory cache, suggested by bugbot

* Clear the cache even harder than we were before

* Syntax bugbot
2026-01-08 16:03:29 -05:00
Alan Rominger
3cba5e1744 Cache juggling to help address test flake (#16217) 2026-01-08 14:23:01 -05:00
Hao Liu
10a2946f9f Fix requirement for python3.12 (#16215)
* Fix pip version constraint for Python 3.12 compatibility

Remove outdated pip<22.0 constraint that was a workaround for
pip-tools#1558. This issue was fixed in pip-tools 6.5.0+ and
the old constraint breaks Python 3.12 where pkgutil.ImpImporter
was removed.

* Update requirements.txt

* Fix license file inconsistencies with requirements

- Rename awx-plugins.interfaces.txt to awx-plugins-interfaces.txt
  to match the package name in requirements
- Remove backports-tarfile.txt and importlib-resources.txt as these
  packages are no longer in requirements

* Fix updater.sh for pip 25.3 normalized output format

Changes to requirements_git.txt:
- Update to PEP 440 format (name @ git+url) to match pip-compile output
- Normalize package names (hyphens instead of dots/underscores)
- Sort extras alphabetically with hyphens (e.g., jwt-consumer not jwt_consumer)
- Add documentation explaining format requirements

Changes to updater.sh:
- Escape BRE regex metacharacters in sed pattern to handle brackets in extras
- Change sed delimiter from ! to | to avoid conflict with comment text
- Add explicit return statements to functions
- Assign positional parameters to local variables
- Redirect error messages to stderr
- Replace backticks with $() for command substitution
- Pin pip to version 25.3

requirements.txt regenerated via updater.sh

* Normalize package names in requirements.in to match pip output

- prometheus_client -> prometheus-client
- setuptools_scm -> setuptools-scm
- dispatcherd[pg_notify] -> dispatcherd[pg-notify]

PEP 503 specifies that package names should use hyphens.

* Fix license files to match normalized package names

- Remove awx_plugins.interfaces.txt (duplicate of awx-plugins-interfaces.txt)
- Rename system-certifi.txt to certifi.txt to match package name
2026-01-08 14:21:11 -05:00
Hao Liu
049a4b6438 Remove graph_jobs management command and asciichartpy dependency (#16078)
Deleted the awx/main/management/commands/graph_jobs.py file and removed the asciichartpy package from requirements. This cleans up unused code and dependencies related to terminal job status graphing.
2026-01-07 20:53:30 -05:00
jessicamack
de86b93690 AAP-59874: Update to Python 3.12 (#16208)
* update to Python 3.12

* remove use of utcnow

* switch to timezone.utc

datetime.UTC is an alias of datetime.timezone.utc. If we're already doing the double import for datetime, it's more straightforward to just import timezone as well and get it directly

* debug python env version issue

* change python version

* pin to SHA and remove debug portion
2026-01-07 11:57:24 -05:00
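The utcnow removal mentioned above follows the standard Python 3.12 guidance; a minimal before/after:

```python
from datetime import datetime, timezone

old = datetime.utcnow()            # naive; deprecated in Python 3.12
new = datetime.now(timezone.utc)   # aware UTC timestamp (datetime.UTC is an alias for timezone.utc)
```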
Alan Rominger
48c7534b57 AAP-60452 Remove the dynamic log level filter for the dispatcherd main process (#16200)
* Remove the dynamic filter on dispatcher startup

Configure the dynamic logging level only on startup

* Special case for log level on settings change

* Add unit test for new behavior

* Add test for initial config

* Mark test django DB

* Do necessary requirement bump

* Delete cache in live test fixture
2026-01-02 15:45:06 -05:00
Alan Rominger
40059512d8 Bump requirement because version was yanked from PyPI (#16212) 2026-01-02 11:14:00 -05:00
Bryan Havenstein
e2c1c5116d AAP-58457 Update UT for removed IPv6 feature flag 2026-01-02 09:39:04 -05:00
Chris Meyers
41f1ffc1dd AAP-45541 Add test to recreate jobs/4075584/job_events/children_summary/ error (#16163)
* Add test to recreate the error

* Also begin to add detection for empty event

* Remove breakpoint

* fix: ignore events with missing event types

* run linter and apply changes

---------

Co-authored-by: AlanCoding <arominge@redhat.com>
Co-authored-by: Peter Braun <pbraun@redhat.com>
2025-12-17 21:34:53 +01:00
Alan Rominger
d7eb714859 Remove custom docs endpoint in DAB now (#16204)
* Remove custom docs endpoint in DAB now
2025-12-16 13:57:27 -05:00
Alan Rominger
7a58377aff Update ENV pattern in Dockerfile (#16202) 2025-12-16 09:57:54 -05:00
Hao Liu
2fbfe4ca73 Fix __pycache__ directory removal in clean target (#16196)
Replaces the use of 'find -delete' with 'find -exec rm -rf {} +' to ensure all __pycache__ directories are properly removed during the clean process.
2025-12-16 09:00:38 -05:00
Hao Liu
04fadab253 Remove unused ANSIBLE_BASE_PERMISSION_MODEL setting (#16198)
Deleted the ANSIBLE_BASE_PERMISSION_MODEL configuration from defaults.py as it is no longer needed.
2025-12-16 08:57:14 -05:00
Alan Rominger
054f6032fd AAP-47956 Use pg_notify for cancel and debugging, abandon socket approach (#16199)
* Use pg_notify for cancel and debugging, abandon socket approach

* Bump dispatcherd for pg_notify chunking
2025-12-10 14:38:39 -05:00
Seth Foster
f935134a19 Unpin rsyslog in container (#16203)
docker buildx build fails with
"Error: Unable to find a match: rsyslog-8.2102.0-106.el9"

unpinning builds successfully for both arm64 and x86_64

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2025-12-10 14:32:01 -05:00
Hao Liu
b24156805a Upgrade to Django 5.2 LTS (#16185)
Upgrade to Django 5.2 LTS with compatibility fixes across fields, migrations, dispatch config, tests, and dev deps.

Dependencies:
- Upgrade django to 5.2.8 and relax requirements.in to >=5.2,<5.3.
- Bump django-debug-toolbar to >=6.0 for compatibility.

Backend:
- awx/conf/fields.py: switch URL TLD regex to use DomainNameValidator.ul in custom URLField.
- awx/main/management/commands/gather_analytics.py: use datetime.timezone.utc for naïve datetime handling.
- awx/main/dispatch/config.py: add mock_publish option; avoid DB access for test runs, set default max_workers, and support a noop broker.

Migrations (SQLite/Postgres compatibility):
- Add awx/main/migrations/_sqlite_helper.py with db-aware AlterIndexTogether/RenameIndex wrappers; consume in 0144_event_partitions.py and 0184_django_indexes.py.
- Update 0187_hop_nodes.py to use CheckConstraint(condition=...).
- Add 0205_alter_instance_peers_alter_job_hosts_and_more.py adjusting through_fields/relations on instance.peers, job.hosts, and role.ancestors.
- _dab_rbac.py: iterate roles with chunk_size=1000 for migration performance.

Tests:
Include hcp_terraform in default credential types in test_credential.py.
---------

Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Alan Rominger <arominge@redhat.com>
2025-12-03 14:22:52 -05:00
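For the CheckConstraint change called out above, Django 5.x expects the `condition` keyword instead of `check`; a schematic example with illustrative condition and constraint name:

```python
from django.db import models
from django.db.models import Q

constraint = models.CheckConstraint(
    condition=Q(node_type__in=['hop', 'execution']),  # illustrative condition
    name='illustrative_hop_node_constraint',
)
```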
Elijah DeLee
711b018ae7 cache dashboard query (#16165)
This view causes an expensive query and is sometimes called excessively
by the UI. Memoize per unique user and params (time period) for 15s.
2025-12-03 13:03:39 -05:00
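A sketch of that memoization using Django's low-level cache API, with an illustrative key layout; the actual implementation may differ.

```python
from django.core.cache import cache


def get_dashboard_data(user, period, compute):
    key = f'dashboard:{user.pk}:{period}'   # per unique user and time period
    data = cache.get(key)
    if data is None:
        data = compute()                    # the expensive aggregate query
        cache.set(key, data, 15)            # memoize for 15 seconds
    return data
```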
Stevenson Michel
be30a75c4f Removal of Warning for Distro Deprecation (#16193)
remove warning for distro deprecation
2025-12-02 14:28:12 -05:00
Seth Foster
a20f299cd6 Add x-ai-description to schema (#16186)
Adding ansible_base.api_documentation
to the INSTALL_APPS which extends the schema
to include an LLM-friendly description
to each endpoint

---------

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
Co-authored-by: Peter Braun <pbraun@redhat.com>
2025-12-01 19:45:28 -05:00
Lila Yasin
4f41b50a09 AAP-57817 Add Redis connection retry using redis-py 7.0+ built-in (#16176)
* AAP-57817 Add Redis connection retry using redis-py 7.0+ built-in mechanism

* Refactor Redis client helpers to use settings and eliminate code duplication

* Create awx/main/utils/redis.py and move Redis client functions to avoid circular imports

* Fix subsystem_metrics to share Redis connection pool between
  client and pipeline

* Cache Redis clients in RelayConsumer and RelayWebsocketStatsManager to avoid creating new connection pools on every call

* Add cap and base config

* Add Redis retry logic with exponential backoff to handle connection failures during long-running operations

* Add REDIS_BACKOFF_CAP and REDIS_BACKOFF_BASE settings to allow
  adjustment of retry timing in worst-case scenarios without code changes

* Simplify Redis retry tests by removing unnecessary reload logic
2025-12-01 09:08:47 -05:00
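A sketch of the redis-py built-in retry wiring described above, with the cap and base surfaced as the settings the commit mentions; exact helper names in awx/main/utils/redis.py may differ.

```python
import redis
from redis.backoff import ExponentialBackoff
from redis.exceptions import ConnectionError, TimeoutError
from redis.retry import Retry

REDIS_BACKOFF_CAP = 10    # seconds; illustrative defaults for the new settings
REDIS_BACKOFF_BASE = 0.5


def get_redis_client(url):
    retry = Retry(ExponentialBackoff(cap=REDIS_BACKOFF_CAP, base=REDIS_BACKOFF_BASE), retries=3)
    return redis.Redis.from_url(url, retry=retry, retry_on_error=[ConnectionError, TimeoutError])
```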
Rodrigo Toshiaki Horie
0d86874d5d Organize S3 schema uploads by product (awx/tower) (#16190)
Update schema upload workflows to organize S3 files by product name:
- Upload schemas to s3://awx-public-ci-files/{product}/{branch}/schema.json
- Update Makefile to download from product-specific paths for schema diff
- Update feature branch deletion to clean up from correct product path

This separates AWX and Tower schemas into distinct S3 folders.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Claude <noreply@anthropic.com>
2025-11-27 10:43:38 -03:00
Fabricio Aguiar
2b2f2b73ac Move to Runtime Platform Flags (#16148)
* move to platform flags

Signed-off-by: Fabricio Aguiar <fabricio.aguiar@gmail.com>

rh-pre-commit.version: 2.3.2
rh-pre-commit.check-secrets: ENABLED

* SonarCloud analyzes files without coverage data.
2025-11-25 10:20:04 -05:00
Lila Yasin
e03beb4d54 Add hcp_terraform to list of expected cred types to fix failing api test CI Check (#16188)
* Add hcp_terraform to list of expected cred types to fix failing api test ci check
2025-11-24 13:09:04 -05:00
Chris Meyers
4db52e074b Fix collectstatic (#16162) 2025-11-21 15:19:48 -05:00
Lila Yasin
4e1911f7c4 Bump Django to 4.2.26 to agree with DAB changes (#16183) 2025-11-19 14:56:29 -05:00
Peter Braun
b02117979d AAP-29938 add force flag to refspec (#16173)
* add force flag to refspec

* Development of git --amend test

* Update awx/main/tests/live/tests/conftest.py

Co-authored-by: Alan Rominger <arominge@redhat.com>

---------

Co-authored-by: AlanCoding <arominge@redhat.com>
2025-11-13 14:51:23 +01:00
Seth Foster
2fa2cd8beb Add timeout and on duplicate to system tasks (#16169)
Modify the invocation of @task_awx to accept timeout and
on_duplicate keyword arguments. These arguments are
only used in the new dispatcher implementation.

Add decorator params:
- timeout
- on_duplicate

to tasks to ensure better recovery for
stuck or long-running processes.

---------

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2025-11-12 23:18:57 -05:00
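Usage of the new decorator parameters would look roughly like this; the import path, task body, and on_duplicate value are illustrative, while the parameter names come from the commit.

```python
from awx.main.dispatch.publish import task  # assumed import path for the task decorator


@task(timeout=600, on_duplicate='discard')  # kill after 10 minutes; don't queue a duplicate
def cleanup_stuck_jobs():
    ...
```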
Rodrigo Toshiaki Horie
f81859510c Change Swagger UI endpoint from /api/swagger/ to /api/docs/ (#16172)
* Change Swagger UI endpoint from /api/swagger/ to /api/docs/

- Update URL pattern to use /docs/ instead of /swagger/
- Update API root response to show 'docs' key instead of 'swagger'
- Add authentication requirement for schema documentation endpoints
- Update contact email to controller-eng@redhat.com

The schema endpoints (/api/docs/, /api/schema/, /api/redoc/) now
require authentication to prevent unauthorized access to API
documentation.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

* Require authentication for all schema endpoints including /api/schema/

Create custom view classes that enforce authentication for all schema
endpoints to prevent inconsistent access control where UI views required
authentication but the raw schema endpoint remained publicly accessible.

This ensures all schema endpoints (/api/schema/, /api/docs/, /api/redoc/)
consistently require authentication.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

* Add unit tests for authenticated schema view classes

Add test coverage for the new AuthenticatedSpectacular* view classes
to ensure they properly require authentication.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

* remove unused import

---------

Co-authored-by: Claude <noreply@anthropic.com>
2025-11-12 09:14:54 -03:00
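A sketch of the AuthenticatedSpectacular* view classes mentioned above, assuming the stock drf-spectacular views plus DRF's IsAuthenticated permission; the real classes may add more.

```python
from drf_spectacular.views import SpectacularAPIView, SpectacularRedocView, SpectacularSwaggerView
from rest_framework.permissions import IsAuthenticated


class AuthenticatedSpectacularAPIView(SpectacularAPIView):
    permission_classes = [IsAuthenticated]


class AuthenticatedSpectacularSwaggerView(SpectacularSwaggerView):
    permission_classes = [IsAuthenticated]


class AuthenticatedSpectacularRedocView(SpectacularRedocView):
    permission_classes = [IsAuthenticated]
```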
Rodrigo Toshiaki Horie
335a4bbbc6 AAP-45927 Add drf-spectacular (#16154)
* AAP-45927 Add drf-spectacular

- Remove drf-yasg
- Add drf-spectacular

* move SPECTACULAR_SETTINGS from development_defaults.py to defaults.py

* move SPECTACULAR_SETTINGS from development_defaults.py to defaults.py

* Fix swagger tests: enable schema endpoints in all modes

Schema endpoints were restricted to development mode, causing
test_swagger_generation.py to fail. Made schema URLs available in
all modes and fixed deprecated Django warning filters in pytest.ini.

* remove swagger from Makefile

* remove swagger from Makefile

* change docker-compose-build-swagger to docker-compose-build-schema

* remove MODE

* remove unused import

* Update genschema to use drf-spectacular with awx-link dependency

- Add awx-link as dependency for genschema targets to ensure package metadata exists
- Remove --validate --fail-on-warn flags (schema needs improvements first)
- Add genschema-yaml target for YAML output
- Add schema.yaml to .gitignore

* Fix detect-schema-change to not fail on schema differences

Add '-' prefix to diff command so Make ignores its exit status.
diff returns exit code 1 when files differ, which is expected behavior
for schema change detection, not an error.

* Truncate schema diff summary to stay under GitHub's 1MB limit

Limit schema diff output in job summary to first 1000 lines to avoid
exceeding GitHub's 1MB step summary size limit. Add message indicating
when diff is truncated and direct users to job logs or artifacts for
full output.

* readd MODE

* add drf-spectacular to requirements.in and the requirements.txt generated from the script

* Add drf-spectacular BSD license file

Required for test_python_licenses test to pass now that drf-spectacular
is in requirements.txt.

* add licenses

* Add comprehensive unit tests for CustomAutoSchema

Adds 15 unit tests for awx/api/schema.py to improve SonarCloud test
coverage. Tests cover all code paths in CustomAutoSchema including:
- get_tags() method with various scenarios (swagger_topic, serializer
  Meta.model, view.model, exception handling, fallbacks, warnings)
- is_deprecated() method with different view configurations
- Edge cases and priority ordering

All tests passing.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

* remove unused imports

---------

Co-authored-by: Claude <noreply@anthropic.com>
2025-11-10 12:35:22 -03:00
Lila Yasin
5ea2fe65b0 Fix failing Collection CI Checks (#16157)
* Fix CI: Use Python 3.13 for ansible-test compatibility

  ansible-test only supports Python 3.11, 3.12, and 3.13.
  Changed collection-integration jobs from '3.x' to '3.13'
  to avoid using Python 3.14 which is not supported.

* Fix ansible-test Python version for CI integration tests

  ansible-test only supports Python 3.11, 3.12, and 3.13.
  Added ANSIBLE_TEST_PYTHON_VERSION variable to explicitly pass
  --python 3.13 flag to ansible-test integration command.

  This prevents ansible-test from auto-detecting and using
  Python 3.14.0, which is not supported.

* Fix CI: Execute ansible-test with Python 3.13 to avoid
  unsupported Python 3.14

* Fix CI: Use Python 3.13 across all jobs to avoid Python
  3.14 compatibility issues

* Fix CI: Use 'python' and 'ansible.test' module for Python
   3.13 compatibility

* Fix CI: Use 'python' instead of 'python3' for Python 3.13
   compatibility

* Fix CI: Ensure ansible-test uses Python 3.13 environment
  explicitly

* Fix: Remove silent failure check for ansible-core in test suite

* Fix CI: Export PYTHONPATH to make awxkit available to ansible-test

* Fix CI: Use 'python' in run_awx_devel to maintain Python
  3.13 environment

* Fix CI: Remove setup-python from awx_devel_image that was resetting Python 3.13 to 3.14
2025-11-06 09:22:44 -05:00
Daniel Finca Martínez
f3f10ae9ce [AAP-42616] Bump receptor collection version to 2.0.6 (#16156)
Bump receptor collection version to 2.0.6
2025-11-05 15:49:43 -07:00
Jake Jackson
5be4462395 Update sonar and CI (#16153)
* actually upload PR coverage reports and inject PR number if report is
  generated from a PR
* upload general report of devel on merge and make things kinda pretty
2025-11-03 14:42:55 +00:00
Jake Jackson
d1d3a3471b Add new api schema check workflow (#16143)
* add new file to separate out the schema check so that it is no longer
  part of the CI check and won't cause the whole workflow to fail
* remove old API schema check from ci.yml
2025-10-27 11:59:40 -04:00
Alan Rominger
a53fdaddae Fix f-string in log that is broken (#16132) 2025-10-20 14:23:38 -04:00
Hao Liu
f72591195e Fix migration failure for 0200 (#16135)
Moved the AddField operation before the RunPython operations for 'rename_jts' and 'rename_projects' in migration 0200_template_name_constraint.py. This ensures the new 'org_unique' field exists before related data migrations are executed.

Fix
```
django.db.utils.ProgrammingError: column main_unifiedjobtemplate.org_unique does not exist
```

while applying migration 0200_template_name_constraint.py

when there's a job template or project with a duplicate name in the same org
2025-10-20 08:50:23 -04:00
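Schematically, the reorder puts AddField ahead of the RunPython data migrations that read the new column; the field definition and function bodies below are placeholders, not the literal migration 0200.

```python
from django.db import migrations, models


def rename_jts(apps, schema_editor):
    ...  # data migration that queries org_unique


def rename_projects(apps, schema_editor):
    ...


class Migration(migrations.Migration):
    operations = [
        migrations.AddField(                          # add the column first
            model_name='unifiedjobtemplate',
            name='org_unique',
            field=models.BooleanField(default=True),  # placeholder field type
        ),
        migrations.RunPython(rename_jts),             # ...then the data migrations can see it
        migrations.RunPython(rename_projects),
    ]
```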
TVo
0d9483b54c Added Django and API requirements to AWX Contributor Docs for POC (#16093)
* Requirements POC docs from Claude Code eval

* Removed unnecessary reference.

* Excluded custom DRF configurations per @AlanCoding

* Implement review changes from @chrismeyersfsu

---------

Co-authored-by: Peter Braun <pbraun@redhat.com>
2025-10-16 10:38:37 -06:00
jessicamack
f3fd9945d6 Update dependencies (#16122)
* prometheus-client returns an additional value as of v.0.22.0

* add license, remove outdated ones, add new embedded sources

* update requirements and UPGRADE BLOCKERs in README
2025-10-15 15:55:21 +00:00
Jake Jackson
72a42f23d5 Remove 'UI' from PR template component options (#16123)
Removed 'UI' from the list of component names in the PR template.
2025-10-13 10:23:56 -04:00
Jake Jackson
309724b12b Add SonarQube Coverage and report generation (#16112)
* added sonar config file and started cleaning up
* we do not place the report at the root of the repo
* limit scope to only the awx directory and its contents
* update exclusions for things in awx/ that we don't want covered
2025-10-10 10:44:35 -04:00
Seth Foster
300605ff73 Make subscriptions credentials mutually exclusive (#16126)
settings.SUBSCRIPTIONS_USERNAME and
settings.SUBSCRIPTIONS_CLIENT_ID

should be mutually exclusive. This is because
the POST to api/v2/config/attach/ accepts only
a subscription_id, and infers which credentials to
use based on settings. If both are set, it is ambiguous
and can lead to unexpected 400s when attempting
to attach a license.

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2025-10-09 16:57:58 -04:00
Jake Jackson
77fab1c534 Update dependabot to check python deps (#16127)
* add list item to check python deps daily
* move to weekly after we gain confidence
2025-10-09 19:40:34 +00:00
Chris Meyers
51b2524b25 Gracefully handle hostname change in metrics code
* Previously, we would error out because we assumed that any metrics
  payload we got from redis contained data and was for the current host.
* Now, we do not assume that a received metrics payload is well formed and
  for the current hostname, because the hostname could have changed and we
  may not yet have collected metrics for the new host.
2025-10-09 14:08:01 -04:00
Chris Coutinho
612e8e7688 Fix duplicate metrics in AWX subsystem_metrics (#15964)
Separate out operation subsystem metrics to fix duplicate error

Remove unnecessary comments

Revert to single subsystem_metrics_* metric with labels

Format via black
2025-10-09 10:28:55 +02:00
Andrea Restle-Lay
0d18308112 [AAP-46830]: Fix AWX CLI authentication with AAP Gateway environments (#16113)
* migrate pr #16081 to new fork

* add test coverage - add clearer error messaging

* update tests to use monkeypatch
2025-10-02 10:06:10 -04:00
Chris Meyers
f51af03424 Create system_administrator rbac role in migration
* We had race conditions with the system_administrator role being
  created just-in-time. Instead of fixing the race condition(s), dodge
  them by ensuring the role always exists
2025-10-02 08:25:40 -04:00
Alan Rominger
622f6ea166 AAP-53980 Disconnect logic to fill in role parents (#15462)
* Disconnect logic to fill in role parents

Get tests passing hopefully

Whatever SonarCloud

* remove role parents/children endpoints and related views

* remove duplicate get_queryset method from RoleTeamsList

---------

Co-authored-by: Peter Braun <pbraun@redhat.com>
2025-10-02 13:06:37 +02:00
Seth Foster
2729076f7f Add basic auth to subscription management API (#16103)
Allow users to do subscription management using
Red Hat username and password.

In basic auth case, the candlepin API
at subscriptions.rhsm.redhat.com will be used instead
of console.redhat.com.

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2025-10-02 01:06:47 -04:00
Alan Rominger
6db08bfa4e Rewrite the s3 upload step to fix breakage with new Ansible version (#16111)
* Rewrite the s3 upload step to fix breakage with new Ansible version

* Use commit hash for security

* Add the public read flag
2025-09-30 15:47:54 -04:00
Stevenson Michel
ceed41d352 Sharing Credentials Across Organizations (#16106)
* Added tests for cross org sharing of credentials

* added negative testing for sharing of credentials

* added conditions and tests for roleteamslist regarding cross org credentials

* removed redundant code

* made the error message more articulate and specific
2025-09-30 10:44:27 -04:00
jessicamack
98697a8ce7 Fix Grafana notification bug (#16104)
* accept empty string for dashboard and panel IDs

* update grafana tests and add new one
2025-09-29 10:04:58 -04:00
348 changed files with 7253 additions and 4617 deletions

View File

@@ -1,3 +1,3 @@
# Community Code of Conduct
Please see the official [Ansible Community Code of Conduct](https://docs.ansible.com/ansible/latest/community/code_of_conduct.html).
Please see the official [Ansible Community Code of Conduct](https://docs.ansible.com/projects/ansible/latest/community/code_of_conduct.html).

View File

@@ -13,7 +13,7 @@ body:
attributes:
label: Please confirm the following
options:
- label: I agree to follow this project's [code of conduct](https://docs.ansible.com/ansible/latest/community/code_of_conduct.html).
- label: I agree to follow this project's [code of conduct](https://docs.ansible.com/projects/ansible/latest/community/code_of_conduct.html).
required: true
- label: I have checked the [current issues](https://github.com/ansible/awx/issues) for duplicates.
required: true

View File

@@ -5,7 +5,7 @@ contact_links:
url: https://github.com/ansible/awx#get-involved
about: For general debugging or technical support please see the Get Involved section of our readme.
- name: 📝 Ansible Code of Conduct
url: https://docs.ansible.com/ansible/latest/community/code_of_conduct.html?utm_medium=github&utm_source=issue_template_chooser
url: https://docs.ansible.com/projects/ansible/latest/community/code_of_conduct.html?utm_medium=github&utm_source=issue_template_chooser
about: AWX uses the Ansible Code of Conduct; ❤ Be nice to other members of the community. ☮ Behave.
- name: 💼 For Enterprise
url: https://www.ansible.com/products/engine?utm_medium=github&utm_source=issue_template_chooser

View File

@@ -13,7 +13,7 @@ body:
attributes:
label: Please confirm the following
options:
- label: I agree to follow this project's [code of conduct](https://docs.ansible.com/ansible/latest/community/code_of_conduct.html).
- label: I agree to follow this project's [code of conduct](https://docs.ansible.com/projects/ansible/latest/community/code_of_conduct.html).
required: true
- label: I have checked the [current issues](https://github.com/ansible/awx/issues) for duplicates.
required: true

View File

@@ -17,7 +17,6 @@ in as the first entry for your PR title.
##### COMPONENT NAME
<!--- Name of the module/plugin/module/task -->
- API
- UI
- Collection
- CLI
- Docs

View File

@@ -11,8 +11,6 @@ inputs:
runs:
using: composite
steps:
- uses: ./.github/actions/setup-python
- name: Set lower case owner name
shell: bash
run: echo "OWNER_LC=${OWNER,,}" >> $GITHUB_ENV

View File

@@ -36,7 +36,7 @@ runs:
- name: Upgrade ansible-core
shell: bash
run: python3 -m pip install --upgrade ansible-core
run: python -m pip install --upgrade ansible-core
- name: Install system deps
shell: bash

View File

@@ -8,3 +8,10 @@ updates:
labels:
- "docs"
- "dependencies"
- package-ecosystem: "pip"
directory: "requirements/"
schedule:
interval: "daily" #run daily until we trust it, then back this off to weekly
open-pull-requests-limit: 2
labels:
- "dependencies"

View File

@@ -70,10 +70,10 @@ Thank you for your submission and for supporting AWX!
- Hello, we'd love to help, but we need a little more information about the problem you're having. Screenshots, log outputs, or any reproducers would be very helpful.
### Code of Conduct
- Hello. Please keep in mind that Ansible adheres to a Code of Conduct in its community spaces. The spirit of the code of conduct is to be kind, and this is your friendly reminder to be so. Please see the full code of conduct here if you have questions: https://docs.ansible.com/ansible/latest/community/code_of_conduct.html
- Hello. Please keep in mind that Ansible adheres to a Code of Conduct in its community spaces. The spirit of the code of conduct is to be kind, and this is your friendly reminder to be so. Please see the full code of conduct here if you have questions: https://docs.ansible.com/projects/ansible/latest/community/code_of_conduct.html
### EE Contents / Community General
- Hello. The awx-ee contains the collections and dependencies needed for supported AWX features to function. Anything beyond that (like the community.general package) will require you to build your own EE. For information on how to do that, see https://ansible-builder.readthedocs.io/en/stable/ \
- Hello. The awx-ee contains the collections and dependencies needed for supported AWX features to function. Anything beyond that (like the community.general package) will require you to build your own EE. For information on how to do that, see https://docs.ansible.com/projects/builder/en/stable/ \
\
The Ansible Community is looking at building an EE that corresponds to all of the collections inside the ansible package. That may help you if and when it happens; see https://github.com/ansible-community/community-topics/issues/31 for details.
@@ -88,7 +88,7 @@ The Ansible Community is looking at building an EE that corresponds to all of th
- Hello, we think your idea is good! Please consider contributing a PR for this following our contributing guidelines: https://github.com/ansible/awx/blob/devel/CONTRIBUTING.md
### Receptor
- You can find the receptor docs here: https://receptor.readthedocs.io/en/latest/
- You can find the receptor docs here: https://docs.ansible.com/projects/receptor/en/latest/
- Hello, your issue seems related to receptor. Could you please open an issue in the receptor repository? https://github.com/ansible/receptor. Thanks!
### Ansible Engine not AWX

72
.github/workflows/api_schema_check.yml vendored Normal file
View File

@@ -0,0 +1,72 @@
---
name: API Schema Change Detection
env:
LC_ALL: "C.UTF-8" # prevent ERROR: Ansible could not initialize the preferred locale: unsupported locale setting
CI_GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
DEV_DOCKER_OWNER: ${{ github.repository_owner }}
COMPOSE_TAG: ${{ github.base_ref || 'devel' }}
UPSTREAM_REPOSITORY_ID: 91594105
on:
pull_request:
branches:
- devel
- release_**
- feature_**
- stable-**
jobs:
api-schema-detection:
name: Detect API Schema Changes
runs-on: ubuntu-latest
timeout-minutes: 30
permissions:
packages: write
contents: read
steps:
- uses: actions/checkout@v4
with:
show-progress: false
fetch-depth: 0
- name: Build awx_devel image for schema check
uses: ./.github/actions/awx_devel_image
with:
github-token: ${{ secrets.GITHUB_TOKEN }}
private-github-key: ${{ secrets.PRIVATE_GITHUB_KEY }}
- name: Detect API schema changes
id: schema-check
continue-on-error: true
run: |
AWX_DOCKER_ARGS='-e GITHUB_ACTIONS' \
AWX_DOCKER_CMD='make detect-schema-change SCHEMA_DIFF_BASE_BRANCH=${{ github.event.pull_request.base.ref }}' \
make docker-runner 2>&1 | tee schema-diff.txt
exit ${PIPESTATUS[0]}
- name: Add schema diff to job summary
if: always()
# show text and if for some reason, it can't be generated, state that it can't be.
run: |
echo "## API Schema Change Detection Results" >> $GITHUB_STEP_SUMMARY
echo "" >> $GITHUB_STEP_SUMMARY
if [ -f schema-diff.txt ]; then
if grep -q "^+" schema-diff.txt || grep -q "^-" schema-diff.txt; then
echo "### Schema changes detected" >> $GITHUB_STEP_SUMMARY
echo "" >> $GITHUB_STEP_SUMMARY
# Truncate to first 1000 lines to stay under GitHub's 1MB summary limit
TOTAL_LINES=$(wc -l < schema-diff.txt)
if [ $TOTAL_LINES -gt 1000 ]; then
echo "_Showing first 1000 of ${TOTAL_LINES} lines. See job logs or download artifact for full diff._" >> $GITHUB_STEP_SUMMARY
echo "" >> $GITHUB_STEP_SUMMARY
fi
echo '```diff' >> $GITHUB_STEP_SUMMARY
head -n 1000 schema-diff.txt >> $GITHUB_STEP_SUMMARY
echo '```' >> $GITHUB_STEP_SUMMARY
else
echo "### No schema changes detected" >> $GITHUB_STEP_SUMMARY
fi
else
echo "### Unable to generate schema diff" >> $GITHUB_STEP_SUMMARY
fi

View File

@@ -32,18 +32,9 @@ jobs:
- name: api-lint
command: /var/lib/awx/venv/awx/bin/tox -e linters
coverage-upload-name: ""
- name: api-swagger
command: /start_tests.sh swagger
coverage-upload-name: ""
- name: awx-collection
command: /start_tests.sh test_collection_all
coverage-upload-name: "awx-collection"
- name: api-schema
command: >-
/start_tests.sh detect-schema-change SCHEMA_DIFF_BASE_BRANCH=${{
github.event.pull_request.base.ref || github.ref_name
}}
coverage-upload-name: ""
steps:
- uses: actions/checkout@v4
@@ -63,6 +54,17 @@ jobs:
AWX_DOCKER_CMD='${{ matrix.tests.command }}'
make docker-runner
- name: Inject PR number into coverage.xml
if: >-
!cancelled()
&& github.event_name == 'pull_request'
&& steps.make-run.outputs.cov-report-files != ''
run: |
if [ -f "reports/coverage.xml" ]; then
sed -i '2i<!-- PR ${{ github.event.pull_request.number }} -->' reports/coverage.xml
echo "Injected PR number ${{ github.event.pull_request.number }} into coverage.xml"
fi
- name: Upload test coverage to Codecov
if: >-
!cancelled()
@@ -102,6 +104,14 @@ jobs:
}}
token: ${{ secrets.CODECOV_TOKEN }}
- name: Upload test artifacts
if: always()
uses: actions/upload-artifact@v4
with:
name: ${{ matrix.tests.name }}-artifacts
path: reports/coverage.xml
retention-days: 5
- name: Upload awx jUnit test reports
if: >-
!cancelled()
@@ -132,7 +142,7 @@ jobs:
- uses: ./.github/actions/setup-python
with:
python-version: '3.x'
python-version: '3.13'
- uses: ./.github/actions/run_awx_devel
id: awx
@@ -173,14 +183,19 @@ jobs:
path: awx-operator
- name: Setup python, referencing action at awx relative path
uses: ./awx/.github/actions/setup-python
uses: actions/setup-python@a26af69be951a213d495a4c3e4e4022e16d87065
with:
python-version: '3.x'
python-version: '3.12'
- name: Install playbook dependencies
run: |
python3 -m pip install docker
python -m pip install docker
- name: Check Python version
working-directory: awx
run: |
make print-PYTHON
- name: Build AWX image
working-directory: awx
run: |
@@ -192,27 +207,59 @@ jobs:
- name: Run test deployment with awx-operator
working-directory: awx-operator
id: awx_operator_test
timeout-minutes: 60
continue-on-error: true
run: |
python3 -m pip install -r molecule/requirements.txt
python3 -m pip install PyYAML # for awx/tools/scripts/rewrite-awx-operator-requirements.py
$(realpath ../awx/tools/scripts/rewrite-awx-operator-requirements.py) molecule/requirements.yml $(realpath ../awx)
ansible-galaxy collection install -r molecule/requirements.yml
sudo rm -f $(which kustomize)
make kustomize
KUSTOMIZE_PATH=$(readlink -f bin/kustomize) molecule -v test -s kind -- --skip-tags=replicas
set +e
timeout 15m bash -elc '
python -m pip install -r molecule/requirements.txt
python -m pip install PyYAML # for awx/tools/scripts/rewrite-awx-operator-requirements.py
$(realpath ../awx/tools/scripts/rewrite-awx-operator-requirements.py) molecule/requirements.yml $(realpath ../awx)
ansible-galaxy collection install -r molecule/requirements.yml
sudo rm -f $(which kustomize)
make kustomize
KUSTOMIZE_PATH=$(readlink -f bin/kustomize) molecule -v test -s kind -- --skip-tags=replicas
'
rc=$?
if [ $rc -eq 124 ]; then
echo "timed_out=true" >> "$GITHUB_OUTPUT"
fi
exit $rc
env:
AWX_TEST_IMAGE: local/awx
AWX_TEST_VERSION: ci
AWX_EE_TEST_IMAGE: quay.io/ansible/awx-ee:latest
STORE_DEBUG_OUTPUT: true
- name: Collect awx-operator logs on timeout
# Only run on timeout; normal failures should use molecule's built-in log collection.
if: steps.awx_operator_test.outputs.timed_out == 'true'
run: |
mkdir -p "$DEBUG_OUTPUT_DIR"
if command -v kind >/dev/null 2>&1; then
for cluster in $(kind get clusters 2>/dev/null); do
kind export logs "$DEBUG_OUTPUT_DIR/$cluster" --name "$cluster" || true
done
fi
if command -v kubectl >/dev/null 2>&1; then
kubectl get all -A -o wide > "$DEBUG_OUTPUT_DIR/kubectl-get-all.txt" || true
kubectl get pods -A -o wide > "$DEBUG_OUTPUT_DIR/kubectl-get-pods.txt" || true
kubectl describe pods -A > "$DEBUG_OUTPUT_DIR/kubectl-describe-pods.txt" || true
fi
docker ps -a > "$DEBUG_OUTPUT_DIR/docker-ps.txt" || true
- name: Upload debug output
if: failure()
if: always()
uses: actions/upload-artifact@v4
with:
name: awx-operator-debug-output
path: ${{ env.DEBUG_OUTPUT_DIR }}
- name: Fail awx-operator check if test deployment failed
if: steps.awx_operator_test.outcome != 'success'
run: exit 1
collection-sanity:
name: awx_collection sanity
runs-on: ubuntu-latest
@@ -281,7 +328,11 @@ jobs:
- uses: ./.github/actions/setup-python
with:
python-version: '3.x'
python-version: '3.13'
- name: Remove system ansible to avoid conflicts
run: |
python -m pip uninstall -y ansible ansible-core || true
- uses: ./.github/actions/run_awx_devel
id: awx
@@ -292,8 +343,9 @@ jobs:
- name: Install dependencies for running tests
run: |
python3 -m pip install -e ./awxkit/
python3 -m pip install -r awx_collection/requirements.txt
python -m pip install -e ./awxkit/
python -m pip install -r awx_collection/requirements.txt
hash -r # Rehash to pick up newly installed scripts
- name: Run integration tests
id: make-run
@@ -305,6 +357,7 @@ jobs:
echo 'password = password' >> ~/.tower_cli.cfg
echo 'verify_ssl = false' >> ~/.tower_cli.cfg
TARGETS="$(ls awx_collection/tests/integration/targets | grep '${{ matrix.target-regex.regex }}' | tr '\n' ' ')"
export PYTHONPATH="$(python -c 'import site; print(":".join(site.getsitepackages()))')${PYTHONPATH:+:$PYTHONPATH}"
make COLLECTION_VERSION=100.100.100-git COLLECTION_TEST_TARGET="--requirements $TARGETS" test_collection_integration
env:
ANSIBLE_TEST_PREFER_PODMAN: 1
@@ -359,10 +412,14 @@ jobs:
- uses: ./.github/actions/setup-python
with:
python-version: '3.x'
python-version: '3.13'
- name: Remove system ansible to avoid conflicts
run: |
python -m pip uninstall -y ansible ansible-core || true
- name: Upgrade ansible-core
run: python3 -m pip install --upgrade ansible-core
run: python -m pip install --upgrade ansible-core
- name: Download coverage artifacts
uses: actions/download-artifact@v4
@@ -377,11 +434,12 @@ jobs:
mkdir -p ~/.ansible/collections/ansible_collections/awx/awx/tests/output/coverage
cp -rv coverage/* ~/.ansible/collections/ansible_collections/awx/awx/tests/output/coverage/
cd ~/.ansible/collections/ansible_collections/awx/awx
ansible-test coverage combine --requirements
ansible-test coverage html
hash -r # Rehash to pick up newly installed scripts
PATH="$(python -c 'import sys; import os; print(os.path.dirname(sys.executable))'):$PATH" ansible-test coverage combine --requirements
PATH="$(python -c 'import sys; import os; print(os.path.dirname(sys.executable))'):$PATH" ansible-test coverage html
echo '## AWX Collection Integration Coverage' >> $GITHUB_STEP_SUMMARY
echo '```' >> $GITHUB_STEP_SUMMARY
ansible-test coverage report >> $GITHUB_STEP_SUMMARY
PATH="$(python -c 'import sys; import os; print(os.path.dirname(sys.executable))'):$PATH" ansible-test coverage report >> $GITHUB_STEP_SUMMARY
echo '```' >> $GITHUB_STEP_SUMMARY
echo >> $GITHUB_STEP_SUMMARY
echo '## AWX Collection Integration Coverage HTML' >> $GITHUB_STEP_SUMMARY

View File

@@ -20,4 +20,4 @@ jobs:
run: |
ansible localhost -c local, -m command -a "{{ ansible_python_interpreter + ' -m pip install boto3'}}"
ansible localhost -c local -m aws_s3 \
-a "bucket=awx-public-ci-files object=${GITHUB_REF##*/}/schema.json mode=delobj permission=public-read"
-a "bucket=awx-public-ci-files object=${{ github.event.repository.name }}/${GITHUB_REF##*/}/schema.json mode=delobj permission=public-read"

248
.github/workflows/sonarcloud_pr.yml vendored Normal file
View File

@@ -0,0 +1,248 @@
# SonarCloud Analysis Workflow for awx
#
# This workflow runs SonarCloud analysis triggered by CI workflow completion.
# It is split into two separate jobs for clarity and maintainability:
#
# FLOW: CI completes → workflow_run triggers this workflow → appropriate job runs
#
# JOB 1: sonar-pr-analysis (for PRs)
# - Triggered by: workflow_run (CI on pull_request)
# - Steps: Download coverage → Get PR info → Get changed files → Run SonarCloud PR analysis
# - Scans: All changed files in the PR (Python, YAML, JSON, etc.)
# - Quality gate: Focuses on new/changed code in PR only
#
# JOB 2: sonar-branch-analysis (for long-lived branches)
# - Triggered by: workflow_run (CI on push to devel)
# - Steps: Download coverage → Run SonarCloud branch analysis
# - Scans: Full codebase
# - Quality gate: Focuses on overall project health
#
# This ensures coverage data is always available from CI before analysis runs.
#
# What files are scanned:
# - All files in the repository that SonarCloud can analyze
# - Excludes: tests, scripts, dev environments, external collections (see sonar-project.properties)
# With much help from:
# https://community.sonarsource.com/t/how-to-use-sonarcloud-with-a-forked-repository-on-github/7363/30
# https://community.sonarsource.com/t/how-to-use-sonarcloud-with-a-forked-repository-on-github/7363/32
name: SonarCloud
on:
workflow_run: # This is triggered by CI being completed.
workflows:
- CI
types:
- completed
permissions: read-all
jobs:
sonar-pr-analysis:
name: SonarCloud PR Analysis
runs-on: ubuntu-latest
if: |
github.event.workflow_run.conclusion == 'success' &&
github.event.workflow_run.event == 'pull_request' &&
github.repository == 'ansible/awx'
steps:
- uses: actions/checkout@v4
# Download all individual coverage artifacts from CI workflow
- name: Download coverage artifacts
uses: dawidd6/action-download-artifact@246dbf436b23d7c49e21a7ab8204ca9ecd1fe615
with:
github_token: ${{ secrets.GITHUB_TOKEN }}
workflow: CI
run_id: ${{ github.event.workflow_run.id }}
pattern: api-test-artifacts
# Extract PR metadata from workflow_run event
- name: Set PR metadata and prepare files for analysis
env:
COMMIT_SHA: ${{ github.event.workflow_run.head_sha }}
REPO_NAME: ${{ github.event.repository.full_name }}
HEAD_BRANCH: ${{ github.event.workflow_run.head_branch }}
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
run: |
# Find all downloaded coverage XML files
coverage_files=$(find . -name "coverage.xml" -type f | tr '\n' ',' | sed 's/,$//')
echo "Found coverage files: $coverage_files"
echo "COVERAGE_PATHS=$coverage_files" >> $GITHUB_ENV
# Extract PR number from first coverage.xml file found
first_coverage=$(find . -name "coverage.xml" -type f | head -1)
if [ -f "$first_coverage" ]; then
PR_NUMBER=$(grep -m 1 '<!-- PR' "$first_coverage" | awk '{print $3}' || echo "")
else
PR_NUMBER=""
fi
echo "🔍 SonarCloud Analysis Decision Summary"
echo "========================================"
echo "├── CI Event: ✅ Pull Request"
echo "├── PR Number from coverage.xml: #${PR_NUMBER:-<not found>}"
if [ -z "$PR_NUMBER" ]; then
echo "##[error]❌ FATAL: PR number not found in coverage.xml"
echo "##[error]This job requires a PR number to run PR analysis."
echo "##[error]The ci workflow should have injected the PR number into coverage.xml."
exit 1
fi
# Get PR metadata from GitHub API
PR_DATA=$(gh api "repos/$REPO_NAME/pulls/$PR_NUMBER")
PR_BASE=$(echo "$PR_DATA" | jq -r '.base.ref')
PR_HEAD=$(echo "$PR_DATA" | jq -r '.head.ref')
# Print summary
echo "🔍 SonarCloud Analysis Decision Summary"
echo "========================================"
echo "├── CI Event: ✅ Pull Request"
echo "├── PR Number: #$PR_NUMBER"
echo "├── Base Branch: $PR_BASE"
echo "├── Head Branch: $PR_HEAD"
echo "├── Repo: $REPO_NAME"
# Export to GitHub env for later steps
echo "PR_NUMBER=$PR_NUMBER" >> $GITHUB_ENV
echo "PR_BASE=$PR_BASE" >> $GITHUB_ENV
echo "PR_HEAD=$PR_HEAD" >> $GITHUB_ENV
echo "COMMIT_SHA=$COMMIT_SHA" >> $GITHUB_ENV
echo "REPO_NAME=$REPO_NAME" >> $GITHUB_ENV
# Get all changed files from PR (with error handling)
files=""
if [ -n "$PR_NUMBER" ]; then
if gh api repos/$REPO_NAME/pulls/$PR_NUMBER/files --jq '.[].filename' > /tmp/pr_files.txt 2>/tmp/pr_error.txt; then
files=$(cat /tmp/pr_files.txt)
else
echo "├── Changed Files: ⚠️ Could not fetch (likely test repo or PR not found)"
if [ -f coverage.xml ] && [ -s coverage.xml ]; then
echo "├── Coverage Data: ✅ Available"
else
echo "├── Coverage Data: ⚠️ Not available"
fi
echo "└── Result: ✅ Running SonarCloud analysis (full scan)"
# No files = no inclusions filter = full scan
exit 0
fi
else
echo "├── PR Number: ⚠️ Not available"
if [ -f coverage.xml ] && [ -s coverage.xml ]; then
echo "├── Coverage Data: ✅ Available"
else
echo "├── Coverage Data: ⚠️ Not available"
fi
echo "└── Result: ✅ Running SonarCloud analysis (full scan)"
exit 0
fi
# Get file extensions and count for summary
extensions=$(echo "$files" | sed 's/.*\.//' | sort | uniq | tr '\n' ',' | sed 's/,$//')
file_count=$(echo "$files" | wc -l)
echo "├── Changed Files: $file_count file(s) (.${extensions})"
# Check if coverage.xml exists and has content
if [ -f coverage.xml ] && [ -s coverage.xml ]; then
echo "├── Coverage Data: ✅ Available"
else
echo "├── Coverage Data: ⚠️ Not available (analysis will proceed without coverage)"
fi
# Prepare file list for Sonar
echo "All changed files in PR:"
echo "$files"
# Filter out files that are excluded by .coveragerc to avoid coverage conflicts
# This prevents SonarCloud from analyzing files that have no coverage data
if [ -n "$files" ]; then
# Filter out files matching .coveragerc omit patterns
filtered_files=$(echo "$files" | grep -v "settings/.*_defaults\.py$" | grep -v "settings/defaults\.py$" | grep -v "main/migrations/")
# Show which files were filtered out for transparency
excluded_files=$(echo "$files" | grep -E "(settings/.*_defaults\.py$|settings/defaults\.py$|main/migrations/)" || true)
if [ -n "$excluded_files" ]; then
echo "├── Filtered out (coverage-excluded): $(echo "$excluded_files" | wc -l) file(s)"
echo "$excluded_files" | sed 's/^/│ - /'
fi
if [ -n "$filtered_files" ]; then
inclusions=$(echo "$filtered_files" | tr '\n' ',' | sed 's/,$//')
echo "SONAR_INCLUSIONS=$inclusions" >> $GITHUB_ENV
echo "└── Result: ✅ Will scan these files (excluding coverage-omitted files): $inclusions"
else
echo "└── Result: ✅ All changed files are excluded by coverage config, running full SonarCloud analysis"
# Don't set SONAR_INCLUSIONS, let it scan everything per sonar-project.properties
fi
else
echo "└── Result: ✅ Running SonarCloud analysis"
fi
- name: Add base branch
if: env.PR_NUMBER != ''
run: |
gh pr checkout ${{ env.PR_NUMBER }}
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
- name: SonarCloud Scan
uses: SonarSource/sonarqube-scan-action@fd88b7d7ccbaefd23d8f36f73b59db7a3d246602 # v6
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
SONAR_TOKEN: ${{ secrets.CICD_ORG_SONAR_TOKEN_CICD_BOT }}
with:
args: >
-Dsonar.scm.revision=${{ env.COMMIT_SHA }}
-Dsonar.pullrequest.key=${{ env.PR_NUMBER }}
-Dsonar.pullrequest.branch=${{ env.PR_HEAD }}
-Dsonar.pullrequest.base=${{ env.PR_BASE }}
-Dsonar.python.coverage.reportPaths=${{ env.COVERAGE_PATHS }}
${{ env.SONAR_INCLUSIONS && format('-Dsonar.inclusions={0}', env.SONAR_INCLUSIONS) || '' }}
sonar-branch-analysis:
name: SonarCloud Branch Analysis
runs-on: ubuntu-latest
if: |
github.event_name == 'workflow_run' &&
github.event.workflow_run.conclusion == 'success' &&
github.event.workflow_run.event == 'push' &&
github.repository == 'ansible/awx'
steps:
- uses: actions/checkout@v4
# Download all individual coverage artifacts from CI workflow (optional for branch pushes)
- name: Download coverage artifacts
continue-on-error: true
uses: dawidd6/action-download-artifact@246dbf436b23d7c49e21a7ab8204ca9ecd1fe615
with:
github_token: ${{ secrets.GITHUB_TOKEN }}
workflow: CI
run_id: ${{ github.event.workflow_run.id }}
pattern: api-test-artifacts
- name: Print SonarCloud Analysis Summary
env:
BRANCH_NAME: ${{ github.event.workflow_run.head_branch }}
run: |
# Find all downloaded coverage XML files
coverage_files=$(find . -name "coverage.xml" -type f | tr '\n' ',' | sed 's/,$//')
echo "Found coverage files: $coverage_files"
echo "COVERAGE_PATHS=$coverage_files" >> $GITHUB_ENV
echo "🔍 SonarCloud Analysis Summary"
echo "=============================="
echo "├── CI Event: ✅ Push (via workflow_run)"
echo "├── Branch: $BRANCH_NAME"
echo "├── Coverage Files: ${coverage_files:-none}"
echo "├── Python Changes: N/A (Full codebase scan)"
echo "└── Result: ✅ Proceed - \"Running SonarCloud analysis\""
- name: SonarCloud Scan
uses: SonarSource/sonarqube-scan-action@fd88b7d7ccbaefd23d8f36f73b59db7a3d246602 # v6
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
SONAR_TOKEN: ${{ secrets.CICD_ORG_SONAR_TOKEN_CICD_BOT }}
with:
args: >
-Dsonar.scm.revision=${{ github.event.workflow_run.head_sha }}
-Dsonar.branch.name=${{ github.event.workflow_run.head_branch }}
${{ env.COVERAGE_PATHS && format('-Dsonar.python.coverage.reportPaths={0}', env.COVERAGE_PATHS) || '' }}
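Aside on the PR-number extraction in the sonar-pr-analysis job above: it expects the CI workflow to have embedded the PR number as an HTML comment near the top of coverage.xml, and pulls the third whitespace-separated field out of that comment line. A minimal Python sketch of the same extraction, assuming a comment of the form '<!-- PR 12345 -->' (the exact format is inferred from the grep/awk pipeline, not shown in this diff):

import re
from pathlib import Path

def pr_number_from_coverage(path: str) -> str | None:
    # Mirrors: grep -m 1 '<!-- PR' coverage.xml | awk '{print $3}'
    for line in Path(path).read_text().splitlines():
        match = re.search(r'<!--\s*PR\s+(\d+)', line)
        if match:
            return match.group(1)  # e.g. '12345'
    return None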


@@ -38,11 +38,12 @@ jobs:
--workdir=/awx_devel `make print-DEVEL_IMAGE_NAME` /start_tests.sh genschema
- name: Upload API Schema
env:
AWS_ACCESS_KEY: ${{ secrets.AWS_ACCESS_KEY }}
AWS_SECRET_KEY: ${{ secrets.AWS_SECRET_KEY }}
AWS_REGION: 'us-east-1'
run: |
ansible localhost -c local, -m command -a "{{ ansible_python_interpreter + ' -m pip install boto3'}}"
ansible localhost -c local -m aws_s3 \
-a "src=${{ github.workspace }}/schema.json bucket=awx-public-ci-files object=${GITHUB_REF##*/}/schema.json mode=put permission=public-read"
uses: keithweaver/aws-s3-github-action@4dd5a7b81d54abaa23bbac92b27e85d7f405ae53
with:
command: cp
source: ${{ github.workspace }}/schema.json
destination: s3://awx-public-ci-files/${{ github.event.repository.name }}/${{ github.ref_name }}/schema.json
aws_access_key_id: ${{ secrets.AWS_ACCESS_KEY }}
aws_secret_access_key: ${{ secrets.AWS_SECRET_KEY }}
aws_region: us-east-1
flags: --acl public-read --only-show-errors

.gitignore (1 line changed)

@@ -1,6 +1,7 @@
# Ignore generated schema
swagger.json
schema.json
schema.yaml
reference-schema.json
# Tags


@@ -7,7 +7,7 @@ build:
os: ubuntu-22.04
tools:
python: >-
3.11
3.12
commands:
- pip install --user tox
- python3 -m tox -e docs --notest -v


@@ -31,7 +31,7 @@ Have questions about this document or anything not covered here? Create a topic
- Take care to make sure no merge commits are in the submission, and use `git rebase` vs `git merge` for this reason.
- If collaborating with someone else on the same branch, consider using `--force-with-lease` instead of `--force`. This will prevent you from accidentally overwriting commits pushed by someone else. For more information, see [git push docs](https://git-scm.com/docs/git-push#git-push---force-with-leaseltrefnamegt).
- If submitting a large code change, it's a good idea to create a [forum topic tagged with 'awx'](https://forum.ansible.com/tag/awx), and talk about what you would like to do or add first. This not only helps everyone know what's going on, it also helps save time and effort, if the community decides some changes are needed.
- We ask all of our community members and contributors to adhere to the [Ansible code of conduct](http://docs.ansible.com/ansible/latest/community/code_of_conduct.html). If you have questions, or need assistance, please reach out to our community team at [codeofconduct@ansible.com](mailto:codeofconduct@ansible.com)
- We ask all of our community members and contributors to adhere to the [Ansible code of conduct](https://docs.ansible.com/projects/ansible/latest/community/code_of_conduct.html). If you have questions, or need assistance, please reach out to our community team at [codeofconduct@ansible.com](mailto:codeofconduct@ansible.com)
## Setting up your development environment


@@ -1,6 +1,6 @@
-include awx/ui/Makefile
PYTHON := $(notdir $(shell for i in python3.11 python3; do command -v $$i; done|sed 1q))
PYTHON := $(notdir $(shell for i in python3.12 python3.11 python3; do command -v $$i; done|sed 1q))
SHELL := bash
DOCKER_COMPOSE ?= docker compose
OFFICIAL ?= no
@@ -27,6 +27,8 @@ TEST_DIRS ?= awx/main/tests/unit awx/main/tests/functional awx/conf/tests
PARALLEL_TESTS ?= -n auto
# collection integration test directories (defaults to all)
COLLECTION_TEST_TARGET ?=
# Python version for ansible-test (must be 3.11, 3.12, or 3.13)
ANSIBLE_TEST_PYTHON_VERSION ?= 3.13
# args for collection install
COLLECTION_PACKAGE ?= awx
COLLECTION_NAMESPACE ?= awx
@@ -77,7 +79,7 @@ RECEPTOR_IMAGE ?= quay.io/ansible/receptor:devel
SRC_ONLY_PKGS ?= cffi,pycparser,psycopg,twilio
# These should be upgraded in the AWX and Ansible venv before attempting
# to install the actual requirements
VENV_BOOTSTRAP ?= pip==21.2.4 setuptools==80.9.0 setuptools_scm[toml]==8.0.4 wheel==0.42.0 cython==3.1.3
VENV_BOOTSTRAP ?= pip==25.3 setuptools==80.9.0 setuptools_scm[toml]==9.2.2 wheel==0.45.1 cython==3.1.3
NAME ?= awx
@@ -105,6 +107,8 @@ else
endif
.PHONY: awx-link clean clean-tmp clean-venv requirements requirements_dev \
update_requirements upgrade_requirements update_requirements_dev \
docker_update_requirements docker_upgrade_requirements docker_update_requirements_dev \
develop refresh adduser migrate dbchange \
receiver test test_unit test_coverage coverage_html \
sdist \
@@ -144,7 +148,7 @@ clean-api:
rm -rf build $(NAME)-$(VERSION) *.egg-info
rm -rf .tox
find . -type f -regex ".*\.py[co]$$" -delete
find . -type d -name "__pycache__" -delete
find . -type d -name "__pycache__" -exec rm -rf {} +
rm -f awx/awx_test.sqlite3*
rm -rf requirements/vendor
rm -rf awx/projects
@@ -194,6 +198,36 @@ requirements_dev: requirements_awx requirements_awx_dev
requirements_test: requirements
## Update requirements files using pip-compile (run inside container)
update_requirements:
cd requirements && ./updater.sh run
## Upgrade all requirements to latest versions (run inside container)
upgrade_requirements:
cd requirements && ./updater.sh upgrade
## Update development requirements (run inside container)
update_requirements_dev:
cd requirements && ./updater.sh dev
## Update requirements using docker-runner
docker_update_requirements:
@echo "Running requirements updater..."
AWX_DOCKER_CMD='make update_requirements' $(MAKE) docker-runner
@echo "Requirements update complete!"
## Upgrade requirements using docker-runner
docker_upgrade_requirements:
@echo "Running requirements upgrader..."
AWX_DOCKER_CMD='make upgrade_requirements' $(MAKE) docker-runner
@echo "Requirements upgrade complete!"
## Update dev requirements using docker-runner
docker_update_requirements_dev:
@echo "Running dev requirements updater..."
AWX_DOCKER_CMD='make update_requirements_dev' $(MAKE) docker-runner
@echo "Dev requirements update complete!"
## "Install" awx package in development mode.
develop:
@if [ "$(VIRTUAL_ENV)" ]; then \
@@ -255,7 +289,7 @@ dispatcher:
@if [ "$(VENV_BASE)" ]; then \
. $(VENV_BASE)/awx/bin/activate; \
fi; \
$(PYTHON) manage.py run_dispatcher
$(PYTHON) manage.py dispatcherd
## Run to start the zeromq callback receiver
receiver:
@@ -314,20 +348,17 @@ black: reports
@echo "fi" >> .git/hooks/pre-commit
@chmod +x .git/hooks/pre-commit
genschema: reports
$(MAKE) swagger PYTEST_ADDOPTS="--genschema --create-db "
mv swagger.json schema.json
swagger: reports
genschema: awx-link reports
@if [ "$(VENV_BASE)" ]; then \
. $(VENV_BASE)/awx/bin/activate; \
fi; \
(set -o pipefail && py.test $(COVERAGE_ARGS) $(PARALLEL_TESTS) awx/conf/tests/functional awx/main/tests/functional/api awx/main/tests/docs | tee reports/$@.report)
@if [ "${GITHUB_ACTIONS}" = "true" ]; \
then \
echo 'cov-report-files=reports/coverage.xml' >> "${GITHUB_OUTPUT}"; \
echo 'test-result-files=reports/junit.xml' >> "${GITHUB_OUTPUT}"; \
fi
$(MANAGEMENT_COMMAND) spectacular --format openapi-json --file schema.json
genschema-yaml: awx-link reports
@if [ "$(VENV_BASE)" ]; then \
. $(VENV_BASE)/awx/bin/activate; \
fi; \
$(MANAGEMENT_COMMAND) spectacular --format openapi --file schema.yaml
check: black
@@ -431,8 +462,8 @@ test_collection_sanity:
test_collection_integration: install_collection
cd $(COLLECTION_INSTALL) && \
ansible-test integration --coverage -vvv $(COLLECTION_TEST_TARGET) && \
ansible-test coverage xml --requirements --group-by command --group-by version
PATH="$$($(PYTHON) -c 'import sys; import os; print(os.path.dirname(sys.executable))'):$$PATH" ansible-test integration --python $(ANSIBLE_TEST_PYTHON_VERSION) --coverage -vvv $(COLLECTION_TEST_TARGET) && \
PATH="$$($(PYTHON) -c 'import sys; import os; print(os.path.dirname(sys.executable))'):$$PATH" ansible-test coverage xml --requirements --group-by command --group-by version
@if [ "${GITHUB_ACTIONS}" = "true" ]; \
then \
echo cov-report-files="$$(find "$(COLLECTION_INSTALL)/tests/output/reports/" -type f -name 'coverage=integration*.xml' -print0 | tr '\0' ',' | sed 's#,$$##')" >> "${GITHUB_OUTPUT}"; \
@@ -537,14 +568,16 @@ docker-compose-test: awx/projects docker-compose-sources
docker-compose-runtest: awx/projects docker-compose-sources
$(DOCKER_COMPOSE) -f tools/docker-compose/_sources/docker-compose.yml run --rm --service-ports awx_1 /start_tests.sh
docker-compose-build-swagger: awx/projects docker-compose-sources
$(DOCKER_COMPOSE) -f tools/docker-compose/_sources/docker-compose.yml run --rm --service-ports --no-deps awx_1 /start_tests.sh swagger
docker-compose-build-schema: awx/projects docker-compose-sources
$(DOCKER_COMPOSE) -f tools/docker-compose/_sources/docker-compose.yml run --rm --service-ports --no-deps awx_1 make genschema
SCHEMA_DIFF_BASE_FOLDER ?= awx
SCHEMA_DIFF_BASE_BRANCH ?= devel
detect-schema-change: genschema
curl https://s3.amazonaws.com/awx-public-ci-files/$(SCHEMA_DIFF_BASE_BRANCH)/schema.json -o reference-schema.json
curl https://s3.amazonaws.com/awx-public-ci-files/$(SCHEMA_DIFF_BASE_FOLDER)/$(SCHEMA_DIFF_BASE_BRANCH)/schema.json -o reference-schema.json
# Ignore differences in whitespace with -b
diff -u -b reference-schema.json schema.json
# diff exits with 1 when files differ - capture but don't fail
-diff -u -b reference-schema.json schema.json
docker-compose-clean: awx/projects
$(DOCKER_COMPOSE) -f tools/docker-compose/_sources/docker-compose.yml rm -sf
@@ -577,7 +610,7 @@ docker-compose-build: Dockerfile.dev
docker-compose-buildx: Dockerfile.dev
- docker buildx create --name docker-compose-buildx
docker buildx use docker-compose-buildx
- docker buildx build \
docker buildx build \
--ssh default=$(SSH_AUTH_SOCK) \
--push \
--build-arg BUILDKIT_INLINE_CACHE=1 \
@@ -637,7 +670,7 @@ awx-kube-build: Dockerfile
awx-kube-buildx: Dockerfile
- docker buildx create --name awx-kube-buildx
docker buildx use awx-kube-buildx
- docker buildx build \
docker buildx build \
--ssh default=$(SSH_AUTH_SOCK) \
--push \
--build-arg VERSION=$(VERSION) \
@@ -671,7 +704,7 @@ awx-kube-dev-build: Dockerfile.kube-dev
awx-kube-dev-buildx: Dockerfile.kube-dev
- docker buildx create --name awx-kube-dev-buildx
docker buildx use awx-kube-dev-buildx
- docker buildx build \
docker buildx build \
--ssh default=$(SSH_AUTH_SOCK) \
--push \
--build-arg BUILDKIT_INLINE_CACHE=1 \
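One note on the genschema targets earlier in this Makefile diff: schema generation now goes through drf-spectacular's `spectacular` management command instead of the old pytest/swagger flow. A hedged sketch of the equivalent programmatic call, assuming a configured AWX Django environment (DJANGO_SETTINGS_MODULE exported and pointing at the AWX settings module):

import django
from django.core.management import call_command

django.setup()  # requires DJANGO_SETTINGS_MODULE to be set beforehand
# Same as the genschema target: write the OpenAPI document as JSON
call_command('spectacular', '--format=openapi-json', '--file=schema.json')
# And the genschema-yaml target's YAML variant
call_command('spectacular', '--format=openapi', '--file=schema.yaml')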


@@ -1,4 +1,4 @@
[![CI](https://github.com/ansible/awx/actions/workflows/ci.yml/badge.svg?branch=devel)](https://github.com/ansible/awx/actions/workflows/ci.yml) [![codecov](https://codecov.io/github/ansible/awx/graph/badge.svg?token=4L4GSP9IAR)](https://codecov.io/github/ansible/awx) [![Code of Conduct](https://img.shields.io/badge/code%20of%20conduct-Ansible-yellow.svg)](https://docs.ansible.com/ansible/latest/community/code_of_conduct.html) [![Apache v2 License](https://img.shields.io/badge/license-Apache%202.0-brightgreen.svg)](https://github.com/ansible/awx/blob/devel/LICENSE.md) [![AWX on the Ansible Forum](https://img.shields.io/badge/mailing%20list-AWX-orange.svg)](https://forum.ansible.com/tag/awx)
[![CI](https://github.com/ansible/awx/actions/workflows/ci.yml/badge.svg?branch=devel)](https://github.com/ansible/awx/actions/workflows/ci.yml) [![codecov](https://codecov.io/github/ansible/awx/graph/badge.svg?token=4L4GSP9IAR)](https://codecov.io/github/ansible/awx) [![Code of Conduct](https://img.shields.io/badge/code%20of%20conduct-Ansible-yellow.svg)](https://docs.ansible.com/projects/ansible/latest/community/code_of_conduct.html) [![Apache v2 License](https://img.shields.io/badge/license-Apache%202.0-brightgreen.svg)](https://github.com/ansible/awx/blob/devel/LICENSE.md) [![AWX on the Ansible Forum](https://img.shields.io/badge/mailing%20list-AWX-orange.svg)](https://forum.ansible.com/tag/awx)
[![Ansible Matrix](https://img.shields.io/badge/matrix-Ansible%20Community-blueviolet.svg?logo=matrix)](https://chat.ansible.im/#/welcome) [![Ansible Discourse](https://img.shields.io/badge/discourse-Ansible%20Community-yellowgreen.svg?logo=discourse)](https://forum.ansible.com)
<img src="https://raw.githubusercontent.com/ansible/awx-logos/master/awx/ui/client/assets/logo-login.svg?sanitize=true" width=200 alt="AWX" />
@@ -18,7 +18,7 @@ AWX provides a web-based user interface, REST API, and task engine built on top
To install AWX, please view the [Install guide](./INSTALL.md).
To learn more about using AWX, view the [AWX docs site](https://ansible.readthedocs.io/projects/awx/en/latest/).
To learn more about using AWX, view the [AWX docs site](https://docs.ansible.com/projects/awx/en/latest/).
The AWX Project Frequently Asked Questions can be found [here](https://www.ansible.com/awx-project-faq).
@@ -41,11 +41,11 @@ If you're experiencing a problem that you feel is a bug in AWX or have ideas for
Code of Conduct
---------------
We require all of our community members and contributors to adhere to the [Ansible code of conduct](http://docs.ansible.com/ansible/latest/community/code_of_conduct.html). If you have questions or need assistance, please reach out to our community team at [codeofconduct@ansible.com](mailto:codeofconduct@ansible.com)
We require all of our community members and contributors to adhere to the [Ansible code of conduct](https://docs.ansible.com/projects/ansible/latest/community/code_of_conduct.html). If you have questions or need assistance, please reach out to our community team at [codeofconduct@ansible.com](mailto:codeofconduct@ansible.com)
Get Involved
------------
We welcome your feedback and ideas via the [Ansible Forum](https://forum.ansible.com/tag/awx).
For a full list of all the ways to talk with the Ansible Community, see the [AWX Communication guide](https://ansible.readthedocs.io/projects/awx/en/latest/contributor/communication.html).
For a full list of all the ways to talk with the Ansible Community, see the [AWX Communication guide](https://docs.ansible.com/projects/awx/en/latest/contributor/communication.html).


@@ -7,7 +7,6 @@ from rest_framework import serializers
# AWX
from awx.conf import fields, register, register_validate
register(
'SESSION_COOKIE_AGE',
field_class=fields.IntegerField,


@@ -21,7 +21,7 @@ class NullFieldMixin(object):
"""
def validate_empty_values(self, data):
(is_empty_value, data) = super(NullFieldMixin, self).validate_empty_values(data)
is_empty_value, data = super(NullFieldMixin, self).validate_empty_values(data)
if is_empty_value and data is None:
return (False, data)
return (is_empty_value, data)


@@ -161,16 +161,14 @@ def get_view_description(view, html=False):
def get_default_schema():
if settings.DYNACONF.is_development_mode:
from awx.api.swagger import schema_view
return schema_view
else:
return views.APIView.schema
# drf-spectacular is configured via REST_FRAMEWORK['DEFAULT_SCHEMA_CLASS']
# Just use the DRF default, which will pick up our CustomAutoSchema
return views.APIView.schema
class APIView(views.APIView):
schema = get_default_schema()
# Schema is inherited from DRF's APIView, which uses DEFAULT_SCHEMA_CLASS
# No need to override it here - drf-spectacular will handle it
versioning_class = URLPathVersioning
def initialize_request(self, request, *args, **kwargs):
@@ -766,7 +764,7 @@ class SubListCreateAttachDetachAPIView(SubListCreateAPIView):
return Response(status=status.HTTP_204_NO_CONTENT)
def unattach(self, request, *args, **kwargs):
(sub_id, res) = self.unattach_validate(request)
sub_id, res = self.unattach_validate(request)
if res:
return res
return self.unattach_by_id(request, sub_id)
@@ -1025,6 +1023,9 @@ class GenericCancelView(RetrieveAPIView):
# In subclass set model, serializer_class
obj_permission_type = 'cancel'
def get(self, request, *args, **kwargs):
return super(GenericCancelView, self).get(request, *args, **kwargs)
@transaction.non_atomic_requests
def dispatch(self, *args, **kwargs):
return super(GenericCancelView, self).dispatch(*args, **kwargs)


@@ -5,7 +5,6 @@ from django.urls import re_path
from awx.api.views import MetricsView
urls = [re_path(r'^$', MetricsView.as_view(), name='metrics_view')]
__all__ = ['urls']


@@ -111,7 +111,7 @@ class UnifiedJobEventPagination(Pagination):
def __init__(self, *args, **kwargs):
self.use_limit_paginator = False
self.limit_pagination = LimitPagination()
return super().__init__(*args, **kwargs)
super().__init__(*args, **kwargs)
def paginate_queryset(self, queryset, request, view=None):
if 'limit' in request.query_params:
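The dropped `return` in UnifiedJobEventPagination.__init__ above addresses a linter finding: __init__ must return None, and while `return super().__init__(...)` happens to evaluate to None, the pattern hides the fact that any other return value would fail. A quick illustration:

class Broken:
    def __init__(self):
        return 1  # raises at instantiation: __init__() should return None

try:
    Broken()
except TypeError as exc:
    print(exc)  # __init__() should return None, not 'int'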

awx/api/schema.py (new file, 119 lines)

@@ -0,0 +1,119 @@
import warnings
from rest_framework.permissions import IsAuthenticated
from drf_spectacular.openapi import AutoSchema
from drf_spectacular.views import (
SpectacularAPIView,
SpectacularSwaggerView,
SpectacularRedocView,
)
def filter_credential_type_schema(
result,
generator, # NOSONAR
request, # NOSONAR
public, # NOSONAR
):
"""
Postprocessing hook to filter CredentialType kind enum values.
For CredentialTypeRequest and PatchedCredentialTypeRequest schemas (POST/PUT/PATCH),
filter the 'kind' enum to only show 'cloud' and 'net' values.
This ensures the OpenAPI schema accurately reflects that only 'cloud' and 'net'
credential types can be created or modified via the API, matching the validation
in CredentialTypeSerializer.validate().
Args:
result: The OpenAPI schema dict to be modified
generator, request, public: Required by drf-spectacular interface (unused)
Returns:
The modified OpenAPI schema dict
"""
schemas = result.get('components', {}).get('schemas', {})
# Filter CredentialTypeRequest (POST/PUT) - field is required
if 'CredentialTypeRequest' in schemas:
kind_prop = schemas['CredentialTypeRequest'].get('properties', {}).get('kind', {})
if 'enum' in kind_prop:
# Filter to only cloud and net (no None - field is required)
kind_prop['enum'] = ['cloud', 'net']
kind_prop['description'] = "* `cloud` - Cloud\\n* `net` - Network"
# Filter PatchedCredentialTypeRequest (PATCH) - field is optional
if 'PatchedCredentialTypeRequest' in schemas:
kind_prop = schemas['PatchedCredentialTypeRequest'].get('properties', {}).get('kind', {})
if 'enum' in kind_prop:
# Filter to only cloud and net (None allowed - field can be omitted in PATCH)
kind_prop['enum'] = ['cloud', 'net', None]
kind_prop['description'] = "* `cloud` - Cloud\\n* `net` - Network"
return result
class CustomAutoSchema(AutoSchema):
"""Custom AutoSchema to add swagger_topic to tags and handle deprecated endpoints."""
def get_tags(self):
tags = []
try:
if hasattr(self.view, 'get_serializer'):
serializer = self.view.get_serializer()
else:
serializer = None
except Exception:
serializer = None
warnings.warn(
'{}.get_serializer() raised an exception during '
'schema generation. Serializer fields will not be '
'generated for this view.'.format(self.view.__class__.__name__)
)
if hasattr(self.view, 'swagger_topic'):
tags.append(str(self.view.swagger_topic).title())
elif serializer and hasattr(serializer, 'Meta') and hasattr(serializer.Meta, 'model'):
tags.append(str(serializer.Meta.model._meta.verbose_name_plural).title())
elif hasattr(self.view, 'model'):
tags.append(str(self.view.model._meta.verbose_name_plural).title())
else:
tags = super().get_tags() # Use default drf-spectacular behavior
if not tags:
warnings.warn(f'Could not determine tags for {self.view.__class__.__name__}')
tags = ['api'] # Fallback to default value
return tags
def is_deprecated(self):
"""Return `True` if this operation is to be marked as deprecated."""
return getattr(self.view, 'deprecated', False)
class AuthenticatedSpectacularAPIView(SpectacularAPIView):
"""SpectacularAPIView that requires authentication."""
permission_classes = [IsAuthenticated]
class AuthenticatedSpectacularSwaggerView(SpectacularSwaggerView):
"""SpectacularSwaggerView that requires authentication."""
permission_classes = [IsAuthenticated]
class AuthenticatedSpectacularRedocView(SpectacularRedocView):
"""SpectacularRedocView that requires authentication."""
permission_classes = [IsAuthenticated]
# Schema view (returns OpenAPI schema JSON/YAML)
schema_view = AuthenticatedSpectacularAPIView.as_view()
# Swagger UI view
swagger_ui_view = AuthenticatedSpectacularSwaggerView.as_view(url_name='api:schema-json')
# ReDoc UI view
redoc_view = AuthenticatedSpectacularRedocView.as_view(url_name='api:schema-json')
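For orientation, the hook and schema class defined in this new module only take effect once they are referenced from Django settings. A minimal sketch of the drf-spectacular wiring, with the caveat that the actual AWX settings module is not part of this diff, so treat the snippet as illustrative rather than the project's real configuration:

# Illustrative settings fragment (assumed names; not shown in this diff)
REST_FRAMEWORK = {
    # drf-spectacular reads the per-view schema class from here
    'DEFAULT_SCHEMA_CLASS': 'awx.api.schema.CustomAutoSchema',
}

SPECTACULAR_SETTINGS = {
    # Postprocessing hooks receive the generated schema dict and return the modified dict
    'POSTPROCESSING_HOOKS': ['awx.api.schema.filter_credential_type_schema'],
}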


@@ -963,13 +963,13 @@ class UnifiedJobSerializer(BaseSerializer):
class UnifiedJobListSerializer(UnifiedJobSerializer):
class Meta:
fields = ('*', '-job_args', '-job_cwd', '-job_env', '-result_traceback', '-event_processing_finished')
fields = ('*', '-job_args', '-job_cwd', '-job_env', '-result_traceback', '-event_processing_finished', '-artifacts')
def get_field_names(self, declared_fields, info):
field_names = super(UnifiedJobListSerializer, self).get_field_names(declared_fields, info)
# Meta multiple inheritance and -field_name options don't seem to be
# taking effect above, so remove the undesired fields here.
return tuple(x for x in field_names if x not in ('job_args', 'job_cwd', 'job_env', 'result_traceback', 'event_processing_finished'))
return tuple(x for x in field_names if x not in ('job_args', 'job_cwd', 'job_env', 'result_traceback', 'event_processing_finished', 'artifacts'))
def get_types(self):
if type(self) is UnifiedJobListSerializer:
@@ -1230,7 +1230,7 @@ class OrganizationSerializer(BaseSerializer, OpaQueryPathMixin):
# to a team. This provides a hint to the ui so it can know to not
# display these roles for team role selection.
for key in ('admin_role', 'member_role'):
if key in summary_dict.get('object_roles', {}):
if summary_dict and key in summary_dict.get('object_roles', {}):
summary_dict['object_roles'][key]['user_only'] = True
return summary_dict
@@ -2165,13 +2165,13 @@ class BulkHostDeleteSerializer(serializers.Serializer):
attrs['hosts_data'] = attrs['host_qs'].values()
if len(attrs['host_qs']) == 0:
error_hosts = {host: "Hosts do not exist or you lack permission to delete it" for host in attrs['hosts']}
error_hosts = dict.fromkeys(attrs['hosts'], "Hosts do not exist or you lack permission to delete it")
raise serializers.ValidationError({'hosts': error_hosts})
if len(attrs['host_qs']) < len(attrs['hosts']):
hosts_exists = [host['id'] for host in attrs['hosts_data']]
failed_hosts = list(set(attrs['hosts']).difference(hosts_exists))
error_hosts = {host: "Hosts do not exist or you lack permission to delete it" for host in failed_hosts}
error_hosts = dict.fromkeys(failed_hosts, "Hosts do not exist or you lack permission to delete it")
raise serializers.ValidationError({'hosts': error_hosts})
# Getting all inventories that the hosts can be in
@@ -3527,7 +3527,7 @@ class JobRelaunchSerializer(BaseSerializer):
choices=NEW_JOB_TYPE_CHOICES,
write_only=True,
)
credential_passwords = VerbatimField(required=True, write_only=True)
credential_passwords = VerbatimField(required=False, write_only=True)
class Meta:
model = Job
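Two of the serializer changes above are worth a gloss: credential_passwords on JobRelaunchSerializer is now optional, and BulkHostDeleteSerializer builds its error mapping with dict.fromkeys instead of a comprehension. The latter is behavior-preserving because every key maps to the same (immutable) message string; a quick illustration:

failed_hosts = [3, 7]
msg = "Hosts do not exist or you lack permission to delete it"

via_comprehension = {host: msg for host in failed_hosts}
via_fromkeys = dict.fromkeys(failed_hosts, msg)
assert via_comprehension == via_fromkeys  # both are {3: msg, 7: msg}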


@@ -1,55 +0,0 @@
import warnings
from rest_framework.permissions import AllowAny
from drf_yasg import openapi
from drf_yasg.inspectors import SwaggerAutoSchema
from drf_yasg.views import get_schema_view
class CustomSwaggerAutoSchema(SwaggerAutoSchema):
"""Custom SwaggerAutoSchema to add swagger_topic to tags."""
def get_tags(self, operation_keys=None):
tags = []
try:
if hasattr(self.view, 'get_serializer'):
serializer = self.view.get_serializer()
else:
serializer = None
except Exception:
serializer = None
warnings.warn(
'{}.get_serializer() raised an exception during '
'schema generation. Serializer fields will not be '
'generated for {}.'.format(self.view.__class__.__name__, operation_keys)
)
if hasattr(self.view, 'swagger_topic'):
tags.append(str(self.view.swagger_topic).title())
elif serializer and hasattr(serializer, 'Meta'):
tags.append(str(serializer.Meta.model._meta.verbose_name_plural).title())
elif hasattr(self.view, 'model'):
tags.append(str(self.view.model._meta.verbose_name_plural).title())
else:
tags = ['api'] # Fallback to default value
if not tags:
warnings.warn(f'Could not determine tags for {self.view.__class__.__name__}')
return tags
def is_deprecated(self):
"""Return `True` if this operation is to be marked as deprecated."""
return getattr(self.view, 'deprecated', False)
schema_view = get_schema_view(
openapi.Info(
title='AWX API',
default_version='v2',
description='AWX API Documentation',
terms_of_service='https://www.google.com/policies/terms/',
contact=openapi.Contact(email='contact@snippets.local'),
license=openapi.License(name='Apache License'),
),
public=True,
permission_classes=[AllowAny],
)


@@ -1,6 +1,6 @@
{% if content_only %}<div class="nocode ansi_fore ansi_back{% if dark %} ansi_dark{% endif %}">{% else %}
<!DOCTYPE HTML>
<html>
<html lang="en">
<head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8">
<title>{{ title }}</title>


@@ -1,4 +1,4 @@
---
collections:
- name: ansible.receptor
version: 2.0.3
version: 2.0.6


@@ -5,7 +5,6 @@ from django.urls import re_path
from awx.api.views import ActivityStreamList, ActivityStreamDetail
urls = [
re_path(r'^$', ActivityStreamList.as_view(), name='activity_stream_list'),
re_path(r'^(?P<pk>[0-9]+)/$', ActivityStreamDetail.as_view(), name='activity_stream_detail'),


@@ -14,7 +14,6 @@ from awx.api.views import (
AdHocCommandStdout,
)
urls = [
re_path(r'^$', AdHocCommandList.as_view(), name='ad_hoc_command_list'),
re_path(r'^(?P<pk>[0-9]+)/$', AdHocCommandDetail.as_view(), name='ad_hoc_command_detail'),


@@ -5,7 +5,6 @@ from django.urls import re_path
from awx.api.views import AdHocCommandEventDetail
urls = [
re_path(r'^(?P<pk>[0-9]+)/$', AdHocCommandEventDetail.as_view(), name='ad_hoc_command_event_detail'),
]


@@ -5,7 +5,6 @@ from django.urls import re_path
import awx.api.views.analytics as analytics
urls = [
re_path(r'^$', analytics.AnalyticsRootView.as_view(), name='analytics_root_view'),
re_path(r'^authorized/$', analytics.AnalyticsAuthorizedView.as_view(), name='analytics_authorized'),


@@ -16,7 +16,6 @@ from awx.api.views import (
CredentialExternalTest,
)
urls = [
re_path(r'^$', CredentialList.as_view(), name='credential_list'),
re_path(r'^(?P<pk>[0-9]+)/activity_stream/$', CredentialActivityStreamList.as_view(), name='credential_activity_stream_list'),


@@ -5,7 +5,6 @@ from django.urls import re_path
from awx.api.views import CredentialInputSourceDetail, CredentialInputSourceList
urls = [
re_path(r'^$', CredentialInputSourceList.as_view(), name='credential_input_source_list'),
re_path(r'^(?P<pk>[0-9]+)/$', CredentialInputSourceDetail.as_view(), name='credential_input_source_detail'),


@@ -5,7 +5,6 @@ from django.urls import re_path
from awx.api.views import CredentialTypeList, CredentialTypeDetail, CredentialTypeCredentialList, CredentialTypeActivityStreamList, CredentialTypeExternalTest
urls = [
re_path(r'^$', CredentialTypeList.as_view(), name='credential_type_list'),
re_path(r'^(?P<pk>[0-9]+)/$', CredentialTypeDetail.as_view(), name='credential_type_detail'),


@@ -8,7 +8,6 @@ from awx.api.views import (
ExecutionEnvironmentActivityStreamList,
)
urls = [
re_path(r'^$', ExecutionEnvironmentList.as_view(), name='execution_environment_list'),
re_path(r'^(?P<pk>[0-9]+)/$', ExecutionEnvironmentDetail.as_view(), name='execution_environment_detail'),


@@ -18,7 +18,6 @@ from awx.api.views import (
GroupAdHocCommandsList,
)
urls = [
re_path(r'^$', GroupList.as_view(), name='group_list'),
re_path(r'^(?P<pk>[0-9]+)/$', GroupDetail.as_view(), name='group_detail'),


@@ -18,7 +18,6 @@ from awx.api.views import (
HostAdHocCommandEventsList,
)
urls = [
re_path(r'^$', HostList.as_view(), name='host_list'),
re_path(r'^(?P<pk>[0-9]+)/$', HostDetail.as_view(), name='host_detail'),


@@ -14,7 +14,6 @@ from awx.api.views import (
)
from awx.api.views.instance_install_bundle import InstanceInstallBundle
urls = [
re_path(r'^$', InstanceList.as_view(), name='instance_list'),
re_path(r'^(?P<pk>[0-9]+)/$', InstanceDetail.as_view(), name='instance_detail'),


@@ -12,7 +12,6 @@ from awx.api.views import (
InstanceGroupObjectRolesList,
)
urls = [
re_path(r'^$', InstanceGroupList.as_view(), name='instance_group_list'),
re_path(r'^(?P<pk>[0-9]+)/$', InstanceGroupDetail.as_view(), name='instance_group_detail'),


@@ -29,7 +29,6 @@ from awx.api.views import (
InventoryVariableData,
)
urls = [
re_path(r'^$', InventoryList.as_view(), name='inventory_list'),
re_path(r'^(?P<pk>[0-9]+)/$', InventoryDetail.as_view(), name='inventory_detail'),


@@ -18,7 +18,6 @@ from awx.api.views import (
InventorySourceNotificationTemplatesSuccessList,
)
urls = [
re_path(r'^$', InventorySourceList.as_view(), name='inventory_source_list'),
re_path(r'^(?P<pk>[0-9]+)/$', InventorySourceDetail.as_view(), name='inventory_source_detail'),


@@ -15,7 +15,6 @@ from awx.api.views import (
InventoryUpdateCredentialsList,
)
urls = [
re_path(r'^$', InventoryUpdateList.as_view(), name='inventory_update_list'),
re_path(r'^(?P<pk>[0-9]+)/$', InventoryUpdateDetail.as_view(), name='inventory_update_detail'),


@@ -19,7 +19,6 @@ from awx.api.views import (
JobHostSummaryDetail,
)
urls = [
re_path(r'^$', JobList.as_view(), name='job_list'),
re_path(r'^(?P<pk>[0-9]+)/$', JobDetail.as_view(), name='job_detail'),


@@ -5,7 +5,6 @@ from django.urls import re_path
from awx.api.views import JobHostSummaryDetail
urls = [re_path(r'^(?P<pk>[0-9]+)/$', JobHostSummaryDetail.as_view(), name='job_host_summary_detail')]
__all__ = ['urls']


@@ -23,7 +23,6 @@ from awx.api.views import (
JobTemplateCopy,
)
urls = [
re_path(r'^$', JobTemplateList.as_view(), name='job_template_list'),
re_path(r'^(?P<pk>[0-9]+)/$', JobTemplateDetail.as_view(), name='job_template_detail'),


@@ -5,7 +5,6 @@ from django.urls import re_path
from awx.api.views.labels import LabelList, LabelDetail
urls = [re_path(r'^$', LabelList.as_view(), name='label_list'), re_path(r'^(?P<pk>[0-9]+)/$', LabelDetail.as_view(), name='label_detail')]
__all__ = ['urls']


@@ -5,7 +5,6 @@ from django.urls import re_path
from awx.api.views import NotificationList, NotificationDetail
urls = [
re_path(r'^$', NotificationList.as_view(), name='notification_list'),
re_path(r'^(?P<pk>[0-9]+)/$', NotificationDetail.as_view(), name='notification_detail'),


@@ -11,7 +11,6 @@ from awx.api.views import (
NotificationTemplateCopy,
)
urls = [
re_path(r'^$', NotificationTemplateList.as_view(), name='notification_template_list'),
re_path(r'^(?P<pk>[0-9]+)/$', NotificationTemplateDetail.as_view(), name='notification_template_detail'),


@@ -27,7 +27,6 @@ from awx.api.views.organization import (
)
from awx.api.views import OrganizationCredentialList
urls = [
re_path(r'^$', OrganizationList.as_view(), name='organization_list'),
re_path(r'^(?P<pk>[0-9]+)/$', OrganizationDetail.as_view(), name='organization_detail'),


@@ -22,7 +22,6 @@ from awx.api.views import (
ProjectCopy,
)
urls = [
re_path(r'^$', ProjectList.as_view(), name='project_list'),
re_path(r'^(?P<pk>[0-9]+)/$', ProjectDetail.as_view(), name='project_detail'),


@@ -13,7 +13,6 @@ from awx.api.views import (
ProjectUpdateEventsList,
)
urls = [
re_path(r'^$', ProjectUpdateList.as_view(), name='project_update_list'),
re_path(r'^(?P<pk>[0-9]+)/$', ProjectUpdateDetail.as_view(), name='project_update_detail'),


@@ -8,7 +8,6 @@ from awx.api.views import (
ReceptorAddressDetail,
)
urls = [
re_path(r'^$', ReceptorAddressesList.as_view(), name='receptor_addresses_list'),
re_path(r'^(?P<pk>[0-9]+)/$', ReceptorAddressDetail.as_view(), name='receptor_address_detail'),


@@ -3,16 +3,13 @@
from django.urls import re_path
from awx.api.views import RoleList, RoleDetail, RoleUsersList, RoleTeamsList, RoleParentsList, RoleChildrenList
from awx.api.views import RoleList, RoleDetail, RoleUsersList, RoleTeamsList
urls = [
re_path(r'^$', RoleList.as_view(), name='role_list'),
re_path(r'^(?P<pk>[0-9]+)/$', RoleDetail.as_view(), name='role_detail'),
re_path(r'^(?P<pk>[0-9]+)/users/$', RoleUsersList.as_view(), name='role_users_list'),
re_path(r'^(?P<pk>[0-9]+)/teams/$', RoleTeamsList.as_view(), name='role_teams_list'),
re_path(r'^(?P<pk>[0-9]+)/parents/$', RoleParentsList.as_view(), name='role_parents_list'),
re_path(r'^(?P<pk>[0-9]+)/children/$', RoleChildrenList.as_view(), name='role_children_list'),
]
__all__ = ['urls']


@@ -5,7 +5,6 @@ from django.urls import re_path
from awx.api.views import ScheduleList, ScheduleDetail, ScheduleUnifiedJobsList, ScheduleCredentialsList, ScheduleLabelsList, ScheduleInstanceGroupList
urls = [
re_path(r'^$', ScheduleList.as_view(), name='schedule_list'),
re_path(r'^(?P<pk>[0-9]+)/$', ScheduleDetail.as_view(), name='schedule_detail'),


@@ -5,7 +5,6 @@ from django.urls import re_path
from awx.api.views import SystemJobList, SystemJobDetail, SystemJobCancel, SystemJobNotificationsList, SystemJobEventsList
urls = [
re_path(r'^$', SystemJobList.as_view(), name='system_job_list'),
re_path(r'^(?P<pk>[0-9]+)/$', SystemJobDetail.as_view(), name='system_job_detail'),


@@ -14,7 +14,6 @@ from awx.api.views import (
SystemJobTemplateNotificationTemplatesSuccessList,
)
urls = [
re_path(r'^$', SystemJobTemplateList.as_view(), name='system_job_template_list'),
re_path(r'^(?P<pk>[0-9]+)/$', SystemJobTemplateDetail.as_view(), name='system_job_template_detail'),


@@ -15,7 +15,6 @@ from awx.api.views import (
TeamAccessList,
)
urls = [
re_path(r'^$', TeamList.as_view(), name='team_list'),
re_path(r'^(?P<pk>[0-9]+)/$', TeamDetail.as_view(), name='team_detail'),


@@ -4,7 +4,6 @@
from __future__ import absolute_import, unicode_literals
from django.urls import include, re_path
from awx import MODE
from awx.api.generics import LoggedLoginView, LoggedLogoutView
from awx.api.views.root import (
ApiRootView,
@@ -148,21 +147,15 @@ v2_urls = [
app_name = 'api'
urlpatterns = [
re_path(r'^$', ApiRootView.as_view(), name='api_root_view'),
re_path(r'^(?P<version>(v2))/', include(v2_urls)),
re_path(r'^login/$', LoggedLoginView.as_view(template_name='rest_framework/login.html', extra_context={'inside_login_context': True}), name='login'),
re_path(r'^logout/$', LoggedLogoutView.as_view(next_page='/api/', redirect_field_name='next'), name='logout'),
# the docs/, schema-related endpoints used to be listed here but now exposed by DAB api_documentation app
]
if MODE == 'development':
# Only include these if we are in the development environment
from awx.api.swagger import schema_view
from awx.api.urls.debug import urls as debug_urls
from awx.api.urls.debug import urls as debug_urls
urlpatterns += [re_path(r'^debug/', include(debug_urls))]
urlpatterns += [
re_path(r'^swagger(?P<format>\.json|\.yaml)/$', schema_view.without_ui(cache_timeout=0), name='schema-json'),
re_path(r'^swagger/$', schema_view.with_ui('swagger', cache_timeout=0), name='schema-swagger-ui'),
re_path(r'^redoc/$', schema_view.with_ui('redoc', cache_timeout=0), name='schema-redoc'),
]
urlpatterns += [re_path(r'^debug/', include(debug_urls))]


@@ -2,7 +2,6 @@ from django.urls import re_path
from awx.api.views.webhooks import WebhookKeyView, GithubWebhookReceiver, GitlabWebhookReceiver, BitbucketDcWebhookReceiver
urlpatterns = [
re_path(r'^webhook_key/$', WebhookKeyView.as_view(), name='webhook_key'),
re_path(r'^github/$', GithubWebhookReceiver.as_view(), name='webhook_receiver_github'),


@@ -5,7 +5,6 @@ from django.urls import re_path
from awx.api.views import WorkflowApprovalList, WorkflowApprovalDetail, WorkflowApprovalApprove, WorkflowApprovalDeny
urls = [
re_path(r'^$', WorkflowApprovalList.as_view(), name='workflow_approval_list'),
re_path(r'^(?P<pk>[0-9]+)/$', WorkflowApprovalDetail.as_view(), name='workflow_approval_detail'),


@@ -5,7 +5,6 @@ from django.urls import re_path
from awx.api.views import WorkflowApprovalTemplateDetail, WorkflowApprovalTemplateJobsList
urls = [
re_path(r'^(?P<pk>[0-9]+)/$', WorkflowApprovalTemplateDetail.as_view(), name='workflow_approval_template_detail'),
re_path(r'^(?P<pk>[0-9]+)/approvals/$', WorkflowApprovalTemplateJobsList.as_view(), name='workflow_approval_template_jobs_list'),


@@ -14,7 +14,6 @@ from awx.api.views import (
WorkflowJobActivityStreamList,
)
urls = [
re_path(r'^$', WorkflowJobList.as_view(), name='workflow_job_list'),
re_path(r'^(?P<pk>[0-9]+)/$', WorkflowJobDetail.as_view(), name='workflow_job_detail'),


@@ -14,7 +14,6 @@ from awx.api.views import (
WorkflowJobNodeInstanceGroupsList,
)
urls = [
re_path(r'^$', WorkflowJobNodeList.as_view(), name='workflow_job_node_list'),
re_path(r'^(?P<pk>[0-9]+)/$', WorkflowJobNodeDetail.as_view(), name='workflow_job_node_detail'),


@@ -22,7 +22,6 @@ from awx.api.views import (
WorkflowJobTemplateLabelList,
)
urls = [
re_path(r'^$', WorkflowJobTemplateList.as_view(), name='workflow_job_template_list'),
re_path(r'^(?P<pk>[0-9]+)/$', WorkflowJobTemplateDetail.as_view(), name='workflow_job_template_detail'),


@@ -15,7 +15,6 @@ from awx.api.views import (
WorkflowJobTemplateNodeInstanceGroupsList,
)
urls = [
re_path(r'^$', WorkflowJobTemplateNodeList.as_view(), name='workflow_job_template_node_list'),
re_path(r'^(?P<pk>[0-9]+)/$', WorkflowJobTemplateNodeDetail.as_view(), name='workflow_job_template_node_detail'),

File diff suppressed because it is too large.


@@ -15,6 +15,8 @@ from rest_framework import status
from collections import OrderedDict
from ansible_base.lib.utils.schema import extend_schema_if_available
AUTOMATION_ANALYTICS_API_URL_PATH = "/api/tower-analytics/v1"
AWX_ANALYTICS_API_PREFIX = 'analytics'
@@ -38,6 +40,8 @@ class MissingSettings(Exception):
class GetNotAllowedMixin(object):
skip_ai_description = True
def get(self, request, format=None):
return Response(status=status.HTTP_405_METHOD_NOT_ALLOWED)
@@ -46,7 +50,9 @@ class AnalyticsRootView(APIView):
permission_classes = (AnalyticsPermission,)
name = _('Automation Analytics')
swagger_topic = 'Automation Analytics'
resource_purpose = 'automation analytics endpoints'
@extend_schema_if_available(extensions={"x-ai-description": "A list of additional API endpoints related to analytics"})
def get(self, request, format=None):
data = OrderedDict()
data['authorized'] = reverse('api:analytics_authorized', request=request)
@@ -99,6 +105,8 @@ class AnalyticsGenericView(APIView):
return Response(response.json(), status=response.status_code)
"""
resource_purpose = 'base view for analytics api proxy'
permission_classes = (AnalyticsPermission,)
@staticmethod
@@ -257,67 +265,91 @@ class AnalyticsGenericView(APIView):
class AnalyticsGenericListView(AnalyticsGenericView):
resource_purpose = 'analytics api proxy list view'
@extend_schema_if_available(extensions={"x-ai-description": "Get analytics data from Red Hat Insights"})
def get(self, request, format=None):
return self._send_to_analytics(request, method="GET")
@extend_schema_if_available(extensions={"x-ai-description": "Post query to Red Hat Insights analytics"})
def post(self, request, format=None):
return self._send_to_analytics(request, method="POST")
@extend_schema_if_available(extensions={"x-ai-description": "Get analytics endpoint options"})
def options(self, request, format=None):
return self._send_to_analytics(request, method="OPTIONS")
class AnalyticsGenericDetailView(AnalyticsGenericView):
resource_purpose = 'analytics api proxy detail view'
@extend_schema_if_available(extensions={"x-ai-description": "Get specific analytics resource from Red Hat Insights"})
def get(self, request, slug, format=None):
return self._send_to_analytics(request, method="GET")
@extend_schema_if_available(extensions={"x-ai-description": "Post query for specific analytics resource to Red Hat Insights"})
def post(self, request, slug, format=None):
return self._send_to_analytics(request, method="POST")
@extend_schema_if_available(extensions={"x-ai-description": "Get options for specific analytics resource"})
def options(self, request, slug, format=None):
return self._send_to_analytics(request, method="OPTIONS")
@extend_schema_if_available(
extensions={'x-ai-description': 'Check if the user has access to Red Hat Insights'},
)
class AnalyticsAuthorizedView(AnalyticsGenericListView):
name = _("Authorized")
resource_purpose = 'red hat insights authorization status'
class AnalyticsReportsList(GetNotAllowedMixin, AnalyticsGenericListView):
name = _("Reports")
swagger_topic = "Automation Analytics"
resource_purpose = 'automation analytics reports'
class AnalyticsReportDetail(AnalyticsGenericDetailView):
name = _("Report")
resource_purpose = 'automation analytics report detail'
class AnalyticsReportOptionsList(AnalyticsGenericListView):
name = _("Report Options")
resource_purpose = 'automation analytics report options'
class AnalyticsAdoptionRateList(GetNotAllowedMixin, AnalyticsGenericListView):
name = _("Adoption Rate")
resource_purpose = 'automation analytics adoption rate data'
class AnalyticsEventExplorerList(GetNotAllowedMixin, AnalyticsGenericListView):
name = _("Event Explorer")
resource_purpose = 'automation analytics event explorer data'
class AnalyticsHostExplorerList(GetNotAllowedMixin, AnalyticsGenericListView):
name = _("Host Explorer")
resource_purpose = 'automation analytics host explorer data'
class AnalyticsJobExplorerList(GetNotAllowedMixin, AnalyticsGenericListView):
name = _("Job Explorer")
resource_purpose = 'automation analytics job explorer data'
class AnalyticsProbeTemplatesList(GetNotAllowedMixin, AnalyticsGenericListView):
name = _("Probe Templates")
resource_purpose = 'automation analytics probe templates'
class AnalyticsProbeTemplateForHostsList(GetNotAllowedMixin, AnalyticsGenericListView):
name = _("Probe Template For Hosts")
resource_purpose = 'automation analytics probe templates for hosts'
class AnalyticsRoiTemplatesList(GetNotAllowedMixin, AnalyticsGenericListView):
name = _("ROI Templates")
resource_purpose = 'automation analytics roi templates'


@@ -1,5 +1,7 @@
from collections import OrderedDict
from ansible_base.lib.utils.schema import extend_schema_if_available
from django.utils.translation import gettext_lazy as _
from rest_framework.permissions import IsAuthenticated
@@ -30,6 +32,7 @@ class BulkView(APIView):
]
allowed_methods = ['GET', 'OPTIONS']
@extend_schema_if_available(extensions={"x-ai-description": "Retrieves a list of available bulk actions"})
def get(self, request, format=None):
'''List top level resources'''
data = OrderedDict()
@@ -45,11 +48,13 @@ class BulkJobLaunchView(GenericAPIView):
serializer_class = serializers.BulkJobLaunchSerializer
allowed_methods = ['GET', 'POST', 'OPTIONS']
@extend_schema_if_available(extensions={"x-ai-description": "Get information about bulk job launch endpoint"})
def get(self, request):
data = OrderedDict()
data['detail'] = "Specify a list of unified job templates to launch alongside their launchtime parameters"
return Response(data, status=status.HTTP_200_OK)
@extend_schema_if_available(extensions={"x-ai-description": "Bulk launch job templates"})
def post(self, request):
bulkjob_serializer = serializers.BulkJobLaunchSerializer(data=request.data, context={'request': request})
if bulkjob_serializer.is_valid():
@@ -64,9 +69,11 @@ class BulkHostCreateView(GenericAPIView):
serializer_class = serializers.BulkHostCreateSerializer
allowed_methods = ['GET', 'POST', 'OPTIONS']
@extend_schema_if_available(extensions={"x-ai-description": "Get information about bulk host create endpoint"})
def get(self, request):
return Response({"detail": "Bulk create hosts with this endpoint"}, status=status.HTTP_200_OK)
@extend_schema_if_available(extensions={"x-ai-description": "Bulk create hosts"})
def post(self, request):
serializer = serializers.BulkHostCreateSerializer(data=request.data, context={'request': request})
if serializer.is_valid():
@@ -81,9 +88,11 @@ class BulkHostDeleteView(GenericAPIView):
serializer_class = serializers.BulkHostDeleteSerializer
allowed_methods = ['GET', 'POST', 'OPTIONS']
@extend_schema_if_available(extensions={"x-ai-description": "Get information about bulk host delete endpoint"})
def get(self, request):
return Response({"detail": "Bulk delete hosts with this endpoint"}, status=status.HTTP_200_OK)
@extend_schema_if_available(extensions={"x-ai-description": "Bulk delete hosts"})
def post(self, request):
serializer = serializers.BulkHostDeleteSerializer(data=request.data, context={'request': request})
if serializer.is_valid():


@@ -5,6 +5,7 @@ from django.conf import settings
from rest_framework.permissions import AllowAny
from rest_framework.response import Response
from awx.api.generics import APIView
from ansible_base.lib.utils.schema import extend_schema_if_available
from awx.main.scheduler import TaskManager, DependencyManager, WorkflowManager
@@ -14,7 +15,9 @@ class TaskManagerDebugView(APIView):
exclude_from_schema = True
permission_classes = [AllowAny]
prefix = 'Task'
resource_purpose = 'debug task manager'
@extend_schema_if_available(extensions={"x-ai-description": "Trigger task manager scheduling"})
def get(self, request):
TaskManager().schedule()
if not settings.AWX_DISABLE_TASK_MANAGERS:
@@ -29,7 +32,9 @@ class DependencyManagerDebugView(APIView):
exclude_from_schema = True
permission_classes = [AllowAny]
prefix = 'Dependency'
resource_purpose = 'debug dependency manager'
@extend_schema_if_available(extensions={"x-ai-description": "Trigger dependency manager scheduling"})
def get(self, request):
DependencyManager().schedule()
if not settings.AWX_DISABLE_TASK_MANAGERS:
@@ -44,7 +49,9 @@ class WorkflowManagerDebugView(APIView):
exclude_from_schema = True
permission_classes = [AllowAny]
prefix = 'Workflow'
resource_purpose = 'debug workflow manager'
@extend_schema_if_available(extensions={"x-ai-description": "Trigger workflow manager scheduling"})
def get(self, request):
WorkflowManager().schedule()
if not settings.AWX_DISABLE_TASK_MANAGERS:
@@ -58,7 +65,9 @@ class DebugRootView(APIView):
_ignore_model_permissions = True
exclude_from_schema = True
permission_classes = [AllowAny]
resource_purpose = 'debug endpoints root'
@extend_schema_if_available(extensions={"x-ai-description": "List available debug endpoints"})
def get(self, request, format=None):
'''List of available debug urls'''
data = OrderedDict()


@@ -10,6 +10,7 @@ import time
import re
import asn1
from ansible_base.lib.utils.schema import extend_schema_if_available
from awx.api import serializers
from awx.api.generics import GenericAPIView, Response
from awx.api.permissions import IsSystemAdmin
@@ -49,7 +50,9 @@ class InstanceInstallBundle(GenericAPIView):
model = models.Instance
serializer_class = serializers.InstanceSerializer
permission_classes = (IsSystemAdmin,)
resource_purpose = 'install bundle'
@extend_schema_if_available(extensions={"x-ai-description": "Generate and download install bundle for an instance"})
def get(self, request, *args, **kwargs):
instance_obj = self.get_object()
@@ -195,8 +198,8 @@ def generate_receptor_tls(instance_obj):
.issuer_name(ca_cert.issuer)
.public_key(csr.public_key())
.serial_number(x509.random_serial_number())
.not_valid_before(datetime.datetime.utcnow())
.not_valid_after(datetime.datetime.utcnow() + datetime.timedelta(days=3650))
.not_valid_before(datetime.datetime.now(datetime.UTC))
.not_valid_after(datetime.datetime.now(datetime.UTC) + datetime.timedelta(days=3650))
.add_extension(
csr.extensions.get_extension_for_class(x509.SubjectAlternativeName).value,
critical=csr.extensions.get_extension_for_class(x509.SubjectAlternativeName).critical,
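The datetime change in generate_receptor_tls above replaces the deprecated naive-UTC helper with a timezone-aware call; datetime.UTC (an alias for timezone.utc, available since Python 3.11) keeps the certificate validity arithmetic identical while producing aware timestamps. A quick illustration:

import datetime

naive = datetime.datetime.utcnow()           # deprecated since Python 3.12; tzinfo is None
aware = datetime.datetime.now(datetime.UTC)  # timezone-aware equivalent

assert naive.tzinfo is None
assert aware.tzinfo is datetime.UTC
not_valid_after = aware + datetime.timedelta(days=3650)  # same ten-year window as above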


@@ -19,6 +19,8 @@ from rest_framework import serializers
# AWX
from awx.main.models import ActivityStream, Inventory, JobTemplate, Role, User, InstanceGroup, InventoryUpdateEvent, InventoryUpdate
from ansible_base.lib.utils.schema import extend_schema_if_available
from awx.api.generics import (
ListCreateAPIView,
RetrieveUpdateDestroyAPIView,
@@ -43,7 +45,6 @@ from awx.api.views.mixin import RelatedJobsPreventDeleteMixin
from awx.api.pagination import UnifiedJobEventPagination
logger = logging.getLogger('awx.api.views.organization')
@@ -55,6 +56,7 @@ class InventoryUpdateEventsList(SubListAPIView):
name = _('Inventory Update Events List')
search_fields = ('stdout',)
pagination_class = UnifiedJobEventPagination
resource_purpose = 'events of an inventory update'
def get_queryset(self):
iu = self.get_parent_object()
@@ -69,11 +71,17 @@ class InventoryUpdateEventsList(SubListAPIView):
class InventoryList(ListCreateAPIView):
model = Inventory
serializer_class = InventorySerializer
resource_purpose = 'inventories'
@extend_schema_if_available(extensions={"x-ai-description": "A list of inventories."})
def get(self, request, *args, **kwargs):
return super().get(request, *args, **kwargs)
class InventoryDetail(RelatedJobsPreventDeleteMixin, RetrieveUpdateDestroyAPIView):
model = Inventory
serializer_class = InventorySerializer
resource_purpose = 'inventory detail'
def update(self, request, *args, **kwargs):
obj = self.get_object()
@@ -100,33 +108,39 @@ class InventoryDetail(RelatedJobsPreventDeleteMixin, RetrieveUpdateDestroyAPIVie
class ConstructedInventoryDetail(InventoryDetail):
serializer_class = ConstructedInventorySerializer
resource_purpose = 'constructed inventory detail'
class ConstructedInventoryList(InventoryList):
serializer_class = ConstructedInventorySerializer
resource_purpose = 'constructed inventories'
def get_queryset(self):
r = super().get_queryset()
return r.filter(kind='constructed')
@extend_schema_if_available(extensions={"x-ai-description": "Get or create input inventory inventory"})
class InventoryInputInventoriesList(SubListAttachDetachAPIView):
model = Inventory
serializer_class = InventorySerializer
parent_model = Inventory
relationship = 'input_inventories'
resource_purpose = 'input inventories of a constructed inventory'
def is_valid_relation(self, parent, sub, created=False):
if sub.kind == 'constructed':
raise serializers.ValidationError({'error': 'You cannot add a constructed inventory to another constructed inventory.'})
@extend_schema_if_available(extensions={"x-ai-description": "Get activity stream for an inventory"})
class InventoryActivityStreamList(SubListAPIView):
model = ActivityStream
serializer_class = ActivityStreamSerializer
parent_model = Inventory
relationship = 'activitystream_set'
search_fields = ('changes',)
resource_purpose = 'activity stream for an inventory'
def get_queryset(self):
parent = self.get_parent_object()
@@ -140,11 +154,13 @@ class InventoryInstanceGroupsList(SubListAttachDetachAPIView):
serializer_class = InstanceGroupSerializer
parent_model = Inventory
relationship = 'instance_groups'
resource_purpose = 'instance groups of an inventory'
class InventoryAccessList(ResourceAccessList):
model = User # needs to be User for AccessList's
parent_model = Inventory
resource_purpose = 'users who can access the inventory'
class InventoryObjectRolesList(SubListAPIView):
@@ -153,6 +169,7 @@ class InventoryObjectRolesList(SubListAPIView):
parent_model = Inventory
search_fields = ('role_field', 'content_type__model')
deprecated = True
resource_purpose = 'roles of an inventory'
def get_queryset(self):
po = self.get_parent_object()
@@ -165,6 +182,7 @@ class InventoryJobTemplateList(SubListAPIView):
serializer_class = JobTemplateSerializer
parent_model = Inventory
relationship = 'jobtemplates'
resource_purpose = 'job templates using an inventory'
def get_queryset(self):
parent = self.get_parent_object()
@@ -175,8 +193,10 @@ class InventoryJobTemplateList(SubListAPIView):
class InventoryLabelList(LabelSubListCreateAttachDetachView):
parent_model = Inventory
resource_purpose = 'labels of an inventory'
class InventoryCopy(CopyAPIView):
model = Inventory
copy_return_serializer_class = InventorySerializer
resource_purpose = 'copy of an inventory'
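
The change running through this file pairs a resource_purpose class attribute with an x-ai-description schema extension on the handler methods. A schematic of the pattern, with a purely hypothetical view (a real view also sets model and serializer_class, as the classes above do):

from ansible_base.lib.utils.schema import extend_schema_if_available
from awx.api.generics import ListCreateAPIView


class ExampleWidgetList(ListCreateAPIView):
    # hypothetical view used only to illustrate the pattern
    resource_purpose = 'widgets'

    @extend_schema_if_available(extensions={"x-ai-description": "A list of widgets."})
    def get(self, request, *args, **kwargs):
        # the decorator is expected to only annotate the generated OpenAPI
        # operation; runtime behavior of the endpoint is unchanged
        return super().get(request, *args, **kwargs)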

View File

@@ -2,6 +2,7 @@
from awx.api.generics import SubListCreateAttachDetachAPIView, RetrieveUpdateAPIView, ListCreateAPIView
from awx.main.models import Label
from awx.api.serializers import LabelSerializer
from ansible_base.lib.utils.schema import extend_schema_if_available
# Django
from django.utils.translation import gettext_lazy as _
@@ -24,9 +25,10 @@ class LabelSubListCreateAttachDetachView(SubListCreateAttachDetachAPIView):
model = Label
serializer_class = LabelSerializer
relationship = 'labels'
resource_purpose = 'labels of a resource'
def unattach(self, request, *args, **kwargs):
(sub_id, res) = super().unattach_validate(request)
sub_id, res = super().unattach_validate(request)
if res:
return res
@@ -39,6 +41,7 @@ class LabelSubListCreateAttachDetachView(SubListCreateAttachDetachAPIView):
return res
@extend_schema_if_available(extensions={"x-ai-description": "Create or attach a label to a resource"})
def post(self, request, *args, **kwargs):
# If a label already exists in the database, attach it instead of erroring out
# that it already exists
@@ -61,9 +64,11 @@ class LabelSubListCreateAttachDetachView(SubListCreateAttachDetachAPIView):
class LabelDetail(RetrieveUpdateAPIView):
model = Label
serializer_class = LabelSerializer
resource_purpose = 'label detail'
class LabelList(ListCreateAPIView):
name = _("Labels")
model = Label
serializer_class = LabelSerializer
resource_purpose = 'labels'

View File

@@ -2,6 +2,7 @@
# All Rights Reserved.
from django.utils.translation import gettext_lazy as _
from ansible_base.lib.utils.schema import extend_schema_if_available
from awx.api.generics import APIView, Response
from awx.api.permissions import IsSystemAdminOrAuditor
@@ -13,7 +14,9 @@ class MeshVisualizer(APIView):
name = _("Mesh Visualizer")
permission_classes = (IsSystemAdminOrAuditor,)
swagger_topic = "System Configuration"
resource_purpose = 'mesh network topology visualization data'
@extend_schema_if_available(extensions={"x-ai-description": "Get mesh network topology visualization data"})
def get(self, request, format=None):
data = {
'nodes': InstanceNodeSerializer(Instance.objects.all(), many=True).data,

View File

@@ -7,13 +7,13 @@ import logging
# Django
from django.conf import settings
from django.utils.translation import gettext_lazy as _
from ansible_base.lib.utils.schema import extend_schema_if_available
# Django REST Framework
from rest_framework.permissions import AllowAny
from rest_framework.response import Response
from rest_framework.exceptions import PermissionDenied
# AWX
# from awx.main.analytics import collectors
import awx.main.analytics.subsystem_metrics as s_metrics
@@ -22,13 +22,13 @@ from awx.api import renderers
from awx.api.generics import APIView
logger = logging.getLogger('awx.analytics')
class MetricsView(APIView):
name = _('Metrics')
swagger_topic = 'Metrics'
resource_purpose = 'prometheus metrics data'
renderer_classes = [renderers.PlainTextRenderer, renderers.PrometheusJSONRenderer, renderers.BrowsableAPIRenderer]
@@ -37,6 +37,7 @@ class MetricsView(APIView):
self.permission_classes = (AllowAny,)
return super(APIView, self).initialize_request(request, *args, **kwargs)
@extend_schema_if_available(extensions={"x-ai-description": "Get Prometheus metrics data"})
def get(self, request):
'''Show Metrics Details'''
if settings.ALLOW_METRICS_FOR_ANONYMOUS_USERS or request.user.is_superuser or request.user.is_system_auditor:

View File

@@ -60,11 +60,13 @@ logger = logging.getLogger('awx.api.views.organization')
class OrganizationList(OrganizationCountsMixin, ListCreateAPIView):
model = Organization
serializer_class = OrganizationSerializer
resource_purpose = 'organizations'
class OrganizationDetail(RelatedJobsPreventDeleteMixin, RetrieveUpdateDestroyAPIView):
model = Organization
serializer_class = OrganizationSerializer
resource_purpose = 'organization detail'
def get_serializer_context(self, *args, **kwargs):
full_context = super(OrganizationDetail, self).get_serializer_context(*args, **kwargs)
@@ -102,6 +104,7 @@ class OrganizationInventoriesList(SubListAPIView):
serializer_class = InventorySerializer
parent_model = Organization
relationship = 'inventories'
resource_purpose = 'inventories of an organization'
class OrganizationUsersList(BaseUsersList):
@@ -110,6 +113,7 @@ class OrganizationUsersList(BaseUsersList):
parent_model = Organization
relationship = 'member_role.members'
ordering = ('username',)
resource_purpose = 'users of an organization'
class OrganizationAdminsList(BaseUsersList):
@@ -118,6 +122,7 @@ class OrganizationAdminsList(BaseUsersList):
parent_model = Organization
relationship = 'admin_role.members'
ordering = ('username',)
resource_purpose = 'administrators of an organization'
class OrganizationProjectsList(SubListCreateAPIView):
@@ -125,6 +130,7 @@ class OrganizationProjectsList(SubListCreateAPIView):
serializer_class = ProjectSerializer
parent_model = Organization
parent_key = 'organization'
resource_purpose = 'projects of an organization'
class OrganizationExecutionEnvironmentsList(SubListCreateAttachDetachAPIView):
@@ -134,6 +140,7 @@ class OrganizationExecutionEnvironmentsList(SubListCreateAttachDetachAPIView):
relationship = 'executionenvironments'
parent_key = 'organization'
swagger_topic = "Execution Environments"
resource_purpose = 'execution environments of an organization'
class OrganizationJobTemplatesList(SubListCreateAPIView):
@@ -141,6 +148,7 @@ class OrganizationJobTemplatesList(SubListCreateAPIView):
serializer_class = JobTemplateSerializer
parent_model = Organization
parent_key = 'organization'
resource_purpose = 'job templates of an organization'
class OrganizationWorkflowJobTemplatesList(SubListCreateAPIView):
@@ -148,6 +156,7 @@ class OrganizationWorkflowJobTemplatesList(SubListCreateAPIView):
serializer_class = WorkflowJobTemplateSerializer
parent_model = Organization
parent_key = 'organization'
resource_purpose = 'workflow job templates of an organization'
class OrganizationTeamsList(SubListCreateAttachDetachAPIView):
@@ -156,6 +165,7 @@ class OrganizationTeamsList(SubListCreateAttachDetachAPIView):
parent_model = Organization
relationship = 'teams'
parent_key = 'organization'
resource_purpose = 'teams of an organization'
class OrganizationActivityStreamList(SubListAPIView):
@@ -164,6 +174,7 @@ class OrganizationActivityStreamList(SubListAPIView):
parent_model = Organization
relationship = 'activitystream_set'
search_fields = ('changes',)
resource_purpose = 'activity stream for an organization'
class OrganizationNotificationTemplatesList(SubListCreateAttachDetachAPIView):
@@ -172,28 +183,34 @@ class OrganizationNotificationTemplatesList(SubListCreateAttachDetachAPIView):
parent_model = Organization
relationship = 'notification_templates'
parent_key = 'organization'
resource_purpose = 'notification templates of an organization'
class OrganizationNotificationTemplatesAnyList(SubListCreateAttachDetachAPIView):
model = NotificationTemplate
serializer_class = NotificationTemplateSerializer
parent_model = Organization
resource_purpose = 'base view for notification templates of an organization'
class OrganizationNotificationTemplatesStartedList(OrganizationNotificationTemplatesAnyList):
relationship = 'notification_templates_started'
resource_purpose = 'notification templates for job started events of an organization'
class OrganizationNotificationTemplatesErrorList(OrganizationNotificationTemplatesAnyList):
relationship = 'notification_templates_error'
resource_purpose = 'notification templates for job error events of an organization'
class OrganizationNotificationTemplatesSuccessList(OrganizationNotificationTemplatesAnyList):
relationship = 'notification_templates_success'
resource_purpose = 'notification templates for job success events of an organization'
class OrganizationNotificationTemplatesApprovalList(OrganizationNotificationTemplatesAnyList):
relationship = 'notification_templates_approvals'
resource_purpose = 'notification templates for workflow approval events of an organization'
class OrganizationInstanceGroupsList(OrganizationInstanceGroupMembershipMixin, SubListAttachDetachAPIView):
@@ -202,6 +219,7 @@ class OrganizationInstanceGroupsList(OrganizationInstanceGroupMembershipMixin, S
parent_model = Organization
relationship = 'instance_groups'
filter_read_permission = False
resource_purpose = 'instance groups of an organization'
class OrganizationGalaxyCredentialsList(SubListAttachDetachAPIView):
@@ -210,6 +228,7 @@ class OrganizationGalaxyCredentialsList(SubListAttachDetachAPIView):
parent_model = Organization
relationship = 'galaxy_credentials'
filter_read_permission = False
resource_purpose = 'galaxy credentials of an organization'
def is_valid_relation(self, parent, sub, created=False):
if sub.kind != 'galaxy_api_token':
@@ -219,6 +238,7 @@ class OrganizationGalaxyCredentialsList(SubListAttachDetachAPIView):
class OrganizationAccessList(ResourceAccessList):
model = User # needs to be User for AccessList's
parent_model = Organization
resource_purpose = 'users who can access the organization'
class OrganizationObjectRolesList(SubListAPIView):
@@ -227,6 +247,7 @@ class OrganizationObjectRolesList(SubListAPIView):
parent_model = Organization
search_fields = ('role_field', 'content_type__model')
deprecated = True
resource_purpose = 'roles of an organization'
def get_queryset(self):
po = self.get_parent_object()

View File

@@ -23,6 +23,8 @@ from rest_framework import status
import requests
from ansible_base.lib.utils.schema import extend_schema_if_available
from awx import MODE
from awx.api.generics import APIView
from awx.conf.registry import settings_registry
@@ -46,8 +48,10 @@ class ApiRootView(APIView):
name = _('REST API')
versioning_class = URLPathVersioning
swagger_topic = 'Versioning'
resource_purpose = 'api root and version information'
@method_decorator(ensure_csrf_cookie)
@extend_schema_if_available(extensions={"x-ai-description": "List supported API versions"})
def get(self, request, format=None):
'''List supported API versions'''
v2 = reverse('api:api_v2_root_view', request=request, kwargs={'version': 'v2'})
@@ -59,14 +63,16 @@ class ApiRootView(APIView):
data['custom_login_info'] = settings.CUSTOM_LOGIN_INFO
data['login_redirect_override'] = settings.LOGIN_REDIRECT_OVERRIDE
if MODE == 'development':
data['swagger'] = drf_reverse('api:schema-swagger-ui')
data['docs'] = drf_reverse('api:schema-swagger-ui')
return Response(data)
class ApiVersionRootView(APIView):
permission_classes = (AllowAny,)
swagger_topic = 'Versioning'
resource_purpose = 'api top-level resources'
@extend_schema_if_available(extensions={"x-ai-description": "List top-level API resources"})
def get(self, request, format=None):
'''List top level resources'''
data = OrderedDict()
@@ -126,6 +132,7 @@ class ApiVersionRootView(APIView):
class ApiV2RootView(ApiVersionRootView):
name = _('Version 2')
resource_purpose = 'api v2 root'
class ApiV2PingView(APIView):
@@ -137,7 +144,11 @@ class ApiV2PingView(APIView):
authentication_classes = ()
name = _('Ping')
swagger_topic = 'System Configuration'
resource_purpose = 'basic instance information'
@extend_schema_if_available(
extensions={'x-ai-description': 'Return basic information about this instance'},
)
def get(self, request, format=None):
"""Return some basic information about this instance
@@ -172,24 +183,59 @@ class ApiV2SubscriptionView(APIView):
permission_classes = (IsAuthenticated,)
name = _('Subscriptions')
swagger_topic = 'System Configuration'
resource_purpose = 'aap subscription validation'
def check_permissions(self, request):
super(ApiV2SubscriptionView, self).check_permissions(request)
if not request.user.is_superuser and request.method.lower() not in {'options', 'head'}:
self.permission_denied(request) # Raises PermissionDenied exception.
@extend_schema_if_available(
extensions={'x-ai-description': 'List valid AAP subscriptions'},
)
def post(self, request):
data = request.data.copy()
if data.get('subscriptions_client_secret') == '$encrypted$':
data['subscriptions_client_secret'] = settings.SUBSCRIPTIONS_CLIENT_SECRET
try:
user, pw = data.get('subscriptions_client_id'), data.get('subscriptions_client_secret')
user = None
pw = None
basic_auth = False
# determine if the credentials are for basic auth or not
if data.get('subscriptions_client_id'):
user, pw = data.get('subscriptions_client_id'), data.get('subscriptions_client_secret')
if pw == '$encrypted$':
pw = settings.SUBSCRIPTIONS_CLIENT_SECRET
elif data.get('subscriptions_username'):
user, pw = data.get('subscriptions_username'), data.get('subscriptions_password')
if pw == '$encrypted$':
pw = settings.SUBSCRIPTIONS_PASSWORD
basic_auth = True
if not user or not pw:
return Response({"error": _("Missing subscription credentials")}, status=status.HTTP_400_BAD_REQUEST)
with set_environ(**settings.AWX_TASK_ENV):
validated = get_licenser().validate_rh(user, pw)
if user:
settings.SUBSCRIPTIONS_CLIENT_ID = data['subscriptions_client_id']
if pw:
settings.SUBSCRIPTIONS_CLIENT_SECRET = data['subscriptions_client_secret']
validated = get_licenser().validate_rh(user, pw, basic_auth)
# update settings if the credentials were valid
if basic_auth:
if user:
settings.SUBSCRIPTIONS_USERNAME = user
if pw:
settings.SUBSCRIPTIONS_PASSWORD = pw
# mutual exclusion for basic auth and service account
# only one should be set at a given time so that
# config/attach/ knows which credentials to use
settings.SUBSCRIPTIONS_CLIENT_ID = ""
settings.SUBSCRIPTIONS_CLIENT_SECRET = ""
else:
if user:
settings.SUBSCRIPTIONS_CLIENT_ID = user
if pw:
settings.SUBSCRIPTIONS_CLIENT_SECRET = pw
# mutual exclusion for basic auth and service account
settings.SUBSCRIPTIONS_USERNAME = ""
settings.SUBSCRIPTIONS_PASSWORD = ""
except Exception as exc:
msg = _("Invalid Subscription")
if isinstance(exc, TokenError) or (
@@ -213,28 +259,37 @@ class ApiV2AttachView(APIView):
permission_classes = (IsAuthenticated,)
name = _('Attach Subscription')
swagger_topic = 'System Configuration'
resource_purpose = 'subscription attachment'
def check_permissions(self, request):
super(ApiV2AttachView, self).check_permissions(request)
if not request.user.is_superuser and request.method.lower() not in {'options', 'head'}:
self.permission_denied(request) # Raises PermissionDenied exception.
@extend_schema_if_available(
extensions={'x-ai-description': 'Attach a subscription'},
)
def post(self, request):
data = request.data.copy()
subscription_id = data.get('subscription_id', None)
if not subscription_id:
return Response({"error": _("No subscription ID provided.")}, status=status.HTTP_400_BAD_REQUEST)
# Ensure we always use the latest subscription credentials
cache.delete_many(['SUBSCRIPTIONS_CLIENT_ID', 'SUBSCRIPTIONS_CLIENT_SECRET'])
cache.delete_many(['SUBSCRIPTIONS_CLIENT_ID', 'SUBSCRIPTIONS_CLIENT_SECRET', 'SUBSCRIPTIONS_USERNAME', 'SUBSCRIPTIONS_PASSWORD'])
user = getattr(settings, 'SUBSCRIPTIONS_CLIENT_ID', None)
pw = getattr(settings, 'SUBSCRIPTIONS_CLIENT_SECRET', None)
basic_auth = False
if not (user and pw):
user = getattr(settings, 'SUBSCRIPTIONS_USERNAME', None)
pw = getattr(settings, 'SUBSCRIPTIONS_PASSWORD', None)
basic_auth = True
if not (user and pw):
return Response({"error": _("Missing subscription credentials")}, status=status.HTTP_400_BAD_REQUEST)
if subscription_id and user and pw:
data = request.data.copy()
try:
with set_environ(**settings.AWX_TASK_ENV):
validated = get_licenser().validate_rh(user, pw)
validated = get_licenser().validate_rh(user, pw, basic_auth)
except Exception as exc:
msg = _("Invalid Subscription")
if isinstance(exc, requests.exceptions.HTTPError) and getattr(getattr(exc, 'response', None), 'status_code', None) == 401:
@@ -248,6 +303,7 @@ class ApiV2AttachView(APIView):
else:
logger.exception(smart_str(u"Invalid subscription submitted."), extra=dict(actor=request.user.username))
return Response({"error": msg}, status=status.HTTP_400_BAD_REQUEST)
for sub in validated:
if sub['subscription_id'] == subscription_id:
sub['valid_key'] = True
@@ -262,12 +318,16 @@ class ApiV2ConfigView(APIView):
permission_classes = (IsAuthenticated,)
name = _('Configuration')
swagger_topic = 'System Configuration'
resource_purpose = 'system configuration and license management'
def check_permissions(self, request):
super(ApiV2ConfigView, self).check_permissions(request)
if not request.user.is_superuser and request.method.lower() not in {'options', 'head', 'get'}:
self.permission_denied(request) # Raises PermissionDenied exception.
@extend_schema_if_available(
extensions={'x-ai-description': 'Return various configuration settings'},
)
def get(self, request, format=None):
'''Return various sitewide configuration settings'''
@@ -306,6 +366,7 @@ class ApiV2ConfigView(APIView):
return Response(data)
@extend_schema_if_available(extensions={"x-ai-description": "Add or update a subscription manifest license"})
def post(self, request):
if not isinstance(request.data, dict):
return Response({"error": _("Invalid subscription data")}, status=status.HTTP_400_BAD_REQUEST)
@@ -351,6 +412,9 @@ class ApiV2ConfigView(APIView):
logger.warning(smart_str(u"Invalid subscription submitted."), extra=dict(actor=request.user.username))
return Response({"error": _("Invalid subscription")}, status=status.HTTP_400_BAD_REQUEST)
@extend_schema_if_available(
extensions={'x-ai-description': 'Remove the current subscription'},
)
def delete(self, request):
try:
settings.LICENSE = {}

View File

@@ -11,6 +11,7 @@ from rest_framework import status
from rest_framework.exceptions import PermissionDenied
from rest_framework.permissions import AllowAny
from rest_framework.response import Response
from ansible_base.lib.utils.schema import extend_schema_if_available
from awx.api import serializers
from awx.api.generics import APIView, GenericAPIView
@@ -24,6 +25,7 @@ logger = logging.getLogger('awx.api.views.webhooks')
class WebhookKeyView(GenericAPIView):
serializer_class = serializers.EmptySerializer
permission_classes = (WebhookKeyPermission,)
resource_purpose = 'webhook key management'
def get_queryset(self):
qs_models = {'job_templates': JobTemplate, 'workflow_job_templates': WorkflowJobTemplate}
@@ -31,11 +33,13 @@ class WebhookKeyView(GenericAPIView):
return super().get_queryset()
@extend_schema_if_available(extensions={"x-ai-description": "Get the webhook key for a template"})
def get(self, request, *args, **kwargs):
obj = self.get_object()
return Response({'webhook_key': obj.webhook_key})
@extend_schema_if_available(extensions={"x-ai-description": "Rotate the webhook key for a template"})
def post(self, request, *args, **kwargs):
obj = self.get_object()
obj.rotate_webhook_key()
@@ -52,6 +56,7 @@ class WebhookReceiverBase(APIView):
authentication_classes = ()
ref_keys = {}
resource_purpose = 'webhook receiver for triggering jobs'
def get_queryset(self):
qs_models = {'job_templates': JobTemplate, 'workflow_job_templates': WorkflowJobTemplate}
@@ -127,7 +132,8 @@ class WebhookReceiverBase(APIView):
raise PermissionDenied
@csrf_exempt
def post(self, request, *args, **kwargs):
@extend_schema_if_available(extensions={"x-ai-description": "Receive a webhook event and trigger a job"})
def post(self, request, *args, **kwargs_in):
# Ensure that the full contents of the request are captured for multiple uses.
request.body
@@ -175,6 +181,7 @@ class WebhookReceiverBase(APIView):
class GithubWebhookReceiver(WebhookReceiverBase):
service = 'github'
resource_purpose = 'github webhook receiver'
ref_keys = {
'pull_request': 'pull_request.head.sha',
@@ -212,6 +219,7 @@ class GithubWebhookReceiver(WebhookReceiverBase):
class GitlabWebhookReceiver(WebhookReceiverBase):
service = 'gitlab'
resource_purpose = 'gitlab webhook receiver'
ref_keys = {'Push Hook': 'checkout_sha', 'Tag Push Hook': 'checkout_sha', 'Merge Request Hook': 'object_attributes.last_commit.id'}
@@ -250,6 +258,7 @@ class GitlabWebhookReceiver(WebhookReceiverBase):
class BitbucketDcWebhookReceiver(WebhookReceiverBase):
service = 'bitbucket_dc'
resource_purpose = 'bitbucket data center webhook receiver'
ref_keys = {
'repo:refs_changed': 'changes.0.toHash',

View File

@@ -6,7 +6,7 @@ import urllib.parse as urlparse
from collections import OrderedDict
# Django
from django.core.validators import URLValidator, _lazy_re_compile
from django.core.validators import URLValidator, DomainNameValidator, _lazy_re_compile
from django.utils.translation import gettext_lazy as _
# Django REST Framework
@@ -160,10 +160,11 @@ class StringListIsolatedPathField(StringListField):
class URLField(CharField):
# these lines set up a custom regex that allow numbers in the
# top-level domain
tld_re = (
r'\.' # dot
r'(?!-)' # can't start with a dash
r'(?:[a-z' + URLValidator.ul + r'0-9' + '-]{2,63}' # domain label, this line was changed from the original URLValidator
r'(?:[a-z' + DomainNameValidator.ul + r'0-9' + '-]{2,63}' # domain label, this line was changed from the original URLValidator
r'|xn--[a-z0-9]{1,59})' # or punycode label
r'(?<!-)' # can't end with a dash
r'\.?' # may have a trailing dot

View File

@@ -5,7 +5,6 @@ from django.urls import re_path
from awx.conf.views import SettingCategoryList, SettingSingletonDetail, SettingLoggingTest
urlpatterns = [
re_path(r'^$', SettingCategoryList.as_view(), name='setting_category_list'),
re_path(r'^(?P<category_slug>[a-z0-9-]+)/$', SettingSingletonDetail.as_view(), name='setting_singleton_detail'),

View File

@@ -31,7 +31,7 @@ from awx.conf.models import Setting
from awx.conf.serializers import SettingCategorySerializer, SettingSingletonSerializer
from awx.conf import settings_registry
from awx.main.utils.external_logging import reconfigure_rsyslog
from ansible_base.lib.utils.schema import extend_schema_if_available
SettingCategory = collections.namedtuple('SettingCategory', ('url', 'slug', 'name'))
@@ -42,6 +42,10 @@ class SettingCategoryList(ListAPIView):
filter_backends = []
name = _('Setting Categories')
@extend_schema_if_available(extensions={"x-ai-description": "A list of additional API endpoints related to settings."})
def get(self, request, *args, **kwargs):
return super().get(request, *args, **kwargs)
def get_queryset(self):
setting_categories = []
categories = settings_registry.get_registered_categories()
@@ -63,6 +67,10 @@ class SettingSingletonDetail(RetrieveUpdateDestroyAPIView):
filter_backends = []
name = _('Setting Detail')
@extend_schema_if_available(extensions={"x-ai-description": "Update system settings."})
def patch(self, request, *args, **kwargs):
return super().patch(request, *args, **kwargs)
def get_queryset(self):
self.category_slug = self.kwargs.get('category_slug', 'all')
all_category_slugs = list(settings_registry.get_registered_categories().keys())

View File

@@ -1,15 +1,17 @@
# Python
import logging
# Dispatcherd
from dispatcherd.publish import task
# AWX
from awx.main.analytics.subsystem_metrics import DispatcherMetrics, CallbackReceiverMetrics
from awx.main.dispatch.publish import task as task_awx
from awx.main.dispatch import get_task_queuename
logger = logging.getLogger('awx.main.scheduler')
@task_awx(queue=get_task_queuename)
@task(queue=get_task_queuename, timeout=300, on_duplicate='discard')
def send_subsystem_metrics():
DispatcherMetrics().send_metrics()
CallbackReceiverMetrics().send_metrics()

View File

@@ -1,8 +1,6 @@
import datetime
import asyncio
import logging
import redis
import redis.asyncio
import re
from prometheus_client import (
@@ -15,7 +13,7 @@ from prometheus_client import (
)
from django.conf import settings
from awx.main.utils.redis import get_redis_client, get_redis_client_async
BROADCAST_WEBSOCKET_REDIS_KEY_NAME = 'broadcast_websocket_stats'
@@ -66,6 +64,8 @@ class FixedSlidingWindow:
class RelayWebsocketStatsManager:
_redis_client = None # Cached Redis client for get_stats_sync()
def __init__(self, local_hostname):
self._local_hostname = local_hostname
self._stats = dict()
@@ -80,7 +80,7 @@ class RelayWebsocketStatsManager:
async def run_loop(self):
try:
redis_conn = await redis.asyncio.Redis.from_url(settings.BROKER_URL)
redis_conn = get_redis_client_async()
while True:
stats_data_str = ''.join(stat.serialize() for stat in self._stats.values())
await redis_conn.set(self._redis_key, stats_data_str)
@@ -103,8 +103,10 @@ class RelayWebsocketStatsManager:
"""
Stringified version of all the stats
"""
redis_conn = redis.Redis.from_url(settings.BROKER_URL)
stats_str = redis_conn.get(BROADCAST_WEBSOCKET_REDIS_KEY_NAME) or b''
# Reuse cached Redis client to avoid creating new connection pools on every call
if cls._redis_client is None:
cls._redis_client = get_redis_client()
stats_str = cls._redis_client.get(BROADCAST_WEBSOCKET_REDIS_KEY_NAME) or b''
return parser.text_string_to_metric_families(stats_str.decode('UTF-8'))

View File

@@ -487,9 +487,7 @@ def unified_jobs_table(since, full_path, until, **kwargs):
OR (main_unifiedjob.finished > '{0}' AND main_unifiedjob.finished <= '{1}'))
AND main_unifiedjob.launch_type != 'sync'
ORDER BY main_unifiedjob.id ASC) TO STDOUT WITH CSV HEADER
'''.format(
since.isoformat(), until.isoformat()
)
'''.format(since.isoformat(), until.isoformat())
return _copy_table(table='unified_jobs', query=unified_job_query, path=full_path)
@@ -550,9 +548,7 @@ def workflow_job_node_table(since, full_path, until, **kwargs):
) always_nodes ON main_workflowjobnode.id = always_nodes.from_workflowjobnode_id
WHERE (main_workflowjobnode.modified > '{}' AND main_workflowjobnode.modified <= '{}')
ORDER BY main_workflowjobnode.id ASC) TO STDOUT WITH CSV HEADER
'''.format(
since.isoformat(), until.isoformat()
)
'''.format(since.isoformat(), until.isoformat())
return _copy_table(table='workflow_job_node', query=workflow_job_node_query, path=full_path)

View File

@@ -0,0 +1,41 @@
import http.client
import socket
import urllib.error
import urllib.request
import logging
from django.conf import settings
logger = logging.getLogger(__name__)
def get_dispatcherd_metrics(request):
metrics_cfg = settings.METRICS_SUBSYSTEM_CONFIG.get('server', {}).get(settings.METRICS_SERVICE_DISPATCHER, {})
host = metrics_cfg.get('host', 'localhost')
port = metrics_cfg.get('port', 8015)
metrics_filter = []
if request is not None and hasattr(request, "query_params"):
try:
nodes_filter = request.query_params.getlist("node")
except Exception:
nodes_filter = []
if nodes_filter and settings.CLUSTER_HOST_ID not in nodes_filter:
return ''
try:
metrics_filter = request.query_params.getlist("metric")
except Exception:
metrics_filter = []
if metrics_filter:
# Right now we have no way of filtering the dispatcherd metrics
# so just avoid getting in the way if another metric is filtered for
return ''
url = f"http://{host}:{port}/metrics"
try:
with urllib.request.urlopen(url, timeout=1.0) as response:
payload = response.read()
if not payload:
return ''
return payload.decode('utf-8')
except (urllib.error.URLError, UnicodeError, socket.timeout, TimeoutError, http.client.HTTPException) as exc:
logger.debug(f"Failed to collect dispatcherd metrics from {url}: {exc}")
return ''
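
A rough sketch of how the filters above short-circuit, assuming an AWX-configured Django environment; the module path is inferred from the relative import in subsystem_metrics.py, and the request stand-ins and metric name are purely illustrative:

from awx.main.analytics.dispatcherd_metrics import get_dispatcherd_metrics


class FakeParams(dict):
    # minimal stand-in for DRF query_params; only getlist() is needed here
    def getlist(self, key):
        return self.get(key, [])


class FakeRequest:
    # hypothetical request object carrying only query_params
    def __init__(self, params):
        self.query_params = FakeParams(params)


# filtered to another node (assuming it differs from this node's CLUSTER_HOST_ID):
# returns '' without making any HTTP call
print(get_dispatcherd_metrics(FakeRequest({'node': ['some-other-node']})))

# an explicit metric filter also skips the dispatcherd scrape and returns ''
print(get_dispatcherd_metrics(FakeRequest({'metric': ['some_other_metric']})))

# no filters: attempts http://<host>:<port>/metrics with a 1 second timeout
print(get_dispatcherd_metrics(FakeRequest({})))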

View File

@@ -14,6 +14,8 @@ from rest_framework.request import Request
from awx.main.consumers import emit_channel_notification
from awx.main.utils import is_testing
from awx.main.utils.redis import get_redis_client
from .dispatcherd_metrics import get_dispatcherd_metrics
root_key = settings.SUBSYSTEM_METRICS_REDIS_KEY_PREFIX
logger = logging.getLogger('awx.main.analytics')
@@ -44,11 +46,12 @@ class MetricsServer(MetricsServerSettings):
class BaseM:
def __init__(self, field, help_text):
def __init__(self, field, help_text, labels=None):
self.field = field
self.help_text = help_text
self.current_value = 0
self.metric_has_changed = False
self.labels = labels or {}
def reset_value(self, conn):
conn.hset(root_key, self.field, 0)
@@ -69,12 +72,16 @@ class BaseM:
value = conn.hget(root_key, self.field)
return self.decode_value(value)
def to_prometheus(self, instance_data):
def to_prometheus(self, instance_data, namespace=None):
output_text = f"# HELP {self.field} {self.help_text}\n# TYPE {self.field} gauge\n"
for instance in instance_data:
if self.field in instance_data[instance]:
# Build label string
labels = f'node="{instance}"'
if namespace:
labels += f',subsystem="{namespace}"'
# on upgrade, if there are stale instances, we can end up with issues where new metrics are not present
output_text += f'{self.field}{{node="{instance}"}} {instance_data[instance][self.field]}\n'
output_text += f'{self.field}{{{labels}}} {instance_data[instance][self.field]}\n'
return output_text
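# Example of the exposition lines produced above (hostnames, label values, and
# numbers are illustrative, not taken from a real system):
#   workflow_manager_get_tasks_seconds{node="awx-1"} 0.25
# and, when a namespace is passed for the operational metrics:
#   subsystem_metrics_pipe_execute_calls{node="awx-1",subsystem="dispatcher"} 42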
@@ -167,14 +174,17 @@ class HistogramM(BaseM):
self.sum.store_value(conn)
self.inf.store_value(conn)
def to_prometheus(self, instance_data):
def to_prometheus(self, instance_data, namespace=None):
output_text = f"# HELP {self.field} {self.help_text}\n# TYPE {self.field} histogram\n"
for instance in instance_data:
# Build label string
node_label = f'node="{instance}"'
subsystem_label = f',subsystem="{namespace}"' if namespace else ''
for i, b in enumerate(self.buckets):
output_text += f'{self.field}_bucket{{le="{b}",node="{instance}"}} {sum(instance_data[instance][self.field]["counts"][0:i+1])}\n'
output_text += f'{self.field}_bucket{{le="+Inf",node="{instance}"}} {instance_data[instance][self.field]["inf"]}\n'
output_text += f'{self.field}_count{{node="{instance}"}} {instance_data[instance][self.field]["inf"]}\n'
output_text += f'{self.field}_sum{{node="{instance}"}} {instance_data[instance][self.field]["sum"]}\n'
output_text += f'{self.field}_bucket{{le="{b}",{node_label}{subsystem_label}}} {sum(instance_data[instance][self.field]["counts"][0:i+1])}\n'
output_text += f'{self.field}_bucket{{le="+Inf",{node_label}{subsystem_label}}} {instance_data[instance][self.field]["inf"]}\n'
output_text += f'{self.field}_count{{{node_label}{subsystem_label}}} {instance_data[instance][self.field]["inf"]}\n'
output_text += f'{self.field}_sum{{{node_label}{subsystem_label}}} {instance_data[instance][self.field]["sum"]}\n'
return output_text
@@ -190,8 +200,8 @@ class Metrics(MetricsNamespace):
def __init__(self, namespace, auto_pipe_execute=False, instance_name=None, metrics_have_changed=True, **kwargs):
MetricsNamespace.__init__(self, namespace)
self.pipe = redis.Redis.from_url(settings.BROKER_URL).pipeline()
self.conn = redis.Redis.from_url(settings.BROKER_URL)
self.conn = get_redis_client()
self.pipe = self.conn.pipeline()
self.last_pipe_execute = time.time()
# track if metrics have been modified since last saved to redis
# start with True so that we get an initial save to redis
@@ -273,20 +283,22 @@ class Metrics(MetricsNamespace):
def pipe_execute(self):
if self.metrics_have_changed is True:
duration_to_save = time.perf_counter()
duration_pipe_exec = time.perf_counter()
for m in self.METRICS:
self.METRICS[m].store_value(self.pipe)
self.pipe.execute()
self.last_pipe_execute = time.time()
self.metrics_have_changed = False
duration_to_save = time.perf_counter() - duration_to_save
self.METRICS['subsystem_metrics_pipe_execute_seconds'].inc(duration_to_save)
self.METRICS['subsystem_metrics_pipe_execute_calls'].inc(1)
duration_pipe_exec = time.perf_counter() - duration_pipe_exec
duration_to_save = time.perf_counter()
duration_send_metrics = time.perf_counter()
self.send_metrics()
duration_to_save = time.perf_counter() - duration_to_save
self.METRICS['subsystem_metrics_send_metrics_seconds'].inc(duration_to_save)
duration_send_metrics = time.perf_counter() - duration_send_metrics
# Increment operational metrics
self.METRICS['subsystem_metrics_pipe_execute_seconds'].inc(duration_pipe_exec)
self.METRICS['subsystem_metrics_pipe_execute_calls'].inc(1)
self.METRICS['subsystem_metrics_send_metrics_seconds'].inc(duration_send_metrics)
def send_metrics(self):
# more than one thread could be calling this at the same time, so should
@@ -352,7 +364,13 @@ class Metrics(MetricsNamespace):
if instance_data:
for field in self.METRICS:
if len(metrics_filter) == 0 or field in metrics_filter:
output_text += self.METRICS[field].to_prometheus(instance_data)
# Add subsystem label only for operational metrics
namespace = (
self._namespace
if field in ['subsystem_metrics_pipe_execute_seconds', 'subsystem_metrics_pipe_execute_calls', 'subsystem_metrics_send_metrics_seconds']
else None
)
output_text += self.METRICS[field].to_prometheus(instance_data, namespace)
return output_text
@@ -381,11 +399,6 @@ class DispatcherMetrics(Metrics):
SetFloatM('workflow_manager_recorded_timestamp', 'Unix timestamp when metrics were last recorded'),
SetFloatM('workflow_manager_spawn_workflow_graph_jobs_seconds', 'Time spent spawning workflow tasks'),
SetFloatM('workflow_manager_get_tasks_seconds', 'Time spent loading workflow tasks from db'),
# dispatcher subsystem metrics
SetIntM('dispatcher_pool_scale_up_events', 'Number of times local dispatcher scaled up a worker since startup'),
SetIntM('dispatcher_pool_active_task_count', 'Number of active tasks in the worker pool when last task was submitted'),
SetIntM('dispatcher_pool_max_worker_count', 'Highest number of workers in worker pool in last collection interval, about 20s'),
SetFloatM('dispatcher_availability', 'Fraction of time (in last collection interval) dispatcher was able to receive messages'),
]
def __init__(self, *args, **kwargs):
@@ -413,8 +426,12 @@ class CallbackReceiverMetrics(Metrics):
def metrics(request):
output_text = ''
for m in [DispatcherMetrics(), CallbackReceiverMetrics()]:
output_text += m.generate_metrics(request)
output_text += DispatcherMetrics().generate_metrics(request)
output_text += CallbackReceiverMetrics().generate_metrics(request)
dispatcherd_metrics = get_dispatcherd_metrics(request)
if dispatcherd_metrics:
output_text += dispatcherd_metrics
return output_text
@@ -440,7 +457,10 @@ class CustomToPrometheusMetricsCollector(prometheus_client.registry.Collector):
logger.debug(f"No metric data not found in redis for metric namespace '{self._metrics._namespace}'")
return None
host_metrics = instance_data.get(my_hostname)
if not (host_metrics := instance_data.get(my_hostname)):
logger.debug(f"Metric data for this node '{my_hostname}' not found in redis for metric namespace '{self._metrics._namespace}'")
return None
for _, metric in self._metrics.METRICS.items():
entry = host_metrics.get(metric.field)
if not entry:
@@ -461,13 +481,6 @@ class CallbackReceiverMetricsServer(MetricsServer):
super().__init__(settings.METRICS_SERVICE_CALLBACK_RECEIVER, registry)
class DispatcherMetricsServer(MetricsServer):
def __init__(self):
registry = CollectorRegistry(auto_describe=True)
registry.register(CustomToPrometheusMetricsCollector(DispatcherMetrics(metrics_have_changed=False)))
super().__init__(settings.METRICS_SERVICE_DISPATCHER, registry)
class WebsocketsMetricsServer(MetricsServer):
def __init__(self):
registry = CollectorRegistry(auto_describe=True)

View File

@@ -82,7 +82,7 @@ class MainConfig(AppConfig):
def configure_dispatcherd(self):
"""This implements the default configuration for dispatcherd
If running the tasking service like awx-manage run_dispatcher,
If running the tasking service like awx-manage dispatcherd,
some additional config will be applied on top of this.
This configuration provides the minimum such that code can submit
tasks to pg_notify to run those tasks.

View File

@@ -144,6 +144,35 @@ register(
category_slug='system',
)
register(
'SUBSCRIPTIONS_USERNAME',
field_class=fields.CharField,
default='',
allow_blank=True,
encrypted=False,
read_only=False,
label=_('Red Hat Username for Subscriptions'),
help_text=_('Username used to retrieve subscription and content information'), # noqa
category=_('System'),
category_slug='system',
hidden=True,
)
register(
'SUBSCRIPTIONS_PASSWORD',
field_class=fields.CharField,
default='',
allow_blank=True,
encrypted=True,
read_only=False,
label=_('Red Hat Password for Subscriptions'),
help_text=_('Password used to retrieve subscription and content information'), # noqa
category=_('System'),
category_slug='system',
hidden=True,
)
register(
'SUBSCRIPTIONS_CLIENT_ID',
field_class=fields.CharField,
@@ -155,6 +184,7 @@ register(
help_text=_('Client ID used to retrieve subscription and content information'), # noqa
category=_('System'),
category_slug='system',
hidden=True,
)
register(
@@ -168,6 +198,7 @@ register(
help_text=_('Client secret used to retrieve subscription and content information'), # noqa
category=_('System'),
category_slug='system',
hidden=True,
)
register(

View File

@@ -3,7 +3,6 @@ import logging
import time
import hmac
import asyncio
import redis
from django.core.serializers.json import DjangoJSONEncoder
from django.conf import settings
@@ -14,6 +13,8 @@ from channels.generic.websocket import AsyncJsonWebsocketConsumer
from channels.layers import get_channel_layer
from channels.db import database_sync_to_async
from awx.main.utils.redis import get_redis_client_async
logger = logging.getLogger('awx.main.consumers')
XRF_KEY = '_auth_user_xrf'
@@ -40,10 +41,10 @@ class WebsocketSecretAuthHelper:
@classmethod
def verify_secret(cls, s, nonce_tolerance=300):
try:
(prefix, payload) = s.split(' ')
prefix, payload = s.split(' ')
if prefix != 'HMAC-SHA256':
raise ValueError('Unsupported encryption algorithm')
(nonce_parsed, secret_parsed) = payload.split(':')
nonce_parsed, secret_parsed = payload.split(':')
except Exception:
raise ValueError("Failed to parse secret")
@@ -94,6 +95,9 @@ class RelayConsumer(AsyncJsonWebsocketConsumer):
await self.channel_layer.group_add(settings.BROADCAST_WEBSOCKET_GROUP_NAME, self.channel_name)
logger.info(f"client '{self.channel_name}' joined the broadcast group.")
# Initialize Redis client once for reuse across all message handling
self._redis_conn = get_redis_client_async()
async def disconnect(self, code):
logger.info(f"client '{self.channel_name}' disconnected from the broadcast group.")
await self.channel_layer.group_discard(settings.BROADCAST_WEBSOCKET_GROUP_NAME, self.channel_name)
@@ -102,11 +106,12 @@ class RelayConsumer(AsyncJsonWebsocketConsumer):
await self.send(event['text'])
async def receive_json(self, data):
(group, message) = unwrap_broadcast_msg(data)
group, message = unwrap_broadcast_msg(data)
if group == "metrics":
message = json.loads(message['text'])
conn = redis.Redis.from_url(settings.BROKER_URL)
conn.set(settings.SUBSYSTEM_METRICS_REDIS_KEY_PREFIX + "-" + message['metrics_namespace'] + "_instance_" + message['instance'], message['metrics'])
await self._redis_conn.set(
settings.SUBSYSTEM_METRICS_REDIS_KEY_PREFIX + "-" + message['metrics_namespace'] + "_instance_" + message['instance'], message['metrics']
)
else:
await self.channel_layer.group_send(group, message)

View File

@@ -77,14 +77,13 @@ class PubSub(object):
n = psycopg.connection.Notify(pgn.relname.decode(enc), pgn.extra.decode(enc), pgn.be_pid)
yield n
def events(self, yield_timeouts=False):
def events(self):
if not self.conn.autocommit:
raise RuntimeError('Listening for events can only be done in autocommit mode')
while True:
if select.select([self.conn], [], [], self.select_timeout) == NOT_READY:
if yield_timeouts:
yield None
yield None
else:
notification_generator = self.current_notifies(self.conn)
for notification in notification_generator:

View File

@@ -2,7 +2,7 @@ from django.conf import settings
from ansible_base.lib.utils.db import get_pg_notify_params
from awx.main.dispatch import get_task_queuename
from awx.main.dispatch.pool import get_auto_max_workers
from awx.main.utils.common import get_auto_max_workers
def get_dispatcherd_config(for_service: bool = False, mock_publish: bool = False) -> dict:
@@ -11,28 +11,35 @@ def get_dispatcherd_config(for_service: bool = False, mock_publish: bool = False
Parameters:
for_service: if True, include dynamic options needed for running the dispatcher service
this will require database access, so delay evaluation until after app setup
mock_publish: if True, use mock values that don't require database access
this is used during tests to avoid database queries during app initialization
"""
# When mock_publish=True (e.g., during tests), use a default value to avoid
# database access in get_auto_max_workers() which queries settings.IS_K8S
if mock_publish:
max_workers = 20 # Reasonable default for tests
else:
max_workers = get_auto_max_workers()
config = {
"version": 2,
"service": {
"pool_kwargs": {
"min_workers": settings.JOB_EVENT_WORKERS,
"max_workers": get_auto_max_workers(),
"max_workers": max_workers,
},
"main_kwargs": {"node_id": settings.CLUSTER_HOST_ID},
"process_manager_cls": "ForkServerManager",
"process_manager_kwargs": {"preload_modules": ['awx.main.dispatch.hazmat']},
"process_manager_kwargs": {"preload_modules": ['awx.main.dispatch.prefork']},
},
"brokers": {
"socket": {"socket_path": settings.DISPATCHERD_DEBUGGING_SOCKFILE},
},
"publish": {"default_control_broker": "socket"},
"brokers": {},
"publish": {},
"worker": {"worker_cls": "awx.main.dispatch.worker.dispatcherd.AWXTaskWorker"},
}
if mock_publish:
config["brokers"]["noop"] = {}
config["publish"]["default_broker"] = "noop"
config["brokers"]["dispatcherd.testing.brokers.noop"] = {}
config["publish"]["default_broker"] = "dispatcherd.testing.brokers.noop"
else:
config["brokers"]["pg_notify"] = {
"config": get_pg_notify_params(),
@@ -49,5 +56,11 @@ def get_dispatcherd_config(for_service: bool = False, mock_publish: bool = False
}
config["brokers"]["pg_notify"]["channels"] = ['tower_broadcast_all', 'tower_settings_change', get_task_queuename()]
metrics_cfg = settings.METRICS_SUBSYSTEM_CONFIG.get('server', {}).get(settings.METRICS_SERVICE_DISPATCHER)
if metrics_cfg:
config["service"]["metrics_kwargs"] = {
"host": metrics_cfg.get("host", "localhost"),
"port": metrics_cfg.get("port", 8015),
}
return config
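
A minimal sketch of the test-mode branch above (the import path is an assumption based on where this helper lives in the AWX tree, and Django settings still need to be importable):

from awx.main.dispatch.config import get_dispatcherd_config  # path assumed

config = get_dispatcherd_config(mock_publish=True)
# no database access: the worker count falls back to the hard-coded test default
assert config["service"]["pool_kwargs"]["max_workers"] == 20
# publishing goes to the dispatcherd no-op testing broker instead of pg_notify
assert config["publish"]["default_broker"] == "dispatcherd.testing.brokers.noop"
assert "pg_notify" not in config["brokers"]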

View File

@@ -1,78 +0,0 @@
import logging
import uuid
import json
from django.conf import settings
from django.db import connection
import redis
from awx.main.dispatch import get_task_queuename
from . import pg_bus_conn
logger = logging.getLogger('awx.main.dispatch')
class Control(object):
services = ('dispatcher', 'callback_receiver')
result = None
def __init__(self, service, host=None):
if service not in self.services:
raise RuntimeError('{} must be in {}'.format(service, self.services))
self.service = service
self.queuename = host or get_task_queuename()
def status(self, *args, **kwargs):
r = redis.Redis.from_url(settings.BROKER_URL)
if self.service == 'dispatcher':
stats = r.get(f'awx_{self.service}_statistics') or b''
return stats.decode('utf-8')
else:
workers = []
for key in r.keys('awx_callback_receiver_statistics_*'):
workers.append(r.get(key).decode('utf-8'))
return '\n'.join(workers)
def running(self, *args, **kwargs):
return self.control_with_reply('running', *args, **kwargs)
def cancel(self, task_ids, with_reply=True):
if with_reply:
return self.control_with_reply('cancel', extra_data={'task_ids': task_ids})
else:
self.control({'control': 'cancel', 'task_ids': task_ids, 'reply_to': None}, extra_data={'task_ids': task_ids})
def schedule(self, *args, **kwargs):
return self.control_with_reply('schedule', *args, **kwargs)
@classmethod
def generate_reply_queue_name(cls):
return f"reply_to_{str(uuid.uuid4()).replace('-','_')}"
def control_with_reply(self, command, timeout=5, extra_data=None):
logger.warning('checking {} {} for {}'.format(self.service, command, self.queuename))
reply_queue = Control.generate_reply_queue_name()
self.result = None
if not connection.get_autocommit():
raise RuntimeError('Control-with-reply messages can only be done in autocommit mode')
with pg_bus_conn(select_timeout=timeout) as conn:
conn.listen(reply_queue)
send_data = {'control': command, 'reply_to': reply_queue}
if extra_data:
send_data.update(extra_data)
conn.notify(self.queuename, json.dumps(send_data))
for reply in conn.events(yield_timeouts=True):
if reply is None:
logger.error(f'{self.service} did not reply within {timeout}s')
raise RuntimeError(f"{self.service} did not reply within {timeout}s")
break
return json.loads(reply.payload)
def control(self, msg, **kwargs):
with pg_bus_conn() as conn:
conn.notify(self.queuename, json.dumps(msg))

View File

@@ -1,147 +0,0 @@
import logging
import time
import yaml
from datetime import datetime
logger = logging.getLogger('awx.main.dispatch.periodic')
class ScheduledTask:
"""
Class representing schedules, very loosely modeled after the python schedule library's Job
the idea of this class is to:
- only deal in relative times (time since the scheduler global start)
- only deal in integer math for target runtimes, but float for current relative time
Missed schedule policy:
Invariant target times are maintained, meaning that if interval=10s offset=0
and it runs at t=7s, then it calls for next run in 3s.
However, if a complete interval has passed, that is counted as a missed run,
and missed runs are abandoned (no catch-up runs).
"""
def __init__(self, name: str, data: dict):
# parameters need for schedule computation
self.interval = int(data['schedule'].total_seconds())
self.offset = 0 # offset relative to start time this schedule begins
self.index = 0 # number of periods of the schedule that has passed
# parameters that do not affect scheduling logic
self.last_run = None # time of last run, only used for debug
self.completed_runs = 0 # number of times schedule is known to run
self.name = name
self.data = data # used by caller to know what to run
@property
def next_run(self):
"Time until the next run with t=0 being the global_start of the scheduler class"
return (self.index + 1) * self.interval + self.offset
def due_to_run(self, relative_time):
return bool(self.next_run <= relative_time)
def expected_runs(self, relative_time):
return int((relative_time - self.offset) / self.interval)
def mark_run(self, relative_time):
self.last_run = relative_time
self.completed_runs += 1
new_index = self.expected_runs(relative_time)
if new_index > self.index + 1:
logger.warning(f'Missed {new_index - self.index - 1} schedules of {self.name}')
self.index = new_index
def missed_runs(self, relative_time):
"Number of times job was supposed to ran but failed to, only used for debug"
missed_ct = self.expected_runs(relative_time) - self.completed_runs
# if this is currently due to run do not count that as a missed run
if missed_ct and self.due_to_run(relative_time):
missed_ct -= 1
return missed_ct
class Scheduler:
def __init__(self, schedule):
"""
Expects schedule in the form of a dictionary like
{
'job1': {'schedule': timedelta(seconds=50), 'other': 'stuff'}
}
Only the schedule nearest-second value is used for scheduling,
the rest of the data is for use by the caller to know what to run.
"""
self.jobs = [ScheduledTask(name, data) for name, data in schedule.items()]
min_interval = min(job.interval for job in self.jobs)
num_jobs = len(self.jobs)
# this is intentionally opinionated against spammy schedules
# a core goal is to spread out the scheduled tasks (for worker management)
# and high-frequency schedules just do not work with that
if num_jobs > min_interval:
raise RuntimeError(f'Number of schedules ({num_jobs}) is more than the shortest schedule interval ({min_interval} seconds).')
# even space out jobs over the base interval
for i, job in enumerate(self.jobs):
job.offset = (i * min_interval) // num_jobs
# internally times are all referenced relative to startup time, add grace period
self.global_start = time.time() + 2.0
def get_and_mark_pending(self, reftime=None):
if reftime is None:
reftime = time.time() # mostly for tests
relative_time = reftime - self.global_start
to_run = []
for job in self.jobs:
if job.due_to_run(relative_time):
to_run.append(job)
logger.debug(f'scheduler found {job.name} to run, {relative_time - job.next_run} seconds after target')
job.mark_run(relative_time)
return to_run
def time_until_next_run(self, reftime=None):
if reftime is None:
reftime = time.time() # mostly for tests
relative_time = reftime - self.global_start
next_job = min(self.jobs, key=lambda j: j.next_run)
delta = next_job.next_run - relative_time
if delta <= 0.1:
# careful not to give 0 or negative values to the select timeout, which has unclear interpretation
logger.warning(f'Scheduler next run of {next_job.name} is {-delta} seconds in the past')
return 0.1
elif delta > 20.0:
logger.warning(f'Scheduler next run unexpectedly over 20 seconds in future: {delta}')
return 20.0
logger.debug(f'Scheduler next run is {next_job.name} in {delta} seconds')
return delta
def debug(self, *args, **kwargs):
data = dict()
data['title'] = 'Scheduler status'
reftime = time.time()
now = datetime.fromtimestamp(reftime).strftime('%Y-%m-%d %H:%M:%S UTC')
start_time = datetime.fromtimestamp(self.global_start).strftime('%Y-%m-%d %H:%M:%S UTC')
relative_time = reftime - self.global_start
data['started_time'] = start_time
data['current_time'] = now
data['current_time_relative'] = round(relative_time, 3)
data['total_schedules'] = len(self.jobs)
data['schedule_list'] = dict(
[
(
job.name,
dict(
last_run_seconds_ago=round(relative_time - job.last_run, 3) if job.last_run else None,
next_run_in_seconds=round(job.next_run - relative_time, 3),
offset_in_seconds=job.offset,
completed_runs=job.completed_runs,
missed_runs=job.missed_runs(relative_time),
),
)
for job in sorted(self.jobs, key=lambda job: job.interval)
]
)
return yaml.safe_dump(data, default_flow_style=False, sort_keys=False)

View File

@@ -1,583 +1,54 @@
import logging
import os
import random
import signal
import sys
import time
import traceback
from datetime import datetime
from uuid import uuid4
import json
import collections
from multiprocessing import Process
from multiprocessing import Queue as MPQueue
from queue import Full as QueueFull, Empty as QueueEmpty
from django.conf import settings
from django.db import connection as django_connection, connections
from django.db import connection as django_connection
from django.core.cache import cache as django_cache
from django.utils.timezone import now as tz_now
from django_guid import set_guid
from jinja2 import Template
import psutil
from ansible_base.lib.logging.runtime import log_excess_runtime
from awx.main.models import UnifiedJob
from awx.main.dispatch import reaper
from awx.main.utils.common import get_mem_effective_capacity, get_corrected_memory, get_corrected_cpu, get_cpu_effective_capacity
# ansible-runner
from ansible_runner.utils.capacity import get_mem_in_bytes, get_cpu_count
if 'run_callback_receiver' in sys.argv:
logger = logging.getLogger('awx.main.commands.run_callback_receiver')
else:
logger = logging.getLogger('awx.main.dispatch')
RETIRED_SENTINEL_TASK = "[retired]"
class NoOpResultQueue(object):
def put(self, item):
pass
logger = logging.getLogger('awx.main.commands.run_callback_receiver')
class PoolWorker(object):
"""
Used to track a worker child process and its pending and finished messages.
A simple wrapper around a multiprocessing.Process that tracks a worker child process.
This class makes use of two distinct multiprocessing.Queues to track state:
- self.queue: this is a queue which represents pending messages that should
be handled by this worker process; as new AMQP messages come
in, a pool will put() them into this queue; the child
process that is forked will get() from this queue and handle
received messages in an endless loop
- self.finished: this is a queue which the worker process uses to signal
that it has finished processing a message
When a message is put() onto this worker, it is tracked in
self.managed_tasks.
Periodically, the worker will call .calculate_managed_tasks(), which will
cause messages in self.finished to be removed from self.managed_tasks.
In this way, self.managed_tasks represents a view of the messages assigned
to a specific process. The message at [0] is the least-recently inserted
message, and it represents what the worker is running _right now_
(self.current_task).
A worker is "busy" when it has at least one message in self.managed_tasks.
It is "idle" when self.managed_tasks is empty.
The worker process runs the provided target function.
"""
track_managed_tasks = False
def __init__(self, queue_size, target, args, **kwargs):
self.messages_sent = 0
self.messages_finished = 0
self.managed_tasks = collections.OrderedDict()
self.finished = MPQueue(queue_size) if self.track_managed_tasks else NoOpResultQueue()
self.queue = MPQueue(queue_size)
self.process = Process(target=target, args=(self.queue, self.finished) + args)
def __init__(self, target, args):
self.process = Process(target=target, args=args)
self.process.daemon = True
self.creation_time = time.monotonic()
self.retiring = False
def start(self):
self.process.start()
def put(self, body):
if self.retiring:
uuid = body.get('uuid', 'N/A') if isinstance(body, dict) else 'N/A'
logger.info(f"Worker pid:{self.pid} is retiring. Refusing new task {uuid}.")
raise QueueFull("Worker is retiring and not accepting new tasks") # AutoscalePool.write handles QueueFull
uuid = '?'
if isinstance(body, dict):
if not body.get('uuid'):
body['uuid'] = str(uuid4())
uuid = body['uuid']
if self.track_managed_tasks:
self.managed_tasks[uuid] = body
self.queue.put(body, block=True, timeout=5)
self.messages_sent += 1
self.calculate_managed_tasks()
def quit(self):
"""
Send a special control message to the worker that tells it to exit
gracefully.
"""
self.queue.put('QUIT')
@property
def age(self):
"""Returns the current age of the worker in seconds."""
return time.monotonic() - self.creation_time
@property
def pid(self):
return self.process.pid
@property
def qsize(self):
return self.queue.qsize()
@property
def alive(self):
return self.process.is_alive()
@property
def mb(self):
if self.alive:
return '{:0.3f}'.format(psutil.Process(self.pid).memory_info().rss / 1024.0 / 1024.0)
return '0'
@property
def exitcode(self):
return str(self.process.exitcode)
def calculate_managed_tasks(self):
if not self.track_managed_tasks:
return
# look to see if any tasks were finished
finished = []
for _ in range(self.finished.qsize()):
try:
finished.append(self.finished.get(block=False))
except QueueEmpty:
break # qsize is not always _totally_ up to date
# if any tasks were finished, removed them from the managed tasks for
# this worker
for uuid in finished:
try:
del self.managed_tasks[uuid]
self.messages_finished += 1
except KeyError:
# ansible _sometimes_ appears to send events w/ duplicate UUIDs;
# UUIDs for ansible events are *not* actually globally unique
# when this occurs, it's _fine_ to ignore this KeyError because
# the purpose of self.managed_tasks is to just track internal
# state of which events are *currently* being processed.
logger.warning('Event UUID {} appears to have been duplicated.'.format(uuid))
if self.retiring:
self.managed_tasks[RETIRED_SENTINEL_TASK] = {'task': RETIRED_SENTINEL_TASK}
@property
def current_task(self):
if not self.track_managed_tasks:
return None
self.calculate_managed_tasks()
# the task at [0] is the one that's running right now (or is about to
# be running)
if len(self.managed_tasks):
return self.managed_tasks[list(self.managed_tasks.keys())[0]]
return None
@property
def orphaned_tasks(self):
if not self.track_managed_tasks:
return []
orphaned = []
if not self.alive:
# if this process had a running task that never finished,
# requeue its error callbacks
current_task = self.current_task
if isinstance(current_task, dict):
orphaned.extend(current_task.get('errbacks', []))
# if this process has any pending messages requeue them
for _ in range(self.qsize):
try:
message = self.queue.get(block=False)
if message != 'QUIT':
orphaned.append(message)
except QueueEmpty:
break # qsize is not always _totally_ up to date
if len(orphaned):
logger.error('requeuing {} messages from gone worker pid:{}'.format(len(orphaned), self.pid))
return orphaned
@property
def busy(self):
self.calculate_managed_tasks()
return len(self.managed_tasks) > 0
@property
def idle(self):
return not self.busy
class StatefulPoolWorker(PoolWorker):
track_managed_tasks = True
class WorkerPool(object):
"""
Creates a pool of forked PoolWorkers.
As WorkerPool.write(...) is called (generally, by a kombu consumer
implementation when it receives an AMQP message), messages are passed to
one of the multiprocessing Queues where some work can be done on them.
Each worker process runs the provided target function in an isolated process.
The pool manages spawning, tracking, and stopping worker processes.
class MessagePrinter(awx.main.dispatch.worker.BaseWorker):
def perform_work(self, body):
print(body)
pool = WorkerPool(min_workers=4) # spawn four worker processes
pool.init_workers(MessagePrinter().work_loop)
pool.write(
0, # preferred worker 0
'Hello, World!'
)
"""
pool_cls = PoolWorker
debug_meta = ''
def __init__(self, min_workers=None, queue_size=None):
self.name = settings.CLUSTER_HOST_ID
self.pid = os.getpid()
self.min_workers = min_workers or settings.JOB_EVENT_WORKERS
self.queue_size = queue_size or settings.JOB_EVENT_MAX_QUEUE_SIZE
self.workers = []
self.workers = []
def __len__(self):
return len(self.workers)
def init_workers(self, target, *target_args):
self.target = target
self.target_args = target_args
for idx in range(self.min_workers):
self.up()
def up(self):
idx = len(self.workers)
# It's important to close these because we're _about_ to fork, and we
# don't want the forked processes to inherit the open sockets
# for the DB and cache connections (that way lies race conditions)
django_connection.close()
django_cache.close()
worker = self.pool_cls(self.queue_size, self.target, (idx,) + self.target_args)
self.workers.append(worker)
try:
worker.start()
except Exception:
logger.exception('could not fork')
else:
logger.debug('scaling up worker pid:{}'.format(worker.pid))
return idx, worker
def debug(self, *args, **kwargs):
tmpl = Template(
'Recorded at: {{ dt }} \n'
'{{ pool.name }}[pid:{{ pool.pid }}] workers total={{ workers|length }} {{ meta }} \n'
'{% for w in workers %}'
'. worker[pid:{{ w.pid }}]{% if not w.alive %} GONE exit={{ w.exitcode }}{% endif %}'
' sent={{ w.messages_sent }}'
' age={{ "%.0f"|format(w.age) }}s'
' retiring={{ w.retiring }}'
'{% if w.messages_finished %} finished={{ w.messages_finished }}{% endif %}'
' qsize={{ w.managed_tasks|length }}'
' rss={{ w.mb }}MB'
'{% for task in w.managed_tasks.values() %}'
'\n - {% if loop.index0 == 0 %}running {% if "age" in task %}for: {{ "%.1f" % task["age"] }}s {% endif %}{% else %}queued {% endif %}'
'{{ task["uuid"] }} '
'{% if "task" in task %}'
'{{ task["task"].rsplit(".", 1)[-1] }}'
# don't print kwargs, they often contain launch-time secrets
'(*{{ task.get("args", []) }})'
'{% endif %}'
'{% endfor %}'
'{% if not w.managed_tasks|length %}'
' [IDLE]'
'{% endif %}'
'\n'
'{% endfor %}'
)
now = datetime.utcnow().strftime('%Y-%m-%d %H:%M:%S UTC')
return tmpl.render(pool=self, workers=self.workers, meta=self.debug_meta, dt=now)
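# Roughly, the rendered report reads like the sketch below (host name, pids, ages,
# task names, and memory figures are invented for illustration; 'min=... max=...'
# comes from AutoscalePool.debug_meta further down):
#
#   Recorded at: 2026-02-04 12:00:00 UTC
#   awx-host-1[pid:101] workers total=2 min=4 max=47
#   . worker[pid:2201] sent=12 age=300s retiring=False finished=11 qsize=1 rss=85.120MB
#    - running for: 1.3s 3b241101-... task_manager(*[])
#   . worker[pid:2202] sent=9 age=280s retiring=False qsize=0 rss=80.002MB [IDLE]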
def write(self, preferred_queue, body):
queue_order = sorted(range(len(self.workers)), key=lambda x: -1 if x == preferred_queue else x)
write_attempt_order = []
for queue_actual in queue_order:
try:
self.workers[queue_actual].put(body)
return queue_actual
except QueueFull:
pass
except Exception:
tb = traceback.format_exc()
logger.warning("could not write to queue %s" % preferred_queue)
logger.warning("detail: {}".format(tb))
write_attempt_order.append(preferred_queue)
logger.error("could not write payload to any queue, attempted order: {}".format(write_attempt_order))
return None
def stop(self, signum):
try:
for worker in self.workers:
os.kill(worker.pid, signum)
except Exception:
logger.exception('could not kill {}'.format(worker.pid))
def get_auto_max_workers():
"""Method we normally rely on to get max_workers
Uses almost the same logic as Instance.local_health_check
The important thing is to be MORE than Instance.capacity
so that the task-manager does not over-schedule this node
Ideally we would just use the capacity from the database plus reserve workers,
but this poses some bootstrap problems where OCP task containers
register themselves after startup
"""
# Get memory from ansible-runner
total_memory_gb = get_mem_in_bytes()
# This may replace memory calculation with a user override
corrected_memory = get_corrected_memory(total_memory_gb)
# Get same number as max forks based on memory, this function takes memory as bytes
mem_capacity = get_mem_effective_capacity(corrected_memory, is_control_node=True)
# Follow same process for CPU capacity constraint
cpu_count = get_cpu_count()
corrected_cpu = get_corrected_cpu(cpu_count)
cpu_capacity = get_cpu_effective_capacity(corrected_cpu, is_control_node=True)
# Here is what differs from health checks: take the larger of the two capacities
auto_max = max(mem_capacity, cpu_capacity)
# add a magic number of extra workers to ensure
# we have a few spare workers available to run the heartbeat
auto_max += 7
return auto_max
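# As a rough illustration (the capacity numbers below are invented, not AWX's real
# formulas): if memory-based capacity works out to 40 forks and CPU-based capacity
# to 24, this returns max(40, 24) + 7 = 47, so the pool can always out-size the
# capacity the task manager schedules against, with a little headroom left over.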
class AutoscalePool(WorkerPool):
"""
An extended pool implementation that automatically scales workers up and
down based on demand
"""
pool_cls = StatefulPoolWorker
def __init__(self, *args, **kwargs):
self.max_workers = kwargs.pop('max_workers', None)
self.max_worker_lifetime_seconds = kwargs.pop(
'max_worker_lifetime_seconds', getattr(settings, 'WORKER_MAX_LIFETIME_SECONDS', 14400)
) # Default to 4 hours
super(AutoscalePool, self).__init__(*args, **kwargs)
if self.max_workers is None:
self.max_workers = get_auto_max_workers()
# max workers can't be less than min_workers
self.max_workers = max(self.min_workers, self.max_workers)
# the task manager enforces settings.TASK_MANAGER_TIMEOUT on its own
# but if the task takes longer than the time defined here, we will force it to stop here
self.task_manager_timeout = settings.TASK_MANAGER_TIMEOUT + settings.TASK_MANAGER_TIMEOUT_GRACE_PERIOD
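# e.g. with TASK_MANAGER_TIMEOUT=300 and TASK_MANAGER_TIMEOUT_GRACE_PERIOD=60
# (values assumed here for illustration), a task manager invocation older than
# 360s gets forcibly stopped by cleanup() below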
# initialize some things for subsystem metrics periodic gathering
# the AutoscalePool class does not save these to redis directly, but reports via produce_subsystem_metrics
self.scale_up_ct = 0
self.worker_count_max = 0
# last time we wrote current tasks, to avoid too much log spam
self.last_task_list_log = time.monotonic()
def produce_subsystem_metrics(self, metrics_object):
metrics_object.set('dispatcher_pool_scale_up_events', self.scale_up_ct)
metrics_object.set('dispatcher_pool_active_task_count', sum(len(w.managed_tasks) for w in self.workers))
metrics_object.set('dispatcher_pool_max_worker_count', self.worker_count_max)
self.worker_count_max = len(self.workers)
@property
def should_grow(self):
if len(self.workers) < self.min_workers:
# If we don't have at least min_workers, add more
return True
# If every worker is busy doing something, add more
return all([w.busy for w in self.workers])
@property
def full(self):
return len(self.workers) == self.max_workers
@property
def debug_meta(self):
return 'min={} max={}'.format(self.min_workers, self.max_workers)
@log_excess_runtime(logger, debug_cutoff=0.05, cutoff=0.2)
def cleanup(self):
"""
Perform some internal accounting and cleanup. This is run on
every cluster node heartbeat:
1. Discover worker processes that exited, and recover messages they
were handling.
2. Clean up unnecessary, idle workers.
IMPORTANT: this function is one of the few places in the dispatcher
(aside from setting lookups) where we talk to the database. As such,
if there's an outage, this method _can_ throw various
django.db.utils.Error exceptions. Act accordingly.
"""
orphaned = []
for w in self.workers[::]:
is_retirement_age = self.max_worker_lifetime_seconds is not None and w.age > self.max_worker_lifetime_seconds
if not w.alive:
# the worker process has exited
# 1. take the task it was running and enqueue the error
# callbacks
# 2. take any pending tasks delivered to its queue and
# send them to another worker
logger.error('worker pid:{} is gone (exit={})'.format(w.pid, w.exitcode))
if w.current_task:
if w.current_task == {'task': RETIRED_SENTINEL_TASK}:
logger.debug('scaling down worker pid:{} due to worker age: {}'.format(w.pid, w.age))
self.workers.remove(w)
continue
if w.current_task != 'QUIT':
try:
for j in UnifiedJob.objects.filter(celery_task_id=w.current_task['uuid']):
reaper.reap_job(j, 'failed')
except Exception:
logger.exception('failed to reap job UUID {}'.format(w.current_task['uuid']))
else:
logger.warning(f'Worker was told to quit but has not, pid={w.pid}')
orphaned.extend(w.orphaned_tasks)
self.workers.remove(w)
elif w.idle and len(self.workers) > self.min_workers:
# the process has an empty queue (it's idle) and we have
# more processes in the pool than we need (> min)
# send this process a message so it will exit gracefully
# at the next opportunity
logger.debug('scaling down worker pid:{}'.format(w.pid))
w.quit()
self.workers.remove(w)
elif w.idle and is_retirement_age:
logger.debug('scaling down worker pid:{} due to worker age: {}'.format(w.pid, w.age))
w.quit()
self.workers.remove(w)
elif is_retirement_age and not w.retiring and not w.idle:
logger.info(
f"Worker pid:{w.pid} (age: {w.age:.0f}s) exceeded max lifetime ({self.max_worker_lifetime_seconds:.0f}s). "
"Signaling for graceful retirement."
)
# Send QUIT signal; worker will finish current task then exit.
w.quit()
# mark as retiring to reject any future tasks that might be assigned in the meantime
w.retiring = True
if w.alive:
# if we discover a task manager invocation that's been running
# too long, reap it (because otherwise it'll just hold the postgres
# advisory lock forever); the goal of this code is to discover
# deadlocks or other serious issues in the task manager that cause
# the task manager to never do more work
current_task = w.current_task
if current_task and isinstance(current_task, dict):
endings = ('tasks.task_manager', 'tasks.dependency_manager', 'tasks.workflow_manager')
current_task_name = current_task.get('task', '')
if current_task_name.endswith(endings):
if 'started' not in current_task:
w.managed_tasks[current_task['uuid']]['started'] = time.time()
age = time.time() - current_task['started']
w.managed_tasks[current_task['uuid']]['age'] = age
if age > self.task_manager_timeout:
logger.error(f'{current_task_name} has held the advisory lock for {age}, sending SIGUSR1 to {w.pid}')
os.kill(w.pid, signal.SIGUSR1)
for m in orphaned:
# if all the workers are dead, spawn at least one
if not len(self.workers):
self.up()
idx = random.choice(range(len(self.workers)))
self.write(idx, m)
def add_bind_kwargs(self, body):
bind_kwargs = body.pop('bind_kwargs', [])
body.setdefault('kwargs', {})
if 'dispatch_time' in bind_kwargs:
body['kwargs']['dispatch_time'] = tz_now().isoformat()
if 'worker_tasks' in bind_kwargs:
worker_tasks = {}
for worker in self.workers:
worker.calculate_managed_tasks()
worker_tasks[worker.pid] = list(worker.managed_tasks.keys())
body['kwargs']['worker_tasks'] = worker_tasks
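# Sketch of the transformation above (task name and timestamp invented): a body like
#   {'task': 'x.y.print_time', 'bind_kwargs': ['dispatch_time']}
# leaves this method as
#   {'task': 'x.y.print_time', 'kwargs': {'dispatch_time': '2026-02-04T12:00:00+00:00'}}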
def up(self):
if self.full:
# if we can't spawn more workers, just toss this message into a
# random worker's backlog
idx = random.choice(range(len(self.workers)))
return idx, self.workers[idx]
else:
self.scale_up_ct += 1
ret = super(AutoscalePool, self).up()
new_worker_ct = len(self.workers)
if new_worker_ct > self.worker_count_max:
self.worker_count_max = new_worker_ct
return ret
@staticmethod
def fast_task_serialization(current_task):
try:
return str(current_task.get('task')) + ' - ' + str(sorted(current_task.get('args', []))) + ' - ' + str(sorted(current_task.get('kwargs', {})))
except Exception:
# just make sure this does not make things worse
return str(current_task)
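# e.g. (task name and args invented for illustration):
#   fast_task_serialization({'task': 'awx.main.tasks.jobs.RunJob', 'args': [42]})
#   -> 'awx.main.tasks.jobs.RunJob - [42] - []'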
def write(self, preferred_queue, body):
if 'guid' in body:
set_guid(body['guid'])
try:
if isinstance(body, dict) and body.get('bind_kwargs'):
self.add_bind_kwargs(body)
if self.should_grow:
self.up()
# we don't care about "preferred queue" round robin distribution, just
# find the first non-busy worker and claim it
workers = self.workers[:]
random.shuffle(workers)
for w in workers:
if not w.busy:
w.put(body)
break
else:
task_name = 'unknown'
if isinstance(body, dict):
task_name = body.get('task')
logger.warning(f'Workers maxed, queuing {task_name}, load: {sum(len(w.managed_tasks) for w in self.workers)} / {len(self.workers)}')
# Once every 10 seconds write out task list for debugging
if time.monotonic() - self.last_task_list_log >= 10.0:
task_counts = {}
for worker in self.workers:
task_slug = self.fast_task_serialization(worker.current_task)
task_counts.setdefault(task_slug, 0)
task_counts[task_slug] += 1
logger.info(f'Running tasks by count:\n{json.dumps(task_counts, indent=2)}')
self.last_task_list_log = time.monotonic()
return super(AutoscalePool, self).write(preferred_queue, body)
except Exception:
for conn in connections.all():
# If the database connection has a hiccup, re-establish a new
# connection
conn.close_if_unusable_or_obsolete()
logger.exception('failed to write inbound message')

View File

@@ -10,7 +10,6 @@ from awx import prepare_env
from dispatcherd.utils import resolve_callable
prepare_env()
django.setup() # noqa
@@ -18,9 +17,8 @@ django.setup() # noqa
from django.conf import settings
# Preload all periodic tasks so their imports will be in shared memory
for name, options in settings.CELERYBEAT_SCHEDULE.items():
for name, options in settings.DISPATCHER_SCHEDULE.items():
resolve_callable(options['task'])
@@ -31,6 +29,5 @@ from awx.main.scheduler.kubernetes import PodManager # noqa
from django.core.cache import cache as django_cache
from django.db import connection
connection.close()
django_cache.close()

View File

@@ -1,146 +0,0 @@
import inspect
import logging
import json
import time
from uuid import uuid4
from dispatcherd.publish import submit_task
from dispatcherd.utils import resolve_callable
from django_guid import get_guid
from django.conf import settings
from . import pg_bus_conn
logger = logging.getLogger('awx.main.dispatch')
def serialize_task(f):
return '.'.join([f.__module__, f.__name__])
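# e.g. serialize_task(add) for an add() function defined in awx.main.tasks.system
# (hypothetical location) returns 'awx.main.tasks.system.add'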
class task:
"""
Used to decorate a function or class so that it can be run asynchronously
via the task dispatcher. Tasks can be simple functions:
@task()
def add(a, b):
return a + b
...or classes that define a `run` method:
@task()
class Adder:
def run(self, a, b):
return a + b
# Tasks can be run synchronously...
assert add(1, 1) == 2
assert Adder().run(1, 1) == 2
# ...or published to a queue:
add.apply_async([1, 1])
Adder.apply_async([1, 1])
# Tasks can also define a specific target queue or use the special fan-out queue tower_broadcast:
@task(queue='slow-tasks')
def snooze():
time.sleep(10)
@task(queue='tower_broadcast')
def announce():
print("Run this everywhere!")
# The special parameter bind_kwargs tells the main dispatcher process to add certain kwargs
@task(bind_kwargs=['dispatch_time'])
def print_time(dispatch_time=None):
print(f"Time I was dispatched: {dispatch_time}")
"""
def __init__(self, queue=None, bind_kwargs=None):
self.queue = queue
self.bind_kwargs = bind_kwargs
def __call__(self, fn=None):
queue = self.queue
bind_kwargs = self.bind_kwargs
class PublisherMixin(object):
queue = None
@classmethod
def delay(cls, *args, **kwargs):
return cls.apply_async(args, kwargs)
@classmethod
def get_async_body(cls, args=None, kwargs=None, uuid=None, **kw):
"""
Get the python dict to become JSON data in the pg_notify message
This same message gets passed over the dispatcher IPC queue to workers
If a task is submitted to a multiprocessing pool, skipping pg_notify, this might be used directly
"""
task_id = uuid or str(uuid4())
args = args or []
kwargs = kwargs or {}
obj = {'uuid': task_id, 'args': args, 'kwargs': kwargs, 'task': cls.name, 'time_pub': time.time()}
guid = get_guid()
if guid:
obj['guid'] = guid
if bind_kwargs:
obj['bind_kwargs'] = bind_kwargs
obj.update(**kw)
return obj
@classmethod
def apply_async(cls, args=None, kwargs=None, queue=None, uuid=None, **kw):
try:
from flags.state import flag_enabled
if flag_enabled('FEATURE_DISPATCHERD_ENABLED'):
# At this point we have the import string, but submit_task wants the callable, so resolve it back
actual_task = resolve_callable(cls.name)
return submit_task(actual_task, args=args, kwargs=kwargs, queue=queue, uuid=uuid, **kw)
except Exception:
logger.exception(f"[DISPATCHER] Failed to check for alternative dispatcherd implementation for {cls.name}")
# Continue with original implementation if anything fails
pass
# Original implementation follows
queue = queue or getattr(cls.queue, 'im_func', cls.queue)
if not queue:
msg = f'{cls.name}: Queue value required and may not be None'
logger.error(msg)
raise ValueError(msg)
obj = cls.get_async_body(args=args, kwargs=kwargs, uuid=uuid, **kw)
if callable(queue):
queue = queue()
if not settings.DISPATCHER_MOCK_PUBLISH:
with pg_bus_conn() as conn:
conn.notify(queue, json.dumps(obj))
return (obj, queue)
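# Sketch of the (obj, queue) return value above (all values invented):
#   ({'uuid': '3b241101-e2bb-4255-8caf-4136c566a962', 'args': [1, 1], 'kwargs': {},
#     'task': 'some.module.add', 'time_pub': 1764000000.0}, 'some_queue')
# That dict is what gets JSON-encoded into pg_notify (unless DISPATCHER_MOCK_PUBLISH
# is set), and the same shape is what eventually reaches a pool worker.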
# If the object we're wrapping *is* a class (e.g., RunJob), return
# a *new* class that inherits from the wrapped class *and* BaseTask
# In this way, the new class returned by our decorator is the class
# being decorated *plus* PublisherMixin so cls.apply_async() and
# cls.delay() work
bases = []
ns = {'name': serialize_task(fn), 'queue': queue}
if inspect.isclass(fn):
bases = list(fn.__bases__)
ns.update(fn.__dict__)
cls = type(fn.__name__, tuple(bases + [PublisherMixin]), ns)
if inspect.isclass(fn):
return cls
# if the object being decorated is *not* a class (it's a Python
# function), make fn.apply_async and fn.delay proxy through to the
# PublisherMixin we dynamically created above
setattr(fn, 'name', cls.name)
setattr(fn, 'apply_async', cls.apply_async)
setattr(fn, 'delay', cls.delay)
setattr(fn, 'get_async_body', cls.get_async_body)
return fn

View File

@@ -1,9 +1,6 @@
from datetime import timedelta
import logging
from django.db.models import Q
from django.conf import settings
from django.utils.timezone import now as tz_now
from django.contrib.contenttypes.models import ContentType
from awx.main.models import Instance, UnifiedJob, WorkflowJob
@@ -50,26 +47,6 @@ def reap_job(j, status, job_explanation=None):
logger.error(f'{j.log_format} is no longer {status_before}; reaping')
def reap_waiting(instance=None, status='failed', job_explanation=None, grace_period=None, excluded_uuids=None, ref_time=None):
"""
Reap all jobs in waiting for this instance.
"""
if grace_period is None:
grace_period = settings.JOB_WAITING_GRACE_PERIOD + settings.TASK_MANAGER_TIMEOUT
if instance is None:
hostname = Instance.objects.my_hostname()
else:
hostname = instance.hostname
if ref_time is None:
ref_time = tz_now()
jobs = UnifiedJob.objects.filter(status='waiting', modified__lte=ref_time - timedelta(seconds=grace_period), controller_node=hostname)
if excluded_uuids:
jobs = jobs.exclude(celery_task_id__in=excluded_uuids)
for j in jobs:
reap_job(j, status, job_explanation=job_explanation)
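# Rough illustration (settings values assumed, not authoritative): with
# JOB_WAITING_GRACE_PERIOD=60 and TASK_MANAGER_TIMEOUT=300, any 'waiting' job on
# this controller node whose modified time is more than 360s old gets reaped.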
def reap(instance=None, status='failed', job_explanation=None, excluded_uuids=None, ref_time=None):
"""
Reap all jobs in running for this instance.

View File

@@ -1,3 +1,2 @@
from .base import AWXConsumerRedis, AWXConsumerPG, BaseWorker # noqa
from .base import AWXConsumerRedis # noqa
from .callback import CallbackBrokerWorker # noqa
from .task import TaskWorker # noqa

View File

@@ -4,341 +4,39 @@
import os
import logging
import signal
import sys
import redis
import json
import psycopg
import time
from uuid import UUID
from queue import Empty as QueueEmpty
from datetime import timedelta
from django import db
from django.conf import settings
import redis.exceptions
from ansible_base.lib.logging.runtime import log_excess_runtime
from awx.main.utils.redis import get_redis_client
from awx.main.dispatch.pool import WorkerPool
from awx.main.dispatch.periodic import Scheduler
from awx.main.dispatch import pg_bus_conn
from awx.main.utils.db import set_connection_name
import awx.main.analytics.subsystem_metrics as s_metrics
if 'run_callback_receiver' in sys.argv:
logger = logging.getLogger('awx.main.commands.run_callback_receiver')
else:
logger = logging.getLogger('awx.main.dispatch')
logger = logging.getLogger('awx.main.commands.run_callback_receiver')
def signame(sig):
return dict((k, v) for v, k in signal.__dict__.items() if v.startswith('SIG') and not v.startswith('SIG_'))[sig]
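# e.g. signame(signal.SIGTERM) -> 'SIGTERM', signame(signal.SIGINT) -> 'SIGINT'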
class WorkerSignalHandler:
def __init__(self):
self.kill_now = False
signal.signal(signal.SIGTERM, signal.SIG_DFL)
signal.signal(signal.SIGINT, self.exit_gracefully)
def exit_gracefully(self, *args, **kwargs):
self.kill_now = True
class AWXConsumerBase(object):
last_stats = time.time()
def __init__(self, name, worker, queues=[], pool=None):
self.should_stop = False
class AWXConsumerRedis(object):
def __init__(self, name, worker):
self.name = name
self.total_messages = 0
self.queues = queues
self.worker = worker
self.pool = pool
if pool is None:
self.pool = WorkerPool()
self.pool.init_workers(self.worker.work_loop)
self.redis = redis.Redis.from_url(settings.BROKER_URL)
self.pool = WorkerPool()
self.pool.init_workers(worker.work_loop)
self.redis = get_redis_client()
@property
def listening_on(self):
return f'listening on {self.queues}'
def control(self, body):
logger.warning(f'Received control signal:\n{body}')
control = body.get('control')
if control in ('status', 'schedule', 'running', 'cancel'):
reply_queue = body['reply_to']
if control == 'status':
msg = '\n'.join([self.listening_on, self.pool.debug()])
if control == 'schedule':
msg = self.scheduler.debug()
elif control == 'running':
msg = []
for worker in self.pool.workers:
worker.calculate_managed_tasks()
msg.extend(worker.managed_tasks.keys())
elif control == 'cancel':
msg = []
task_ids = set(body['task_ids'])
for worker in self.pool.workers:
task = worker.current_task
if task and task['uuid'] in task_ids:
logger.warn(f'Sending SIGTERM to task id={task["uuid"]}, task={task.get("task")}, args={task.get("args")}')
os.kill(worker.pid, signal.SIGTERM)
msg.append(task['uuid'])
if task_ids and not msg:
logger.info(f'Could not locate running tasks to cancel with ids={task_ids}')
if reply_queue is not None:
with pg_bus_conn() as conn:
conn.notify(reply_queue, json.dumps(msg))
elif control == 'reload':
for worker in self.pool.workers:
worker.quit()
else:
logger.error('unrecognized control message: {}'.format(control))
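# Sketch of control message bodies handled above (queue names and ids invented):
#   {'control': 'status', 'reply_to': 'reply_to_abc123'}
#   {'control': 'cancel', 'task_ids': ['3b241101-...'], 'reply_to': 'reply_to_abc123'}
#   {'control': 'reload'}
# Replies, when requested, are JSON-dumped back to the 'reply_to' queue via pg_notify.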
def dispatch_task(self, body):
"""This will place the given body into a worker queue to run method decorated as a task"""
if isinstance(body, dict):
body['time_ack'] = time.time()
if len(self.pool):
if "uuid" in body and body['uuid']:
try:
queue = UUID(body['uuid']).int % len(self.pool)
except Exception:
queue = self.total_messages % len(self.pool)
else:
queue = self.total_messages % len(self.pool)
else:
queue = 0
self.pool.write(queue, body)
self.total_messages += 1
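# Sketch of the worker-selection rule above (uuid invented): with a 4-worker pool,
#   UUID('3b241101-e2bb-4255-8caf-4136c566a962').int % 4
# deterministically picks the same worker index for every message with that uuid,
# falling back to round-robin on total_messages when no uuid is present.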
def process_task(self, body):
"""Routes the task details in body as either a control task or a task-task"""
if 'control' in body:
try:
return self.control(body)
except Exception:
logger.exception(f"Exception handling control message: {body}")
return
self.dispatch_task(body)
@log_excess_runtime(logger, debug_cutoff=0.05, cutoff=0.2)
def record_statistics(self):
if time.time() - self.last_stats > 1: # buffer stat recording to once per second
save_data = self.pool.debug()
try:
self.redis.set(f'awx_{self.name}_statistics', save_data)
except redis.exceptions.ConnectionError as exc:
logger.warning(f'Redis connection error saving {self.name} status data:\n{exc}\nmissed data:\n{save_data}')
except Exception:
logger.exception(f"Unknown redis error saving {self.name} status data:\nmissed data:\n{save_data}")
self.last_stats = time.time()
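# e.g. with a consumer constructed as name='dispatcher' (assumed here), the snapshot
# above lands in the Redis key 'awx_dispatcher_statistics'.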
def run(self, *args, **kwargs):
def run(self):
signal.signal(signal.SIGINT, self.stop)
signal.signal(signal.SIGTERM, self.stop)
# Child should implement other things here
def stop(self, signum, frame):
self.should_stop = True
logger.warning('received {}, stopping'.format(signame(signum)))
self.worker.on_stop()
raise SystemExit()
class AWXConsumerRedis(AWXConsumerBase):
def run(self, *args, **kwargs):
super(AWXConsumerRedis, self).run(*args, **kwargs)
self.worker.on_start()
logger.info(f'Callback receiver started with pid={os.getpid()}')
db.connection.close() # logs use database, so close connection
while True:
time.sleep(60)
class AWXConsumerPG(AWXConsumerBase):
def __init__(self, *args, schedule=None, **kwargs):
super().__init__(*args, **kwargs)
self.pg_max_wait = getattr(settings, 'DISPATCHER_DB_DOWNTOWN_TOLLERANCE', settings.DISPATCHER_DB_DOWNTIME_TOLERANCE)
# if no successful loops have run since startup, then we should fail right away
self.pg_is_down = True # set so that we fail if we get database errors on startup
init_time = time.time()
self.pg_down_time = init_time - self.pg_max_wait # allow no grace period
self.last_cleanup = init_time
self.subsystem_metrics = s_metrics.DispatcherMetrics(auto_pipe_execute=False)
self.last_metrics_gather = init_time
self.listen_cumulative_time = 0.0
if schedule:
schedule = schedule.copy()
else:
schedule = {}
# add control tasks to be run on regular schedules
# NOTE: if we run out of database connections, it is important to still run cleanup
# so that we scale down workers and free up connections
schedule['pool_cleanup'] = {'control': self.pool.cleanup, 'schedule': timedelta(seconds=60)}
# record subsystem metrics for the dispatcher
schedule['metrics_gather'] = {'control': self.record_metrics, 'schedule': timedelta(seconds=20)}
self.scheduler = Scheduler(schedule)
@log_excess_runtime(logger, debug_cutoff=0.05, cutoff=0.2)
def record_metrics(self):
current_time = time.time()
self.pool.produce_subsystem_metrics(self.subsystem_metrics)
self.subsystem_metrics.set('dispatcher_availability', self.listen_cumulative_time / (current_time - self.last_metrics_gather))
try:
self.subsystem_metrics.pipe_execute()
except redis.exceptions.ConnectionError as exc:
logger.warning(f'Redis connection error saving dispatcher metrics, error:\n{exc}')
self.listen_cumulative_time = 0.0
self.last_metrics_gather = current_time
def run_periodic_tasks(self):
"""
Run general periodic logic, and return the maximum time in seconds before
the next requested run.
This may be called more often than that when events are consumed,
so it needs to be efficient in that case.
"""
try:
self.record_statistics() # maintains time buffer in method
except Exception as exc:
logger.warning(f'Failed to save dispatcher statistics {exc}')
# Everything benchmarks to the same reference time, so that skew caused by
# the runtime of the actions themselves does not upset scheduling expectations
reftime = time.time()
for job in self.scheduler.get_and_mark_pending(reftime=reftime):
if 'control' in job.data:
try:
job.data['control']()
except Exception:
logger.exception(f'Error running control task {job.data}')
elif 'task' in job.data:
body = self.worker.resolve_callable(job.data['task']).get_async_body()
# bypasses pg_notify for scheduled tasks
self.dispatch_task(body)
if self.pg_is_down:
logger.info('Dispatcher listener connection established')
self.pg_is_down = False
self.listen_start = time.time()
return self.scheduler.time_until_next_run(reftime=reftime)
def run(self, *args, **kwargs):
super(AWXConsumerPG, self).run(*args, **kwargs)
logger.info(f"Running {self.name}, workers min={self.pool.min_workers} max={self.pool.max_workers}, listening to queues {self.queues}")
init = False
while True:
try:
with pg_bus_conn(new_connection=True) as conn:
for queue in self.queues:
conn.listen(queue)
if init is False:
self.worker.on_start()
init = True
# run_periodic_tasks runs scheduled actions and gives the time until the next scheduled action
# this is saved to the conn (PubSub) object in order to modify read timeout in-loop
conn.select_timeout = self.run_periodic_tasks()
# this is the main operational loop for awx-manage run_dispatcher
for e in conn.events(yield_timeouts=True):
self.listen_cumulative_time += time.time() - self.listen_start # for metrics
if e is not None:
self.process_task(json.loads(e.payload))
conn.select_timeout = self.run_periodic_tasks()
if self.should_stop:
return
except psycopg.InterfaceError:
logger.warning("Stale Postgres message bus connection, reconnecting")
continue
except (db.DatabaseError, psycopg.OperationalError):
# If we have attained steady state operation, tolerate short-term database hiccups
if not self.pg_is_down:
logger.exception(f"Error consuming new events from postgres, will retry for {self.pg_max_wait} s")
self.pg_down_time = time.time()
self.pg_is_down = True
current_downtime = time.time() - self.pg_down_time
if current_downtime > self.pg_max_wait:
logger.exception(f"Postgres event consumer has not recovered in {current_downtime} s, exiting")
# Sending QUIT to multiprocess queue to signal workers to exit
for worker in self.pool.workers:
try:
worker.quit()
except Exception:
logger.exception(f"Error sending QUIT to worker {worker}")
raise
# Wait for a second before next attempt, but still listen for any shutdown signals
for i in range(10):
if self.should_stop:
return
time.sleep(0.1)
for conn in db.connections.all():
conn.close_if_unusable_or_obsolete()
except Exception:
# Log unanticipated exception in addition to writing to stderr to get timestamps and other metadata
logger.exception('Encountered unhandled error in dispatcher main loop')
# Sending QUIT to multiprocess queue to signal workers to exit
for worker in self.pool.workers:
try:
worker.quit()
except Exception:
logger.exception(f"Error sending QUIT to worker {worker}")
raise
class BaseWorker(object):
def read(self, queue):
return queue.get(block=True, timeout=1)
def work_loop(self, queue, finished, idx, *args):
ppid = os.getppid()
signal_handler = WorkerSignalHandler()
set_connection_name('worker') # set application_name to distinguish from other dispatcher processes
while not signal_handler.kill_now:
# if the parent PID changes, this process has been orphaned
# via e.g., segfault or sigkill, we should exit too
if os.getppid() != ppid:
break
try:
body = self.read(queue)
if body == 'QUIT':
break
except QueueEmpty:
continue
except Exception:
logger.exception("Exception on worker {}, reconnecting: ".format(idx))
continue
try:
for conn in db.connections.all():
# If the database connection has a hiccup during the prior message, close it
# so we can establish a new connection
conn.close_if_unusable_or_obsolete()
self.perform_work(body, *args)
except Exception:
logger.exception(f'Unhandled exception in perform_work in worker pid={os.getpid()}')
finally:
if 'uuid' in body:
uuid = body['uuid']
finished.put(uuid)
logger.debug('worker exiting gracefully pid:{}'.format(os.getpid()))
def perform_work(self, body):
raise NotImplementedError()
def on_start(self):
pass
def on_stop(self):
pass
def stop(self, signum, frame):
logger.warning('received {}, stopping'.format(signame(signum)))
raise SystemExit()

Some files were not shown because too many files have changed in this diff.