Compare commits

...

566 Commits

Author SHA1 Message Date
Jake Jackson
6389316206 Change artifact name from coverage-report to coverage.xml
Small change since the report is not being found; this is one of many small PRs to try to get this working.
2025-10-28 15:39:13 -04:00
Jake Jackson
d1d3a3471b Add new api schema check workflow (#16143)
* add new file to separate out the schema check so that it is no longer
  part of the CI check and won't cause the whole workflow to fail
* remove old API schema check from ci.yml
2025-10-27 11:59:40 -04:00
Alan Rominger
a53fdaddae Fix broken f-string in log (#16132) 2025-10-20 14:23:38 -04:00
Hao Liu
f72591195e Fix migration failure for 0200 (#16135)
Moved the AddField operation before the RunPython operations for 'rename_jts' and 'rename_projects' in migration 0200_template_name_constraint.py. This ensures the new 'org_unique' field exists before related data migrations are executed.

Fix
```
django.db.utils.ProgrammingError: column main_unifiedjobtemplate.org_unique does not exist
```

while applying migration 0200_template_name_constraint.py

when there's a job template or project with a duplicate name in the same org
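
A minimal sketch of the reordering described above, assuming illustrative field and function bodies (only the AddField-before-RunPython ordering comes from this message):

```
from django.db import migrations, models


def rename_jts(apps, schema_editor):
    # Data migration that reads the new column; body assumed for illustration.
    pass


class Migration(migrations.Migration):
    dependencies = [("main", "0199_previous_migration")]  # name assumed

    operations = [
        # AddField runs first so the org_unique column exists when the
        # RunPython data migrations below query UnifiedJobTemplate rows.
        migrations.AddField(
            model_name="unifiedjobtemplate",
            name="org_unique",
            field=models.BooleanField(default=True),  # field type assumed
        ),
        migrations.RunPython(rename_jts, migrations.RunPython.noop),
    ]
```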
2025-10-20 08:50:23 -04:00
TVo
0d9483b54c Added Django and API requirements to AWX Contributor Docs for POC (#16093)
* Requirements POC docs from Claude Code eval

* Removed unnecessary reference.

* Excluded custom DRF configurations per @AlanCoding

* Implement review changes from @chrismeyersfsu

---------

Co-authored-by: Peter Braun <pbraun@redhat.com>
2025-10-16 10:38:37 -06:00
jessicamack
f3fd9945d6 Update dependencies (#16122)
* prometheus-client returns an additional value as of v0.22.0

* add license, remove outdated ones, add new embedded sources

* update requirements and UPGRADE BLOCKERs in README
2025-10-15 15:55:21 +00:00
Jake Jackson
72a42f23d5 Remove 'UI' from PR template component options (#16123)
Removed 'UI' from the list of component names in the PR template.
2025-10-13 10:23:56 -04:00
Jake Jackson
309724b12b Add SonarQube Coverage and report generation (#16112)
* added sonar config file and started cleaning up
* we do not place the report at the root of the repo
* limit scope to only the awx directory and its contents
* update exclusions for things in awx/ that we don't want covered
2025-10-10 10:44:35 -04:00
Seth Foster
300605ff73 Make subscriptions credentials mutually exclusive (#16126)
settings.SUBSCRIPTIONS_USERNAME and
settings.SUBSCRIPTIONS_CLIENT_ID

should be mutually exclusive. This is because
the POST to api/v2/config/attach/ accepts only
a subscription_id, and infers which credentials to
use based on settings. If both are set, it is ambiguous
and can lead to unexpected 400s when attempting
to attach a license.
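
A minimal sketch of the mutual-exclusion check this describes; the validator name and enforcement point are assumptions, not the actual AWX code path:

```
def validate_subscription_settings(settings):
    # If both credential styles are configured, /api/v2/config/attach/
    # cannot infer which to use, so reject the configuration up front.
    if settings.SUBSCRIPTIONS_USERNAME and settings.SUBSCRIPTIONS_CLIENT_ID:
        raise ValueError(
            "SUBSCRIPTIONS_USERNAME and SUBSCRIPTIONS_CLIENT_ID are "
            "mutually exclusive; set only one."
        )
```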

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2025-10-09 16:57:58 -04:00
Jake Jackson
77fab1c534 Update dependabot to check python deps (#16127)
* add list item to check python deps daily
* move to weekly after we gain confidence
2025-10-09 19:40:34 +00:00
Chris Meyers
51b2524b25 Gracefully handle hostname change in metrics code
* Previously, we would error out because we assumed that when we got a
  metrics payload from redis, there was data in it and it was for the
  current host.
* Now, we do not assume that a received metrics payload is well formed
  and for the current hostname, because the hostname could have changed
  and we may not yet have collected metrics for the new host.
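
A hedged sketch of the defensive handling described above; the function and payload shape are illustrative, not the actual subsystem_metrics code:

```
import json


def load_metrics_payload(raw, current_hostname):
    # The payload from redis may be empty, malformed, or recorded for a
    # previous hostname, so validate before trusting it.
    if not raw:
        return None
    try:
        payload = json.loads(raw)
    except (TypeError, ValueError):
        return None
    if payload.get("hostname") != current_hostname:
        return None  # hostname changed; nothing collected yet for this host
    return payload
```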
2025-10-09 14:08:01 -04:00
Chris Coutinho
612e8e7688 Fix duplicate metrics in AWX subsystem_metrics (#15964)
Separate out operation subsystem metrics to fix duplicate error

Remove unnecessary comments

Revert to single subsystem_metrics_* metric with labels

Format via black
2025-10-09 10:28:55 +02:00
Andrea Restle-Lay
0d18308112 [AAP-46830]: Fix AWX CLI authentication with AAP Gateway environments (#16113)
* migrate pr #16081 to new fork

* add test coverage - add clearer error messaging

* update tests to use monkeypatch
2025-10-02 10:06:10 -04:00
Chris Meyers
f51af03424 Create system_administrator rbac role in migration
* We had race conditions with the system_administrator role being
  created just-in-time. Instead of fixing the race condition(s), dodge
  them by ensuring the role always exists
2025-10-02 08:25:40 -04:00
Alan Rominger
622f6ea166 AAP-53980 Disconnect logic to fill in role parents (#15462)
* Disconnect logic to fill in role parents

Get tests passing hopefully

Whatever SonarCloud

* remove role parents/children endpoints and related views

* remove duplicate get_queryset method from RoleTeamsList

---------

Co-authored-by: Peter Braun <pbraun@redhat.com>
2025-10-02 13:06:37 +02:00
Seth Foster
2729076f7f Add basic auth to subscription management API (#16103)
Allow users to do subscription management using
Red Hat username and password.

In basic auth case, the candlepin API
at subscriptions.rhsm.redhat.com will be used instead
of console.redhat.com.

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2025-10-02 01:06:47 -04:00
Alan Rominger
6db08bfa4e Rewrite the s3 upload step to fix breakage with new Ansible version (#16111)
* Rewrite the s3 upload step to fix breakage with new Ansible version

* Use commit hash for security

* Add the public read flag
2025-09-30 15:47:54 -04:00
Stevenson Michel
ceed41d352 Sharing Credentials Across Organizations (#16106)
* Added tests for cross org sharing of credentials

* added negative testing for sharing of credentials

* added conditions and tests for roleteamslist regarding cross org credentials

* removed redundant code

* made the error message more articulate and specific
2025-09-30 10:44:27 -04:00
jessicamack
98697a8ce7 Fix Grafana notification bug (#16104)
* accept empty string for dashboard and panel IDs

* update grafana tests and add new one
2025-09-29 10:04:58 -04:00
Alan Rominger
f1edbd8ef5 Add npm cache path to fix UI building (push images job) (#16097)
* Add npm cache path to fix UI building

* skip version switch
2025-09-23 09:16:41 -04:00
Alan Rominger
d0a99c37c1 Use action before schema logic, fix failure to upload schema (#16099)
Use action before schema logic
2025-09-23 09:16:07 -04:00
Andrea Restle-Lay
1f06d1bb9a [AAP-44277] License module now validates API responses for subscription IDs. (Moved from Tower) (#16096)
* resolve bug and add simple unit tests

* Update awx_collection/plugins/modules/license.py

Co-authored-by: Andrew Potozniak <tyraziel@gmail.com>

---------

Co-authored-by: Andrew Potozniak <tyraziel@gmail.com>
2025-09-22 15:27:47 -04:00
Alan Rominger
873f5c0ecc Remove some attached methods from User model (#15325)
Remove archaic monkey patches (#15338)

Remove some attached methods from User model

Test user-org sublist URLs we did not test before
2025-09-22 14:19:08 -04:00
AlanCoding
b31da105ad Merge remote-tracking branch 'awx/devel' into merge_26_2 2025-09-18 16:36:03 -04:00
Dirk Jülich
a285843cf2 AAP-35227 Extend role_check.py to delete orphaned InstanceLink objects as well (#7105) 2025-09-18 16:13:06 -04:00
AlanCoding
dd02d56de6 Prefer devel setup.cfg and TODO marks for expected awx-plugin 2025-09-18 15:57:51 -04:00
AlanCoding
b1944ba676 Remove code intended to be removed 2025-09-18 14:41:44 -04:00
AlanCoding
24818b510d Re-apply PR 15862 2025-09-18 09:41:41 -04:00
AlanCoding
55a7591f89 Resolve actions conflicts and delete unwanted files
Bump migrations and delete some files

Resolve remaining conflicts

Fix requirements

Flake8 fixes

Prefer devel changes for schema

Use correct versions

Remove sso connected stuff

Update to modern actions and collection fixes

Remove unwanted alias

Version problems in actions

Fix more versioning problems

Update warning string

Messed it up again

Shorten exception

More removals

Remove pbr license

Remove tests deleted in devel

Remove unexpected files

Remove some content missed in the rebase

Use sleep_task from devel

Restore devel live conftest file

Add in settings that got missed

Prefer devel version of collection test

Finish repairing .github path

Remove unintended test file duplication

Undo more unintended file additions
2025-09-17 10:23:19 -04:00
Alan Rominger
38f858303d Use DAB utility to sync RoleDefinition compat role create (#7104) 2025-09-17 10:23:19 -04:00
Stevenson Michel
0fa8135691 Fix Role Definition Reverse Sync (#7097)
* created manual sync for role definition

* made changes for only read role
2025-09-17 10:23:19 -04:00
TVo
e63eba247f AAP-37812 Added mention about setting correct env variable in cli usage (#16091)
Added mention about setting correct env variable in cli usage
2025-09-09 15:22:08 -06:00
AlanCoding
8fb6a3a633 Merge remote-tracking branch 'tower/test_stable-2.6' into merge_26_2 2025-09-04 23:06:53 -04:00
thedoubl3j
7dc4f149a7 Fix rebase merge conflicts
* had to rebase and accept both in some cases
* remove unused imports
2025-09-04 15:17:54 -04:00
John Westcott IV
2c96c48a5c feat: Add ORG_ADMINS_CAN_SEE_ALL_USERS and MANAGE_ORGANIZATION_AUTH to settings migration (#7075)
- Add ORG_ADMINS_CAN_SEE_ALL_USERS and MANAGE_ORGANIZATION_AUTH to the
  settings_to_migrate list in SettingsMigrator
- Create comprehensive unit tests for SettingsMigrator class with
  parameterized test cases
- Tests cover all migration scenarios including the new organizational
  settings
- Refactored tests use pytest.mark.parametrize for better maintainability
  and coverage

Co-authored-by: Claude <claude@anthropic.com>
2025-09-04 15:13:21 -04:00
Hao Liu
58dcd2f5dc Bump setuptools to 80.9.0 (#7076)
Updated setuptools version from 78.1.1 to 80.9.0 in Makefile, requirements.in, and requirements.txt to ensure compatibility and address any potential issues with older versions.
2025-09-04 15:13:21 -04:00
Peter Braun
25896a8772 Fix credential types no org (#7078)
* Allow creating galaxy credential types without an organization (#16077)

* remove requirement for galaxy credentials to belong to an organization

* remove organization check for galaxy credential type

* add functional test
2025-09-04 15:13:20 -04:00
Andrew Potozniak
d96727c3bd Remove 'Controller' from name (#7077) 2025-09-04 15:13:20 -04:00
Hao Liu
d8737435fa [stable-2.6] Bump dependency (#7070)
* Update Python dependencies

Relaxed or updated version constraints for several dependencies in requirements files and Makefile, including Cython, asciichartpy, msgpack, python-daemon, and pyyaml. These changes address build issues, remove unnecessary pins, and update to newer compatible versions.

* remove docutils license

* we no longer have this as a dep so we don't need to carry its license

* Update dependencies to address security vulnerabilities

Bumped versions of cryptography, protobuf, and idna in requirements to address CVE-2024-26130, CVE-2025-4565, and CVE-2024-3651. These updates improve security by resolving known vulnerabilities in the affected packages.

---------

Co-authored-by: thedoubl3j <jljacks93@gmail.com>
2025-09-04 15:13:20 -04:00
Madhu Kanoor
bb46268eec [AAP-52144] Remove AWX Prefix from the SAML migrator (#7072)
We were adding an AWX prefix to SAML migrator
2025-09-04 15:13:20 -04:00
Peter Braun
af2efec2b4 fix: do not create multiple mappers for lists of emails or usernames (#7063)
* fix: do not create multiple mappers for lists of emails or usernames

* fix: create multiple matchers, don't rely on matches_or

* fix tests

* truncate mapper names to a max of 128 chars

* better naming scheme for matchers
2025-09-04 15:13:20 -04:00
John Westcott IV
a7eb1ef763 [AAP-51531] Fix LDAP authentication mapping and bug in LDAP migration (#7061)
* Add LDAP support to gateway_mapping and expand test coverage

- Add new process_ldap_user_list function for LDAP group processing
- Add auth_type parameter to org_map_to_gateway_format and team_map_to_gateway_format
- Support both 'sso' and 'ldap' authentication types in mapping functions
- Fix syntax error and logic bug in existing code
- Add comprehensive unit tests for process_ldap_user_list function (13 test cases)
- Add unit tests for auth_type parameter functionality
- Update helper functions to support new auth_type parameter
- All tests pass and maintain backward compatibility

Technical changes:
- process_ldap_user_list handles None, boolean, string, and list inputs
- Proper type hints with mypy compatibility
- LDAP groups use 'has_or' trigger format vs SSO attribute matching
- Boolean True/False create Always/Never Allow triggers for LDAP
- Maintains proper ordering and mapper structure

Co-authored-by: Claude (Anthropic AI Assistant) <claude@anthropic.com>

* Fix empty list bug in process_ldap_user_list and add comprehensive tests

- Fix process_ldap_user_list to return empty list for empty input instead of creating invalid trigger
- Empty list [] now correctly returns no triggers instead of trigger with empty has_or array
- Add test case for empty list behavior in both LDAP and SSO functions
- Update existing test_empty_list to expect correct behavior (0 triggers)
- Maintain backward compatibility for all other input types
- Comprehensive testing confirms no regression in existing functionality

Bug Details:
- Before: process_ldap_user_list([]) returned [{'name': 'Match User Groups', 'trigger': {'groups': {'has_or': []}}}]
- After: process_ldap_user_list([]) returns [] (correct behavior)
- SSO function already handled this correctly

This prevents potential Gateway issues with empty has_or arrays and ensures logical consistency.
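
A simplified sketch of the guard, assuming list-only input (the real function also handles None, booleans, and strings):

```
def process_ldap_user_list(users):
    if isinstance(users, list) and not users:
        # Empty list: return no triggers rather than a trigger with an
        # empty has_or array, which could confuse Gateway.
        return []
    return [{"name": "Match User Groups", "trigger": {"groups": {"has_or": list(users)}}}]
```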

Co-authored-by: Claude (Anthropic AI Assistant) <claude@anthropic.com>

* Add comprehensive LDAP migrator tests and fix category handling

- Add comprehensive unit test suite for LDAPMigrator class (26 tests)
- Test LDAP configuration scenarios including multiple instances, mappings, and edge cases
- Add tests for mixed boolean/group mappings, special characters in org names, and empty configs
- Fix LDAP authenticator category to always be 'ldap' (not 'ldap<suffix>')
- Add auth_type='ldap' parameter to org_map_to_gateway_format and team_map_to_gateway_format calls
- Include AAP-51531 reference comments for specific test cases
- All tests passing (26/26)

Co-authored-by: Claude <claude@anthropic.com>

---------

Co-authored-by: Claude (Anthropic AI Assistant) <claude@anthropic.com>
2025-09-04 15:13:20 -04:00
Zack Kayyali
e9928ff513 Disable SAML Authenticator upon migrate (#7062) 2025-09-04 15:13:20 -04:00
John Westcott IV
df1c453c37 Fix type hints in gateway_mapping.py process_sso_user_list function (#7060)
- Added missing Pattern and Any imports from typing
- Fixed users parameter type hint to include Pattern[str]
- Simplified overly complex return type annotation to use Any
- Added proper type narrowing with isinstance() and cast()
- Resolved mypy errors about incompatible list item types

Co-authored-by: Claude (Anthropic AI Assistant) <claude@anthropic.com>
2025-09-04 15:13:19 -04:00
Madhu Kanoor
5a89d7bc29 fix: order of role and attr in saml user_flags (#7050)
https://issues.redhat.com/browse/AAP-51127

Co-authored-by: Peter Braun <pbraun@redhat.com>
2025-09-04 15:13:19 -04:00
John Westcott IV
505ec560c8 feat: comprehensive refactor of SSO org/team mapping for Gateway authentication export (#7047)
This commit completely refactors how SSO organization and team mappings are processed
and exported for Gateway authentication, moving from a group-based approach to a more
flexible attribute-based system.

Key Changes:
- Introduced new process_sso_user_list() function for centralized user processing
- Enhanced boolean handling to support both native booleans and string representations
- Added email detection and regex pattern support for flexible user matching
- Refactored trigger generation from groups-based to attributes-based system

Gateway Mapping Enhancements (awx/main/utils/gateway_mapping.py):
- Added email regex detection for automatic email vs username classification
- Added pattern_to_slash_format() for regex pattern conversion
- Enhanced process_sso_user_list() with support for:
  - Boolean values: True/False and ["true"]/["false"]
  - String usernames and email addresses with automatic detection
  - Regex patterns with both username and email matching
  - Custom email_attr and username_attr parameters
- Refactored team_map_to_gateway_format() to use new processing system
- Refactored org_map_to_gateway_format() to use new processing system
- Changed trigger structure from {"groups": {"has_or": [...]}} to attribute-based triggers
- Improved naming convention to include trigger type in mapping names
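
For illustration, a sketch of the automatic email vs. username classification mentioned above; the regex and function name are assumptions:

```
import re

EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")  # pattern assumed


def classify_user_entry(entry):
    # Entries that look like email addresses match the email_attr;
    # everything else is treated as a username.
    return "email" if EMAIL_RE.match(entry) else "username"
```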

Comprehensive Test Coverage (awx/main/tests/unit/utils/test_auth_migration.py):
- Added complete TestProcessSSOUserList class with 8 comprehensive test methods
- Enhanced TestOrgMapToGatewayFormat with string boolean and new functionality tests
- Enhanced TestTeamMapToGatewayFormat with string boolean and new functionality tests
- Added tests for email detection, regex patterns, and custom attributes
- Verified backward compatibility and integration functionality
- All existing tests updated to work with new attribute-based trigger system

Breaking Changes:
- Trigger structure changed from group-based to attribute-based
- Mapping names now include trigger descriptions for better clarity
- Function signatures updated to include email_attr and username_attr parameters

Co-Authored with Claude-4 via Cursor

Co-authored-by: Peter Braun <pbraun@redhat.com>
2025-09-04 15:13:19 -04:00
Peter Braun
4f2d28db51 Aap 50951 (#7053)
* disable authenticators that require updating the redirect URL and add groups claim to AzureAD migrator

* update tests
2025-09-04 15:13:19 -04:00
Zack Kayyali
0b17007764 AAP-49910 - Delete legacy authenticator code 2025-09-04 15:13:19 -04:00
Stevenson Michel
dfad93cf4c Deprecate legacy OAuth2 Application feature (#7045)
* Marked legacy OAuth application APIs as deprecated

* Readded deprecation

* Fixed linter

* Added more deprecation marks to OAuth2 API apps

* Fixed deprecation errors

* Fix tests
2025-09-04 15:13:19 -04:00
Peter Braun
6bd7c3831f enable azure ad authenticator by default (#7043) 2025-09-04 15:13:19 -04:00
Seth Foster
7b56f23c0e Prevent remote sync if rbac sync is disabled (#7044)
Syncing from new rbac to old rbac locally calls the
disable_rbac_sync() context manager.

If rbac sync is disabled, we do not need to remote
sync, as we can assume the remote syncing already
occurred in the viewset.

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2025-09-04 15:13:18 -04:00
Peter Braun
3e1b9b2c88 Improve redirect override (#7042)
* handle login redirect for the oidc migrator

* handle updating login override redirect centrally in the settings migrator

* update unit tests
2025-09-04 15:13:18 -04:00
Bruno Cesar Rocha
cf0bc16cf7 fix: tacacs+ -> TACACSPLUS (#7039)
* fix: tacacs+ -> TACACSPLUS

Gateway doesn't allow `+` to be used in slug.

AAP-50774

* Fixed assertion

---------

Co-authored-by: Andrew Potozniak <potozniak@redhat.com>
2025-09-04 15:13:18 -04:00
Andrew Potozniak
a0b6083d4e Slightly better error handling for non-200 status codes from Gateway. (#7038)
* Slightly better error handling for non-200 status codes from Gateway.

* Apply suggestion from @chrismeyersfsu

Co-authored-by: Chris Meyers <chrismeyersfsu@users.noreply.github.com>

---------

Co-authored-by: Chris Meyers <chrismeyersfsu@users.noreply.github.com>
2025-09-04 15:13:18 -04:00
Andrew Potozniak
d452098123 [AAP-50446] Error handling enhancements and GATEWAY_BASE_URL override (#7037)
* Added better error handling and messaging when the service token authentication is broken. Allowed GATEWAY_BASE_URL to override the service token's base URL if it is set in the environment variables.
Co-Authored-By: Cursor (claude-4-sonnet)

* Removed GATEWAY_BASE_URL override for service token auth.
2025-09-04 15:13:18 -04:00
Alan Rominger
c5fb0c351d AAP-47283 [2.6] Unified display of RBAC & synchronization (#7001)
* Working branch for testing DAB RBAC changes

* AAP-48392 Handle DAB RBAC either before or after new type model (for merge) (#16045)

* Handle DAB RBAC either before or after new type model

* Translate CT to DAB CT

* Fix for rearrangement of post_migration methods

* Directly include RBAC service URLs

* Add a run before remote permission additions

* Sync old rbac to remote rbac (#7025)

Signed-off-by: Seth Foster <fosterbseth@gmail.com>

* Set DAB requirement back to devel

---------

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
Co-authored-by: Seth Foster <fosterseth@users.noreply.github.com>
2025-09-04 15:13:10 -04:00
Peter Braun
a3f2401740 feat: exit code 1 if any migration fails (#7036)
* feat: exit code 1 if any migration fails

* update tests

* remove unused variables
2025-09-04 15:03:59 -04:00
Peter Braun
ad461a3aab fix: inconsistent return values in github migrator (#7035)
* fix: inconsistent return values in github migrator

* feat: check setting value before updating and report correct status

* fix linter issues
2025-09-04 15:03:59 -04:00
Peter Braun
44c53b02ae Aap 49709 - settings migration (#7023)
* migrate settings using the existing authenticator framework

* add method to get settings value to gateway client

* add transformer functions for settings

* Switched back to PUT for settings updates

* Started wiring in testing changes

* Added settings_* aggregation results.  Added skip-github option.  Added tests.

Assisted-by: Cursor

* Added --skip-all-authenticators command line argument.  Added GoogleOAuth testing.  Added tests for skipping all authenticators.

Assisted-by: Cursor

* wip: migrate other missing settings

* update login_redirect_override in google_oauth2

* implement login redirect for azuread

* implement login redirect for github

* implement login redirect for saml

* set LOGIN_REDIRECT_OVERRIDE even if no authenticator matched

* extract logic for login redirect override to base class

* use urlparse to compare valid redirect urls

* Preserve the original query parameters

* Fix flake8 issues

* Preserve the query parameter in sso_login_url

Gateway sets the sso_login_url to

/api/gateway/social/login/aap-saml-keycloak/?idp=IdP

The idp needs to be preserved when creating the redirect

* Update awx/main/utils/gateway_client.py

Co-authored-by: Chris Meyers <chrismeyersfsu@users.noreply.github.com>

* Update awx/main/management/commands/import_auth_config_to_gateway.py

Co-authored-by: Chris Meyers <chrismeyersfsu@users.noreply.github.com>

* list of settings updated

* Update awx/main/utils/gateway_client.py

Co-authored-by: Chris Meyers <chrismeyersfsu@users.noreply.github.com>

* Update awx/sso/utils/base_migrator.py

Co-authored-by: Chris Meyers <chrismeyersfsu@users.noreply.github.com>

* fix tests

---------

Co-authored-by: Andrew Potozniak <potozniak@redhat.com>
Co-authored-by: Madhu Kanoor <mkanoor@redhat.com>
Co-authored-by: Chris Meyers <chrismeyersfsu@users.noreply.github.com>
2025-09-04 15:03:59 -04:00
Bruno Rocha
58e237a09a fix: address reviewer comments 2025-09-04 15:03:59 -04:00
Bruno Cesar Rocha
052166df39 feat: OIDC Migrator (#7026)
AAP-48486

Add implementation to the existing oidc_migrator
2025-09-04 15:03:59 -04:00
Andrew Potozniak
b4ba7595c6 Unit Test Refactor for test_import_auth_config_to_gateway (#7024)
* Refactoring

Assisted-by: Cursor
2025-09-04 15:03:58 -04:00
Madhu Kanoor
c5211df9ca [AAP-48863] SAML Mapping migration (#7011)
This PR maps the

* SOCIAL_AUTH_SAML_TEAM_ATTR
* SOCIAL_AUTH_SAML_USER_FLAGS_BY_ATTR
* SOCIAL_AUTH_SAML_ORGANIZATION_ATTR

https://issues.redhat.com/browse/AAP-48863
2025-09-04 15:03:58 -04:00
Andrew Potozniak
5e0870a7ec AAP-48510 Enable Service Tokens with the Authentication Migration Management Command (#7017)
* Enabled Service Token Auth for Management Command import_auth_config_to_gateway

Co-authored-by: Peter Braun <pbraun@redhat.com>
Co-authored-by: Zack Kayyali <zkayyali@redhat.com>
Assisted-by: Cursor
2025-09-04 15:03:58 -04:00
jessicamack
0936b28f9b Migrate nested team memberships to direct team memberships (#7005)
* migrate team on team users

add setting to prevent team on team cases. remove tests that should fail now

* adjust tests for disallowing team on teams

* use RoleUserAssignment to retrieve users

* assign users with RoleUserAssignment instead

* fix broken test

* move methods out to utils file. add tests

* add missed positional arg

* test old rbac system also consolidates

* fix test
2025-09-04 15:03:58 -04:00
Fabricio Aguiar
8e58fee49c feat: Add migrator for Google OAuth2 authenticator (#7018)
Signed-off-by: Fabricio Aguiar <fabricio.aguiar@gmail.com>
2025-09-04 15:03:58 -04:00
Peter Braun
e746589019 Aap 49570 (#7022)
* consider global org and team maps for github authenticator

* consider global org and team maps for saml authenticator
2025-09-04 15:03:58 -04:00
Bruno Cesar Rocha
c4a6b28b87 feat: AAP-48499 TACACS+ authenticator migrator (#7014)
* feat: AAP-48499 TACACS+ authenticator migrator

Issue: AAP 48499

* enable by default
2025-09-04 15:03:58 -04:00
Bruno Cesar Rocha
abc4692231 feat: AAP-48498 RADIUS authenticator migrator (#7013)
* feat: AAP-48498 Radius authenticator migrator

Issue: AAP-48498

* fix: naming, style, and tests

* enabled by default

* test: SECRET is now ignored unless --force is set
2025-09-04 15:03:57 -04:00
Peter Braun
ab9bde3698 add force flag to enforce updates even when authenticator already exists (#7015)
* add force flag to enforce updates even when authenticator already exists

* remove cleartext field

* update list of encrypted fields

* show updated and unchanged authenticators in report
2025-09-04 15:03:57 -04:00
Peter Braun
c5e55fe0f5 Aap 48489 (#7003)
* collect controller ldap configuration

* translate role mapping and submit ldap authenticator

* implement require and deny group mapping

* remove all references of awx in the naming

* fix linter issues

* address PR feedback

* update ldap authenticator naming

* update github authenticator naming

* assume that server_uri is always a string

* update order of evaluation for require and deny groups

* cleanup and move ldap related functions into the ldap migrator

* add skip option for saml

* update saml authenticator to new slug format

* update azuread authenticator to new slug format
2025-09-04 15:03:57 -04:00
John Westcott IV
6b2e9a66d5 Adding Azure AD export command (#7010) 2025-09-04 15:03:57 -04:00
Madhu Kanoor
512857c2a9 [AAP-48496] SAML Migration from Controller to Gateway (#6998)
This PR migrates the SAML configuration from the Controller
to the Gateway, it intentionally skips setting the CALLBACK_URL
so that the Gateway can fill in the appropriate URL.
2025-09-04 15:03:57 -04:00
Seth Foster
c2c0f2b828 [2.6] Remove controller specific role definitions (#7002)
Remove Controller specific roles

Removes
- Controller Organization Admin
- Controller Organization Member
- Controller Team Admin
- Controller Team Member
- Controller System Auditor

Going forward the platform role definitions
will be used, e.g. Organization Member

The migration will take care of any assignments
with those controller specific roles and use
the platform roles instead.

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2025-09-04 15:03:57 -04:00
Jake Jackson
534549139c bump DAB dep to devel from stable-2.5 (#6988)
* update dab dependency for 2.6 development
2025-09-04 15:03:57 -04:00
Peter Braun
d98118a108 compare authenticators and mappers before recreating them (#6989)
* compare authenticators and mappers before recreating them

* add unit tests

* fix linter errors

* refactor and improve: better implementation for get_authenticator_by_slug and removal of redundant code

* add submit_authenticator method to handle create vs. update in a generic way

* remove unused import
2025-09-04 15:03:56 -04:00
Peter Braun
e4758e8b4b Split up migrators (#6986)
* split up migration into classes for each authenticator

* remove unused import

* remove unused code

* remove unused class
2025-09-04 15:03:56 -04:00
Hao Liu
46710c4d86 AAP-48070 Remove ALLOW_LOCAL_RESOURCE_MANAGEMENT setting and enable local resource management (#16033) (#6985)
Remove ALLOW_LOCAL_RESOURCE_MANAGEMENT setting and enable local resource management

This commit removes the ALLOW_LOCAL_RESOURCE_MANAGEMENT setting and all associated
functionality, making the behavior as if the setting is always enabled.

Changes:
- Remove ALLOW_LOCAL_RESOURCE_MANAGEMENT setting from defaults.py
- Remove @immutablesharedfields decorator and all related logic
- Remove decorator applications from Organization, Team, and User API views
- Remove role assignment restrictions in UserRolesList and RoleUsersList
- Remove test file for immutablesharedfields functionality
- Clean up unused imports

Result: Organizations, Teams, and Users can now always be created, modified,
and deleted via the API without platform ingress restrictions.
2025-09-04 15:03:54 -04:00
Hao Liu
b70e884484 AAP-47495 Hide CSRF_TRUSTED_ORIGINS (#16035) (#6984)
Hide CSRF_TRUSTED_ORIGINS
2025-09-04 15:02:40 -04:00
Peter Braun
05b6f4fcb9 Aap 47760 - initial auth migration management command (#6981)
* wip: management command for authenticator export to Gateway

* wip: implement ldap auth config migration

* refactor: split concerns into gathering config and converting / recreating config

* refactor: dry run by default

* use the authenticator slug for idempotency

* move to correct utils path

* use env vars instead of flags, fix linter errors

* remove unused import
2025-09-04 15:02:38 -04:00
Peter Braun
243e27c7a9 Aap 49452 - support CONTROLLER_OPTIONAL_API_URLPATTERN_PREFIX in awxkit (#16085)
* fix: awxkit should honor CONTROLLER_OPTIONAL_API_URLPATTERN_PREFIX if defined

* add unit tests

* update tests
2025-09-03 15:22:38 -04:00
Dan Leehr
7fe525a533 Fix issue with some modules not honoring Controller API prefix (#16080)
* Fix issue where export module does not honor CONTROLLER_OPTIONAL_API_URLPATTERN_PREFIX

* Add unit test and handle leading/trailing slashes

* Reformat

* Refactor for clarity

* Remove unused import
2025-09-03 14:58:07 -04:00
Stevenson Michel
c36ce902db AAP-42929: Retrieval of Projects of a Team and Teams of a Project (#7086)
* Fixed merge conflicts

* fix linters

* Added test for projectTeamsList
2025-09-03 14:05:17 -04:00
Lila Yasin
44e9dee9c7 [Bug Fix 4.6] AAP-49077 Task stdout escapes quotes twice only with Controller API api/v2/jobs/{id}/stdout/?format=txt (#7071)
* Move logic to unified job model instead of view

* Refine logic to only apply to double-escaped characters to prevent touching unicode chars

* Refine logic to only apply to stdout so that it does not impact webhook notifications

* Revise naming to reflect correction to escapes, not just escape quotes

* Update code comments to reflect fixing double escapes vs double escaped quotes specifically

* Add regex for 5 most common python escape chars to make fix more robust
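
A hedged sketch of the kind of regex fix described above; the exact escape characters and pattern used in the change are assumptions:

```
import re

# Collapse double-escaped sequences (e.g. '\\n' back to '\n') for a few
# common Python escape characters in job stdout.
_DOUBLE_ESCAPES = re.compile(r"\\\\([nrtb\"])")


def fix_double_escapes(stdout_text):
    return _DOUBLE_ESCAPES.sub(r"\\\1", stdout_text)
```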
2025-09-02 14:49:13 -04:00
Dan Leehr
51eb109dbe Fix issue with some modules not honoring Controller API prefix (#16080)
* Fix issue where export module does not honor CONTROLLER_OPTIONAL_API_URLPATTERN_PREFIX

* Add unit test and handle leading/trailing slashes

* Reformat

* Refactor for clarity

* Remove unused import
2025-09-02 17:48:24 +02:00
Peter Braun
5ca76f3d64 Aap 49452 - support CONTROLLER_OPTIONAL_API_URLPATTERN_PREFIX in awxkit (#16085)
* fix: awxkit should honor CONTROLLER_OPTIONAL_API_URLPATTERN_PREFIX if defined

* add unit tests

* update tests
2025-09-02 14:47:32 +02:00
jessicamack
e3a9d9fbe8 [AAP-51443]CVE-2025-48432 (#7073)
* bump Django version to patch with additional hardening
2025-08-29 15:57:16 -04:00
Peter Braun
8b13c75f2e Allow creating galaxy credential types without an organization (#16077) (#7074)
* remove requirement for galaxy credentials to belong to an organization

* remove organization check for galaxy credential type
2025-08-28 15:15:36 +02:00
Jake Jackson
36ec5efc88 update work flow to actually fail (#7069)
* the workflow has been failing silently without catching a merge
  conflict. This removes the fail-pretty logic previously implemented.
* just fail if a merge conflict is encountered
2025-08-21 18:49:54 +00:00
Lila Yasin
4e332ac2c7 AAP-45933 [2.5 Backport] AAP-4865 bug fact storage (#6945)
* Revise start_fact_cache and finish_fact_cache to use JSON file (#15970)

* Revise start_fact_cache and finish_fact_cache to use JSON file with host list inside it

* Revise artifacts path to be relative to the job private_data_dir

* Update calls to start_fact_cache and finish_fact_cache to agree with new reference to artifacts_dir

* Prevents unnecessary updates to ansible_facts_modified, fixing timestamp-related test failures.

* Import bulk_update_sorted_by_id

* Removed assert that calls ansible_facts_new which was removed in the backported pr

* Add import of Host back
2025-08-20 10:22:15 -04:00
Lila Yasin
b730bfa193 Continue work on collection ci (#16071)
* Fix some patterns in collection test playbooks

* Revert change to ansible.builtin.user

* Revert change to WFJT for dup label error

* Add error handling and fix references

* Add back lookup organization

* Fix all remaining failing syntax in workflow_job_template

* Allow creating galaxy credential types without an organization (#16077)

* remove requirement for galaxy credentials to belong to an organization

* remove organization check for galaxy credential type

---------

Co-authored-by: AlanCoding <arominge@redhat.com>
Co-authored-by: Peter Braun <pbraun@redhat.com>
2025-08-20 10:19:53 -04:00
Jake Jackson
8fe4223eac [AAP-47384] CVE 2025 47273 (#7054)
* Update requirements for setuptools

* first pass and need to commit

* update makefile and run updater script

* updated makefile per readme
* ran updater script

* Patch irc backend to avoid namespace collision w/ jaraco

When importing the IRC backend, jaraco resolves to
the version vendored inside setuptools:

1) importing irc backend…
irc_backend ERROR: ModuleNotFoundError("No module named 'jaraco.stream'")

2) sys.modules['jaraco'] after failure:
present: True
type: <class 'module'>
__file__: /var/lib/awx/venv/awx/lib64/python3.11/site-packages/setuptools/_vendor/jaraco/__init__.py
__path__: ['/var/lib/awx/venv/awx/lib64/python3.11/site-packages/setuptools/_vendor/jaraco']
__spec__: ModuleSpec(name='jaraco',
loader=<_frozen_importlib_external.SourceFileLoader object at 0x7f006a0eccd0>,
origin='/var/lib/awx/venv/awx/lib64/python3.11/site-packages/setuptools/_vendor/jaraco/__init__.py',
submodule_search_locations=['/var/lib/awx/venv/awx/lib64/python3.11/site-packages/setuptools/_vendor/jaraco'])

Since setuptools does not vendor jaraco.stream, it blew up. This patch ensures
jaraco.stream gets imported *before* attempting to import the irc modules (see
the sketch after this list).

* Revert "[4.6][dependency] CVE 2025 47273 (#7020)" (#7027)

This reverts commit e8b2920aec.

* reformatted irc backend with black

* ran black to fix linting issues

* Reapply "[4.6][dependency] CVE 2025 47273 (#7020)" (#7027)

This reverts commit 0c6df9b13398a93569fae7558e1a0e72cbe8fb6c.

* add flake8 ignore since jaraco.stream is needed

* jaraco.stream is not directly called in the file but is needed by irc
  so ignore the linter failure
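
A minimal sketch of the import-order workaround from the jaraco bullet above; the surrounding backend module is simplified away:

```
# Import the real jaraco.stream before anything imports the irc package,
# so 'jaraco' does not resolve to setuptools' vendored copy.
import jaraco.stream  # noqa: F401  -- needed by irc, not referenced directly

import irc.client  # safe now that jaraco.stream is already in sys.modules
```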

---------

Co-authored-by: Shane McDonald <me@shanemcd.com>
2025-08-19 15:59:24 +00:00
Peter Braun
461678df08 Allow creating galaxy credential types without an organization (#16077)
* remove requirement for galaxy credentials to belong to an organization

* remove organization check for galaxy credential type
2025-08-18 14:21:24 +02:00
Peter Braun
e8c4b302ad remove requirement for galaxy credentials to belong to an organization (#16075) (#7066) 2025-08-15 16:27:22 -04:00
Chris Meyers
e82de50edb Fix controller_oauthtoken regression and more
* aap_token now functions like controller_oauthtoken
* lookup('awx.awx.controller_api', ...) fixed
2025-08-15 10:00:37 -04:00
Robin Bobbitt
11f31ef796 AAP-43883: clear cached LICENSE setting on change (#16065) (#7064)
* clear LICENSE from cache on change

* Adds tests for license cache clearing

Generated by Cursor (claude-4-sonnet)

* test fixes

Generated with Cursor (claude-4-sonnet)

---------

Signed-off-by: Robin Y Bobbitt <rbobbitt@redhat.com>
Co-authored-by: Jake Jackson <jljacks93@gmail.com>
2025-08-14 14:02:34 -04:00
Peter Braun
09b539bc34 remove requirement for galaxy credentials to belong to an organization (#16075) 2025-08-14 14:50:40 +00:00
Robin Bobbitt
9033e829fe fixes UnboundLocalError in POST /attach (#16062) (#7059)
* fixes UnboundLocalError in POST /attach
* bust cache for credentials before attaching subscription
---------

Signed-off-by: Robin Y Bobbitt <rbobbitt@redhat.com>
2025-08-14 09:56:25 -04:00
Elyézer Rezende
4757785016 Pin ansible-core for collection tests (#7030)
Signed-off-by: Elyézer Rezende <elyezermr@gmail.com>
2025-08-12 14:43:52 -04:00
Zack Kayyali
902f2634a6 AAP-49910 - Delete legacy authenticator code 2025-08-11 11:25:50 -04:00
Robin Bobbitt
793c85ef24 AAP-43883: clear cached LICENSE setting on change (#16065)
* clear LICENSE from cache on change

Signed-off-by: Robin Y Bobbitt <rbobbitt@redhat.com>

* Adds tests for license cache clearing

Generated by Cursor (claude-4-sonnet)

Signed-off-by: Robin Y Bobbitt <rbobbitt@redhat.com>

* test fixes

Generated with Cursor (claude-4-sonnet)

Signed-off-by: Robin Y Bobbitt <rbobbitt@redhat.com>

---------

Signed-off-by: Robin Y Bobbitt <rbobbitt@redhat.com>
Co-authored-by: Jake Jackson <jljacks93@gmail.com>
2025-08-07 03:00:41 +00:00
Robin Bobbitt
290dec8bf8 fixes UnboundLocalError in POST /attach (#16062)
* fixes UnboundLocalError in POST /attach

Signed-off-by: Robin Y Bobbitt <rbobbitt@redhat.com>

* bust cache for credentials before attaching subscription

Signed-off-by: Robin Y Bobbitt <rbobbitt@redhat.com>

---------

Signed-off-by: Robin Y Bobbitt <rbobbitt@redhat.com>
2025-08-06 21:24:57 +00:00
Lila Yasin
80f9f87181 Bug fix for AAP-47771 data migration update (#16058)
* Bug fix for AAP-47771: this data migration updates existing CredentialType entries
in the database and changes the kind from github_app to github_app_lookup

* Combine migration 0203 into 0202

* Add test to ensure reconciliation issue has been resolved
2025-08-06 15:17:53 -04:00
Lila Yasin
cd12f4dcac Update Collections Syntax to get Collection related CI Checks Passing (#16061)
* Fix collection task breaking collection ci checks

* Patch ansible.module_utils.basic._ANSIBLE_PROFILE directly

* Conditionalize other sanity assertions

* Remove added blank lines and identifier from 'Fail if absent and no identifier set'
2025-08-06 14:56:21 -04:00
Jake Jackson
3ccc5e5f2c add stable to release workflows
* we changed the branch naming scheme, so add the new name
2025-07-24 15:54:19 -04:00
Jake Jackson
550ae51aec Revert "[4.6][dependency] CVE 2025 47273 (#7020)" (#7027)
This reverts commit e8b2920aec.
2025-07-23 13:22:25 -04:00
Jake Jackson
e8b2920aec [4.6][dependency] CVE 2025 47273 (#7020)
* Update requirements for setuptools

* first pass and need to commit

* update makefile and run updater script
2025-07-22 15:21:06 -04:00
Alan Rominger
7977e8639c Use full slug in DAB RBAC test (#16053) 2025-07-14 11:14:34 -04:00
Jake Jackson
03cd450669 [AAP-47877] Backport collection updates (#6992)
* Update collection args (#16025)

* update collection arguments

* Add integration testing for new param

* fix: sanity check failures

---------

Co-authored-by: Sean Sullivan <ssulliva@redhat.com>
Co-authored-by: Alan Rominger <arominge@redhat.com>

* update formatting for sanity testing

* fixing indentation for sanity suite

* adjust tests to use new token name

* update tests to use aap_token instead of controller_oauthtoken

* add back aliases for backward compat

* we have integration tests that still leverage the old token name
* while we can rename these, this tells me that customers might still
  have them in the wild and breaking them in a z stream is no bueno

* revert alias changes

---------

Co-authored-by: Peter Braun <pbraun@redhat.com>
Co-authored-by: Sean Sullivan <ssulliva@redhat.com>
Co-authored-by: Alan Rominger <arominge@redhat.com>
2025-07-10 10:14:40 -04:00
Jake Jackson
1d4b555a2c Update feature_branch_sync.yml (#7006)
fix typo in workflow title
2025-07-10 02:37:35 +00:00
Luis Villa
69df7d0e27 [AAP-48771]wfjt migration to catch renaming (#6991)
* wfjt migration to catch renaming

* Added rename_wfjt function to template constraint migration
* Add test to add duplicate names and verify that the duplicates are renamed

* move object creation

* add missing rename_wfjt operation

* fix linter issues

* fix tox issues

* test manually and move operation

* added back credential type validation code
2025-07-09 15:51:55 -04:00
Alan Rominger
bf0567ca41 AAP-48392 Handle DAB RBAC either before or after new type model (for merge) (#16045)
* Handle DAB RBAC either before or after new type model

* Translate CT to DAB CT

* Fixes for content type switch

* Use more compatible coding pattern

* Deeper purge of content_type_id

* revert, turns out that did not work

* More content type replacements

* Revert changes to serializer

* Revert another content_type change

* Fix for rearrangement of post_migration methods

* Remove thing I am not going to do

* Revert branch pin that was temporary
2025-07-02 14:28:43 -04:00
Jake Jackson
ec0732ce94 AAP-48139 add branch sync between release_4.6 and stable-2.6 (#6982)
* add branch sync between release_4.6 and stable-2.6

* add a new workflow to force push commits in release_4.6 to
  stable-2.6

* Update workflow to use matrix keyword


---------

Co-authored-by: Jake Jackson
2025-06-30 19:56:08 -04:00
Hao Liu
d6482d3898 AAP-48070 Remove ALLOW_LOCAL_RESOURCE_MANAGEMENT setting and enable local resource management (#16033)
Remove ALLOW_LOCAL_RESOURCE_MANAGEMENT setting and enable local resource management

This commit removes the ALLOW_LOCAL_RESOURCE_MANAGEMENT setting and all associated
functionality, making the behavior as if the setting is always enabled.

Changes:
- Remove ALLOW_LOCAL_RESOURCE_MANAGEMENT setting from defaults.py
- Remove @immutablesharedfields decorator and all related logic
- Remove decorator applications from Organization, Team, and User API views
- Remove role assignment restrictions in UserRolesList and RoleUsersList
- Remove test file for immutablesharedfields functionality
- Clean up unused imports

Result: Organizations, Teams, and Users can now always be created, modified,
and deleted via the API without platform ingress restrictions.
2025-06-30 10:15:26 -04:00
Hao Liu
20b203ea8e AAP-47495 Hide CSRF_TRUSTED_ORIGINS (#16035)
Hide CSRF_TRUSTED_ORIGINS
2025-06-30 09:58:19 -04:00
jessicamack
1afd23043d Remove api version from hardcoded inventory url (#16039) (#6980)
* update url endpoints

* reformat line for length
2025-06-25 22:53:03 +02:00
jessicamack
1330a1b353 Remove api version from hardcoded inventory url (#16039)
* update url endpoints

* reformat line for length
2025-06-25 21:54:21 +02:00
Matthew Sandoval
11a9a2b066 Pin receptorctl 1.5.7 (#6979) 2025-06-24 19:48:55 +00:00
Alan Rominger
022314b542 Mark the collection role module as deprecated (#15455)
* Mark the collection role module as deprecated

* Mark deprecated in DOCUMENTATION

* Add deprecation info

* Resolve validate-modules deprecation errors

---------

Co-authored-by: Luis <lvilla@redhat.com>
2025-06-18 12:09:56 -04:00
Lila Yasin
5752c7a8e2 [2.5 Backport] AAP-46038 database deadlock (#6947)
Sort both bulk updates and add batch size to facts bulk update to resolve deadlock issue

Update tests to expect batch_size to agree with changes

Add utility method to bulk update and sort hosts and applied that to the appropriate locations

Update functional tests to use bulk_update_sorted_by_id since update_hosts has been deleted

Add comment NOSONAR to get rid of Sonarqube warning since this is just a test and it's not actually a security issue

Fix failing test test_finish_job_fact_cache_clear & test_finish_job_fact_cache_with_existing_data
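
A sketch of the deadlock-avoidance idea under the utility name this message uses; the exact signature in the codebase is assumed:

```
def bulk_update_sorted_by_id(model_cls, objs, fields, batch_size=100):
    # Updating rows in a consistent (primary key) order means concurrent
    # bulk updates acquire row locks in the same sequence, avoiding the
    # lock-ordering deadlock; batch_size bounds each UPDATE statement.
    ordered = sorted(objs, key=lambda obj: obj.pk)
    model_cls.objects.bulk_update(ordered, fields, batch_size=batch_size)
```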

---------

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
Co-authored-by: Alan Rominger <arominge@redhat.com>
Co-authored-by: Seth Foster <fosterbseth@gmail.com>
2025-06-16 15:32:55 -04:00
🇺🇦 Sviatoslav Sydorenko (Святослав Сидоренко)
3db2e04efe 🧪 Hide false negative warnings by coveragepy (#16021)
They are only surfaced under pytest 8.4, with `pytest-cov` and
`pytest-xdist` both active [[1]], or in equivalent situations.

This is a follow-up for #16015 which attempted ignoring the warning
on the runtime level in pytest. Instead, the patch tells `coveragepy`
not to emit said warnings in the first place.

[1]: pytest-dev/pytest-cov#693
2025-06-12 11:45:55 -04:00
🇺🇦 Sviatoslav Sydorenko (Святослав Сидоренко)
db874f5aea 🧪 Bump the expected Codecov uploads number to 9 (#16023)
It should ideally match perfectly or at least come close, for best
responsiveness. This setting is currently used to prevent Codecov
from publishing incomplete coverage metrics too early.
2025-06-12 11:45:23 -04:00
Alan Rominger
c975b1aa22 Do not apply ANSIBLE_STANDARD_SETTINGS_FILES to job environment variables (#15962) 2025-06-11 23:15:00 -04:00
🇺🇦 Sviatoslav Sydorenko (Святослав Сидоренко)
d005402205 🧪 Recover full-source coverage in pytest-cov (#16020) 2025-06-11 23:11:02 -04:00
Alan Rominger
635e947413 Add placeholder migration (#16010) 2025-06-11 16:28:34 -04:00
Alan Rominger
3d027bafd0 AAP-44233 Create credential types in new migration step (#6969)
* Update database to credential types in new migration file

* bump migration

* Add assertion

* Pre-delete credentials so we test recreation
2025-06-11 16:26:42 -04:00
Jake Jackson
ee19ee0c10 Update workflow to allow the workflow to write (#6975)
* Update the workflow to allow the action to write to our branches.
* Also added username and email, as git by default will want to know who
  is performing the action (edge case). Using the github-actions bot is
  standard practice.
2025-06-11 20:07:38 +00:00
Alan Rominger
024fe55047 Fix collection whitespace issue (#16028) 2025-06-11 15:23:31 -04:00
Alan Rominger
a909083792 Fix failing api-test (#16027) 2025-06-11 14:15:41 -04:00
Peter Braun
873e6a084c Update collection args (#16025)
* update collection arguments

* Add integration testing for new param

* fix: sanity check failures

---------

Co-authored-by: Sean Sullivan <ssulliva@redhat.com>
Co-authored-by: Alan Rominger <arominge@redhat.com>
2025-06-11 18:43:29 +02:00
Alan Rominger
6182d68b74 Add back PYTEST_ARGS as PYTEST_ADOPTS (#16024)
* Add back PYTEST_ARGS

* use pytest adopts
2025-06-11 12:05:53 -04:00
Alan Rominger
f1e5cadce7 🧪 Delegate artifact merge and garbage collection to GH (#16019) (#6973)
* 🧪 Unpersist Git creds @ cov combine job

This is one of the things Zizmor [[1]] warns about.

[1]: https://docs.zizmor.sh

* 🧪 Download all coverage artifacts in one go

* 🧪 Delegate artifact garbage collection to GH

This is implemented by setting the retention days input to 1 on the
initial upload.

Co-authored-by: 🇺🇦 Sviatoslav Sydorenko (Святослав Сидоренко) <webknjaz@redhat.com>
2025-06-10 16:54:59 -04:00
🇺🇦 Sviatoslav Sydorenko (Святослав Сидоренко)
1a4dbcfe2e 🧪 Delegate artifact merge and garbage collection to GH (#16019)
* 🧪 Unpersist Git creds @ cov combine job

This is one of the things Zizmor [[1]] warns about.

[1]: https://docs.zizmor.sh

* 🧪 Download all coverage artifacts in one go

* 🧪 Delegate artifact garbage collection to GH

This is implemented by setting the retention days input to 1 on the
initial upload.
2025-06-10 14:55:06 -04:00
Satoe Imaishi
a238c5dd09 Bump django to 4.2.21 (#6964) 2025-06-10 10:11:43 -04:00
Jake Jackson
d26c7fedb8 Add workflow to rebase release branches (#6968)
* Adds a workflow that rebases release_4.6 onto release_4.6-next
2025-06-10 10:11:43 -04:00
Jiří Jeřábek (Jiri Jerabek)
f4347d05a9 cherry-pick 222f387 to release_4.6 (#6971) 2025-06-10 10:11:42 -04:00
Hao Liu
4eefce622d Fixes pytest CI error (#6970)
```
  /var/lib/awx/venv/awx/lib64/python3.11/site-packages/_pytest/python.py:163:
  PytestReturnNotNoneWarning: Expected None, but
  awx/main/tests/unit/test_tasks.py::TestJobCredentials::test_custom_environment_injectors_with_boolean_extra_vars
  returned ['successful', 0], which will be an error in a future version
  of pytest.  Did you mean to use `assert` instead of `return`?
```

* Dug into the git blame for this one;
  060585434a is the commit, for any
  historians. It was wrongfully carried over from a mock pexpect
  implementation. Our new tests are nice. They don't go as far as trying
  to run the task, so they do not need to mock pexpect. That is why it is
  safe to remove this code without finding it a new home.
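
A minimal sketch of this class of fix, with a stand-in task runner (assumed):

```
def run_task():
    # Stand-in for the real task execution, assumed for illustration.
    return "successful", 0


def test_custom_environment_injectors_with_boolean_extra_vars():
    # Assert on the outcome instead of returning it; returning a value
    # from a test is what triggers PytestReturnNotNoneWarning.
    assert run_task() == ("successful", 0)
```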

Co-authored-by: Chris Meyers <chris.meyers.fsu@gmail.com>
2025-06-10 10:11:42 -04:00
TVo
57b8773613 [2.5/4.6 Backport] AAP-40782 Reduce queued stuck jobs (#6962)
* [2.5/4.6 Backport] AAP-40782 Reduce queued stuck jobs

* [2.5/4.6 Backport] AAP-40782 Reduce queued stuck jobs

* Incorporated review feedback from @AlanCoding

* Reformatted 4 files per CI-check for api-linters
2025-06-10 10:11:42 -04:00
Alan Rominger
d0776dabdf AAP-32143 Make the JT name uniqueness enforced at the database level (#15956) (#6958)
* Make the JT name uniqueness enforced at the database level

* Forgot demo project fixture

* New approach, done by adding a new field

* Update for linters and failures

* Fix logical error in migration test

* Revert some test changes based on review comment

* Do not rename first template, add test

* Avoid name-too-long rename errors

* Insert migration into place

* Move existing files with git

* Bump migrations of existing

* Update migration test

* Awkward bump

* Fix migration file link

* update test reference again
2025-06-10 10:11:42 -04:00
Peter Braun
2d730abb82 fix: allow unknown keyword arguments (#6972)
Co-authored-by: Peter Braun <pbranu@redhat.com>
2025-06-10 10:11:42 -04:00
Seth Foster
8896f75f9b Restore basic auth for subscriptions API (#6961)
When POSTing to console.redhat.com, fallback
to using basic auth method if OAUTH via
service accounts fails

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2025-06-10 10:11:42 -04:00
Alan Rominger
c449c4c41a Combine Django bump with other fixes to github checks (#16015)
* Add no cov on fail flag to fix CI

* Bump django to 4.2.21

* Coverage args

* Try cov equals awx

* Run migration test in parallel

* Ignore error and clean up commands

* Try to make schema check not hang

---------

Co-authored-by: Satoe Imaishi <simaishi@redhat.com>
2025-06-06 16:06:55 -04:00
Alan Rominger
31ee509dd5 Combine migration numbers 201-205 into 201 for ease of management (#16008)
* Add new squashed migrations file

* Squash migrations related to recent removals
2025-06-03 16:23:09 -04:00
Jiří Jeřábek (Jiri Jerabek)
222f387d65 Remove FEATURE_POLICY_AS_CODE_ENABLED flag (#16006)
* remove FEATURE_POLICY_AS_CODE_ENABLED flag

* rename to OpaQueryPathMixin

* add OpaQueryPath docs to awx collection

* bypass test for awx collection
2025-06-03 08:52:16 -04:00
Alan Rominger
d7ca19f9f0 Clean up logging for receptor down & unregistered cases (#15990)
* Clean up logging when receptor not running

* Make logging more concise for unregistered case

* Silence another unwanted traceback

* Silence a few more tracebacks
2025-06-02 13:51:44 -04:00
David Danielsson
a655a3f127 standardizing how host gets validated (#16003)
* removing the requirement for re and changing to startswith, which the other AAP collections use

* telling sonarqube to ignore this line

* fixing lint error
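
A sketch of the startswith-based check described above, with names assumed:

```
def ensure_host_scheme(host):
    # Prepend a scheme when missing, using startswith instead of a regex.
    if not host.startswith(("https://", "http://")):
        return "https://" + host
    return host
```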
2025-05-30 16:25:10 -04:00
Seth Foster
9520c83da9 Restore basic auth for subscriptions API
Fallback to basic auth if OAUTH to console.redhat.com fails

Notes:
Envoy has a timeout of 30 seconds, so
the total timeout should be less than that.

(5, 20) means 5 seconds to connect to server,
20 seconds to start reading data.
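
A hedged sketch of the timeout described; URL, auth, and payload are placeholders:

```
import requests


def post_subscription_request(url, username, password, payload):
    # (connect, read) timeout: 5s to connect, 20s to start reading data,
    # keeping the total under Envoy's 30-second proxy timeout.
    return requests.post(url, json=payload, auth=(username, password), timeout=(5, 20))
```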

Signed-off-by: Seth Foster <fosterbseth@gmail.com>

---------

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2025-05-30 13:37:14 -04:00
Alan Rominger
144f08f762 AAP-45937 Make settings not required because they are not (#15998)
* Make settings not required because they are not, or should not be
2025-05-28 18:50:16 +00:00
Alan Rominger
6aea699284 Fix bug where collectstatic could error due to dispatcherd config (#15999)
* Fix bug where collectstatic could error due to dispatcherd config

* Revert test because it will not work in test suite

* New publish mocking system

* Remove import of unused

* Fix default publish broker
2025-05-21 15:11:04 -04:00
Mauricio Magnani Jr
bb6bf33b9e fix: ensure temp files are cleaned up after failed HCC (#6952) 2025-05-21 13:18:24 -04:00
Mauricio Magnani Jr
7ee0aab856 fix: ensure temp files are cleaned up after failed HCC (#15996) 2025-05-21 13:17:46 -04:00
Marc Hassan
3eb809696a cli: fix TypeError in HelpfulArgumentParser for python 3.12.8 and 3.13.1 (#15692)
cli: fix `TypeError` in `HelpfulArgumentParser` for python 3.12.8 and 3.13.1 (related #15687)

Signed-off-by: mhassan1 <marc.j.hassan@gmail.com>
2025-05-20 14:45:06 +00:00
Dirk Jülich
5cf3a09163 AAP-17690 Inventory variables sourced from git project are not getting deleted after being removed from source (#15928) (#6946)
* Delete existing all-group vars on inventory sync (with overwrite-vars=True) instead of merging them.

* Implementation of inv var handling with file as db.

* Improve serialization to file of inv vars for src update

* Include inventory-level variable editing into inventory source update handling

* Add group vars to inventory source update handling

* Add support for overwrite_vars to new inventory source handling

* Persist inventory var history in the database instead of a file.

* Remove logging which was needed during development.

* Remove further debugging code and improve comments

* Move special handling for user edits of variables into serializers

* Relate the inventory variable history model to its inventory

* Allow for inventory variables to have the value 'None'

* Fix KeyError in new inventory variable handling

* Add unique-together constraint for new model InventoryGroupVariablesWithHistory

* Use only one special invsrc_id for initial update and manual updates

* Fix internal server error when creating a new inventory

* Print the empty string for a variable with value 'None'

* Fix comment which incorrectly states old behaviour

* Fix inventory_group_variables_update tests which did not take the new handling of None into account

* Allow any type for Ansible-core variable values

* Refactor misleading method names

* Fix internal server error when saving vars from group form

* Remove superfluous json conversion in front of JSONField

* Call variable update from create/update instead from validate

* Use group_id instead of group_name in model InventoryGroupVariablesWithHistory

* Disable new variable update handling for all regular (non-'all') groups

* Add live test to verify AAP-17690 (inv var deleted from source)

* Add functional tests to verify inventory variables update logic

* Fix migration which was corrupted by a rebase

* Add a more complex live test and resolve linter complaints

* Force overwrite_vars=False for updates from source on all-group

* Change behavior with respect to overwrite_vars
2025-05-19 21:38:33 +02:00
Seth Foster
54db6c792b Revert "unpin sqlparse dependency (#6911)" (#6950)
This reverts commit 3e122778e4.
2025-05-16 22:53:54 +00:00
TEMAndroid
7995196cff Fast fix for old version nodejs (#15912)
* Fast fix for old version nodejs

Fixing error
required: { node: '^18.0.0 || >=20.0.0' },
current: { node: 'v16.13.1', npm: '8.5.0' }

* Use node js 18 by default to align with official docs

---------

Co-authored-by: Seth Foster <fosterseth@users.noreply.github.com>
2025-05-16 20:09:02 +00:00
Sean Sullivan
eb96d5d984 fix chicken-and-egg issue with workflow nodes and inventory sources for parents that do not exist (#15982)
fix chicken-and-egg issue with workflow nodes and inventory sources for parents that do not exist
2025-05-16 14:49:40 -04:00
Alan Rominger
94764a1f17 AAP-42649 Flag-gated use of "dispatcherd" as its own library (#15981)
Use dynamic AWX max_workers value

Make basic --status and --running commands work

Make feature flag enabled true by default for development

* [dispatcherd] Dispatcher socket-based `--status` demo working (#15908)

* Fix Task Decorator to Work With and Without Feature Flag (AAP-41775) (#15911)

* refactor(system): extract common heartbeat helpers and split cluster_node_heartbeat

Extract common heartbeat logic into helper functions: _heartbeat_instance_management consolidates instance management, health checks, and lost-instance detection; _heartbeat_check_versions compares instance versions and initiates shutdown when necessary; _heartbeat_handle_lost_instances reaps jobs and marks lost instances offline.

Refactor the original cluster_node_heartbeat to use these helpers and retain legacy behavior (using bind_kwargs).

Introduce adispatch_cluster_node_heartbeat for dispatcherd: uses the control API to retrieve running tasks and reaps them.

Link the two implementations by attaching adispatch_cluster_node_heartbeat as the _new_method on cluster_node_heartbeat.

* feat(publish): delegate heartbeat task submission to new dispatcherd implementation

Update apply_async to check at runtime if FEATURE_NEW_DISPATCHER is enabled.

When the task is cluster_node_heartbeat and a _new_method is attached, delegate the task submission to the new dispatcherd implementation.

Preserve the original behavior for all other tasks and fallback on error.

* refactor(system): extract task ID retrieval from dispatcherd into helper function

Improves readability of adispatch_cluster_node_heartbeat by extracting
the complex UUID parsing logic into a dedicated helper function.
Adds clearer error handling and follows established code patterns.

* fix(dispatcher): Enable task decorator to work with and without feature flag

Implemented a new approach for handling task execution with feature flags
by attaching alternative implementations to apply_async._new_method. This
allows cluster_node_heartbeat to work correctly with both the legacy and
new dispatcher systems without modifying core decorator logic.

AAP-41775

* fix(dispatcher): Improve error handling and logging in feature flag implementation

- Add error handling when attaching alternative dispatcher implementation
- Fix method self-reference in apply_async to properly use cls.apply_async
- Document limitations of this targeted approach for specific tasks
- Add logging for better debugging of dispatcher selection
- Ensure decorator timing by keeping method attachment after function definitions

This completes the robust implementation for switching between dispatcher
implementations based on feature flags.

AAP-41775

* fix(dispatcher): Implement registry pattern for dispatcher feature flag compatibility

Replaces direct method attribute assignment with a global registry for
alternative implementations. The original approach tried to attach new
methods directly to apply_async bound methods, which fails because bound
methods don't support attribute assignment in Python.

The registry pattern (sketched after this summary):
- Creates a global ALTERNATIVE_TASK_IMPLEMENTATIONS dict in publish.py
- Registers alternative implementations by task name
- Modifies apply_async to check the registry when feature flag is enabled
- Adds extensive logging throughout the process for debugging

This enables cluster_node_heartbeat to work correctly with both the legacy
and new dispatcher implementations based on the FEATURE_NEW_DISPATCHER flag.

AAP-41775
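
A minimal sketch of that registry pattern; apart from ALTERNATIVE_TASK_IMPLEMENTATIONS, which the commit names, every name here is hypothetical and stands in for the real publish.py code:

```
# Sketch of the registry pattern described above; apart from
# ALTERNATIVE_TASK_IMPLEMENTATIONS, all names here are hypothetical.
ALTERNATIVE_TASK_IMPLEMENTATIONS = {}

def register_alternative(task_name, implementation):
    ALTERNATIVE_TASK_IMPLEMENTATIONS[task_name] = implementation

def legacy_submit(task_name, *args, **kwargs):
    """Placeholder for the existing task submission path."""

def apply_async(task_name, *args, feature_flag_enabled=False, **kwargs):
    # With the flag on and an alternative registered, delegate to it;
    # otherwise preserve the original submission behavior.
    if feature_flag_enabled and task_name in ALTERNATIVE_TASK_IMPLEMENTATIONS:
        return ALTERNATIVE_TASK_IMPLEMENTATIONS[task_name](*args, **kwargs)
    return legacy_submit(task_name, *args, **kwargs)
```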

* refactor(dispatcher): Remove excessive logging from dispatcher implementation

Reduces verbose debugging logs while maintaining essential logging for critical
operations. Preserves:
- Task implementation selection based on feature flag
- Registration success/failure messages
- Critical error reporting

Removed:
- Registry content debugging messages
- Repetitive task diagnostics
- Non-essential information logging

AAP-41775

* fix(dispatcher): Fix shallow copy in dispatcher schedule conversion

This resolves "AttributeError: 'float' object has no attribute 'total_seconds'"
errors when the dispatcher is restarted.

Refs: AAP-41775

* Use IPC mechanism to get running tasks (#15926)
* Allow tasks from tasks
* Fix failure to limit to waiting jobs
* Get job record with lock
* Fix failures in dispatcherd feature branch (#15930)
* Fully handle DispatcherCancel
* Complete rest of preload import work
* Complete dispatcherd integration & job cancellation (AAP-43033) (#15941)
* feat(dispatcher): Implement job cancellation for new dispatcher

Adds feature-flag-aware job cancellation that routes cancel requests to either
the legacy dispatcher or the new dispatcherd library based on the
FEATURE_NEW_DISPATCHER flag.

- Updates cancel_dispatcher_process() to use dispatcherd's control API when enabled
- Handles both direct cancellation and task manager workflow cancellation cases
- Works with DispatcherCancel exception handling to properly handle SIGUSR1 signals

AAP-43033

* fix(dispatcher): Update run_dispatcher.py to properly handle task cancellation

Modifies the cancel command in run_dispatcher.py to properly cancel tasks
when the FEATURE_NEW_DISPATCHER flag is enabled, rather than just listing
running tasks.

The implementation translates each task UUID to the appropriate
filter format expected by the dispatcherd control API, maintaining the same
behavior as the original implementation.

Part of: AAP-43033

* refactor(system): Refactor dispatch_startup() to extract common startup logic and branch based on feature flag

This commit refactors the dispatch_startup() function to improve clarity and consistency across the legacy
and new dispatcherd flows.

No dispatcher-specific functionality is needed beyond the changes made, so this refactoring improves robustness without
altering core behavior.

* refactor(system): Refactor inform_cluster_of_shutdown() for clarity

* refactor(tasks): Replace @task with @task_awx across 22 tasks for dispatcher compatibility

- Migrated all task decorators to use @task_awx, ensuring dispatcher-aware behavior.
- Tested each task with the new dispatcherd, verifying that tasks using the registry pattern execute correctly without needing binder-based alternative implementations.
- Removed redundant logging and outdated comments.
- Legacy tasks that do not require special parameter extraction continue to use their original logic.
- This commit reflects our complete journey of testing and verifying dispatcherd compatibility across all 22 tasks.

* refactor(publish): fix linter

* Fix bug from the branch rebase

* AAP-43763 Add tests for connection management in dispatcherd workers (#15949)

* Add test for job cancel in live tests
* Fix bug from the branch rebase
* Add test for connection recovery after connection broke
* Add test for breaking connection

* Fix dispatcherd bugs: schedule aliases, job kwargs handling, cancel handling (#15960)

* Put in job kwargs handling, not done before

* AAP-44382 [dispatcherd] Fixes for running with feature flag off (#15973)

* Use correct decorator for test of tasks

* Finalize dispatcherd feature branch (#15975)

* Work dispatcherd into dependency management system

* Use util methods from DAB

* Rename the dispatcherd feature flag, and flip default to not-enabled

* Move to new submit_task method

* Update the location of the sock file

* AAP-44381 Make dispatcherd config loading more lazy (#15979)

* Make dispatcherd config loading more lazy

* Make submission error more obvious

* Fix signal handling gap, hijack SIGUSR1 from dispatcherd (#15983)

* Fix signal handling gap, hijack SIGUSR1 from dispatcherd

* Minor adjustments to dispatcherd status command

* [dispatcherd] Get rid of alternative task registry (#15984)

Get rid of alternative task registry

* Fix deadlock error and other cleanup errors (#15987)
* Move to proper error handling location

---------

Co-authored-by: artem_tiupin <70763601+art-tapin@users.noreply.github.com>
2025-05-16 09:39:22 -04:00
Peter Braun
3e122778e4 unpin sqlparse dependency (#6911)
* unpin sqlparse dependency

* remove sqlparse license
2025-05-15 15:51:34 -04:00
Elijah DeLee
f98b2e2455 introduce age for workers and mandatory retirement
Retire workers after a certain age, allowing them to finish their
current task if they are not idle.

This mitigates any issues like memory leaks in long running workers,
especially if systems stay busy for months at a time.

Introduce new optional setting WORKER_MAX_LIFETIME_SECONDS, defaulting to 4 hours
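
For illustration, an age check of this sort might look like the following sketch (it assumes a hypothetical worker object with a creation timestamp and an idle flag; this is not the actual dispatcher internals):

```
import time

WORKER_MAX_LIFETIME_SECONDS = 4 * 60 * 60  # assumed default of 4 hours

def should_retire(worker, now=None):
    """Retire workers past their maximum age, but only once they are idle."""
    now = now if now is not None else time.time()
    return (now - worker.created_at) > WORKER_MAX_LIFETIME_SECONDS and worker.idle
```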
2025-05-15 14:15:50 -04:00
Emmanuel Ferdman
c1b6f9a786 Use modern distro API for platform info (#15954)
Signed-off-by: Emmanuel Ferdman <emmanuelferdman@gmail.com>
2025-05-15 12:21:11 -04:00
Neev Geffen
32bbf3a0c3 Add License Expiry Metric (#15483)
* add license expiry metric

* Update metrics test with default value to the new license metrics

* Add the changes of the black-lint command

* Update awx/main/analytics/metrics.py

---------

Co-authored-by: Seth Foster <fosterseth@users.noreply.github.com>
2025-05-15 16:19:33 +00:00
Steffen Scheib
c76ae8a2ac docs: Fix schedule documentation (#15972)
With the "recent" changes making the lookup plugin `awx.awx.schedule_rrule` and
`awx.awx.schedule_rruleset` returning a list instead of string (see #15625), the
returned list (which will *always* carry only 1 item) needs to be transformed
to a string either adding `| join` or `| first`. I found `first` to be more
fitting as the list will *always* return a list with 1 item.

Additionally, the documentation that references `awx.awx.schedule_rruleset`
in the `awx.awx.schedule` module was wrong, which is also fixed by this PR.

Signed-off-by: Steffen Scheib <sscheib@redhat.com>
Co-authored-by: Steffen Scheib <steffen@scheib.me>
2025-05-15 08:55:50 -07:00
Elijah DeLee
6accd1e5e6 introduce age for workers and mandatory retirement
Retire workers after a certain age, allowing them to finish their
current task if they are not idle.

This mitigates any issues like memory leaks in long running workers,
especially if systems stay busy for months at a time.

Introduce new optional setting WORKER_MAX_LIFETIME_SECONDS, defaulting to 4 hours.
2025-05-15 09:57:30 -04:00
Alan Rominger
01eb162378 AAP-32143 Make the JT name uniqueness enforced at the database level (#15956)
* Make the JT name uniqueness enforced at the database level

* Forgot demo project fixture

* New approach, done by adding a new field

* Update for linters and failures

* Fix logical error in migration test

* Revert some test changes based on review comment

* Do not rename first template, add test

* Avoid name-too-long rename errors

* Insert migration into place

* Move existing files with git

* Bump migrations of existing

* Update migration test

* Awkward bump

* Fix migration file link

* update test reference again
2025-05-14 23:30:23 -04:00
Jake Jackson
20a512bdd9 Update the PR template to include JIRA (#15985)
* update template to include steps to add JIRA to the PR Title
2025-05-14 14:13:15 -04:00
Lila Yasin
f734d8bf19 Revise start_fact_cache and finish_fact_cache to use JSON file (#15970)
* Revise start_fact_cache and finish_fact_cache to use JSON file with host list inside it

* Revise artifacts path to be relative to the job private_data_dir

* Update calls to start_fact_cache and finish_fact_cache to agree with new reference to artifacts_dir

* Prevents unnecessary updates to ansible_facts_modified, fixing timestamp-related test failures.
2025-05-14 13:20:47 -04:00
Dirk Jülich
872349ac75 AAP-17690 Inventory variables sourced from git project are not getting deleted after being removed from source (#15928)
* Delete existing all-group vars on inventory sync (with overwrite-vars=True) instead of merging them.

* Implementation of inv var handling with file as db.

* Improve serialization to file of inv vars for src update

* Include inventory-level variable editing into inventory source update handling

* Add group vars to inventory source update handling

* Add support for overwrite_vars to new inventory source handling

* Persist inventory var history in the database instead of a file.

* Remove logging which was needed during development.

* Remove further debugging code and improve comments

* Move special handling for user edits of variables into serializers

* Relate the inventory variable history model to its inventory

* Allow for inventory variables to have the value 'None'

* Fix KeyError in new inventory variable handling

* Add unique-together constraint for new model InventoryGroupVariablesWithHistory (a model sketch follows at the end of this list)

* Use only one special invsrc_id for initial update and manual updates

* Fix internal server error when creating a new inventory

* Print the empty string for a variable with value 'None'

* Fix comment which incorrectly states old behaviour

* Fix inventory_group_variables_update tests which did not take the new handling of None into account

* Allow any type for Ansible-core variable values

* Refactor misleading method names

* Fix internal server error when saving vars from group form

* Remove superfluous json conversion in front of JSONField

* Call variable update from create/update instead from validate

* Use group_id instead of group_name in model InventoryGroupVariablesWithHistory

* Disable new variable update handling for all regular (non-'all') groups

* Add live test to verify AAP-17690 (inv var deleted from source)

* Add functional tests to verify inventory variables update logic

* Fix migration which was corrupted by a rebase

* Add a more complex live test and resolve linter complaints

* Force overwrite_vars=False for updates from source on all-group

* Change behavior with respect to overwrite_vars
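
Pulling those bullets together, a loose sketch of the model they describe; apart from the inventory relation, group_id, and the unique-together constraint named above, every field type and detail here is an assumption:

```
from django.db import models

class InventoryGroupVariablesWithHistory(models.Model):
    inventory = models.ForeignKey("Inventory", on_delete=models.CASCADE)  # relation named in commits
    group_id = models.IntegerField()    # named in commits (used instead of group_name)
    invsrc_id = models.IntegerField()   # special id for initial/manual updates (assumed type)
    variables = models.JSONField(default=dict)  # assumed field

    class Meta:
        unique_together = [("inventory", "group_id", "invsrc_id")]  # assumed columns
```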
2025-05-14 18:02:25 +02:00
Seth Foster
6377824af5 Fix Subscriptions credentials fallback (#15980)
Ensure service account authentication is being used
when falling back to using SUBSCRIPTIONS_CLIENT_ID.

Additional change:
Subscription data can return two types of capacities:
Sockets and Nodes

For determining overall capacity
if capacity name is Nodes:
  capacity quantity x subscription quantity
if capacity name is Sockets:
  capacity quantity / 2 (minimum of 1) x subscription quantity
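
As a sketch, the arithmetic above could be expressed like this (the field names are assumptions for illustration):

```
def overall_capacity(capacity_name, capacity_quantity, subscription_quantity):
    # Nodes count directly; two Sockets count as one unit, floored at 1.
    if capacity_name == "Nodes":
        return capacity_quantity * subscription_quantity
    if capacity_name == "Sockets":
        return max(capacity_quantity // 2, 1) * subscription_quantity
    raise ValueError(f"unexpected capacity name: {capacity_name}")
```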

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2025-05-14 11:59:19 -04:00
Seth Foster
12dcc10416 [4.6] Update subscription API to use service accounts (#6927)
* Update subscription API to use service accounts

Update code to pull subscriptions from
console.redhat.com instead of
subscription.rhsm.redhat.com

Uses service account client ID and client secret
instead of username/password, which is being
deprecated in July 2025.

Additional changes:

- In awx.awx.subscriptions module, use new service
account params rather than old basic auth params

- Update awx.awx.license module to use subscription_id
instead of pool_id. This is due to using a different API,
which identifies unique subscriptions by subscriptionID
instead of pool ID.

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
Co-authored-by: Chris Meyers <chris.meyers.fsu@gmail.com>
Co-authored-by: Peter Braun <pbraun@redhat.com>

* fix token name

Signed-off-by: Seth Foster <fosterbseth@gmail.com>

* Fix Subscriptions credentials fallback

Ensure service account authentication is being used
when falling back to using SUBSCRIPTIONS_CLIENT_ID.

Additional change:
Subscription data can return two types of capacities:
Sockets and Nodes

For determining overall capacity
if capacity name is Nodes:
  capacity quantity x subscription quantity
if capacity name is Sockets:
  capacity quantity / 2 (minimum of 1) x subscription quantity

Signed-off-by: Seth Foster <fosterbseth@gmail.com>

---------

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
Co-authored-by: Chris Meyers <chris.meyers.fsu@gmail.com>
Co-authored-by: Peter Braun <pbraun@redhat.com>
2025-05-13 15:41:35 -04:00
Mauricio Magnani Jr
6bd39aea4b OPA server hostname with https or http results in connection errors (#6921)
Signed-off-by: Mauricio Magnani <magnani@redhat.com>
2025-05-13 14:41:29 -04:00
Hao Liu
b7a3c6b025 Delete UI test from CI (#6831)
In release_4.6 we no longer ship the UI in AAP 2.5, so there's no reason to waste time on CI tests for the UI
2025-05-13 14:40:07 -04:00
jessicamack
ba7ee23298 [backport][4.6] Update Azure Key Vault plugin to use Managed Identity (#6939)
* Bug on file name. Committing to remove it.

* Update azure_kv plugin to use ManagedIdentity. Add testing.
2025-05-13 10:05:24 -04:00
Alan Rominger
537850c650 Race condition and refactoring for facts tests (#15938)
* Race condition and refactoring for facts tests
2025-05-12 21:48:09 -04:00
Alan Rominger
0d85dc5fc5 Add retry loop to deletions in collection tests (#15966)
* Add retry loop to deletions in collection tests

* Fix bad result stored var use
2025-05-09 09:48:23 -04:00
Jake Jackson
825a48bb32 [4.6] Insights Credential Help Text Update (#6937)
* Update help text for insights cred

* update help text for the insights cred per new mock ups
2025-05-08 10:45:14 -04:00
thedoubl3j
eb6aebff00 Update logic to not overwrite ec2 replace
* fix replace logic so that we don't overwrite and end up staying at vmware
  when ec2 is selected
* add an env.json for functional testing
2025-05-08 15:06:58 +02:00
thedoubl3j
60114ab929 add logic to replace cred name when using esxi
* similar to aws, allow the use of the standard vmware cred
2025-05-08 15:06:58 +02:00
jessicamack
2ba6603436 Convert skip tests to xfail (#15511) 2025-05-07 11:00:44 -04:00
Hao Liu
21c463c0dd [Feature] Pre job run OPA policy enforcement (#15947)
Co-authored-by: Jiří Jeřábek (Jiri Jerabek) <Jerabekjirka@email.cz>
Co-authored-by: Alexander Saprykin <cutwatercore@gmail.com>
2025-05-07 14:51:23 +00:00
Alan Rominger
c3bf843ad7 Bump migration history again (#15914)
* Bump migration history again

* Fix weird random naming
2025-05-05 14:57:03 -04:00
Peter Braun
0e28d2590a fix: keep processing events, even if previous event data cannot be pa… (#15965) (#6922)
* fix: keep processing events, even if previous event data cannot be parsed (gist sketched below)

* change log level to warning
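
The gist of the fix, as a sketch (hypothetical helper; the real callback receiver code differs):

```
import json
import logging

logger = logging.getLogger(__name__)

def process_events(raw_events, handle):
    # One malformed event should not stop processing of the rest.
    for raw in raw_events:
        try:
            event = json.loads(raw)
        except ValueError:
            logger.warning("skipping event with unparsable data: %r", raw[:80])
            continue
        handle(event)
```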
2025-05-05 13:26:42 +02:00
Lila Yasin
de4e707bb2 [Bug] AAP 42572 database deadlock (#15953)
* Demo of sorting hosts live test

* Sort both bulk updates and add batch size to facts bulk update to resolve deadlock issue

* Update tests to expect batch_size to agree with changes

* Add utility method to bulk update and sort hosts and apply it in the appropriate locations (sketched after this list)

Remove unused imports

Add utility method for sorting bulk updates

Remove try except OperationalError for loop

Remove unused import of django.db.OperationalError

Remove batch size as it is now on the bulk update utility method as 100

Remove batch size here since it is specified in sortedbulkupdate

Add transaction.atomic so the entire operation runs as a single transaction before committing to the db

Revert change to bulk update as it's not needed here and just sort instead

Move bulk_sorted utility method into db.py and updated name to not be specific to Hosts

Revise to import bulk_update_sorted.. rather than calling it as an argument

Fix the way I'm importing bulk_update_sorted.. Remove unneeded Host import and remove calls to bulk_update as args

Revise calls to bulk_update_sorted.. to include Host in the args

Remove raw_update_hosts method and replace with bulk_update_sorted_by_id in update_hosts

Remove update_hosts function and replace with  bulk_update_sorted_by_id

Update live tests to use bulk_update_sorted_by_id

Fix the fields in bulk_update to agree with test

* Update functional tests to use bulk_update_sorted_by_id since update_hosts has been deleted

Replace update_hosts with bulk_update_sorted_by_id

Remove references to update_hosts

Update corresponding fact caching tests to use bulk_update_sorted_by_id

Remove import of bulk_sorted_update

Add code comment to live test to silence Sonarqube hotspot

* Add comment NOSONAR to get rid of Sonarqube warning since this is just a test and it's not actually a security issue

Get test_finish_job_fact_cache_with_existing_data passing

Get test_finish_job_fact_cache_clear passing

Remove reference to raw_update and replace with new bulk update utility method

Add pytest.mark.django_db to appropriate tests

Correct which model is called in bulk_update_sorted_by_id

Remove now unused Host import

Point to where bulk_update_sorted_by_id to where that is actually being used

Correct import of bulk_update_sorted_by_id

Revert changes in this file to avoid db calls issue

Remove @pytest.mark.django_db from unit tests

Remove commented out host sorting suggested fix

Fix failing tests test_pre_post_run_hook_facts_deleted_sliced & test_pre_post_run_hook_facts

Remove atomic transaction line, add return, and add docstring

* Fix failing test test_finish_job_fact_cache_clear & test_finish_job_fact_cache_with_existing_data
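
A sketch of the utility those bullets describe (the name and batch size of 100 come from the commits; the exact implementation may differ):

```
def bulk_update_sorted_by_id(model_cls, objects, fields, batch_size=100):
    """Bulk update rows in ascending id order.

    Sorting makes concurrent updaters acquire row locks in the same order,
    which avoids the lock-order deadlocks described above; batching keeps
    each statement small.
    """
    ordered = sorted(objects, key=lambda obj: obj.id)
    model_cls.objects.bulk_update(ordered, fields, batch_size=batch_size)
```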

---------

Co-authored-by: Alan Rominger <arominge@redhat.com>
2025-05-02 17:35:41 -04:00
Seth Foster
95289ff28c Update subscription API to use service accounts
Update code to pull subscriptions from
console.redhat.com instead of
subscription.rhsm.redhat.com

Uses service account client ID and client secret
instead of username/password, which is being
deprecated in July 2025.

Additional changes:

- In awx.awx.subscriptions module, use new service
account params rather than old basic auth params

- Update awx.awx.license module to use subscription_id
instead of pool_id. This is due to using a different API,
which identifies unique subscriptions by subscriptionID
instead of pool ID.

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
Co-authored-by: Chris Meyers <chris.meyers.fsu@gmail.com>
Co-authored-by: Peter Braun <pbraun@redhat.com>
2025-04-30 15:44:38 -04:00
jessicamack
2bc08b421d Bring WFJT job access to parity with UnifiedJobAccess (#15344) (#6935)
* Bring WFJT job access to parity with UnifiedJobAccess

* Run linters

Co-authored-by: Alan Rominger <arominge@redhat.com>
2025-04-30 13:43:47 -04:00
Matthew Sandoval
cae8a4e16c Pin receptorctl 1.5.5 (#6931) 2025-04-29 11:01:22 -07:00
Chris Meyers
41d3729501 Allow ipv6 address for TOWER_URL_BASE setting
* Django url validators support ipv6, our custom URLField
  allow_plain_hostname feature was messing with the hostname before it
  was passed to Django validators.
* This change dodges the allow_plain_hostname transformations if the
  value looks like it's an ipv6 one (detection sketched below).
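
One plausible shape for that check, as a sketch (the actual URLField logic is not reproduced here):

```
import ipaddress

def looks_like_ipv6(hostname):
    """Heuristic: treat bracketed or bare IPv6 literals as IPv6."""
    try:
        ipaddress.IPv6Address(hostname.strip("[]"))
        return True
    except ValueError:
        return False
```

When this returns True, the custom allow_plain_hostname transformations would be skipped and the value handed to Django's validators untouched.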
2025-04-29 09:55:16 -04:00
Chris Meyers
000f6b0708 Allow ipv6 address for TOWER_URL_BASE setting
* Django url validators support ipv6, our custom URLField
  allow_plain_hostname feature was messing with the hostname before it
  was passed to Django validators.
* This change dodges the allow_plain_hostname transformations if the
  value looks like it's an ipv6 one.
2025-04-28 10:14:39 -04:00
Peter Braun
c799d51ec8 fix: keep processing events, even if previous event data cannot be pa… (#15965)
* fix: keep processing events, even if previous event data cannot be parsed

* change log level to warning
2025-04-28 13:21:33 +00:00
TVo
a3303bb74b Pin ansible runner241 and receptorctl154 (#6919)
* Updated pinned runner and receptorctl in controller.

* Bumped receptorctl to 1.5.4
2025-04-22 11:13:58 -07:00
Alan Rominger
db6e8b9bad AAP-40782 Fix too-low max_workers value, dump running at capacity (#15873)
* Dump running tasks when running out of capacity

* Use same logic for max_workers and capacity

* Address case where CPU capacity is the constraint

* Add a test for correspondence

* Fake redis to make tests work
2025-04-16 16:43:21 -04:00
Alan Rominger
6a10e0ea5c AAP-41139 [4.6][dependencies] Bump Django 2 minor versions (#6892)
* Initial requirement bump for Django CVE

* Run updater script
2025-04-16 14:41:05 -04:00
Hao Liu
483417762f Git ignore legacy UI files (#15946) 2025-04-16 14:40:12 -04:00
Bruno Rocha
6690d71357 [4.6][Backport][Feature] feat: Manage Django Settings with Dynaconf (#6910)
Dynaconf is being added from DAB factory to load Django Settings
2025-04-15 15:17:16 -04:00
Hao Liu
ae0a8a80eb [4.6] Fix Cert base authentication for OPA (#6909)
* Remove unused setting

* Fix mTLS auth to OPA server

- Workaround https://github.com/Turall/OPA-python-client/issues/32
- Add tests for `opa_cert_file` context manager
2025-04-15 11:07:33 -04:00
Hao Liu
4532c627e3 Update dependencies for DAB (#6914)
Co-authored-by: Satoe Imaishi <simaishi@redhat.com>
2025-04-15 09:55:45 -04:00
Alan Rominger
49240ca8e8 Fix environment-specific rough edges of logging setup (#15193) 2025-04-14 12:12:06 -04:00
Alan Rominger
5ff3d4b2fc Reduce log noise from next run being in past (#15670) 2025-04-14 09:04:16 -04:00
Fabio Alessandro Locati
3f96ea17d6 Links in README.md should use HTTPS (#15940) 2025-04-12 18:38:17 +00:00
TVo
f59ad4f39c Remove cgi deprecation exception from pytest.ini (#15939)
Remove deprecation exception from pytest.ini
2025-04-11 14:07:23 -07:00
Lila Yasin
87cb6dc0b9 AAP-39365 facts are unintentionally deleted when the inventory is modified during a job execution (#15910) (#6905)
* Added test_jobs.py to the model unit test folder in order to show the undesired behaviour with fact cache

* Added test_jobs.py to the model unit test folder in order to show the undesired behaviour with fact cache

* Solved undesired behaviour with fact_cache

* Solved bug with slices

* Remove unused imports

Remove now unused line of code which was commented out by the contributor

Revert "Remove now unused line of code which was commented out by the contributor"

This reverts commit f1a056a2356d56bc7256957a18503bd14dcfd8aa.

* Add back line that had been commented out as this line makes hosts specific to the particular slice when applicable

Revise private_data_dir fixture to see if it improves code coverage

Checked out awx/main/tests/unit/models/test_jobs.py in devel to see if it resolves git diff issue

* Fix formatting in awx/main/tests/unit/models/test_jobs.py

Rename for loop from host in hosts to host in hosts_cached and remove unneeded continue

Revise finish_fact_cache to utilize inventory rather than hosts

Remove local var hosts that was assigned but unused

Revert change in start_fact_cache hosts_cached back to hosts

Revise the way we are handling hosts_cached and joining the file

Revert "Revise the way we are handling hosts_cached and joining the file"

This reverts commit e6e3d2f09c1b79a9bce3647a72e3dad97fe0aed8.

Reapply "Revise the way we are handling hosts_cached and joining the file"

This reverts commit a42b7ae69133fee24d3a5f1b456d9c343d111df9.

Revert some of my changes to get back to a better working state

Rename for loop to host in hosts_cached and remove unneeded continue

Remove jobs job.get_hosts_for_fact_cache() from post run hook, fix if statement after continue block, and revise how we are calling hosts in finish for loop

Add test_invalid_host_facts to test_jobs to increase code coverage

Update method signature to use hosts_cached and updated other references to hosts in finish_facts_cached

Rename hosts iterator to hosts_cached to agree with naming elsewhere

Revise test_invalid_host_facts to get more code coverage

Revise test_invalid_host_facts to increase codecov

Revise test_pre_post_run_hook_facts_deleted_sliced to ensure we are hitting the assertionerror for code cov

Revise  mock_inventory.hosts. to hit assert failure

Add revision of hosts and facts to force failure to satisfy code cov

Fix failure in test_pre_post_run_hook_facts_deleted_sliced

Add back for loop to create failures and add assert to hit them

Remove hosts.iterator() from both start_fact_cache and finish_fact_cache

Remove unused import of Queryset to satisfy api-lint

Fix typo in docstring hasnot to has not

Move hosts_cached.append(host) to outer loop in start_fact_cache

Add class to help support cached hosts resolving host.name issue with hosts_cached

* Add live tests for ansible facts

Remove fixture needed for local work only maybe

Revert "Add class to help support cached hosts resolving host.name issue with hosts_cached"

This reverts commit 99d998cfb9960baafe887de80bd9b01af50513ec.

* Move hosts_cached.append(host) outside of try except

* Move hosts_cached.append(host) to the beginning of start_fact_cache

---------

Signed-off-by: onetti7 <davonebat@gmail.com>
Co-authored-by: onetti7 <davonebat@gmail.com>
Co-authored-by: Alan Rominger <arominge@redhat.com>
2025-04-11 11:08:26 -04:00
Lila Yasin
bbcdef18a7 [2.5][Backport] AAP 30045 incorrect deprecation warning for awx.awx.schedule rrule (#6903)
* Make lookup plugins return lists to fix failures (#15625)

* Make lookup plugins return lists to fix failures

* Update unit tests

* Use lookup for test failures, update docs

* Grammar fix from review

Co-authored-by: Sviatoslav Sydorenko (Святослав Сидоренко) <wk.cvs.github@sydorenko.org.ua>

---------

Co-authored-by: Sviatoslav Sydorenko (Святослав Сидоренко) <wk.cvs.github@sydorenko.org.ua>

* Fix cherry-pick merge issue, should resolve failing linter

---------

Co-authored-by: Alan Rominger <arominge@redhat.com>
Co-authored-by: Sviatoslav Sydorenko (Святослав Сидоренко) <wk.cvs.github@sydorenko.org.ua>
2025-04-11 11:02:45 -04:00
Alan Rominger
c3ee0c2d8a Sensible log behavior when redis is unavailable (#15466)
* Sensible log behavior when redis is unavailable

* Consistent behavior with dispatcher and callback
2025-04-10 13:45:05 -07:00
Alan Rominger
7a3010f0e6 Bring WFJT job access to parity with UnifiedJobAccess (#15344)
* Bring WFJT job access to parity with UnifiedJobAccess

* Run linters
2025-04-10 15:29:47 -04:00
Jake Jackson
d35d7f62ec Esxi inventory plugin (#6895)
* Add in ESXI plugin as a choice

* added in vmware esxi as an inventory source
* made a migration that may not be needed but will need to circle back

* black formatting

* linter fixes that I missed in the first commit, squash

* Update esxi to use_fqcn to true

* added use_fqcn on the esxi cred to true to correctly lay down
  collection name

* add fqcn true

* updated vmware esxi to use true for fqcn

* Update defaults and re-order migrations

* updated defaults to add correct env var to get empty
* re-ordered migrations to be in line with others

* Add condition to replace vmware_esxi cred

* replace direct name match with vmware cred since source supports old
  cred

* add skeleton test

* quick pass, needs more
* squash this

* Add tests for creating inventory ESXI source

* add test case to test creating an inventory with different cred type
  to source name

* update test and linting

* added correct cred return since esxi uses same cred

* assert on status code

* assert that we received a 204

* Added new folder for vmware_exsi and empty json file.

* Corrected the misspelling of folder name to 'esxi'

* fixed misspelling for `vmware_`

---------

Co-authored-by: Thanhnguyet Vo <thavo@redhat.com>
2025-04-10 14:32:17 -04:00
Lila Yasin
9d9c125e47 [4.6][Backport][Feature] feat: 38589 GitHub App Authentication (#15807) (#6887)
* feat: 38589 GitHub App Authentication (#15807)

* feat: 38589 GitHub App Authentication

Allows both git@<personal-token> and x-access-token@<github-access-token> when authenticating using git.
This allows GitHub App tokens to work without interfering with existing authentication types.
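
For illustration, the two credential shapes can both be carried in an HTTPS clone URL like this sketch (GitHub App installation tokens use the x-access-token username; the username for personal tokens is conventional):

```
def https_clone_url(repo, token, is_github_app=False):
    # e.g. https://x-access-token:<token>@github.com/org/repo.git
    user = "x-access-token" if is_github_app else "git"
    return f"https://{user}:{token}@github.com/{repo}.git"
```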

---------

Co-authored-by: Jake Jackson <thedoubl3j@Jakes-MacBook-Pro.local>

* revert change made to allow UI to accept x-access-token, just use htt… (#15851)

revert change made to allow UI to accept x-access-token, just use https:// instead

* Add Github dep for new cred support if used (#15850)

* Add pygithub for new app token support

* fixed git requirements file with new
* added new github dep and relevant deps it needs

* add required licenses

* Add artifacts to satisfy license check

* Remove duplicated license

---------

Co-authored-by: Andrea Restle-Lay <arestlel@redhat.com>
Co-authored-by: Alan Rominger <arominge@redhat.com>

* Remove deps update it came with the cherry-pick and is not needed in this version

Remove unneeded deps updates from requirements.in

Remove point to awx-plugins as it is not needed in tower

* Add a credential plugin that uses GitHub Apps to get tokens

* Add github app tests

* Ran requirements updater script

Ran black on github_app_test to fix formatting issue

Add scm_github_app to managed credentials

Ran updater script to reflect new deps

Added github app info to def build_passwords in jobs.py, cred now appears in credential types

Update ManagedCredentialType for GitHub App to match what we have in awx-plugins

Update inputs in ManagedCredentialType to github_app_inputs to communicate with awx/main/credential_plugins/github_app.py

Revert incorrect change in ManagedCredentialType, change github_app_lookup to call inputs instead of github_app_inputs

Updated namespace to github_app_lookup to agree with nomenclature used in the rest of the implementation and to resolve failing API test

Remove import pointing to awx plugins and update to point to credential_plugins

Remove references to gh_app_plugin_mod and change to github_app

Remove from awx_plugins.interfaces._temporary_private_django_api import (  # noqa: WPS436 to resolve failing test

Remove flake8 typing & typing references that do not exist in this version of Tower

Remove references in jobs.py and  __init__.py since this is an external cred type and registered it in setup.cfg instead

Remove blank line

Revise name in cfg from github_app_lookup to github_app to see if that fixes module not found error

Revise first declaration of github_app to agree with file name to see if that resolves issue

Rename line 174 to agree with what's in config

Fix reference to github_app_lookup to github_app

Linters complaining about the github_app in __all__ not being defined; renamed to see if that aligns them

Fix naming in test to correspond to naming of cred type

Update naming to be more specific and add blank line to setup.cfg

Remove __all__ from githubapp.py to satisfy linters

Revert formatting change since it is not needed in this repository

* Add blank line at the end of requirements.in

---------

Co-authored-by: Andrea Restle-Lay <andrearestlelay@gmail.com>
Co-authored-by: Jake Jackson <thedoubl3j@Jakes-MacBook-Pro.local>
Co-authored-by: Jake Jackson <jljacks93@gmail.com>
Co-authored-by: Andrea Restle-Lay <arestlel@redhat.com>
Co-authored-by: Alan Rominger <arominge@redhat.com>
2025-04-10 14:32:17 -04:00
Chris Meyers
5dd81a04ce AAP-41580 indirect host count wildcard query (#15893)
* Support <collection_namespace>.<collection_name>.* indirect host query
  to match ANY module in the <collection_namespace>.<collection_name> (matching sketched below)
* Add tests for new wildcard indirect host count
* error checking of ansible event name
* error checking of ansible event query
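
As an illustration of the wildcard semantics (fnmatch stands in here for whatever matching the event-query code actually performs):

```
import fnmatch

def module_matches(resolved_action, query_pattern):
    # "cisco.ios.*" matches any module in the cisco.ios collection;
    # an exact name matches only that module.
    return fnmatch.fnmatchcase(resolved_action, query_pattern)
```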
2025-04-10 14:32:15 -04:00
Lila Yasin
05dc9bad1c AAP-39365 facts are unintentionally deleted when the inventory is modified during a job execution (#15910)
* Added test_jobs.py to the model unit test folder in order to show the undesired behaviour with fact cache

Signed-off-by: onetti7 <davonebat@gmail.com>

* Added test_jobs.py to the model unit test folder in order to show the undesired behaviour with fact cache

Signed-off-by: onetti7 <davonebat@gmail.com>

* Solved undesired behaviour with fact_cache

Signed-off-by: onetti7 <davonebat@gmail.com>

* Solved bug with slices

Signed-off-by: onetti7 <davonebat@gmail.com>

* Remove unused imports

Remove now unused line of code which was commented out by the contributor

Revert "Remove now unused line of code which was commented out by the contributor"

This reverts commit f1a056a2356d56bc7256957a18503bd14dcfd8aa.

* Add back line that had been commented out as this line makes hosts specific to the particular slice when applicable

Revise private_data_dir fixture to see if it improves code coverage

Checked out awx/main/tests/unit/models/test_jobs.py in devel to see if it resolves git diff issue

* Fix formatting in awx/main/tests/unit/models/test_jobs.py

Rename for loop from host in hosts to host in hosts_cached and remove unneeded continue

Revise finish_fact_cache to utilize inventory rather than hosts

Remove local var hosts that was assigned but unused

Revert change in start_fact_cache hosts_cached back to hosts

Revise the way we are handling hosts_cached and joining the file

Revert "Revise the way we are handling hosts_cached and joining the file"

This reverts commit e6e3d2f09c1b79a9bce3647a72e3dad97fe0aed8.

Reapply "Revise the way we are handling hosts_cached and joining the file"

This reverts commit a42b7ae69133fee24d3a5f1b456d9c343d111df9.

Revert some of my changes to get back to a better working state

Rename for loop to host in hosts_cached and remove unneeded continue

Remove jobs job.get_hosts_for_fact_cache() from post run hook, fix if statement after continue block, and revise how we are calling hosts in finish for loop

Add test_invalid_host_facts to test_jobs to increase code coverage

Update method signature to use hosts_cached and updated other references to hosts in finish_facts_cached

Rename hosts iterator to hosts_cached to agree with naming elsewhere

Revise test_invalid_host_facts to get more code coverage

Revise test_invalid_host_facts to increase codecov

Revise test_pre_post_run_hook_facts_deleted_sliced to ensure we are hitting the assertionerror for code cov

Revise  mock_inventory.hosts. to hit assert failure

Add revision of hosts and facts to force failure to satisfy code cov

Fix failure in test_pre_post_run_hook_facts_deleted_sliced

Add back for loop to create failures and add assert to hit them

Remove hosts.iterator() from both start_fact_cache and finish_fact_cache

Remove unused import of Queryset to satisfy api-lint

Fix typo in docstring hasnot to has not

Move hosts_cached.append(host) to outer loop in start_fact_cache

Add class to help support cached hosts resolving host.name issue with hosts_cached

* Add live tests for ansible facts

Remove fixture needed for local work only maybe

Revert "Add class to help support cached hosts resolving host.name issue with hosts_cached"

This reverts commit 99d998cfb9960baafe887de80bd9b01af50513ec.

* Move hosts_cached.append(host) outside of try except

* Move hosts_cached.append(host) to the beginning of start_fact_cache

---------

Signed-off-by: onetti7 <davonebat@gmail.com>
Co-authored-by: onetti7 <davonebat@gmail.com>
Co-authored-by: Alan Rominger <arominge@redhat.com>
2025-04-10 11:46:41 -04:00
Alan Rominger
38f0f8d45f Remove pbr dependency (#15806)
* Remove pbr dependency

* Review comment, remove comment
2025-04-09 17:20:12 -04:00
Alan Rominger
d3ee9a1bfd AAP-27502 Try removing coreapi for deprecation warning (#15804)
Try removing coreapi for deprecation warning
2025-04-09 10:50:07 -07:00
TVo
438aa463d5 Remove L10N deprecation exception (#15925)
* Remove L10N deprecation exception

* Remove L10N from default settings file.
2025-04-09 06:22:01 -07:00
jessicamack
51f9160654 Fix CVE 2025-26699 (#15924)
fix CVE 2025-26699
2025-04-08 12:07:22 -04:00
Dave
ac3123a2ac Error reporting and handling in GH14575/GH12682 (#14802)
Bug Error reporting and handling in GH14575/GH12682

This targets a bug where a blank string was parsed as None for panelid
and dashboardid.

It also prints more error reporting to the console to diagnose
reporting issues.

Co-authored-by: Lila Yasin <lyasin@redhat.com>
2025-04-08 15:27:19 +00:00
Hao Liu
c4ee5127c5 Prevent automountServiceAccountToken in container group pod spec (#15586)
* Prevent job pod from mounting serviceaccount token

* Add serializer validation for cg pod_spec_override

Prevent automountServiceAccountToken from being set to true and provide an error message when it is (validation sketched below)
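
A minimal sketch of such serializer-level validation, assuming the override arrives as a parsed pod manifest dict (names are illustrative):

```
from rest_framework import serializers

def validate_pod_spec_override(pod_spec):
    # Reject pod specs that opt back in to service account token mounting.
    if pod_spec.get("spec", {}).get("automountServiceAccountToken"):
        raise serializers.ValidationError("automountServiceAccountToken may not be set to true")
    return pod_spec
```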
2025-04-03 12:58:16 -04:00
Hao Liu
9ec7540c4b Common setup-python in github action (#15901) 2025-04-03 11:14:52 -04:00
Hao Liu
2389fc691e Common action for setup ssh agent in GHA (#15902) 2025-04-03 11:14:33 -04:00
Konstantin
567f5a2476 Credentials: accept empty description (#15857)
Accept empty description
2025-04-02 17:05:38 +00:00
Jan-Piet Mens
e837535396 Indicate tower_cli.cfg can also be in current directory (#15092)
Update readme to provide more details on how tower_cli.cfg is used.

Co-authored-by: Hao Liu <44379968+TheRealHaoLiu@users.noreply.github.com>
2025-04-02 11:32:11 -04:00
Konstantin Kuminsky
1d57f1c355 Ability to remove credentials owned by a user 2025-04-02 16:48:27 +02:00
Dirk Jülich
e060e44b05 AAP-37381 Apply Django password validators correctly. (#6902)
* Move call to django_validate_password to the correct method where the user object is available.
* Added tests for the Django password validation functionality.
2025-04-02 16:45:58 +02:00
Konstantin
7676f14114 Accept empty description 2025-04-02 16:01:38 +02:00
Dirk Jülich
182e5cfaa4 AAP-37381 Apply password validators from settings.AUTH_PASSWORD_VALIDATORS correctly. (#15897)
* Move call to django_validate_password to the correct method where the user object is available.

* Added tests for the Django password validation functionality.
2025-04-01 12:03:11 +02:00
Fabio Alessandro Locati
99be91e939 Add notice of paused releases (#15900)
* Add notice of suspended releases

* Improve following suggestions
2025-03-27 19:14:21 +00:00
Chris Meyers
9ff163b919 Remove AsgiHandler deprecation exception
* Time has passed. Channels (4.2.0) no longer raises a deprecation
  warning for this case. It used to (4.1.0).
* We do NOT serve http requests over daphne, this is the default
  behavior of ProtocolTypeRouter() when the http param is NOT included
2025-03-27 11:40:40 -04:00
Chris Meyers
5d0d0404c7 Remove ProtocolTypeRouter deprecation exception
* Time has passed. Channels (4.2.0) no longer raises a deprecation
  warning for this case. It used to (4.1.0).
* All is good. No code changes needed for this. We do NOT service http
  requests over daphne, just websockets. We, correctly, do NOT supply
  the http key so daphne does NOT service http requests.
2025-03-27 11:17:56 -04:00
Chris Meyers
db5b6d0019 Add changelog to awx collection 2025-03-25 13:41:53 -04:00
Chris Meyers
a2c8ecb4e6 Bump awx collection ansible required version 2025-03-25 13:41:53 -04:00
Chris Meyers
277bc581e7 Remove coarse grain unused import
* It would seem that fine-grain noqa pylint ignores do the job and are
  already in place. Prefer that over the coarse entire file ignore.
2025-03-25 13:41:53 -04:00
Chris Meyers
ef89c59a13 Fix ansible-lint empty lines in module docstrings 2025-03-25 13:41:53 -04:00
Chris Meyers
5872a88a57 Fix ansible-lint truthy in module docstrings 2025-03-25 13:41:53 -04:00
Chris Meyers
7fdd15f115 Fix ansible-lint indentation in module docstrings 2025-03-25 13:41:53 -04:00
Chris Meyers
5d53821ce5 AAP-41580 indirect host count wildcard query (#15893)
* Support <collection_namespace>.<collection_name>.* indirect host query
  to match ANY module in the <collection_namespace>.<collection_name>
* Add tests for new wildcard indirect host count
* error checking of ansible event name
* error checking of ansible event query
2025-03-24 08:15:44 -04:00
Seth Foster
39cd09ce19 Remove django settings module env var (#15896)
Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2025-03-18 20:20:26 -04:00
Peter Braun
353f0adf36 [4.6][backport] do not count dark hosts as updated (#6877)
* feat: do not count dark hosts as updated (#15872)

* feat: do not count dark hosts as updated

* update functional tests

* Fix test flake due to host metric id enumeration (#15875)

---------

Co-authored-by: Alan Rominger <arominge@redhat.com>
2025-03-18 16:34:52 +01:00
Jake Jackson
cd0e27446a Update bug scrub docs (#15894)
* Update docs with a few more things

* update about use of PAT
* update around managing output from the script

* Fix spacing and empty line

* finish run on sentence
* update requirements with extra dep needed
2025-03-18 11:06:37 -04:00
Peter Braun
bdfd9dec74 update: use singular form ANSIBLE_COLLECTIONS_PATH (#15841) (#6876)
* update: use singular form ANSIBLE_COLLECTIONS_PATH

* update functional tests
2025-03-18 14:44:57 +01:00
Hao Liu
628a0e6a36 Add opa_query_path to Organization/Inventory/JobTemplate (#15863) 2025-03-18 09:06:14 -04:00
Hao Liu
bad4e630ba Basic runtime enforcement of policy as code part 2 (#6875)
* Add `opa_query_path field` for Inventory, Organization and JobTemplate models (#6850)

Add `opa_query_path` model field to Inventory, Organization and JobTemplate. Add migration file and expose opa_query_path field in the related API serializers.

* Gather and evaluate `opa_query_path` fields and raise violation exceptions (#6864)

Gather and evaluate all OPA queries related to a job execution during the policy evaluation phase

* Add OPA_AUTH_CUSTOM_HEADERS support (#6863)

* Extend policy input data serializers (#6890)

* Extend policy input data serializers

* Update help text for PaC related fields (#6891)

* Remove encrypted from OPA_AUTH_CUSTOMER_HEADER

Unable to encrypt a dict field

---------

Co-authored-by: Jiří Jeřábek (Jiri Jerabek) <Jerabekjirka@email.cz>
Co-authored-by: Alexander Saprykin <cutwatercore@gmail.com>
Co-authored-by: Tina Tien <98424339+tiyiprh@users.noreply.github.com>
2025-03-18 02:39:26 +00:00
Seth Foster
e9f2a14ebd Fix root container path for project updates in K8s
Modifies to_container_path to accept an optional
container_root parameter.

Normally this defaults to /runner, but in K8S
environments, project updates run from the
private_data_dir, e.g. /tmp/awx_1_123abc, not
/runner.

In that situation, we just pass in private_data_dir
as the container_root.
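
A rough sketch of the idea (the real to_container_path implementation differs in details):

```
import os

def to_container_path(path, private_data_dir, container_root="/runner"):
    """Map a host-side path under private_data_dir to its in-container path."""
    if not os.path.abspath(path).startswith(os.path.abspath(private_data_dir)):
        raise ValueError(f"{path} is not inside {private_data_dir}")
    return os.path.join(container_root, os.path.relpath(path, private_data_dir))
```

On K8S, calling it with container_root set to private_data_dir leaves project-update paths pointing at the mounted-in directory instead of /runner.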

---------

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
Co-authored-by: TVo <thavo@redhat.com>
2025-03-13 13:34:12 -04:00
Alan Rominger
01fae57de2 Fix indirect host counting task test race condition (#15871) 2025-03-12 13:50:19 -04:00
Alan Rominger
c7ac45717b Pin drf-yasg to make api-test pass (#15887)
Pin drf-yasg to make api-test pass
2025-03-12 13:50:19 -04:00
Eric C Chong
8fb5862223 Allow lookup_organization to find teams and resources from different orgs 2025-03-12 13:32:51 -04:00
Sasa Jovicic
6f7d5ca8a3 Implement an option to choose a job type on relaunch (issue #14177) (#15249)
Allows changing the job type (run, check) when relaunching
a job by adding a "job_type" to the relaunch POST payload
2025-03-12 13:27:05 -04:00
Seth Foster
0f0f5aa289 Pass in private_data_dir when project update is on K8S
In OCP/K8S, projects run in the task pod's ee container. The private_data_dir is not extracted to /runner. Instead, the project update runs directly from the mounted-in private_data_dir, e.g. /tmp/awx_1_abcd.

When injecting a credential that uses extra vars, we pass the private_data_dir as the container_root, so that the correct command line argument is generated, e.g. "-e /tmp/awx_1_abcd/env/extra_var_file".

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2025-03-11 23:12:10 -04:00
Alan Rominger
bc12fa2283 Fix indirect host counting task test race condition (#15871) 2025-03-11 14:46:39 -04:00
Pablo H.
03b37037d6 feat: awx community bug scrub script (#15869)
* feat: awx community bug scrub script

---------

Co-authored-by: thedoubl3j <jljacks93@gmail.com>
2025-03-11 14:07:54 -04:00
Alan Rominger
5668973d70 Allow schema generation on-demand (#15885) 2025-03-11 13:49:02 -04:00
Hao Liu
e6434454ce Fix CI schema gen (#15886)
Fix schema gen failure

```
ERROR: invalid empty ssh agent socket: make sure SSH_AUTH_SOCK is set
```
2025-03-11 16:57:36 +00:00
Alan Rominger
3ba9c026ea Pin drf-yasg to make api-test pass (#15887)
Pin drf-yasg to make api-test pass
2025-03-11 16:39:06 +00:00
Alan Rominger
c7b6b43913 [4.6] Attempt to fix ui-lint check by clearing cache forcefully (main fork) (#6885)
* Try to make ui-lint check pass in 4.6

* Implement solution suggested by Kia
2025-03-11 11:52:30 -04:00
Alan Rominger
1e6a7c0749 Prevent system auditor from downloading install bundle (#6805) 2025-03-11 10:54:02 -04:00
Alan Rominger
b5bc85e639 AAP-41692 [4.6] Update jinja2 for CVE (#6881)
* Initial bump of jinja2 lib

* Run updater script
2025-03-11 10:45:37 -04:00
Konstantin
a206ca22ec Change collection name back to awx 2025-03-11 10:14:30 +01:00
Konstantin Kuminsky
e961cbe46f Few minor changes in the lookup description 2025-03-11 10:14:30 +01:00
Bruno Rocha
0ffe04ed9c feat: Manage Django Settings with Dynaconf
Dynaconf is being added from DAB factory to load Django Settings
2025-03-07 10:18:27 -05:00
Alan Rominger
ee739b5fd9 Fix test flake due to host metric id enumeration (#15875) 2025-03-06 14:35:39 -05:00
Peter Braun
abc04e5c88 feat: do not count dark hosts as updated (#15872)
* feat: do not count dark hosts as updated

* update functional tests
2025-03-06 09:41:12 +01:00
Peter Braun
5b17e5c9c3 update: use singular form ANSIBLE_COLLECTIONS_PATH (#15841)
* update: use singular form ANSIBLE_COLLECTIONS_PATH

* update functional tests
2025-03-05 16:39:34 +01:00
Jake Jackson
f04bf5ccf0 update receptor (#6869)
* bumped receptor version to latest
2025-03-04 15:16:58 -05:00
Peter Braun
28712a4c6e fix: audit record name should not be the hostname (#15864) (#6866)
* fix: audit record name should not be the hostname

* fix: update tests
2025-03-03 16:49:42 +01:00
Peter Braun
7b8b37d9a8 fix: audit record name should not be the hostname (#15864)
* fix: audit record name should not be the hostname

* fix: update tests
2025-02-27 13:43:59 +01:00
Hao Liu
698d769a7a [Policy as Code] Monkey patch opa_client.base.BaseClient (#6865)
Workaround bug described in https://github.com/Turall/OPA-python-client/issues/29
2025-02-26 21:09:51 +00:00
Alan Rominger
529ee73fcd [4.6] Backport the "live" tests (#6859)
* Create a new pytest folder for live system testing with normal services (#15688)

* PoC for running dev env tests

* Replace in github actions

* Move folder to better location

* Further streamlining of new test folders

* Consolidate fixture, add writeup docs

* Use star import

* Push the wait-for-job to the conftest

Fix misused project cache identifier (#15690)

Fix project cache identifiers for new updates

Finish test and discover viable solution

Add comment on related task code

AAP-37989 Tests for exclude list with multiple jobs (#15722)

* Tests for exclude list with multiple jobs

Create test for using manual & file projects (#15754)

* Create test for using a manual project

* Chang default project factory to git, remove project files monkeypatch

* skip update of factory project

* Initial file scaffolding for feature

* Fill in galaxy and names

* Add README, describe project folders and dependencies

Add ee cleanup tests

* Adds cleanup tests to the live test.

Fix rsyslog permission error in github ubuntu tests from apparmor (#15717)

* Add test to detect rsyslog config problems

* Get dmesg output

* Disable apparmor for rsyslogd

Make awx/main/tests/live dramatically faster (#15780)

* Make awx/main/tests/live dramatically faster

* Add new setting to exclude list

* Fix rebase issues

* Did not want to backport this
2025-02-25 15:22:38 -05:00
jessicamack
ba053dfb51 Ship analytics data using service account token (#15812) (#6856)
Use oidc client to ship analytics data
2025-02-25 14:42:17 -05:00
Hao Liu
43b72161ce Remove requirements_git.credentials.txt (#15862)
Switched to ssh based auth for requirements_git in https://github.com/ansible/awx/pull/15838
2025-02-25 18:44:36 +00:00
Marc Hassan
de4a971cb3 cli: set non-zero return code for canceled status (#15678) 2025-02-25 11:11:47 -05:00
Alan Rominger
b351dfb102 Undo temporary DAB change for requirements generation (#6862) 2025-02-25 09:30:15 -05:00
Alan Rominger
fb4879b2c9 Add missing slash breaking image builds (#15861) 2025-02-24 20:40:09 -05:00
Alan Rominger
b502a9444a [4.6 backport] Feature indirect host counting (#15802) (#6858)
* Feature indirect host counting (#15802)

* AAP-37282 Add parse JQ data and test it for a `job` object in isolation (#15774)

* Add jq dependency

* Add file in progress

* Add license for jq

* Write test and get it passing

* Successfully test collection of `event_query.yml` data (#15761)

* Callback plugin method from cmeyers adapted to global collection list

Get tests passing

Mild rebranding

Put behind feature flag, flip true in dev

Add noqa flag

* Add missing wait_for_events

* feat: try grabbing query files from artifacts directory (#15776)

* Contract changes for the event_query collection callback plugin (#15785)

* Minor import changes to collection processing in callback plugin

* Move agreed location of event_query file

* feat: remaining schema changes for indirect host audits (#15787)

* Re-organize test file and move artifacts processing logic to callback (#15784)

* Rename the indirect host counting test file

* Combine artifacts saving logic

* Connect host audit model to jq logic via new task

* Add unit tests for indirect host counting (#15792)

* Do not get django flags from database (#15794)

* Document, implement, and test remaining indirect host audit fields (#15796)

* Document, implement, and test remaining indirect host audit fields

* Fix hashing

* AAP-39559 Wait for all event processing to finish, add fallback task (#15798)

* Wait for all event processing to finish, add fallback task

* Add flag check to periodic task

* feat: cleanup of old indirect host audit records (#15800)

* By default, do not count indirect hosts (#15801)

* By default, do not count indirect hosts

* Fix copy paste goof

* Fix linter issue from base branch

* prevent multiple tasks from processing the same job events, prevent p… (#15805)

prevent multiple tasks from processing the same job events, prevent periodic task from spawning another task per job

* Fix typos and other bugs found by Pablo review

* fix: rely on resolved_action instead of task, adapt to proposed query… (#15815)

* fix: rely on resolved_action instead of task, adapt to proposed query structure

* tests: update indirect host tests

* update remaining queries to new format

* update live test

* Remove polling loop for job finishing event processing (#15811)

* Remove polling loop for job finishing event processing

* Make awx/main/tests/live dramatically faster (#15780)

* AAP-37282 Add parse JQ data and test it for a `job` object in isolation (#15774)

* Add jq dependency

* Add file in progress

* Add license for jq

* Write test and get it passing

* Successfully test collection of `event_query.yml` data (#15761)

* Callback plugin method from cmeyers adapted to global collection list

Get tests passing

Mild rebranding

Put behind feature flag, flip true in dev

Add noqa flag

* Add missing wait_for_events

* feat: try grabbing query files from artifacts directory (#15776)

* Contract changes for the event_query collection callback plugin (#15785)

* Minor import changes to collection processing in callback plugin

* Move agreed location of event_query file

* feat: remaining schema changes for indirect host audits (#15787)

* Re-organize test file and move artifacts processing logic to callback (#15784)

* Rename the indirect host counting test file

* Combine artifacts saving logic

* Connect host audit model to jq logic via new task

* Document, implement, and test remaining indirect host audit fields (#15796)

* AAP-39559 Wait for all event processing to finish, add fallback task (#15798)

* Wait for all event processing to finish, add fallback task

* Add flag check to periodic task

* feat: cleanup of old indirect host audit records (#15800)

* prevent multiple tasks from processing the same job events, prevent p… (#15805)

prevent multiple tasks from processing the same job events, prevent periodic task from spawning another task per job

* Remove polling loop for job finishing event processing (#15811)

* Make awx/main/tests/live dramatically faster (#15780)

* reorder migrations to allow indirect instances backport

* cleanup for rebase and merge into devel

---------

Co-authored-by: Peter Braun <pbraun@redhat.com>
Co-authored-by: jessicamack <jmack@redhat.com>
Co-authored-by: Peter Braun <pbranu@redhat.com>
2025-02-24 21:55:44 +00:00
Hao Liu
2d648d1225 [Feature][release_4.6] Policy as Code MVP part 1 (#6848) 2025-02-24 15:58:57 -05:00
Alan Rominger
7d30dff075 Feature indirect host counting (#15802)
* AAP-37282 Add parse JQ data and test it for a `job` object in isolation (#15774)

* Add jq dependency

* Add file in progress

* Add license for jq

* Write test and get it passing

* Successfully test collection of `event_query.yml` data (#15761)

* Callback plugin method from cmeyers adapted to global collection list

Get tests passing

Mild rebranding

Put behind feature flag, flip true in dev

Add noqa flag

* Add missing wait_for_events

* feat: try grabbing query files from artifacts directory (#15776)

* Contract changes for the event_query collection callback plugin (#15785)

* Minor import changes to collection processing in callback plugin

* Move agreed location of event_query file

* feat: remaining schema changes for indirect host audits (#15787)

* Re-organize test file and move artifacts processing logic to callback (#15784)

* Rename the indirect host counting test file

* Combine artifacts saving logic

* Connect host audit model to jq logic via new task

* Add unit tests for indirect host counting (#15792)

* Do not get django flags from database (#15794)

* Document, implement, and test remaining indirect host audit fields (#15796)

* Document, implement, and test remaining indirect host audit fields

* Fix hashing

* AAP-39559 Wait for all event processing to finish, add fallback task (#15798)

* Wait for all event processing to finish, add fallback task

* Add flag check to periodic task

* feat: cleanup of old indirect host audit records (#15800)

* By default, do not count indirect hosts (#15801)

* By default, do not count indirect hosts

* Fix copy paste goof

* Fix linter issue from base branch

* prevent multiple tasks from processing the same job events, prevent p… (#15805)

prevent multiple tasks from processing the same job events, prevent periodic task from spawning another task per job

* Fix typos and other bugs found by Pablo review

* fix: rely on resolved_action instead of task, adapt to proposed query… (#15815)

* fix: rely on resolved_action instead of task, adapt to proposed query structure

* tests: update indirect host tests

* update remaining queries to new format

* update live test

* Remove polling loop for job finishing event processing (#15811)

* Remove polling loop for job finishing event processing

* Make awx/main/tests/live dramatically faster (#15780)

* AAP-37282 Add parse JQ data and test it for a `job` object in isolation (#15774)

* Add jq dependency

* Add file in progress

* Add license for jq

* Write test and get it passing

* Successfully test collection of `event_query.yml` data (#15761)

* Callback plugin method from cmeyers adapted to global collection list

Get tests passing

Mild rebranding

Put behind feature flag, flip true in dev

Add noqa flag

* Add missing wait_for_events

* feat: try grabbing query files from artifacts directory (#15776)

* Contract changes for the event_query collection callback plugin (#15785)

* Minor import changes to collection processing in callback plugin

* Move agreed location of event_query file

* feat: remaining schema changes for indirect host audits (#15787)

* Re-organize test file and move artifacts processing logic to callback (#15784)

* Rename the indirect host counting test file

* Combine artifacts saving logic

* Connect host audit model to jq logic via new task

* Document, implement, and test remaining indirect host audit fields (#15796)

* Document, implement, and test remaining indirect host audit fields

* Fix hashing

* AAP-39559 Wait for all event processing to finish, add fallback task (#15798)

* Wait for all event processing to finish, add fallback task

* Add flag check to periodic task

* feat: cleanup of old indirect host audit records (#15800)

* prevent multiple tasks from processing the same job events, prevent p… (#15805)

prevent multiple tasks from processing the same job events, prevent periodic task from spawning another task per job

* Remove polling loop for job finishing event processing (#15811)

* Remove polling loop for job finishing event processing

* Make awx/main/tests/live dramatically faster (#15780)

* temp

* remove test

* reorder migrations to allow indirect instances backport

* cleanup for rebase and merge into devel

---------

Co-authored-by: Peter Braun <pbraun@redhat.com>
Co-authored-by: jessicamack <jmack@redhat.com>
Co-authored-by: Peter Braun <pbranu@redhat.com>
2025-02-24 16:39:51 +00:00
Alan Rominger
0ba9fc6980 More PyGithub dep and license housekeeping (#15853) 2025-02-24 08:41:54 -05:00
Andrea Restle-Lay
70ea0a785b revert change made to allow UI to accept x-access-token, just use htt… (#15851)
revert change made to allow UI to accept x-access-token, just use https:// instead
2025-02-21 21:10:22 +00:00
Jake Jackson
fa099fe737 Add Github dep for new cred support if used (#15850)
* Add pygithub for new app token support

* fixed git requirements file with new
* added new github dep and relevant deps it needs

* add required licenses

* Add artifacts to satisfy license check

* Remove duplicated license

---------

Co-authored-by: Andrea Restle-Lay <arestlel@redhat.com>
Co-authored-by: Alan Rominger <arominge@redhat.com>
2025-02-20 21:16:02 +00:00
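As a rough illustration of what the new PyGithub dependency enables, a GitHub App installation token could be minted as sketched below; the app ID, installation ID, and key path are placeholders, and AWX's actual call sites may differ.

```
# Sketch of minting a GitHub App installation token with PyGithub.
# app_id, installation_id, and the key path are placeholders.
from github import Auth, GithubIntegration

with open('/path/to/app-private-key.pem') as f:
    private_key = f.read()

auth = Auth.AppAuth(app_id=12345, private_key=private_key)
integration = GithubIntegration(auth=auth)
token = integration.get_access_token(installation_id=67890).token

# The short-lived token can then authenticate git over HTTPS, per the
# GitHub App Authentication commit below:
clone_url = f'https://x-access-token:{token}@github.com/org/repo.git'
```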
Andrea Restle-Lay
bf4d45452c feat: 38589 GitHub App Authentication (#15807)
* feat: 38589 GitHub App Authentication

Allows both git@<personal-token> and x-access-token@<github-access-token> when authenticating using git.
This allows GitHub App tokens to work without interfering with existing authentication types.

---------

Co-authored-by: Jake Jackson <thedoubl3j@Jakes-MacBook-Pro.local>
2025-02-19 23:13:45 +00:00
jessicamack
e56752d55b Ship analytics data using service account token (#15812)
Use oidc client to ship analytics data
2025-02-19 16:38:47 -05:00
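The token exchange behind this change is a standard OIDC client-credentials flow; a minimal sketch follows, with the token endpoint assumed for illustration.

```
# Minimal sketch of an OIDC client-credentials exchange; the endpoint
# shown is an assumption for illustration.
import requests

TOKEN_URL = 'https://sso.redhat.com/auth/realms/redhat-external/protocol/openid-connect/token'

def get_service_account_token(client_id, client_secret):
    resp = requests.post(TOKEN_URL, data={
        'grant_type': 'client_credentials',
        'client_id': client_id,
        'client_secret': client_secret,
    })
    resp.raise_for_status()
    return resp.json()['access_token']

# The bearer token then authorizes the analytics upload, e.g.
# requests.post(url, headers={'Authorization': f'Bearer {token}'}, ...)
```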
Alan Rominger
3495c421c1 Fix: do not use current apps in migrations (#15839) 2025-02-19 07:58:24 -05:00
Hao Liu
b8a1e90b06 [CI][release_4.6] Fix schema upload (#6840) 2025-02-18 09:19:50 -05:00
Hao Liu
8145de3917 Switch to ssh key for private requirements_git (#15838) 2025-02-17 23:58:12 -05:00
Hao Liu
c0b9d3f428 Switch to ssh for private git requirements (#6838) 2025-02-17 22:44:29 -05:00
Hao Liu
376a791052 [CI][release_4.6] backport push development image base on repo name (#6837)
* Publish image based on git repo name instead of hard-coded to AWX (#15828)

* Fix git credential for devel_image build (#15834)

* Continue if pre-warm cache fails in container build (#15835)

* Use correct devel image for docker-compose (#15836)
2025-02-17 16:26:19 -05:00
TVo
cb2df43580 [4.6_Backport] Added helper method for fetching serviceaccount token (#6823) 2025-02-17 15:31:33 -05:00
Hao Liu
4487f2afa7 Use correct devel image for docker-compose (#15836) 2025-02-13 21:16:40 -05:00
Hao Liu
c886f57119 Continue if pre-warm cache fails in container build (#15835)
Continue if pre-warm cache fails
2025-02-13 21:16:25 -05:00
Hao Liu
2e9fd7bd67 Fix git credential for devel_image build (#15834) 2025-02-13 16:08:57 -05:00
Hao Liu
ccb6360a96 AAP-39778[Backport][release_4.6] Add DAB Feature Flag common API (#6833)
* [AAP-39138] - Add DAB Feature Flag common API (#15786)
* Update django-ansible-base reference to ansible-automation-platform/django-ansible-base@stable-2.5

---------

Co-authored-by: Zack Kayyali <zkayyali@redhat.com>
2025-02-12 15:47:06 -05:00
Hao Liu
397fb297bf Add ability to provide token for private repo for requirements_git in container build (#15831) (#6830)
Add ability to provide auth to private repo for requirements_git
2025-02-12 20:00:37 +00:00
Hao Liu
f8ff48fe5c Add ability to provide token for private repo for requirements_git in container build (#15831)
Add ability to provide auth to private repo for requirements_git
2025-02-12 19:20:13 +00:00
Hao Liu
69a60493a3 Update feature flag list test (#15830) 2025-02-12 18:29:52 +00:00
Hao Liu
e7440c6074 Publish image based on git repo name instead of hard-coded to AWX (#15828) 2025-02-10 20:12:06 +00:00
Alan Rominger
63bb4d66ef Do not get django flags from database (#15794) (#6820) 2025-02-10 10:46:22 -05:00
Alan Rominger
7d2b2d672c Make awx/main/tests/live dramatically faster (#15780)
* Make awx/main/tests/live dramatically faster

* Add new setting to exclude list
2025-02-08 21:07:56 -05:00
Alan Rominger
7017c28706 Limit to python 3.12 for 4.6 branch (#6817) 2025-02-05 10:48:07 -05:00
Alan Rominger
26346d237d Fix rsyslog permission error in github ubuntu tests from apparmor (#15717)
* Add test to detect rsyslog config problems

* Get dmesg output

* Disable apparmor for rsyslogd
2025-02-05 08:38:55 -05:00
Hao Liu
48ee5b05ee Set feature flag based on setting (#15808) (#6811) 2025-02-05 09:30:32 +01:00
Seth Foster
386f85c59f Fix rrule fast forwarding across DST boundaries (#6815)
Fixes an issue where schedules were not running at
the correct time.

Details:

DST is Daylight Saving Time

If the rrule dtstart is "in" a DST period (i.e., March to November)
and the current date is outside of the DST, then the fast forwarding
is not correct.

This is because datetime timedeltas do not honor DST boundaries

The Fix:

Convert the rrule dtstart to UTC before doing operations. Then,
convert back to the original timezone at the end.

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2025-02-04 15:27:21 -05:00
Seth Foster
c2e5425d93 Fix rrule fast forwarding across DST boundaries (#15809)
Fixes an issue where schedules were not running at
the correct time.

Details:

DST is Daylight Saving Time

If the rrule dtstart is "in" a DST period (i.e., March to November)
and the current date is outside of the DST, then the fast forwarding
is not correct.

This is because datetime timedeltas do not honor DST boundaries

The Fix:

Convert the rrule dtstart to UTC before doing operations. Then,
convert back to the original timezone at the end.

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2025-02-04 15:19:49 -05:00
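A minimal sketch of the UTC round-trip described above, assuming zoneinfo and an illustrative America/New_York schedule:

```
# Naive timedelta arithmetic on a local-time dtstart drifts by an hour
# across a DST boundary; do the math in UTC, then convert back.
# Illustrative sketch, not AWX's actual code.
from datetime import datetime, timedelta
from zoneinfo import ZoneInfo

NY = ZoneInfo('America/New_York')
UTC = ZoneInfo('UTC')

dtstart = datetime(2024, 7, 30, 12, 0, tzinfo=NY)   # inside DST
weeks_to_skip = 26                                   # lands outside DST

# Stepping in local time keeps the wall clock but shifts the UTC instant
# by an hour when the offset changes; stepping in UTC does not.
new_dtstart = (dtstart.astimezone(UTC) + timedelta(weeks=weeks_to_skip)).astimezone(NY)
print(new_dtstart)  # 2025-01-28 11:00 local -- same UTC instant, DST honored
```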
Hao Liu
15932e3f7c Set feature flag based on setting (#15808) 2025-02-04 11:00:05 -05:00
Lila Yasin
b7b15584af Remove Docker Desktop if statement (#15778) (#6797)
Remove docker desktop if statement
2025-02-03 13:35:19 -05:00
Adrià Sala
148f28f448 fix: azure credential awxkit client_id collision 2025-02-03 17:22:05 +01:00
Peter Braun
26b6eac849 fix: compatibility with black v25+ (#15789) (#6803) 2025-02-03 16:02:20 +00:00
Peter Braun
18ea5cc561 Use upload artifact v4 (#6807)
unique-ify name

psh, who needs loops

Folder management

Extracts into current path

Co-authored-by: Alan Rominger <arominge@redhat.com>
2025-02-03 15:10:55 +00:00
Zack Kayyali
a74e7301cd [AAP-39138] - Add DAB Feature Flag common API (#15786)
* Add DAB Feature Flag common API

* Use updated API /feature_flags_state/

* fix git reference

* organization updates
2025-02-03 11:40:16 +01:00
Alan Rominger
30b0c19e72 AAP-38528 Make default state passing for coverage targets (#15772)
* Make default state passing for coverage targets

* Implement Pablo suggestion for patch vs project

* bump up based on current values
2025-01-31 08:40:40 -05:00
Chris Meyers
b53c576944 Add helper to proxy analytics requests
* Handles OIDC token creation and usage transparently
2025-01-30 19:15:41 -05:00
Lila Yasin
99b67f1e37 Add client_secret and client_id to credential_input_fields (#15734) (#6795) 2025-01-29 18:43:06 -05:00
Alan Rominger
c418bc034f Put duplicate plugin location in error message (#15781) 2025-01-29 17:15:02 -05:00
Alan Rominger
d639953a4c Removing some (but not all) dead pytest fixtures (#15782)
* Try removing dead fixtures

* Add back in EE fixture
2025-01-29 17:13:31 -05:00
Jake Jackson
c6930bdf32 Address Lookup Plugins AttributeError (#15770)
* fix backend attribute error

* managedcredential may now contain 2 different classes
* managedcredentialType and one that represents a lookup plugin

* conditionalize creation params

* added a conditional statement to filter out external types

* all external credentials are managed by awx/aap
2025-01-29 10:27:51 -05:00
Peter Braun
d36cd6c6ab fix: compatibility with black v25+ (#15789) 2025-01-29 15:06:14 +00:00
Jake Jackson
aa0f2e362b Remove old jwcrypto tar from licenses since we included the upgraded version (#15783)
remove old jwcrypto tar file since we updated

* remove old version since we updated to a later version, old version is
  no longer needed
2025-01-29 09:48:27 -05:00
Lila Yasin
00238850f4 Remove Docker Desktop if statement (#15778)
Remove docker desktop if statement
2025-01-28 13:43:00 -05:00
Chris Meyers
cdd9e7263d Add insights service account support to collection 2025-01-24 16:29:18 -05:00
Alan Rominger
1f503645fd Move some more tests out of root functional folder (#15753) 2025-01-24 15:04:59 -05:00
Seth Foster
edba126193 Add service account support to Insights credential
Adds fields client_id and client_secret which
will result in authentication via service account
on console.redhat.com

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2025-01-24 14:42:01 -05:00
Chris Meyers
ad706d67c2 Add ee cleanup tests
* Adds cleanup tests to the live test.
2025-01-24 10:55:30 -05:00
Alan Rominger
534c312328 Use upload artifact v4
unique-ify name

psh, who needs loops

Folder management

Extracts into current path
2025-01-23 16:56:19 -05:00
thedoubl3j
a270b9b474 remove old psycopg tar file that we do not use 2025-01-23 16:56:19 -05:00
Adrià Sala
22ecb2030c feat: support insights service account credentials for project update (AAP-37464) 2025-01-22 16:33:11 +01:00
Adrià Sala
ada42d7d7c fix: azure credential awxkit client_id collision 2025-01-22 09:47:35 +01:00
jessicamack
4eed454ed7 Establish a feature flag for indirect host counting feature (#15759)
* add feature flag for indirect node counting

* fix name of flag
2025-01-21 20:00:57 +01:00
Alan Rominger
c43dfde45a Create test for using manual & file projects (#15754)
* Create test for using a manual project

* Change default project factory to git, remove project files monkeypatch

* skip update of factory project

* Initial file scaffolding for feature

* Fill in galaxy and names

* Add README, describe project folders and dependencies
2025-01-20 17:06:15 -05:00
Rodrigo Toshiaki Horie
2e8114394b [4.6][dependency] update django for CVE-2024-56374 (#6784) 2025-01-20 18:58:30 -03:00
Adrià Sala
46403e4312 fix: invalid f-string and oidc url for insights plugin 2025-01-20 17:48:19 +01:00
Adrià Sala
492c7a1af6 feat: support insights service account credentials for project update 2025-01-17 16:30:07 +01:00
Adrià Sala
a19e1ba28f feat: update insights action plugin to handle oauth (#15742) 2025-01-16 10:44:40 +01:00
Jake Jackson
f05173cb65 Add new credential entry point discovery (#15685)
* - add new entry points
- add logic to check what version of the project is running

* remove former discovery method

* update custom_injectors and remove unused import

* fix how we load external creds

* remove stale code to match devel

* fix cloudforms test and move credential loading

* add load credentials method to get tests passing

* Conditionalize integration tests if the cred is present

* remove inventory source test

* inventory source is covered in the workflow job template target
2025-01-15 16:10:28 -05:00
Chris Meyers
e106e10b49 Add changelog to awx collection 2025-01-15 15:09:28 -05:00
Alan Rominger
f57a9863d6 Use advisory_lock from DAB (#15676)
* Use advisory_lock from DAB

* Remove the django-pglocks dep

* Re-run updater script

* Move the import in new location
2025-01-15 14:06:59 -05:00
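A minimal usage sketch of the helper after the move; the `ansible_base.lib.utils.db` import path is an assumption based on this commit's description, and the call signature mirrors the old django-pglocks one.

```
# Usage sketch of the advisory-lock helper moved into DAB; the import
# path is an assumption, not verified against a specific DAB release.
from ansible_base.lib.utils.db import advisory_lock

def process_job_events(job_id):
    # Only one process may hold this named postgres advisory lock at a
    # time; wait=False returns immediately instead of blocking.
    with advisory_lock(f'process_job_events_{job_id}', wait=False) as acquired:
        if not acquired:
            return  # another worker already owns this job's events
        ...  # do the exclusive work here
```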
Chris Meyers
bb8d878a36 Bump awx collection ansible required version 2025-01-15 13:49:09 -05:00
Chris Meyers
885cb8846f Remove coarse grain unused import
* It would seem that fine-grain noqa pylint ignores do the job and are
  already in place. Prefer that over the coarse entire file ignore.
2025-01-15 13:49:09 -05:00
Chris Meyers
d51d4eb392 Fix ansible-lint empty lines in module docstrings 2025-01-15 13:49:09 -05:00
Chris Meyers
ae0d6b70a0 Fix ansible-lint truthy in module docstrings 2025-01-15 13:49:09 -05:00
Chris Meyers
cc6337b344 Fix ansible-lint indentation in module docstrings 2025-01-15 13:49:09 -05:00
Chris Meyers
c185ff51a7 Fix editable dependencies volume name
Spelling of docker volume fix.
2025-01-15 13:48:57 -05:00
Lila Yasin
211339ce73 Add client_secret and client_id to credential_input_fields (#15734) 2025-01-15 13:32:33 -05:00
Peter Braun
3268c9b5fe Add input_inventories to ordered_associations (#15710) (#6782)
Co-authored-by: rev3r4nt <rev3r4nt@gmail.com>
2025-01-15 13:46:38 +01:00
Alan Rominger
c45eb43d63 Update logstash container image and remove ELK stack (#15744)
* Migrate to new image for logstash container

* Remove ELK stack tooling I will not maintain
2025-01-15 07:43:38 -05:00
Hao Liu
f89be5ec8b Switch from dockerhub to gcr mirror (#15743) 2025-01-13 17:03:24 -05:00
Chris Meyers
8ab89d29ca bust the cache 2025-01-13 15:01:18 -05:00
Chris Meyers
ec2966225b Add insights service account support to collection 2025-01-13 15:01:18 -05:00
Alan Rominger
fb12c834eb Add test to ensure bootstrap reqs are good (#15733)
* Add test to ensure bootstrap reqs are good

* Give full diff list in assert
2025-01-13 14:31:19 -05:00
rev3r4nt
6228fe9b66 Add input_inventories to ordered_associations (#15710) 2025-01-13 12:30:16 +01:00
Alan Rominger
c1572af1d4 Fix dependency upgrades (#15740)
* Update dependencies to fix offline build

* Downgrade cryptography due to compatibility issue with openssl

* Downgrade setuptools

* Run update script to assure constraints work

* Maintain pin on cryptography

* Small adjustment to comment

---------

Co-authored-by: Satoe Imaishi <simaishi@redhat.com>
2025-01-10 21:18:48 +00:00
Alan Rominger
3e50b019e0 Delete test file that should have been removed and fix checks (#15739)
* Delete test file that should have been removed

* Add more insights env variables
2025-01-10 13:10:24 -05:00
Alan Rominger
2d7bbc4ec8 AAP-37080 Delete the cleanup_tokens system job template (#15711)
Delete the cleanup_tokens system job template
2025-01-10 10:33:35 -05:00
Alan Rominger
f7cda7696c Bugfix: adjust incorrectly passed keywords with exclude-strings argument (#15721) (#6777)
* Fix incorrectly passed keywords with exclude-strings arg to ansible-runner worker cleanup command



* Keep the quotes for each arg and adjust test_receptor

---------

Signed-off-by: Sasa Jovicic <jovicic.sasa@hotmail.com>
Co-authored-by: Sasa Jovicic <jovicic.sasa@hotmail.com>
2025-01-08 13:44:10 -05:00
Jake Jackson
a209751f22 Fix CVE-2024-56201 update jinja2 (#6778) 2025-01-08 13:42:42 -05:00
Alan Rominger
5944d041e6 AAP-36536 Send job_lifecycle logs to external loggers (#15701) (#6776)
* Send job_lifecycle logs to external loggers

* Include structured data in message

* Attach the organization_id of the job
2025-01-07 15:23:52 -05:00
Roman Petrakov
56079612c8 Fix API documentation rendering (#15116) (#15726)
* Fix API documentation rendering related #15116

* Fix tests and formatting issues #15116
2025-01-07 15:09:18 -05:00
Alan Rominger
2186c24c8f General upgrade of dependencies (#15705)
* General upgrade of dependencies

* adjust licenses to match requirements

* add missing licenses

* another pass to fix licenses

* Try easy fix for psycopg encoding pattern change

---------

Co-authored-by: jessicamack <jmack@redhat.com>
2025-01-07 15:03:43 -05:00
TVo
9c732d2406 AAP-36522 Opened PR to test and fix deprecation error in tower 4.6 checks (#6772)
* Opened PR to test AAP-36522.

* Test pin specific ansible-core version in the run_awx_devel actions.

* Fixed syntax for ansible-core version.

* updated makefile to match syntax for ansible version

* Reverts makefile changes; fixed syntax for upgrade ansible-core action.
2025-01-03 12:01:21 -05:00
Alan Rominger
9a5ed20ed5 AAP-37989 Tests for exclude list with multiple jobs (#15722)
* Tests for exclude list with multiple jobs
2025-01-02 16:38:09 -05:00
Alan Rominger
2657ea840b Disable color logs in CI (#15719)
* Disable color logs in CI

* Disable management command color
2025-01-02 16:19:59 -05:00
Sasa Jovicic
7835e39bac Bugfix: adjust incorrectly passed keywords with exclude-strings argument (#15721)
* Fix incorrectly passed keywords with exclude-strings arg to ansible-runner worker cleanup command

Signed-off-by: Sasa Jovicic <jovicic.sasa@hotmail.com>

* Keep the quotes for each arg and adjust test_receptor

---------

Signed-off-by: Sasa Jovicic <jovicic.sasa@hotmail.com>
2025-01-02 16:16:14 -05:00
Lila Yasin
b215699586 🧪 Run sanity tests w/ ansible-test-gh-action (#15539) (#6773)
* 🧪 Run sanity tests w/ `ansible-test-gh-action`

* 🧪 Upload sanity results to unified dashboard

Co-authored-by: Sviatoslav Sydorenko (Святослав Сидоренко) <wk@sydorenko.org.ua>
2025-01-02 13:10:51 -05:00
Alan Rominger
14808cb99b Move RBAC functional tests into folder (#15723) 2024-12-20 14:54:52 -05:00
Chris Meyers
cf9e6796ea Move cred type unite tests to awx-plugins 2024-12-19 14:03:27 -05:00
Chris Meyers
bd96000494 Remove inject_credential from awx
* Consume inject_credential from its new home, awx_plugins.interfaces
2024-12-19 09:48:47 -05:00
Chris Meyers
ac34e14228 Point at inject credentials 2024-12-19 09:48:47 -05:00
Andrea Restle-Lay
1b418f75e6 AAP-36604 (analytics) Thousands of zombie/orphaned Slow/Stuck DB queries in controller querying active host count (#15715)
* lint

* change timeout to 5 minutes

* change timeout to 5 minutes
2024-12-18 22:12:52 +00:00
Alan Rominger
288e8d78d3 Cleanup in-memory data from test that randomly causes other failures (#15716) 2024-12-18 16:59:42 -05:00
Alan Rominger
c0158181c3 Fix test warnings that escaped somehow (#15714) 2024-12-18 15:21:53 -05:00
Alan Rominger
65b104e1f9 Upload container logs for live tests (#15713)
* Upload container logs for live tests

* Get rid of dash that does nothing
2024-12-18 20:17:01 +00:00
Elijah DeLee
29f36793de Min value should be Decimal (#15413)
This hopefully resolves an error message sometimes seen in logs about "should be Decimal type"
2024-12-17 12:45:39 -05:00
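A sketch of the shape of this fix, assuming a Django DecimalField with a validator; the model and field are illustrative, not AWX's.

```
# Sketch: a DecimalField's validator should be given a Decimal, not a
# float/int, or Django can complain about the mismatched type.
from decimal import Decimal

from django.core.validators import MinValueValidator
from django.db import models

class InstanceGroup(models.Model):  # illustrative model, not AWX's
    policy_instance_percentage = models.DecimalField(
        max_digits=5,
        decimal_places=2,
        validators=[MinValueValidator(Decimal('0'))],  # Decimal, not 0.0
    )
```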
Andrea Restle-Lay
38f72ac7ea host_metrics date fix to make summary dates (datetime.datetime) comparable to month: datetime.date (#15704) (#6770)
* host_metrics date fix

* AAP-36839 Remove excess comments

* fix extra date() conversion

* actual fix

* datetime is a library, use datetime.datetime

---------

Co-authored-by: Andrea Restle-Lay <arestlel@arestlel-thinkpadx1carbongen9.rht.csb>
2024-12-17 11:14:59 -05:00
Alan Rominger
36c75a2c62 AAP-36536 Send job_lifecycle logs to external loggers (#15701)
* Send job_lifecycle logs to external loggers

* Include structured data in message

* Attach the organization_id of the job
2024-12-16 15:49:16 -05:00
Pablo H.
b361aef0fb chore: addressing CVE 2024-53908 (#6768) 2024-12-16 14:16:00 -05:00
Seth Foster
df79fa4ae1 bump grpcio CVE-2024-11407 (#6766)
Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2024-12-16 13:23:24 -05:00
Andrea Restle-Lay
86d202456a host_metrics date fix to make summary dates (datetime.datetime) comparable to month: datetime.date (#15704)
* host_metrics date fix

* AAP-36839 Remove excess comments

* fix extra date() conversion

* actual fix

* datetime is a library, use datetime.datetime

---------

Co-authored-by: Andrea Restle-Lay <arestlel@arestlel-thinkpadx1carbongen9.rht.csb>
2024-12-16 12:10:59 -05:00
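The one-line gotcha behind this fix, sketched:

```
# Comparing datetime.datetime against datetime.date raises TypeError,
# so normalize with .date() before comparing.
import datetime

summary = datetime.datetime(2024, 12, 1, 10, 30)
month = datetime.date(2024, 12, 1)

try:
    summary <= month
except TypeError as exc:
    print(exc)  # can't compare datetime.datetime to datetime.date

print(summary.date() <= month)  # True
```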
jessicamack
c1f0a831ff Pull the correct collection plugin for the product (#15658)
* pull the correct collection plugin for the product

* remove unused import and logging line

* refactor code to load entry points

* reformat method

* lint fix

* renames for clarity and a lint fix

* move function to utils

* move the rest of the code into load_inventory_plugins

* temp - confirm that tests will pass

* revert change caught in merge

* change back requirement

the related PR has been merged
2024-12-16 11:05:21 -05:00
Seth Foster
e605883592 Do not fast forward rrule if count is set (#15696)
Fixes a bug where a schedule that was created
to run only once will continue to run repeatedly.

e.g. an rrule with
dtstart 20240730; count 1; freq MINUTELY

This job will run on 20240730, and should never
run again.

However, the next time the schedule
update_computed_fields runs, the dtstart
will fast forward to today's date, and
next_run will be computed from that. This will trigger
the job to run again, which is not intended.

If count is set, we just should not fast forward the
rrule and always calculate next_run based on original
dtstart.

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2024-12-11 11:14:58 -05:00
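A minimal sketch of the guard described above using dateutil; note it peeks at the private `_count` attribute purely for illustration.

```
# Sketch: if the rrule carries COUNT, skip fast forwarding so next_run
# stays anchored to the original dtstart. Illustrative only.
from datetime import datetime, timezone
from dateutil import rrule

rule = rrule.rrulestr('DTSTART:20240730T120000Z\nRRULE:FREQ=MINUTELY;COUNT=1')

def should_fast_forward(rule) -> bool:
    # _count is a private dateutil attribute, used here only to show the
    # idea: a COUNT-limited schedule must not be re-anchored, or it would
    # fire again past its intended occurrences.
    return rule._count is None

print(should_fast_forward(rule))              # False -> leave dtstart alone
print(rule.after(datetime.now(timezone.utc)))  # None: no future runs
```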
Alan Rominger
f377b5fdde Use runtime log utility moved to DAB (#15675)
* Use runtime log utility moved to DAB
2024-12-11 10:38:24 -05:00
Seth Foster
efbe729c42 bump sqlparse to meet DAB requirement (#15697)
Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2024-12-10 18:20:14 -05:00
Seth Foster
9c2de6b535 Do not fast forward rrule if count is set (#6764)
Fixes a bug where a schedule that was created
to run only once will continue to run repeatedly.

e.g. an rrule with
dtstart 20240730; count 1; freq MINUTELY

This job will run on 20240730, and should never
run again.

However, the next time the schedule
update_computed_fields runs, the dtstart
will fast forward to today's date, and
next_run will be computed from that. This will trigger
the job to run again, which is not intended.

If count is set, we just should not fast forward the
rrule and always calculate next_run based on original
dtstart.

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2024-12-10 16:14:46 -05:00
Peter Braun
0ce0023561 Fix: invalid response type on post request (#15609) (#6763) 2024-12-10 16:12:49 -05:00
Alan Rominger
32122e6822 Fix misused project cache identifier (#15690)
Fix project cache identifiers for new updates

Finish test and discover viable solution

Add comment on related task code
2024-12-10 15:26:17 -05:00
Alan Rominger
f2ae68f302 Fix project cache identifiers for new updates (#6762)
Finish test and discover viable solution

Add comment on related task code
2024-12-10 15:23:54 -05:00
Alan Rominger
b3542c226d Fix error creating partition due to uncaught exception (#6761)
the primary fix is to simply add an exception class
  to those caught in the except block

This also adds live tests for the general scenario
  although this does not hit the new exception type
2024-12-10 15:22:09 -05:00
Chris Meyers
a129bc860b Flake8 fix 2024-12-10 13:02:09 -05:00
Chris Meyers
c82a8f4b9c Add custom_injectors to test code path
* Unit tests do not create CredentialType records for Credential
  plugins. Instead, they explicitly instantiate CredentialType(s) for
  Credential plugins. They rely on CredentialType.defaults[key] to do
  so. This change makes sure custom_injectors get bolted onto the
  created CredentialType.
2024-12-10 13:02:09 -05:00
Chris Meyers
99c18b681d Load all plugins in order to test them 2024-12-10 13:02:09 -05:00
Chris Meyers
aeca9db470 Rename post_injectors to custom_injectors 2024-12-10 13:02:09 -05:00
Chris Meyers
4b85e7e25a Adopt post_injectors change from awx-plugins 2024-12-10 13:02:09 -05:00
Alan Rominger
325b6d3997 Fix missing exception catch in create_partition (#15691)
Fix error creating partition due to uncaught exception

the primary fix is to simply add an exception class
  to those caught in the except block

This also adds live tests for the general scenario
  although this does not hit the new exception type
2024-12-09 20:57:14 -05:00
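A sketch of the fix's shape, assuming the race is between concurrent `CREATE TABLE ... PARTITION OF` statements; this is illustrative, not AWX's actual create_partition.

```
# When two processes race to create the same partition, the loser can
# surface a ProgrammingError ("already exists") rather than an
# IntegrityError, so catch both. Illustrative sketch only.
from django.db import IntegrityError, ProgrammingError, connection

def create_partition(table, start, end, name):
    sql = (
        f'CREATE TABLE {table}_{name} PARTITION OF {table} '
        f"FOR VALUES FROM ('{start}') TO ('{end}');"
    )
    try:
        with connection.cursor() as cursor:
            cursor.execute(sql)
    except (IntegrityError, ProgrammingError):
        pass  # another process created it first; safe to continue
```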
Alan Rominger
91d92a6636 Make dev script work in combined environment (#15684) 2024-12-09 09:07:17 -05:00
Peter Braun
56d3933154 feat: enable django flags support (#15660) (#6755)
* feat: enable django flags support

* add django flags license

* re-run updater script
2024-12-09 09:40:28 +01:00
Peter Braun
a1ec28aeb9 fix: reset state before evaluating named urls (#15683) (#6756) 2024-12-09 09:40:13 +01:00
Peter Braun
148afce455 deps: receptorctl v1.5.1 (#6760) 2024-12-06 16:12:58 +01:00
Alan Rominger
1a35775c25 Create a new pytest folder for live system testing with normal services (#15688)
* PoC for running dev env tests

* Replace in github actions

* Try non interactive

* Move folder to better location

* Further streamlining of new test folders

* Consolidate fixture, add writeup docs

* Use star import

* Push the wait-for-job to the conftest
2024-12-06 09:20:26 -05:00
Hao Liu
f3b86b5193 Update defaults.py receptor typo (#15682) (#6754)
Fixing typo for RECEPTOR_KEEP_WORK_ON_ERROR

Co-authored-by: linuxonfire <jaimescampositzel@gmail.com>
2024-12-05 09:35:21 -05:00
linuxonfire
698a8aeb62 Update defaults.py receptor typo (#15682)
Update defaults.py

fixing typo for RECEPTOR_KEEP_WORK_ON_ERROR
2024-12-04 17:11:58 +00:00
Peter Braun
055d853c54 fix: reset state before evaluating named urls (#15683) 2024-12-04 15:53:24 +01:00
Hao Liu
82c967a66e Fix receptor work unit release after completion (#15679) (#6751)
Fix bug introduced by https://github.com/ansible/awx/pull/15392 that caused workunit to NOT be auto-released after job completes
2024-12-03 12:28:55 -05:00
Hao Liu
cb04ad8ef5 Fix receptor work unit release after completion (#15679)
Fix bug introduced by https://github.com/ansible/awx/pull/15392 that caused workunit to NOT be auto-released after job completes
2024-12-03 11:42:09 -05:00
Peter Braun
f62dfdad2d feat: enable django flags support (#15660)
* feat: enable django flags support

* add django flags license

* re-run updater script
2024-12-03 14:33:10 +01:00
Don Naro
3ceca1b4c7 use subproject url prefix (#15681)
* use subproject url prefix

* add version details
2024-12-03 12:01:28 +00:00
Don Naro
cdb294c5c7 Add the Sphinx notfound page extension (#15669)
* add sphinx notfound extension
* add notfound conf
* upgrade requirements
* use double backticks
* add urls prefix
2024-12-03 10:26:06 +00:00
Alan Rominger
c64b5eb462 Fix missing dependencies due to extras - vs _ (#15677)
Fix missing dependencies
2024-12-02 13:32:27 -05:00
Alan Rominger
adc2162bac Ignore warnings so people can run tests on python 3.12 (#15663) 2024-12-02 11:53:04 -05:00
Chris Meyers
e411f3534f Decouple inject_credentials from dynamic inputs
* Preparation for moving inject_credentials out of this repo
2024-12-02 11:32:46 -05:00
Don Naro
699c0c769d add custom 404 page (#15668)
* add custom 404 page

* cowsay 404
2024-11-26 12:04:55 -07:00
Pablo H.
268ca7c78a Remove oauth provider (#15666)
* Remove oauth provider

This removes the oauth provider functionality from awx. The
oauth2_provider app and all references to it have been removed.
Migrations to delete the two tables that locally overwrote
oauth2_provider tables are included. This change does not include
migrations to delete the tables provided by the oauth2_provider app.

Also not included here are changes to awxkit, awx_collection or the ui.

* Fix linters

* Update migrations after rebase

* Update collection tests for auth changes

The changes in https://github.com/ansible/awx/pull/15554 will cause a
few collection tests to fail, depending on what the test configuration
is. This changes the tests to look for a specific warning rather than
counting the number of warnings emitted.

* Update migration

* Removed unused oauth_scopes references

---------

Co-authored-by: Mike Graves <mgraves@redhat.com>
Co-authored-by: Alan Rominger <arominge@redhat.com>
2024-11-26 18:59:37 +01:00
Alan Rominger
789a43077f Address unclosed fd warnings 2024-11-25 14:01:21 -05:00
Sviatoslav Sydorenko
d8e87da898 🧪 Make pytest notify us about future warnings
In essence, this configures Python to turn any warnings emitted in
runtime into errors[[1]]. This is the best practice that allows
reacting to future deprecation announcements that are coming from the
dependencies (direct, or transitive, or even CPython itself)[[2]].

The typical workflow looks like this:

  1. If a dependency is updated and a warning is hit in tests, the
     deprecated thing should be replaced with newer APIs.

  2. If a dependency is transitive or we have no control over it
     otherwise, the specific warning and a regex matching its message,
     plus the module reference (where possible) can be added to the
     list of temporary ignores in `pytest.ini`.

  3. The list of temporary ignores should be reevaluated periodically,
     including when dependency re-pinning in lockfile is happening.

[1]: https://docs.python.org/3/using/cmdline.html#cmdoption-W
[2]: https://pytest-with-eric.com/configuration/pytest-ignore-warnings/
2024-11-25 14:01:21 -05:00
Peter Braun
8174a28716 update receptorctl to v1.5.0 (#6749) 2024-11-25 15:37:01 +01:00
Lila Yasin
4bbcb34ae3 Add descriptions for plugin names (#15643)
* Add descriptions for plugin names

* Update serializers to display plugin and plugin description

* Add function to extract plugin name descriptions

* Add description for scm

* Conditionalize scm and file descriptions
2024-11-25 09:20:22 -05:00
Seth Foster
9c556db4c0 Make rrule fast forwarding stable (#15601) (#6744)
By stable, we mean future occurrences of the rrule
should be the same before and after the fast forward
operation.

The problem before was that we were fast forwarding to
7 days ago. For some rrules, this does not retain the old
occurrences. Thus, jobs would launch at unexpected times.

This change makes sure we fast forward in increments of
the rrule INTERVAL, thus the new dtstart should be in the
occurrence list of the old rrule.

Additionally, code is updated to fast forward
EXRULE (exclusion rules) in addition to RRULE

---------

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
Co-authored-by: 🇺🇦 Sviatoslav Sydorenko (Святослав Сидоренко) <wk.cvs.github@sydorenko.org.ua>
2024-11-24 11:31:30 -05:00
Justin Downie
1d12f0c837 Back port for awx podAntiAffinity spec (#6733) 2024-11-22 16:15:26 -05:00
Satoe Imaishi
71a18c0d61 Bump uwsgi to 2.0.28 (#6736) 2024-11-22 10:54:52 -05:00
TVo
790875ceef Removed UI-focused user docs from AWX. (#15641)
* Replaced with larger graphic.

* Revert "Replaced with larger graphic."

This reverts commit 1214b00052.

* Removed UI-focused user docs from AWX.

* Fixed indentation for release notes

* Removed/updated image files no longer needed.
2024-11-22 07:43:43 -07:00
Alan Rominger
d2cd4e08c5 Do not check error state if null (#15655) 2024-11-22 07:47:48 -05:00
Hao Liu
c55fb369fa Update receptorctl to 1.4.11 (#6746) 2024-11-21 16:31:09 -05:00
Jake Jackson
2c3b4ff5d7 [4.6][dependency] update aiohttp to address vuln CVE-2024-52304 (#6740)
* update aiohttp to address vuln CVE-2024-52304

* add licenses for new deps
2024-11-21 16:21:34 -05:00
Alan Rominger
ce7911e578 Revive the logstash container for testing (#15654)
* Revive the logstash container for testing

* yamllint
2024-11-21 20:11:16 +00:00
Seth Foster
51896f0e1b Make rrule fast forwarding stable (#15601)
By stable, we mean future occurrences of the rrule
should be the same before and after the fast forward
operation.

The problem before was that we were fast forwarding to
7 days ago. For some rrules, this does not retain the old
occurrences. Thus, jobs would launch at unexpected times.

This change makes sure we fast forward in increments of
the rrule INTERVAL, thus the new dtstart should be in the
occurrence list of the old rrule.

Additionally, code is updated to fast forward
EXRULE (exclusion rules) in addition to RRULE

---------

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
Co-authored-by: 🇺🇦 Sviatoslav Sydorenko (Святослав Сидоренко) <wk.cvs.github@sydorenko.org.ua>
2024-11-21 14:05:49 -05:00
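A minimal sketch of interval-aligned fast forwarding as described above, with a fixed hourly INTERVAL standing in for the parsed rrule's:

```
# Jump dtstart forward by a whole number of INTERVAL periods so the new
# dtstart is itself an occurrence of the old rule. Illustrative only.
from datetime import datetime, timedelta, timezone

def fast_forward_dtstart(dtstart, interval_hours, now=None):
    now = now or datetime.now(timezone.utc)
    if dtstart >= now:
        return dtstart
    step = timedelta(hours=interval_hours)
    periods_behind = (now - dtstart) // step  # floor division on timedeltas
    return dtstart + periods_behind * step    # still on the old grid

old = datetime(2020, 1, 1, 3, 0, tzinfo=timezone.utc)
new = fast_forward_dtstart(old, interval_hours=7)
print((new - old) % timedelta(hours=7) == timedelta(0))  # True: same grid
```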
Pablo H.
3ba6e2e394 feat: remove collection support for oauth (#15623)
Co-authored-by: Alan Rominger <arominge@redhat.com>
2024-11-20 11:18:52 -05:00
Alan Rominger
6599f3f827 Removal of OAuth2 stuff from CLI
also from awxkit generally

Remove login command
2024-11-20 11:18:52 -05:00
Alan Rominger
670b7e7754 Fix server error from system job detail view (#15640) 2024-11-19 12:53:32 -05:00
Chris Meyers
108cf843d4 Add option to skip credential type discovery
* Option to avoid database operations in django init path. Useful for
  running collectstatic, or other management commands, without a database.
2024-11-18 14:18:24 -05:00
Alan Rominger
d26396ce74 Fix error with CLI monitor of ad hoc output (#15642) 2024-11-18 13:52:20 -05:00
Alan Rominger
3dbcfb138c Add test that resource list does not server error (#15635) 2024-11-15 11:12:18 -05:00
Peter Braun
54487573f3 fix: invalid response type on post request (#15609) 2024-11-14 12:58:55 +01:00
TVo
943964e14f 4.6_Backport changes made to aim.py from AWX. (#6739) 2024-11-13 12:16:25 -07:00
Alan Rominger
989a4387df Set coverage limits so we do not have current failures (#15629)
* Set coverage limits so we do not have current failures

* Fix to reflect changed number of jobs
2024-11-12 13:50:12 -05:00
Alan Rominger
c9f880414c Make lookup plugins return lists to fix failures (#15625)
* Make lookup plugins return lists to fix failures

* Update unit tests

* Use lookup for test failures, update docs

* Grammar fix from review

Co-authored-by: Sviatoslav Sydorenko (Святослав Сидоренко) <wk.cvs.github@sydorenko.org.ua>

---------

Co-authored-by: Sviatoslav Sydorenko (Святослав Сидоренко) <wk.cvs.github@sydorenko.org.ua>
2024-11-12 12:37:38 -05:00
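The contract this fixes, sketched as a bare-bones lookup plugin; the fetched value is a stand-in.

```
# An Ansible lookup plugin's run() must return a list, even for a
# single value. Illustrative sketch only.
from ansible.plugins.lookup import LookupBase

class LookupModule(LookupBase):
    def run(self, terms, variables=None, **kwargs):
        results = []
        for term in terms:
            secret = f'value-for-{term}'  # stand-in for the real fetch
            results.append(secret)        # never return a bare string
        return results
```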
Lila Yasin
6f184e3f76 Fix for 'relation "social_auth_usersocialauth" does not exist' error (#15626)
* Ran updater.sh

* Remove uneeded licenses
2024-11-11 14:40:33 -05:00
Peter Braun
69baa739fa feat: install awx collection from source (#15617) 2024-11-11 12:30:39 +01:00
Chris Meyers
d388f91bcd Metrics dispatcher callback receiver swaparoo 2024-11-08 00:06:17 -05:00
Chris Meyers
51b1fa412d Install awx collection from branch for operator ci 2024-11-07 15:17:06 -05:00
TVo
dfee5a1821 Updated Authentication section to reflect AWX only method. (#15602)
* Updated Authentication section to reflect AWX only method.

* Update awxkit/awxkit/cli/docs/source/authentication.rst

---------

Co-authored-by: Helen Bailey <hakbailey@gmail.com>
2024-11-07 16:02:48 +00:00
TVo
aa162c6128 Removed oAuth methods from collection docs. (#15606)
* Removed oAuth methods from collection docs.
2024-11-07 15:58:31 +00:00
Alan Rominger
f4cbb9f9a8 Fix bug where unrelated jobs were linked as dependencies (#15610) 2024-11-06 14:43:36 -05:00
Alan Rominger
b97240417a Fix bug where unrelated jobs were linked as dependencies (#6735) 2024-11-06 14:41:57 -05:00
Peter Braun
6195e8e879 fix: increase max verbosity level for constructed inventory (#15604) 2024-11-05 16:44:21 +01:00
Kersom
6d959daca1 Revert "[4.6] Bump dependency (#6726)" (#6731)
This reverts commit 8fbe0c2b1f.
2024-10-31 11:36:22 -04:00
Kersom
8fbe0c2b1f [4.6] Bump dependency (#6726)
* Bump dependency

* Delete mini-create-react-context.txt

---------

Co-authored-by: Hao Liu <44379968+TheRealHaoLiu@users.noreply.github.com>
2024-10-30 17:15:59 +00:00
Alan Rominger
68055bb89f Add back git requirements as comments & re-run script (#15317)
* Add back git requirements as comments

* Add comment to commented out git lines for clarity

* Re run the updater script

* Add new licenses

* Fix library name
2024-10-28 19:44:06 -04:00
Lila Yasin
e21dd0a093 Make cloud providers dynamic (#15537)
* Add dynamic pull for cloud inventory plugins and update corresponding tests

Co-authored-by: Sviatoslav Sydorenko (Святослав Сидоренко) <wk.cvs.github@sydorenko.org.ua>

* Create third dictionary to preserve current functionality and add 'file' there

* Migrations for corresponding change

---------

Co-authored-by: Sviatoslav Sydorenko (Святослав Сидоренко) <wk.cvs.github@sydorenko.org.ua>
2024-10-23 11:30:00 -04:00
Seth Foster
c85fa70745 bump django 4.2.16 to be in line with DAB (#15596)
Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2024-10-22 15:40:18 -04:00
David Newswanger
23528b7fef Add service backed SSO stage to saml pipeline (#6722) 2024-10-21 16:17:08 -04:00
Hao Liu
784ff3193d Pin DAB to 2024.10.17 (#6721) 2024-10-21 19:25:05 +00:00
Hao Liu
7972486594 Update receptorctl to 1.4.9 (#6718) 2024-10-17 11:27:21 -04:00
Chris Meyers
dbdbc7635a Redirect user to platform supported collection
* AAP 2.5 Controller 4.6 Org, User, and Team endpoints are restricted.
  When the user performs a restricted operation via the Controller
  collection, kindly notify them that they should be using the platform
  collection instead.
2024-10-17 11:02:19 -04:00
Rick Elrod
4820b084c1 Prettier DRF pages when using trusted proxy (#15579) (#6717)
This is rather hacky, but it fixes the DRF pages when going through a
trusted proxy.

Notably: This is meant to primarily fix the DRF pages on downstream
builds while leaving the upstream to function as-is.

When using a trusted proxy, the DRF login and logout endpoints now
redirect to the Platform login page (which respects ?next) and logout
endpoint respectively.

The CSS and JS are inlined because the trusted proxy might only proxy
to /api/ and not /static/, which is a harder problem to solve.

Signed-off-by: Rick Elrod <rick@elrod.me>
2024-10-16 13:58:01 -04:00
Mike Graves
764dcbf94b Add gateway support to awxkit (#15576)
* Add gateway support to awxkit

This updates awxkit to add support for gateway when fetching oauth
tokens, which is used during the `login` subcommand. awxkit will first
try fetching a token from gateway and if that fails, fallback to
existing behavior. This change is backwards compatible.

Signed-off-by: Mike Graves <mgraves@redhat.com>

* Address review feedback

This:
  * adds coverage for the get_oauth2_token() method
  * changes AuthUrls to a TypedDict
  * changes the url used for personal token access in gateway

* Address review feedback

This is just minor stylistic changes.

---------

Signed-off-by: Mike Graves <mgraves@redhat.com>
2024-10-16 12:01:30 -04:00
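A rough sketch of the try-gateway-then-fall-back flow described above; the URLs and payloads are assumptions for illustration and do not reproduce awxkit's real endpoints.

```
# Sketch: try the gateway token endpoint first, then the legacy
# controller endpoint. URLs and payloads are illustrative assumptions.
import requests

def get_oauth2_token(base_url, username, password):
    gateway_url = f'{base_url}/api/gateway/v1/tokens/'
    legacy_url = f'{base_url}/api/v2/users/{username}/personal_tokens/'
    for url in (gateway_url, legacy_url):
        resp = requests.post(url, auth=(username, password), json={})
        if resp.ok:
            return resp.json()['token']
    resp.raise_for_status()
```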
jessicamack
42420ebde6 remove oauth use 2024-10-16 10:50:34 -04:00
Hao Liu
31e47706b9 3rd party auth removal cleanup
- Sequentiallize auth config removal migrations
- Remove references to third party auth
- update license files
- lint fix
- Remove unneeded docs
- Remove unreferenced file
- Remove social auth references from docs
- Remove rest of sso dir
- Remove references to third part auth in docs
- Removed screenshots of UI listing removed settings
- Remove AuthView references
- Remove unused imports
...

Co-Authored-By: jessicamack <21223244+jessicamack@users.noreply.github.com>
2024-10-15 17:43:32 -04:00
Djebran Lezzoum
4c7697465b Remove sso app (#15550)
Remove sso app.
2024-10-15 17:43:32 -04:00
jessicamack
1ca034b0a7 Remove SAML authentication (#15568)
* remove saml

* remove license file and management command

* update requirements, add migrations

* remove unused imports
2024-10-15 17:43:32 -04:00
jessicamack
bf09b95b61 Remove OIDC (#15569)
* remove oidc

* remove test fields, linting fix

* merge commit
2024-10-15 17:43:32 -04:00
TVo
65817d4fa4 Removed more mentions about SAML. (#15565)
* Removed docs associated with SAML auth.

* Removed more mentions about SAML.

---------

Co-authored-by: jessicamack <jmack@redhat.com>
2024-10-15 17:43:32 -04:00
jessicamack
0f0919937d Remove Keycloak (#15567)
remove keycloak
2024-10-15 17:43:32 -04:00
Djebran Lezzoum
bcd006f1a5 Remove social oauth (Azure, Github, Google) (#15549)
Remove social oauth (Azure, Github, Google)

Co-authored-by: jessicamack <jmack@redhat.com>
2024-10-15 17:43:32 -04:00
Djebran Lezzoum
2c2694ce89 Remove RADIUS authentication (#15548)
Remove RADIUS authentication from AWX

Do not remove model fields and tables; leave that for a later stage when all the work of removing external auth is finished (AAP-27707)

Co-authored-by: Hao Liu <44379968+TheRealHaoLiu@users.noreply.github.com>
2024-10-15 17:43:32 -04:00
Djebran Lezzoum
e4c11561cc Remove TACACS+ authentication (#15547)
Remove TACACS+ authentication from AWX.

Co-authored-by: Hao Liu <44379968+TheRealHaoLiu@users.noreply.github.com>
2024-10-15 17:43:32 -04:00
Djebran Lezzoum
f22b192fb4 Remove LDAP authentication (#15546)
Remove LDAP authentication from AWX
2024-10-15 17:43:32 -04:00
Rick Elrod
6dea7bfe17 Prettier DRF pages when using trusted proxy (#15579)
This is rather hacky, but it fixes the DRF pages when going through a
trusted proxy.

Notably: This is meant to primarily fix the DRF pages on downstream
builds while leaving the upstream to function as-is.

When using a trusted proxy, the DRF login and logout endpoints now
redirect to the Platform login page (which respects ?next) and logout
endpoint respectively.

The CSS and JS are inlined because the trusted proxy might only proxy
to /api/ and not /static/, which is a harder problem to solve.

Signed-off-by: Rick Elrod <rick@elrod.me>
2024-10-15 15:50:11 -05:00
Hao Liu
d5388b3c56 Remove ui_next static file dir (#6715)
UI_NEXT is no longer being served in 4.6

Removing static file dir for ui-next fixes

```
WARNINGS:
?: (staticfiles.W004) The directory '/var/lib/awx/venv/awx/lib64/python3.11/site-packages/awx/ui_next/build' in the STATICFILES_DIRS setting does not exist.
CommandError: Inventory with id = 1038181458411569545 cannot be found
```
2024-10-15 19:58:33 +00:00
Hao Liu
1acf8cfde6 Add split-up inventory source plugins (#15584)
* Add split-up inventory source plugins

Fix CI failure introduced by
7d83b7dfdb
2024-10-15 15:55:33 -04:00
Hao Liu
433974aea6 [4.6][CI]Fix CI for newer debian image (#15583) (#6716)
* Fix CI for newer debian image (#15583)

* Fix CI for newer debian image

Signed-off-by: Rick Elrod <rick@elrod.me>

* Missed one

Signed-off-by: Rick Elrod <rick@elrod.me>

---------

Signed-off-by: Rick Elrod <rick@elrod.me>

* Update ci.yml

---------

Signed-off-by: Rick Elrod <rick@elrod.me>
Co-authored-by: Rick Elrod <rick@elrod.me>
2024-10-15 15:33:55 -04:00
Rick Elrod
dbe6fcc4e7 Fix CI for newer debian image (#15583)
* Fix CI for newer debian image

Signed-off-by: Rick Elrod <rick@elrod.me>

* Missed one

Signed-off-by: Rick Elrod <rick@elrod.me>

---------

Signed-off-by: Rick Elrod <rick@elrod.me>
2024-10-14 17:49:48 -04:00
Justin Downie
825a02c86a Adding podAntiAffinity (#15578) 2024-10-08 16:29:57 -04:00
Djebran Lezzoum
579c2b7229 Update AWX collection to use basic authentication (#15554)
Update AWX collection to use basic authentication when oauth token not provided,
and when username and password provided.
2024-10-08 10:42:22 -04:00
Tomas Z
d1c85dae4d Upgrade django and sqlparse to pickup CVE fixes (#6709) 2024-10-04 15:51:12 -04:00
Alan Rominger
534b0209f4 Fix 500 error due to None data in DAB response (#15562) (#6714)
* Fix 500 error due to None data in DAB response

* NOQA for flake8 failures
2024-10-02 16:12:30 -04:00
Sviatoslav Sydorenko (Святослав Сидоренко)
ece21b15d0 Use awx-plugins-shared code from awx_plugins.interfaces (#15566)
* Add `awx_plugins.interfaces` runtime dependency

* Use `awx_plugins.interfaces` for runtime detection

The original function name was `server_product_name()` but it didn't
really represent what it did. So it was renamed into
`detect_server_product_name()` in an attempt at disambiguation.

* Use `awx_plugins.interfaces` to map container path

The original function `to_container_path` has been renamed into
`get_incontainer_path()` to represent what it does better and make
the imports more obvious.

* Add license file for awx_plugins.interfaces

---------

Co-authored-by: Hao Liu <44379968+TheRealHaoLiu@users.noreply.github.com>
2024-10-02 18:40:16 +00:00
Alan Rominger
97ececa8b4 Fix 500 error due to None data in DAB response (#15562)
* Fix 500 error due to None data in DAB response

* NOQA for flake8 failures
2024-10-02 11:27:08 -04:00
Hao Liu
46becf15e9 Switch DAB back to devel to (#6713)
Enable event 2 development
2024-10-01 20:11:04 +00:00
TVo
a1ad320622 Removed docs associated with SAML auth. (#15563) 2024-09-30 17:30:46 -04:00
Hao Liu
48e3afbb00 Filter out ANSIBLE_BASE_ from job env var (#15558) 2024-09-30 13:50:04 +00:00
TVo
486a1264d5 Removed docs associated with OIDC auth (#15557)
Removed docs associated with OIDC auth.
2024-09-30 09:23:27 -04:00
Hao Liu
02795c9ed9 Filter out ANSIBLE_BASE from job env var (#6710) 2024-09-27 19:42:44 +00:00
Alan Rominger
5b7a0504f4 Enable service redirect auth and reverse-sync from DAB (#15489)
* Update settings from DAB features

* Move to the end of the list more correctly
2024-09-23 08:52:06 -04:00
jessicamack
7db7abcd65 Upload the test results for awx-collection to dashboard (#15543)
upload the results for awx-collection separately

the rest of the tests can stay under awx
2024-09-20 15:44:39 -04:00
Hao Liu
6574cfe3a9 Pin dependencies to prepare for release_4.6 release tag (#6707)
* Pin deps to release prep
- ansible-runner@2.4.0
- receptorctl@1.4.8
- django-ansible-base@c8fbc1e345d4908cc97eaae20771238a5dd35aad
2024-09-19 16:22:18 +00:00
Sviatoslav Sydorenko (Святослав Сидоренко)
14698b177b 🧪 Publish awxkit's coverage to Codecov (#15525)
It's already being generated, just not uploaded. This patch
addresses that.
2024-09-18 11:01:24 -04:00
Chris Meyers
1881c26ac4 Make analytics job ts settings hidden
* There isn't a great reason to allow the UI to edit these meta-data
  fields that denote the last time an analytics job ran.
* The only reason I hesitate to mark them uneditable in the API is that
  they are useful to change in order to influence when the jobs run.
  Mostly for debug purposes or 1-off.
2024-09-18 07:23:15 -04:00
Sviatoslav Sydorenko (Святослав Сидоренко)
2fdb776ce7 🧪 Run sanity tests w/ ansible-test-gh-action (#15539)
* 🧪 Run sanity tests w/ `ansible-test-gh-action`

* 🧪 Upload sanity results to unified dashboard
2024-09-17 19:06:36 -04:00
Sviatoslav Sydorenko (Святослав Сидоренко)
ce2b8e9a9e 🧪🚑 Fix escaping EOLs in curl invocation (#15538)
This is a follow up for #15532.
2024-09-17 20:58:58 +00:00
Sviatoslav Sydorenko (Святослав Сидоренко)
cf25a09323 🧪 Upload ansible-test coverage to Codecov (#15527) 2024-09-17 16:45:06 -04:00
jessicamack
eccc32cbad Upload API unit test results to dashboard (#15532)
* update ci to upload test report

* Update .github/workflows/ci.yml

Co-authored-by: Sviatoslav Sydorenko (Святослав Сидоренко) <wk.cvs.github@sydorenko.org.ua>

* Update .github/workflows/ci.yml

Co-authored-by: Sviatoslav Sydorenko (Святослав Сидоренко) <wk.cvs.github@sydorenko.org.ua>

* Update .github/workflows/ci.yml

Co-authored-by: Sviatoslav Sydorenko (Святослав Сидоренко) <wk.cvs.github@sydorenko.org.ua>

* Update .github/workflows/ci.yml

Co-authored-by: Sviatoslav Sydorenko (Святослав Сидоренко) <wk.cvs.github@sydorenko.org.ua>

* Update .github/workflows/ci.yml

Co-authored-by: Sviatoslav Sydorenko (Святослав Сидоренко) <wk.cvs.github@sydorenko.org.ua>

---------

Co-authored-by: Sviatoslav Sydorenko (Святослав Сидоренко) <wk.cvs.github@sydorenko.org.ua>
2024-09-17 15:23:11 -04:00
Jake Jackson
fafed924e3 rebase and merge conflict resolution (#6692) 2024-09-17 16:46:12 +00:00
Peter Braun
d2f3c02945 fix: maintain order of insertions into m2m relationship tables (#15536) (#6703) 2024-09-17 18:29:36 +02:00
Jake Jackson
eb4f3c2864 update urllib to fix CVE-2024-37891 (#6700) 2024-09-17 12:14:28 -04:00
Sviatoslav Sydorenko (Святослав Сидоренко)
66e7210ba4 🧪 Upload coverage from the rest of CI jobs (#15526) 2024-09-17 09:53:56 -04:00
Sviatoslav Sydorenko (Святослав Сидоренко)
0a4370acf0 🧪 Delegate source filtering to coverage.py (#15528)
This drops the coverage source spec from the `pytest` args as it's
already configured in `coveragerc` which is a better place for
keeping it.
2024-09-17 09:53:44 -04:00
Sviatoslav Sydorenko (Святослав Сидоренко)
9fbbe3cba0 🧪 Use xunit1 in pytest by default (#15524)
This format contains file paths, unlike the newer implementation.
2024-09-17 09:52:43 -04:00
Sviatoslav Sydorenko (Святослав Сидоренко)
2ca5fcae2f 🧪💅 Unignore errors in coveragerc (#15523)
This setting does not seem necessary so there is no reason for it to
be listed.
2024-09-17 09:52:25 -04:00
Sviatoslav Sydorenko (Святослав Сидоренко)
69c1d2f64d 🧪 Include coverage measurement @ site-packages (#15521) 2024-09-17 09:52:02 -04:00
Andrew Klychkov
c4500cfc3d Docs: change Getting started EE guide reference to point to the relevant location (#15502) 2024-09-17 09:17:43 -04:00
Peter Braun
af900c8370 fix: maintain order of insertions into m2m relationship tables (#15536) 2024-09-17 13:37:35 +02:00
TVo
ef8cb892cb Plugin removals for docs (#15505)
* Removed files from AWX that were moved to awx-plugins.

* Removed credential plugins file from AWX.

* Resolved broken build: added back missing graphics and removed obsolete xrefs.
2024-09-16 15:27:58 -06:00
Jake Jackson
bcd18e161c fix CVE-2024-21520 (#6687) 2024-09-16 16:04:11 -04:00
Chris Meyers
b62d0ff8e6 Make analytics job ts settings hidden
* There isn't a great reason to allow the UI to edit these meta-data
  fields that denote the last time an analytics job ran.
* The only reason I hesitate to mark them uneditable in the API is that
  they are useful to change in order to influence when the jobs run.
  Mostly for debug purposes or 1-off.
2024-09-16 14:14:01 -04:00
Alan Rominger
c33947af7f [AAP 2.5] DAB 9.4, enable service redirect auth (#6672)
* Update settings from DAB features

* Delay implementation of the reverse sync setting

* Move auth class to actual end of pipeline
2024-09-16 13:41:26 -04:00
Elijah DeLee
30b005aa9d catch harakiri graceful signal in middlware and log debug info (#6673)
Middleware is from django_ansible_base
2024-09-16 16:25:03 +00:00
Andrew Klychkov
c9ae36804a Remove ML remnants from docs (#15500) 2024-09-16 09:31:16 +01:00
Peter Braun
a1e3919b1f update remaining urls for new UI (#15529) (#6699) 2024-09-15 10:19:51 -04:00
Peter Braun
1140981c64 update remaining urls for new UI (#15529) 2024-09-15 09:31:12 -04:00
Peter Braun
0a8e92cab7 fix workflow job url (#15522) (#6697) 2024-09-15 00:05:18 +02:00
Seth Foster
30e2c3a8cd Validate org-user membership from gateway (#15508) (#6698)
Adding credential and execution environment roles
validates that the user belongs to the same org
as the credential or EE.

In some situations, the user-org membership has not
yet been synced from gateway to controller.

In this case, controller will make a request to
gateway to check if the user is part of the org.

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2024-09-14 16:01:04 +00:00
Peter Braun
6fd483698a fix workflow job url (#15522) 2024-09-14 15:06:05 +02:00
Hao Liu
aef3d8750b [4.6] Fix additional UI URL generated by API (#15517) (#15518) (#6695)
* Fix: change to url in platform ui (#15518)

* Fix instance UI URL generated by API (#15517)

---------

Co-authored-by: Peter Braun <pbraun@redhat.com>
2024-09-13 21:12:08 -04:00
Sviatoslav Sydorenko (Святослав Сидоренко)
5315a2b194 🧪 Use modern source_pkgs @ coveragerc (#15519)
This helps disambiguate main project code from the tests.
2024-09-13 21:11:28 -04:00
Sviatoslav Sydorenko (Святослав Сидоренко)
abdc669e50 🧪 Pass specific report files to codecov-cli (#15520)
The automatic discovery is currently unreliable.

Ref: https://github.com/codecov/codecov-cli/issues/500
2024-09-13 21:11:11 -04:00
Seth Foster
3baea0f206 Validate org-user membership from gateway (#15508)
Adding credential and execution environment roles
validates that the user belongs to the same org
as the credential or EE.

In some situations, the user-org membership has not
yet been synced from gateway to controller.

In this case, controller will make a request to
gateway to check if the user is part of the org.

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2024-09-13 17:56:43 -04:00
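A rough sketch of the membership check described above; the gateway endpoint and response shape are assumptions for illustration.

```
# Sketch: if the local org-user relation hasn't synced yet, ask the
# gateway before denying the role assignment. Endpoint and payload are
# illustrative assumptions.
import requests

def user_in_org(gateway_url, session_token, user_id, org_id):
    resp = requests.get(
        f'{gateway_url}/api/gateway/v1/organizations/{org_id}/users/',
        headers={'Authorization': f'Bearer {session_token}'},
    )
    resp.raise_for_status()
    return any(u['id'] == user_id for u in resp.json()['results'])
```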
Hao Liu
acd6b2eb22 Fix instance UI URL generated by API (#15517) 2024-09-13 16:57:23 -04:00
Peter Braun
cc6a0612da fix: change to url in platform ui (#15518) 2024-09-13 22:40:45 +02:00
Sviatoslav Sydorenko (Святослав Сидоренко)
ea7ca3d32d 🧪💅 Categorize the Codecov status checks (#15516)
Make codecov identify metrics for tests and awx modules separately.
2024-09-13 20:26:25 +00:00
Sviatoslav Sydorenko (Святослав Сидоренко)
31ae3f25e7 🧪💅 Migrate to exclude_also @ coveragerc (#15513)
This is an option that appeared in Coverage.py v7.2.0.
2024-09-13 16:20:51 -04:00
Sviatoslav Sydorenko (Святослав Сидоренко)
d0cc2a1658 🧪🚑 Fix checking schema in CI on merge (#15514)
This is a variation of #15510, this time fixing the
`detect-schema-change` make target.
2024-09-13 16:19:35 -04:00
Sviatoslav Sydorenko (Святослав Сидоренко)
1b1975a93b 🧪💄 Order settings in coveragerc (#15515) 2024-09-13 20:13:50 +00:00
Sviatoslav Sydorenko (Святослав Сидоренко)
b722f7003d 🧪 Unmeasure coverage in tests expected to fail (#15512)
These tests are known to only be executed partially or not at all. So
we always get incomplete, missing, and sometimes flaky coverage in
the test functions that are expected to fail.

This change updates the ``coverage.py`` config to prevent said tests
from influencing the coverage level measurement.

Ref https://github.com/pytest-dev/pytest/pull/12531
2024-09-13 15:57:06 -04:00
Sviatoslav Sydorenko (Святослав Сидоренко)
6bfe76d6d1 🧪🚑 Fix running awx image in CI on merge (#15510)
This is a variation of #15509, fixing the `run_awx_devel` in-tree
action this time.
2024-09-13 18:42:48 +00:00
Sviatoslav Sydorenko (Святослав Сидоренко)
a9b0d9f2e5 🧪🚑 Fix fetching the CI image on merges (#15509) 2024-09-13 18:18:19 +00:00
Peter Braun
f799376b3d Add OPTIONAL_UI_URL_PREFIX (#15506) (#6694)
# Add a prefix to the UI URL patterns for UI URLs generated by the API
# example if set to '' UI URL generated by the API for jobs would be $TOWER_URL/jobs
# example if set to 'execution' UI URL generated by the API for jobs would be $TOWER_URL/execution/jobs

Co-authored-by: Hao Liu <44379968+TheRealHaoLiu@users.noreply.github.com>
2024-09-13 20:06:40 +02:00
Sviatoslav Sydorenko (Святослав Сидоренко)
e68370f2aa Replace pkg_resources with importlib.metadata (#15441) 2024-09-13 17:39:14 +00:00
Sviatoslav Sydorenko (Святослав Сидоренко)
a5de4652b9 🧪 Upload the devel branch coverage to Codecov (#15507) 2024-09-13 17:25:30 +00:00
Hao Liu
38719405c3 Add OPTIONAL_UI_URL_PREFIX (#15506)
# Add a prefix to the UI URL patterns for UI URLs generated by the API
# example if set to '' UI URL generated by the API for jobs would be $TOWER_URL/jobs
# example if set to 'execution' UI URL generated by the API for jobs would be $TOWER_URL/execution/jobs
2024-09-13 19:20:00 +02:00
Peter Braun
96ec709e90 fix: avoid race conditions when removing multiple instance (#15495) (#6693)
* fix: avoid race conditions when removing multiple instance groups at once

* remove unused imports
2024-09-13 17:21:47 +02:00
Sviatoslav Sydorenko (Святослав Сидоренко)
090511e65b 🧪 Gather coverage @ CI and upload to Codecov (#15499) 2024-09-13 10:46:48 -04:00
Peter Braun
1c170c3a12 fix: avoid race conditions when removing multiple instance groups (#15495)
* fix: avoid race conditions when removing multiple instance groups at once

* remove unused imports
2024-09-13 09:35:45 +02:00
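A minimal sketch of one standard way to make concurrent removals safe, assuming the `InstanceGroup` model; the actual fix in #15495 may differ in detail:

```python
from django.db import transaction

from awx.main.models import InstanceGroup


def remove_instance_group(pk):
    with transaction.atomic():
        # Lock the row so two overlapping delete requests serialize instead of
        # racing; filter().first() tolerates the row already being gone.
        group = InstanceGroup.objects.select_for_update().filter(pk=pk).first()
        if group is None:
            return  # a concurrent request already removed it
        group.delete()
```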
Chris Meyers
490db08224 Register CredentialType(s) every time Django loads
* Register all discovered CredentialType(s) after Django finishes
  loading
* Protect parallel registrations using shared postgres advisory lock
* The downside of this is that it will run even when it does not need to,
  adding overhead to the init process.
* Only register discovered credential types in the database IF
  migrations have run and are up-to-date.
2024-09-12 14:11:19 -04:00
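A minimal sketch of the advisory-lock approach, using PostgreSQL's built-in functions through Django's connection. The lock id and the registration call are illustrative; the commit mentions a shared lock, for which `pg_advisory_lock_shared` is the variant:

```python
from contextlib import contextmanager

from django.db import connection

CREDENTIAL_TYPE_REGISTRATION_LOCK = 101  # arbitrary application-chosen key


@contextmanager
def advisory_lock(lock_id):
    """Block until the session-level lock is acquired; release on exit."""
    with connection.cursor() as cursor:
        cursor.execute("SELECT pg_advisory_lock(%s)", [lock_id])
        try:
            yield
        finally:
            cursor.execute("SELECT pg_advisory_unlock(%s)", [lock_id])


# Usage at app-ready time (hypothetical call):
# with advisory_lock(CREDENTIAL_TYPE_REGISTRATION_LOCK):
#     register_discovered_credential_types()
```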
Hao Liu
70f7ac72d4 Removes collection of unpartitioned_events table (#15501) (#6690)
Fixes: https://issues.redhat.com/browse/AAP-30995

Co-authored-by: Ladislav Smola <lsmola@redhat.com>
2024-09-11 18:39:41 +00:00
Ladislav Smola
71856d61c9 Removes collection of unpartitioned_events table (#15501)
Fixes: https://issues.redhat.com/browse/AAP-30995
2024-09-11 14:16:11 -04:00
Seth Foster
4c9c22fea2 Translate new RBAC to old RBAC (#15490) (#6678)
User and Team assignments using the DAB
RBAC system will be translated back to the old
Role system.

This ensures better backward compatibility and
addresses some inconsistencies in the UI that were
relying on older RBAC endpoints.

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
Co-authored-by: Alan Rominger <arominge@redhat.com>
2024-09-10 18:26:43 -04:00
Peter Braun
9914229a5a Hide AUTOMATION_ANALYTICS_LAST_GATHER (#15497) (#6684) 2024-09-10 07:44:21 -04:00
Hao Liu
011733ad06 Hide AUTOMATION_ANALYTICS_LAST_GATHER (#15497)
Not user configurable
2024-09-10 10:17:40 +02:00
Hao Liu
ce0d176508 Fix analytic ship (#6679)
REDHAT_USERNAME and REDHAT_PASSWORD default to an empty string instead of None
2024-09-10 09:46:53 +02:00
Elijah DeLee
059f52f314 Unpin django-ansible-base for now (#6681) 2024-09-09 21:51:20 +00:00
Hao Liu
446046c4bf Remove OpenSSL pin (#6683) 2024-09-09 17:27:09 -04:00
Seth Foster
17e01e0eb0 Rename System Auditor to Controller System Auditor (#15470) (#6677)
This is to emphasize that this role is specific
to the controller component. That is, it is not an auditor
for the entire AAP platform.

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2024-09-09 17:07:14 -04:00
Hao Liu
82b8f7d4c0 Unpin OpenSSL (#15498)
Remove OpenSSL pin
2024-09-09 20:55:46 +00:00
Hao Liu
5a0080658c Fix analytic ship (#15496)
REDHAT_USERNAME and REDHAT_PASSWORD default to an empty string instead of None
2024-09-09 14:14:32 -04:00
Rick Elrod
5a4b789488 Don't reverse sync preload script data (#6644)
Signed-off-by: Rick Elrod <rick@elrod.me>
2024-09-09 13:04:01 -04:00
Seth Foster
c4d8fdb197 Translate new RBAC to old RBAC (#15490)
User and Team assignments using the DAB
RBAC system will be translated back to the old
Role system.

This ensures better backward compatibility and
addresses some inconsistencies in the UI that were
relying on older RBAC endpoints.

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
Co-authored-by: Alan Rominger <arominge@redhat.com>
2024-09-06 12:13:48 -04:00
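As a hedged illustration of the translation direction (the real mapping lives in AWX/DAB signal handlers and is more involved), a new-RBAC assignment maps back onto the legacy Role attributes roughly like this:

```python
# Illustrative names only; not the actual AWX implementation.
ROLE_DEFINITION_TO_LEGACY_ATTR = {
    "Team Member": "member_role",
    "Team Admin": "admin_role",
    "Organization Member": "member_role",
    "Organization Admin": "admin_role",
}


def legacy_role_attr(role_definition_name):
    """Return the old-RBAC Role attribute a DAB role definition maps to."""
    return ROLE_DEFINITION_TO_LEGACY_ATTR.get(role_definition_name)
```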
Peter Braun
6dfe2e3a9f fix: avoid calling undefined method for anonymous users (#15440) (#6676) 2024-09-06 17:58:10 +02:00
Hao Liu
01ea091e8a Fix subscription username password setting name (#6675)
used in analytics
2024-09-06 08:58:48 -04:00
Hao Liu
3da9e322b7 Fix subscription username password setting name (#15493)
used in analytics
2024-09-05 19:59:45 +00:00
Andrew Klychkov
79684ab603 CONTRIBUTING.md: remove IRC remnants (#15492) 2024-09-05 14:11:37 +01:00
Chris Meyers
1d89e1a019 Move credential code up a dir
* There is only __init__.py in awx/main/models/credential/ now. So let's
  simplify things and move init up a dir.
2024-09-04 14:46:22 -04:00
Chris Meyers
a4346a667c Fix awx-plugins to use #egg=<package_name>
* #egg _could_ be awx-plugins.some.other.provided.package
* Also point at ansible devel instead of a forked branch since the
  entrypoints PR has now merged to devel
2024-09-04 14:46:22 -04:00
Chris Meyers
4328093c05 Use awx-plugins instead
* Instead of sourcing cred and inv plugins from the awx repo awx_plugins
  local directory, source them from the python package awx-plugins-core.
2024-09-04 14:46:22 -04:00
Chris Meyers
16d1f34179 Delete cred and inv plugins 2024-09-04 14:46:22 -04:00
Chris Meyers
376cc35a92 move inv and cred plugins into awx_plugins 2024-09-04 14:46:22 -04:00
Seth Foster
effbd0e416 Fix SAMLAuth backend to correctly return social auth pipeline results (#15457) (#6669)
Co-authored-by: David Newswanger <gamma.dave@gmail.com>
2024-09-03 10:55:05 -04:00
Seth Foster
2334211ba0 Only refresh session if updating own password (#15426) (#6653)
Fixes bug where creating a new user will
request a new awx_sessionid cookie, invalidating
the previous session.

Do not refresh session if updating or
creating a password for a different user.

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2024-09-03 10:54:13 -04:00
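Django has a dedicated helper for exactly this, and the fix amounts to guarding it. A minimal sketch; the surrounding view wiring is illustrative:

```python
from django.contrib.auth import update_session_auth_hash


def after_password_change(request, changed_user):
    # Rotating the session auth hash is only needed when users change their
    # *own* password; doing it when an admin creates or edits someone else's
    # password would invalidate the admin's own awx_sessionid cookie.
    if request.user.pk == changed_user.pk:
        update_session_auth_hash(request, changed_user)
```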
Hao Liu
15e28371eb Prevent automountServiceAccountToken (#6638)
* Prevent job pod from mounting serviceaccount token

* Add serializer validation for cg pod_spec_override

Prevent automountServiceAccountToken from being set to true, and provide an error message when it is
2024-09-03 09:51:17 -04:00
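A minimal sketch of the serializer-side guard, assuming DRF; the real validator handles nested pod specs and error messaging differently:

```python
from rest_framework import serializers


def validate_pod_spec_override(pod_spec):
    # Reject any container-group pod spec override that opts back in to
    # mounting the service account token inside job pods.
    if isinstance(pod_spec, dict) and pod_spec.get("automountServiceAccountToken"):
        raise serializers.ValidationError(
            "automountServiceAccountToken must not be set to true for job pods."
        )
    return pod_spec
```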
John Barker
8a1d1e9c12 Remove references to IRC & Google Groups (#15480)
Signed-off-by: John Barker <john@johnrbarker.com>
2024-08-30 09:21:45 -04:00
David Newswanger
c59c64c915 Fix SAMLAuth backend to correctly return social auth pipeline results (#15457) 2024-08-30 09:13:31 -04:00
Peter Braun
64d2e10dc2 Fallback to use subscription cred for analytics upload (#15479) (#6668)
* Fallback to use subscription cred for analytics

Fall back to SUBSCRIPTION_USERNAME/PASSWORD for uploading analytics if REDHAT_USERNAME/PASSWORD are not set

* Improve error message

* Guard against request with no query or data

* Add test for _send_to_analytics

Focus on credentials

* Suppress sonarcloud warning about password

* Add test for analytic ship

Co-authored-by: Hao Liu <44379968+TheRealHaoLiu@users.noreply.github.com>
2024-08-30 14:18:42 +02:00
Hao Liu
ac6c5630f1 Fallback to use subscription cred for analytics upload (#15479)
* Fallback to use subscription cred for analytics

Fall back to SUBSCRIPTION_USERNAME/PASSWORD for uploading analytics if REDHAT_USERNAME/PASSWORD are not set

* Improve error message

* Guard against request with no query or data

* Add test for _send_to_analytics

Focus on credentials

* Suppress sonarcloud warning about password

* Add test for analytic ship
2024-08-30 10:39:53 +02:00
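A sketch of the fallback order described above. The setting names follow the commit message, and the `or` chaining also covers the empty-string defaults mentioned in the "Fix analytic ship" commits:

```python
from django.conf import settings


def analytics_credentials():
    # Prefer the REDHAT_* credentials; fall back to the subscription ones.
    # `or` treats both None and the empty-string defaults as "not set".
    username = settings.REDHAT_USERNAME or settings.SUBSCRIPTION_USERNAME
    password = settings.REDHAT_PASSWORD or settings.SUBSCRIPTION_PASSWORD
    if not (username and password):
        raise RuntimeError("No credentials configured for analytics upload")
    return username, password
```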
Elijah DeLee
444af2b500 catch harakiri graceful signal in middleware and log debug info
Middleware is from django_ansible_base
2024-08-29 09:24:35 -04:00
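A hedged sketch of the idea, assuming uWSGI is configured to deliver a warning signal shortly before the harakiri kill; the signal number and the middleware wiring are deployment-specific and illustrative:

```python
import logging
import signal

logger = logging.getLogger(__name__)


def log_harakiri(signum, frame):
    # Capture where the worker is stuck before uWSGI reaps it.
    logger.debug("harakiri imminent at %s:%s", frame.f_code.co_filename, frame.f_lineno)


signal.signal(signal.SIGUSR2, log_harakiri)  # illustrative signal choice
```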
Alan Rominger
50db80182b Remove archaic monkey patches (#15338) 2024-08-28 21:50:00 -04:00
Andrew Klychkov
79c1921ea4 Docs: add Communication guide (#15469)
* Docs: add Communication guide

* Update docs/docsite/rst/contributor/communication.rst

Co-authored-by: Don Naro <dnaro@redhat.com>

* Update docs/docsite/rst/contributor/communication.rst

---------

Co-authored-by: Don Naro <dnaro@redhat.com>
2024-08-28 11:43:16 +01:00
Seth Foster
d6493fd4df Rename System Auditor to Controller System Auditor (#15470)
This is to emphasize that this role is specific
to controller component. That is, not an auditor
for the entire AAP platform.

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2024-08-27 15:35:46 -04:00
Alan Rominger
9cf66de454 Pin DAB to devel again (#15467) 2024-08-27 11:18:09 -04:00
Seth Foster
85bd7c3ca0 [4.6] Make controller specific team and org roles (#6662)
Adds the following managed Role Definitions

Controller Team Admin
Controller Team Member
Controller Organization Admin
Controller Organization Member

These have the same permission set as the
platform roles (without the Controller prefix)

Adding members to teams and orgs via the legacy RBAC system
will use these role definitions.

Other changes:
- Bump DAB to 2024.08.22
- Set ALLOW_LOCAL_ASSIGNING_JWT_ROLES to False in defaults.py.
This setting prevents assignments to the platform roles (e.g. Team Member).

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2024-08-26 16:31:42 -04:00
Alan Rominger
77e999f7c8 Rewrite more access logic in terms of permissions instead of roles (#15453) (#6661)
* Rewrite more access logic in terms of permissions instead of roles

* Cut down supported logic because that would not work anyway

* Remove methods not needed anymore

* Create managed roles in test before delegating permissions
2024-08-26 14:37:52 -04:00
Elijah DeLee
01aa760510 Guard around race condition (#15452) (#6658)
I had the luck of running into this race condition, which broke my deployment. No instance was ever able to register, because running "awx-manage" during a settings check would fail here with

```
  File "/var/lib/awx/venv/awx/lib64/python3.11/site-packages/awx/conf/license.py", line 10, in _get_validated_license_data
    return get_licenser().validate()
           ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/var/lib/awx/venv/awx/lib64/python3.11/site-packages/awx/main/utils/licensing.py", line 453, in validate
    automated_since = int(Instance.objects.order_by('id').first().created.timestamp())
                          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AttributeError: 'NoneType' object has no attribute 'created'
```
2024-08-20 20:51:33 +00:00
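The guard for the quoted traceback is simple; a sketch using the same query as in the traceback:

```python
from awx.main.models import Instance


def automated_since():
    first = Instance.objects.order_by('id').first()
    if first is None:
        # No instance has registered yet (the race hit here); report "unknown"
        # instead of raising AttributeError on None.
        return None
    return int(first.created.timestamp())
```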
Rick Elrod
16a4c66c73 Fix a test in preparation for syncing description
Refs ansible/django-ansible-base#447

Signed-off-by: Rick Elrod <rick@elrod.me>
2024-08-19 07:37:01 -05:00
Rick Elrod
9fa5be015c Bump DAB to 2024.8.19
Signed-off-by: Rick Elrod <rick@elrod.me>
2024-08-19 07:37:01 -05:00
Jake Jackson
8b293e7046 update django to 4.2.15 to address multiple CVEs (#6636) 2024-08-15 13:32:26 -04:00
Jake Jackson
467024bc54 fix CVE-2024-33663 and bring in updates for social-auth-app-django (#6634) 2024-08-15 13:32:09 -04:00
jessicamack
bdf3f81016 Unpin channels-redis (#15329) (#6647)
* unpin channels-redis

The bug that initially caused the upgrade block has been resolved https://github.com/django/channels_redis/issues/332

* replace aioredis Exception with a redis Exception

Version 4.0.0 of channels-redis migrated the underlying Redis library from aioredis to redis-py. The exception has been changed to its equivalent.

* remove unused license

* remove UPGRADE BLOCKER in README

* remove hiredis

it was an indirect dependency of aioredis, which was removed

* remove unused license

* add back hiredis

it potentially provides a performance boost. Install it explicitly as part of redis, and upgrade it to a more recent version.

* remove UPGRADE BLOCKER for hiredis

it was also addressed as a part of this PR
2024-08-12 15:03:46 -04:00
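The exception swap is mechanical; a sketch of the before/after, with redis-py's `redis.exceptions.ConnectionError` standing in for the aioredis error previously caught (exact names on the aioredis side varied by version):

```python
# channels-redis < 4.0 code caught an aioredis error, e.g.:
#     except aioredis.errors.ConnectionClosedError:
# With channels-redis >= 4.0 the backend is redis-py, so the equivalent is:
from redis.exceptions import ConnectionError as RedisConnectionError


async def receive_with_retry(channel_layer, channel):
    try:
        return await channel_layer.receive(channel)
    except RedisConnectionError:
        return None  # connection dropped; caller may retry/reconnect
```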
jessicamack
4a5cfdc11d Remove 'AWX' from setting endpoint (#15432) (#6646)
* Remove AWX from display text

Co-authored-by: Hao Liu <44379968+TheRealHaoLiu@users.noreply.github.com>
2024-08-12 14:54:17 -04:00
1407 changed files with 19648 additions and 38667 deletions

.codecov.yml Normal file

@@ -0,0 +1,57 @@
---
codecov:
notify:
after_n_builds: 9 # Number of test matrix+lint jobs uploading coverage
wait_for_ci: false
require_ci_to_pass: false
token: >- # repo-scoped, upload-only, needed for stability in PRs from forks
2b8c7a7a-7293-4a00-bf02-19bd55a1389b
comment:
require_changes: true
coverage:
range: 100..100
status:
patch:
default:
target: 100%
pytest:
target: 100%
flags:
- pytest
typing:
flags:
- MyPy
project:
default:
target: 75%
lib:
flags:
- pytest
paths:
- awx/
target: 75%
tests:
flags:
- pytest
paths:
- tests/
- >-
**/test/
- >-
**/tests/
- >-
**/test/**
- >-
**/tests/**
target: 95%
typing:
flags:
- MyPy
target: 100%
...


@@ -1,16 +1,6 @@
[run]
source = awx
branch = True
omit =
awx/main/migrations/*
awx/lib/site-packages/*
[report]
# Regexes for lines to exclude from consideration
exclude_lines =
# Have to re-enable the standard pragma
pragma: no cover
exclude_also =
# Don't complain about missing debug-only code:
def __repr__
if self\.debug
@@ -23,7 +13,35 @@ exclude_lines =
if 0:
if __name__ == .__main__.:
ignore_errors = True
^\s*@pytest\.mark\.xfail
[run]
branch = True
# NOTE: `disable_warnings` is needed when `pytest-cov` runs in tandem
# NOTE: with `pytest-xdist`. These warnings are false negative in this
# NOTE: context.
#
# NOTE: It's `coveragepy` that emits the warnings and previously they
# NOTE: wouldn't get on the radar of `pytest`'s `filterwarnings`
# NOTE: mechanism. This changed, however, with `pytest >= 8.4`. And
# NOTE: since we set `filterwarnings = error`, those warnings are being
# NOTE: raised as exceptions, cascading into `pytest`'s internals and
# NOTE: causing tracebacks and crashes of the test sessions.
#
# Ref:
# * https://github.com/pytest-dev/pytest-cov/issues/693
# * https://github.com/pytest-dev/pytest-cov/pull/695
# * https://github.com/pytest-dev/pytest-cov/pull/696
disable_warnings =
module-not-measured
omit =
awx/main/migrations/*
awx/settings/defaults.py
awx/settings/*_defaults.py
source =
.
source_pkgs =
awx
[xml]
output = ./reports/coverage.xml


@@ -4,7 +4,8 @@
<!---
If you are fixing an existing issue, please include "related #nnn" in your
commit message and your description; but you should still explain what
the change does.
the change does. Also, if this PR has an attached JIRA, please put AAP-<number>
as the first entry in your PR title.
-->
##### ISSUE TYPE
@@ -16,17 +17,11 @@ the change does.
##### COMPONENT NAME
<!--- Name of the module/plugin/task -->
- API
- UI
- Collection
- CLI
- Docs
- Other
##### AWX VERSION
<!--- Paste verbatim output from `make VERSION` between quotes below -->
```
```
##### ADDITIONAL INFORMATION


@@ -4,12 +4,14 @@ inputs:
github-token:
description: GitHub Token for registry access
required: true
private-github-key:
description: GitHub private key for private repositories
required: false
default: ''
runs:
using: composite
steps:
- name: Get python version from Makefile
shell: bash
run: echo py_version=`make PYTHON_VERSION` >> $GITHUB_ENV
- uses: ./.github/actions/setup-python
- name: Set lower case owner name
shell: bash
@@ -22,13 +24,21 @@ runs:
run: |
echo "${{ inputs.github-token }}" | docker login ghcr.io -u ${{ github.actor }} --password-stdin
- uses: ./.github/actions/setup-ssh-agent
with:
ssh-private-key: ${{ inputs.private-github-key }}
- name: Pre-pull latest devel image to warm cache
shell: bash
run: docker pull -q ghcr.io/${OWNER_LC}/awx_devel:${{ github.base_ref }}
run: |
DEV_DOCKER_TAG_BASE=ghcr.io/${OWNER_LC} \
COMPOSE_TAG=${{ github.base_ref || github.ref_name }} \
docker pull -q `make print-DEVEL_IMAGE_NAME`
continue-on-error: true
- name: Build image for current source checkout
shell: bash
run: |
DEV_DOCKER_TAG_BASE=ghcr.io/${OWNER_LC} \
COMPOSE_TAG=${{ github.base_ref }} \
COMPOSE_TAG=${{ github.base_ref || github.ref_name }} \
make docker-compose-build


@@ -9,20 +9,30 @@ inputs:
required: false
default: false
type: boolean
private-github-key:
description: GitHub private key for private repositories
required: false
default: ''
outputs:
ip:
description: The IP of the tools_awx_1 container
value: ${{ steps.data.outputs.ip }}
admin-token:
description: OAuth token for admin user
value: ${{ steps.data.outputs.admin_token }}
runs:
using: composite
steps:
- name: Disable apparmor for rsyslogd, first step
shell: bash
run: sudo ln -s /etc/apparmor.d/usr.sbin.rsyslogd /etc/apparmor.d/disable/
- name: Disable apparmor for rsyslogd, second step
shell: bash
run: sudo apparmor_parser -R /etc/apparmor.d/usr.sbin.rsyslogd
- name: Build awx_devel image for running checks
uses: ./.github/actions/awx_devel_image
with:
github-token: ${{ inputs.github-token }}
private-github-key: ${{ inputs.private-github-key }}
- name: Upgrade ansible-core
shell: bash
@@ -36,8 +46,10 @@ runs:
shell: bash
run: |
DEV_DOCKER_OWNER=${{ github.repository_owner }} \
COMPOSE_TAG=${{ github.base_ref }} \
COMPOSE_UP_OPTS="-d" \
COMPOSE_TAG=${{ github.base_ref || github.ref_name }} \
DJANGO_COLORS=nocolor \
SUPERVISOR_ARGS="-n -t" \
COMPOSE_UP_OPTS="-d --no-color" \
make docker-compose
- name: Update default AWX password
@@ -62,6 +74,4 @@ runs:
shell: bash
run: |
AWX_IP=$(docker inspect -f '{{.NetworkSettings.Networks.awx.IPAddress}}' tools_awx_1)
ADMIN_TOKEN=$(docker exec -i tools_awx_1 awx-manage create_oauth2_token --user admin)
echo "ip=$AWX_IP" >> $GITHUB_OUTPUT
echo "admin_token=$ADMIN_TOKEN" >> $GITHUB_OUTPUT

.github/actions/setup-python/action.yml vendored Normal file

@@ -0,0 +1,27 @@
name: 'Setup Python from Makefile'
description: 'Extract and set up Python version from Makefile'
inputs:
python-version:
description: 'Override Python version (optional)'
required: false
default: ''
working-directory:
description: 'Directory containing the Makefile'
required: false
default: '.'
runs:
using: composite
steps:
- name: Get python version from Makefile
shell: bash
run: |
if [ -n "${{ inputs.python-version }}" ]; then
echo "py_version=${{ inputs.python-version }}" >> $GITHUB_ENV
else
cd ${{ inputs.working-directory }}
echo "py_version=`make PYTHON_VERSION`" >> $GITHUB_ENV
fi
- name: Install python
uses: actions/setup-python@v5
with:
python-version: ${{ env.py_version }}


@@ -0,0 +1,29 @@
name: 'Setup SSH for GitHub'
description: 'Configure SSH for private repository access'
inputs:
ssh-private-key:
description: 'SSH private key for repository access'
required: false
default: ''
runs:
using: composite
steps:
- name: Generate placeholder SSH private key if SSH auth for private repos is not needed
id: generate_key
shell: bash
run: |
if [[ -z "${{ inputs.ssh-private-key }}" ]]; then
ssh-keygen -t ed25519 -C "github-actions" -N "" -f ~/.ssh/id_ed25519
echo "SSH_PRIVATE_KEY<<EOF" >> $GITHUB_OUTPUT
cat ~/.ssh/id_ed25519 >> $GITHUB_OUTPUT
echo "EOF" >> $GITHUB_OUTPUT
else
echo "SSH_PRIVATE_KEY<<EOF" >> $GITHUB_OUTPUT
echo "${{ inputs.ssh-private-key }}" >> $GITHUB_OUTPUT
echo "EOF" >> $GITHUB_OUTPUT
fi
- name: Add private GitHub key to SSH agent
uses: webfactory/ssh-agent@v0.9.0
with:
ssh-private-key: ${{ steps.generate_key.outputs.SSH_PRIVATE_KEY }}


@@ -13,7 +13,7 @@ runs:
docker logs tools_awx_1 > ${{ inputs.log-filename }}
- name: Upload AWX logs as artifact
uses: actions/upload-artifact@v3
uses: actions/upload-artifact@v4
with:
name: docker-compose-logs
name: docker-compose-logs-${{ inputs.log-filename }}
path: ${{ inputs.log-filename }}


@@ -8,3 +8,10 @@ updates:
labels:
- "docs"
- "dependencies"
- package-ecosystem: "pip"
directory: "requirements/"
schedule:
interval: "daily" #run daily until we trust it, then back this off to weekly
open-pull-requests-limit: 2
labels:
- "dependencies"


@@ -1,5 +1,4 @@
## General
- For the roundup of all the different mailing lists available from AWX, Ansible, and beyond visit: https://docs.ansible.com/ansible/latest/community/communication.html
- Hello, we think your question is answered in our FAQ. Does this: https://www.ansible.com/products/awx-project/faq cover your question?
- You can find the latest documentation here: https://ansible.readthedocs.io/projects/awx/en/latest/userguide/index.html
@@ -83,7 +82,7 @@ The Ansible Community is looking at building an EE that corresponds to all of th
## Mailing List Triage
### Create an issue
- Hello, thanks for reaching out on list. We think this merits an issue on our Github, https://github.com/ansible/awx/issues. If you could open an issue up on Github it will get tagged and integrated into our planning and workflow. All future work will be tracked there. Issues should include as much information as possible, including screenshots, log outputs, or any reproducers.
- Hello, thanks for reaching out on list. We think this merits an issue on our GitHub, https://github.com/ansible/awx/issues. If you could open an issue up on GitHub it will get tagged and integrated into our planning and workflow. All future work will be tracked there. Issues should include as much information as possible, including screenshots, log outputs, or any reproducers.
### Create a Pull Request
- Hello, we think your idea is good! Please consider contributing a PR for this following our contributing guidelines: https://github.com/ansible/awx/blob/devel/CONTRIBUTING.md
@@ -93,8 +92,8 @@ The Ansible Community is looking at building an EE that corresponds to all of th
- Hello, your issue seems related to receptor. Could you please open an issue in the receptor repository? https://github.com/ansible/receptor. Thanks!
### Ansible Engine not AWX
- Hello, your question seems to be about Ansible development, not about AWX. Try asking on the Ansible-devel specific mailing list: https://groups.google.com/g/ansible-devel
- Hello, your question seems to be about using Ansible, not about AWX. https://groups.google.com/g/ansible-project is the best place to visit for user questions about Ansible. Thanks!
- Hello, your question seems to be about Ansible development, not about AWX. Try asking on in the Forum https://forum.ansible.com/tag/development
- Hello, your question seems to be about using Ansible Core, not about AWX. https://forum.ansible.com/tag/ansible-core is the best place to visit for user questions about Ansible. Thanks!
### Ansible Galaxy not AWX
- Hey there. That sounds like an FAQ question. Did this: https://www.ansible.com/products/awx-project/faq cover your question?
@@ -104,7 +103,7 @@ The Ansible Community is looking at building an EE that corresponds to all of th
- AWX-Operator: https://github.com/ansible/awx-operator/blob/devel/CONTRIBUTING.md
### Oracle AWX
We'd be happy to help if you can reproduce this with AWX since we do not have Oracle's Linux Automation Manager. If you need help with this specific version of Oracles Linux Automation Manager you will need to contact your Oracle for support.
We'd be happy to help if you can reproduce this with AWX since we do not have Oracle's Linux Automation Manager. If you need help with this specific version of Oracles Linux Automation Manager you will need to contact your Oracle for support.
### Community Resolved
Hi,

.github/workflows/api_schema_check.yml vendored Normal file

@@ -0,0 +1,66 @@
---
name: API Schema Change Detection
env:
LC_ALL: "C.UTF-8" # prevent ERROR: Ansible could not initialize the preferred locale: unsupported locale setting
CI_GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
DEV_DOCKER_OWNER: ${{ github.repository_owner }}
COMPOSE_TAG: ${{ github.base_ref || 'devel' }}
UPSTREAM_REPOSITORY_ID: 91594105
on:
pull_request:
branches:
- devel
- release_**
- feature_**
- stable-**
jobs:
api-schema-detection:
name: Detect API Schema Changes
runs-on: ubuntu-latest
timeout-minutes: 30
permissions:
packages: write
contents: read
steps:
- uses: actions/checkout@v4
with:
show-progress: false
fetch-depth: 0
- name: Build awx_devel image for schema check
uses: ./.github/actions/awx_devel_image
with:
github-token: ${{ secrets.GITHUB_TOKEN }}
private-github-key: ${{ secrets.PRIVATE_GITHUB_KEY }}
- name: Detect API schema changes
id: schema-check
continue-on-error: true
run: |
AWX_DOCKER_ARGS='-e GITHUB_ACTIONS' \
AWX_DOCKER_CMD='make detect-schema-change SCHEMA_DIFF_BASE_BRANCH=${{ github.event.pull_request.base.ref }}' \
make docker-runner 2>&1 | tee schema-diff.txt
exit ${PIPESTATUS[0]}
- name: Add schema diff to job summary
if: always()
# Show the diff text; if for some reason it can't be generated, state that it can't be.
run: |
echo "## API Schema Change Detection Results" >> $GITHUB_STEP_SUMMARY
echo "" >> $GITHUB_STEP_SUMMARY
if [ -f schema-diff.txt ]; then
if grep -q "^+" schema-diff.txt || grep -q "^-" schema-diff.txt; then
echo "### Schema changes detected" >> $GITHUB_STEP_SUMMARY
echo "" >> $GITHUB_STEP_SUMMARY
echo '```diff' >> $GITHUB_STEP_SUMMARY
cat schema-diff.txt >> $GITHUB_STEP_SUMMARY
echo '```' >> $GITHUB_STEP_SUMMARY
else
echo "### No schema changes detected" >> $GITHUB_STEP_SUMMARY
fi
else
echo "### Unable to generate schema diff" >> $GITHUB_STEP_SUMMARY
fi


@@ -5,8 +5,12 @@ env:
CI_GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
DEV_DOCKER_OWNER: ${{ github.repository_owner }}
COMPOSE_TAG: ${{ github.base_ref || 'devel' }}
UPSTREAM_REPOSITORY_ID: 91594105
on:
pull_request:
push:
branches:
- devel # needed to publish code coverage post-merge
jobs:
common-tests:
name: ${{ matrix.tests.name }}
@@ -20,17 +24,20 @@ jobs:
matrix:
tests:
- name: api-test
command: /start_tests.sh
command: /start_tests.sh test_coverage
coverage-upload-name: ""
- name: api-migrations
command: /start_tests.sh test_migrations
coverage-upload-name: ""
- name: api-lint
command: /var/lib/awx/venv/awx/bin/tox -e linters
coverage-upload-name: ""
- name: api-swagger
command: /start_tests.sh swagger
coverage-upload-name: ""
- name: awx-collection
command: /start_tests.sh test_collection_all
- name: api-schema
command: /start_tests.sh detect-schema-change SCHEMA_DIFF_BASE_BRANCH=${{ github.event.pull_request.base.ref }}
coverage-upload-name: "awx-collection"
steps:
- uses: actions/checkout@v4
@@ -41,9 +48,73 @@ jobs:
uses: ./.github/actions/awx_devel_image
with:
github-token: ${{ secrets.GITHUB_TOKEN }}
private-github-key: ${{ secrets.PRIVATE_GITHUB_KEY }}
- name: Run check ${{ matrix.tests.name }}
run: AWX_DOCKER_CMD='${{ matrix.tests.command }}' make docker-runner
id: make-run
run: >-
AWX_DOCKER_ARGS='-e GITHUB_ACTIONS -e GITHUB_OUTPUT -v "${GITHUB_OUTPUT}:${GITHUB_OUTPUT}:rw,Z"'
AWX_DOCKER_CMD='${{ matrix.tests.command }}'
make docker-runner
- name: Upload test coverage to Codecov
if: >-
!cancelled()
&& steps.make-run.outputs.cov-report-files != ''
uses: codecov/codecov-action@v4
with:
fail_ci_if_error: >-
${{
toJSON(env.UPSTREAM_REPOSITORY_ID == github.repository_id)
}}
files: >-
${{ steps.make-run.outputs.cov-report-files }}
flags: >-
CI-GHA,
pytest,
OS-${{
runner.os
}}
token: ${{ secrets.CODECOV_TOKEN }}
- name: Upload test results to Codecov
if: >-
!cancelled()
&& steps.make-run.outputs.test-result-files != ''
uses: codecov/test-results-action@v1
with:
fail_ci_if_error: >-
${{
toJSON(env.UPSTREAM_REPOSITORY_ID == github.repository_id)
}}
files: >-
${{ steps.make-run.outputs.test-result-files }}
flags: >-
CI-GHA,
pytest,
OS-${{
runner.os
}}
token: ${{ secrets.CODECOV_TOKEN }}
- name: Upload awx jUnit test reports
if: >-
!cancelled()
&& steps.make-run.outputs.test-result-files != ''
&& github.event_name == 'push'
&& env.UPSTREAM_REPOSITORY_ID == github.repository_id
&& github.ref_name == github.event.repository.default_branch
run: |
for junit_file in $(echo '${{ steps.make-run.outputs.test-result-files }}' | sed 's/,/ /')
do
curl \
-v \
--user "${{ vars.PDE_ORG_RESULTS_AGGREGATOR_UPLOAD_USER }}:${{ secrets.PDE_ORG_RESULTS_UPLOAD_PASSWORD }}" \
--form "xunit_xml=@${junit_file}" \
--form "component_name=${{ matrix.tests.coverage-upload-name || 'awx' }}" \
--form "git_commit_sha=${{ github.sha }}" \
--form "git_repository_url=https://github.com/${{ github.repository }}" \
"${{ vars.PDE_ORG_RESULTS_AGGREGATOR_UPLOAD_URL }}/api/results/upload/"
done
dev-env:
runs-on: ubuntu-latest
@@ -53,14 +124,24 @@ jobs:
with:
show-progress: false
- uses: ./.github/actions/setup-python
with:
python-version: '3.x'
- uses: ./.github/actions/run_awx_devel
id: awx
with:
build-ui: false
github-token: ${{ secrets.GITHUB_TOKEN }}
private-github-key: ${{ secrets.PRIVATE_GITHUB_KEY }}
- name: Run smoke test
run: ansible-playbook tools/docker-compose/ansible/smoke-test.yml -v
- name: Run live dev env tests
run: docker exec tools_awx_1 /bin/bash -c "make live_test"
- uses: ./.github/actions/upload_awx_devel_logs
if: always()
with:
log-filename: live-tests.log
awx-operator:
runs-on: ubuntu-latest
@@ -74,6 +155,10 @@ jobs:
show-progress: false
path: awx
- uses: ./awx/.github/actions/setup-ssh-agent
with:
ssh-private-key: ${{ secrets.PRIVATE_GITHUB_KEY }}
- name: Checkout awx-operator
uses: actions/checkout@v4
with:
@@ -81,14 +166,10 @@ jobs:
repository: ansible/awx-operator
path: awx-operator
- name: Get python version from Makefile
working-directory: awx
run: echo py_version=`make PYTHON_VERSION` >> $GITHUB_ENV
- name: Install python ${{ env.py_version }}
uses: actions/setup-python@v4
- name: Setup python, referencing action at awx relative path
uses: ./awx/.github/actions/setup-python
with:
python-version: ${{ env.py_version }}
python-version: '3.x'
- name: Install playbook dependencies
run: |
@@ -107,6 +188,8 @@ jobs:
working-directory: awx-operator
run: |
python3 -m pip install -r molecule/requirements.txt
python3 -m pip install PyYAML # for awx/tools/scripts/rewrite-awx-operator-requirements.py
$(realpath ../awx/tools/scripts/rewrite-awx-operator-requirements.py) molecule/requirements.yml $(realpath ../awx)
ansible-galaxy collection install -r molecule/requirements.yml
sudo rm -f $(which kustomize)
make kustomize
@@ -119,7 +202,7 @@ jobs:
- name: Upload debug output
if: failure()
uses: actions/upload-artifact@v3
uses: actions/upload-artifact@v4
with:
name: awx-operator-debug-output
path: ${{ env.DEBUG_OUTPUT_DIR }}
@@ -130,17 +213,46 @@ jobs:
timeout-minutes: 30
strategy:
fail-fast: false
matrix:
ansible:
- stable-2.17
# - devel
steps:
- uses: actions/checkout@v4
- name: Perform sanity testing
uses: ansible-community/ansible-test-gh-action@release/v1
with:
show-progress: false
ansible-core-version: ${{ matrix.ansible }}
codecov-token: ${{ secrets.CODECOV_TOKEN }}
collection-root: awx_collection
pre-test-cmd: >-
ansible-playbook
-i localhost,
tools/template_galaxy.yml
-e collection_package=awx
-e collection_namespace=awx
-e collection_version=1.0.0
-e '{"awx_template_version": false}'
testing-type: sanity
# The containers that GitHub Actions use have Ansible installed, so upgrade to make sure we have the latest version.
- name: Upgrade ansible-core
run: python3 -m pip install --upgrade ansible-core
- name: Run sanity tests
run: make test_collection_sanity
- name: Upload awx jUnit test reports to the unified dashboard
if: >-
!cancelled()
&& steps.make-run.outputs.test-result-files != ''
&& github.event_name == 'push'
&& env.UPSTREAM_REPOSITORY_ID == github.repository_id
&& github.ref_name == github.event.repository.default_branch
run: |
for junit_file in $(echo '${{ steps.make-run.outputs.test-result-files }}' | sed 's/,/ /')
do
curl \
-v \
--user "${{ vars.PDE_ORG_RESULTS_AGGREGATOR_UPLOAD_USER }}:${{ secrets.PDE_ORG_RESULTS_UPLOAD_PASSWORD }}" \
--form "xunit_xml=@${junit_file}" \
--form "component_name=awx" \
--form "git_commit_sha=${{ github.sha }}" \
--form "git_repository_url=https://github.com/${{ github.repository }}" \
"${{ vars.PDE_ORG_RESULTS_AGGREGATOR_UPLOAD_URL }}/api/results/upload/"
done
collection-integration:
name: awx_collection integration
@@ -161,11 +273,16 @@ jobs:
with:
show-progress: false
- uses: ./.github/actions/setup-python
with:
python-version: '3.x'
- uses: ./.github/actions/run_awx_devel
id: awx
with:
build-ui: false
github-token: ${{ secrets.GITHUB_TOKEN }}
private-github-key: ${{ secrets.PRIVATE_GITHUB_KEY }}
- name: Install dependencies for running tests
run: |
@@ -173,23 +290,47 @@ jobs:
python3 -m pip install -r awx_collection/requirements.txt
- name: Run integration tests
id: make-run
run: |
echo "::remove-matcher owner=python::" # Disable annoying annotations from setup-python
echo '[general]' > ~/.tower_cli.cfg
echo 'host = https://${{ steps.awx.outputs.ip }}:8043' >> ~/.tower_cli.cfg
echo 'oauth_token = ${{ steps.awx.outputs.admin-token }}' >> ~/.tower_cli.cfg
echo 'username = admin' >> ~/.tower_cli.cfg
echo 'password = password' >> ~/.tower_cli.cfg
echo 'verify_ssl = false' >> ~/.tower_cli.cfg
TARGETS="$(ls awx_collection/tests/integration/targets | grep '${{ matrix.target-regex.regex }}' | tr '\n' ' ')"
make COLLECTION_VERSION=100.100.100-git COLLECTION_TEST_TARGET="--coverage --requirements $TARGETS" test_collection_integration
make COLLECTION_VERSION=100.100.100-git COLLECTION_TEST_TARGET="--requirements $TARGETS" test_collection_integration
env:
ANSIBLE_TEST_PREFER_PODMAN: 1
- name: Upload test coverage to Codecov
if: >-
!cancelled()
&& steps.make-run.outputs.cov-report-files != ''
uses: codecov/codecov-action@v4
with:
fail_ci_if_error: >-
${{
toJSON(env.UPSTREAM_REPOSITORY_ID == github.repository_id)
}}
files: >-
${{ steps.make-run.outputs.cov-report-files }}
flags: >-
CI-GHA,
ansible-test,
integration,
OS-${{
runner.os
}}
token: ${{ secrets.CODECOV_TOKEN }}
# Upload coverage report as artifact
- uses: actions/upload-artifact@v3
- uses: actions/upload-artifact@v4
if: always()
with:
name: coverage-${{ matrix.target-regex.name }}
path: ~/.ansible/collections/ansible_collections/awx/awx/tests/output/coverage/
retention-days: 1
- uses: ./.github/actions/upload_awx_devel_logs
if: always()
@@ -207,24 +348,28 @@ jobs:
steps:
- uses: actions/checkout@v4
with:
persist-credentials: false
show-progress: false
- uses: ./.github/actions/setup-python
with:
python-version: '3.x'
- name: Upgrade ansible-core
run: python3 -m pip install --upgrade ansible-core
- name: Download coverage artifacts
uses: actions/download-artifact@v3
uses: actions/download-artifact@v4
with:
merge-multiple: true
path: coverage
pattern: coverage-*
- name: Combine coverage
run: |
make COLLECTION_VERSION=100.100.100-git install_collection
mkdir -p ~/.ansible/collections/ansible_collections/awx/awx/tests/output/coverage
cd coverage
for i in coverage-*; do
cp -rv $i/* ~/.ansible/collections/ansible_collections/awx/awx/tests/output/coverage/
done
cp -rv coverage/* ~/.ansible/collections/ansible_collections/awx/awx/tests/output/coverage/
cd ~/.ansible/collections/ansible_collections/awx/awx
ansible-test coverage combine --requirements
ansible-test coverage html
@@ -236,48 +381,8 @@ jobs:
echo '## AWX Collection Integration Coverage HTML' >> $GITHUB_STEP_SUMMARY
echo 'Download the HTML artifacts to view the coverage report.' >> $GITHUB_STEP_SUMMARY
# This is a huge hack, there's no official action for removing artifacts currently.
# Also ACTIONS_RUNTIME_URL and ACTIONS_RUNTIME_TOKEN aren't available in normal run
# steps, so we have to use github-script to get them.
#
# The advantage of doing this, though, is that we save on artifact storage space.
- name: Get secret artifact runtime URL
uses: actions/github-script@v6
id: get-runtime-url
with:
result-encoding: string
script: |
const { ACTIONS_RUNTIME_URL } = process.env;
return ACTIONS_RUNTIME_URL;
- name: Get secret artifact runtime token
uses: actions/github-script@v6
id: get-runtime-token
with:
result-encoding: string
script: |
const { ACTIONS_RUNTIME_TOKEN } = process.env;
return ACTIONS_RUNTIME_TOKEN;
- name: Remove intermediary artifacts
env:
ACTIONS_RUNTIME_URL: ${{ steps.get-runtime-url.outputs.result }}
ACTIONS_RUNTIME_TOKEN: ${{ steps.get-runtime-token.outputs.result }}
run: |
echo "::add-mask::${ACTIONS_RUNTIME_TOKEN}"
artifacts=$(
curl -H "Authorization: Bearer $ACTIONS_RUNTIME_TOKEN" \
${ACTIONS_RUNTIME_URL}_apis/pipelines/workflows/${{ github.run_id }}/artifacts?api-version=6.0-preview \
| jq -r '.value | .[] | select(.name | startswith("coverage-")) | .url'
)
for artifact in $artifacts; do
curl -i -X DELETE -H "Accept: application/json;api-version=6.0-preview" -H "Authorization: Bearer $ACTIONS_RUNTIME_TOKEN" "$artifact"
done
- name: Upload coverage report as artifact
uses: actions/upload-artifact@v3
uses: actions/upload-artifact@v4
with:
name: awx-collection-integration-coverage-html
path: ~/.ansible/collections/ansible_collections/awx/awx/tests/output/reports/coverage


@@ -10,6 +10,7 @@ on:
- devel
- release_*
- feature_*
- stable-*
jobs:
push-development-images:
runs-on: ubuntu-latest
@@ -49,14 +50,10 @@ jobs:
run: |
echo "DEV_DOCKER_TAG_BASE=ghcr.io/${OWNER,,}" >> $GITHUB_ENV
echo "COMPOSE_TAG=${GITHUB_REF##*/}" >> $GITHUB_ENV
echo py_version=`make PYTHON_VERSION` >> $GITHUB_ENV
env:
OWNER: '${{ github.repository_owner }}'
- name: Install python ${{ env.py_version }}
uses: actions/setup-python@v4
with:
python-version: ${{ env.py_version }}
- uses: ./.github/actions/setup-python
- name: Log in to registry
run: |
@@ -73,6 +70,10 @@ jobs:
make ui
if: matrix.build-targets.image-name == 'awx'
- uses: ./.github/actions/setup-ssh-agent
with:
ssh-private-key: ${{ secrets.PRIVATE_GITHUB_KEY }}
- name: Build and push AWX devel images
run: |
make ${{ matrix.build-targets.make-target }}


@@ -12,6 +12,10 @@ jobs:
with:
show-progress: false
- uses: ./.github/actions/setup-python
with:
python-version: '3.x'
- name: install tox
run: pip install tox


@@ -34,9 +34,11 @@ jobs:
with:
show-progress: false
- uses: actions/setup-python@v4
- uses: ./.github/actions/setup-python
- name: Install python requests
run: pip install requests
- name: Check if user is a member of Ansible org
uses: jannekem/run-python-script-action@v1
id: check_user


@@ -33,7 +33,10 @@ jobs:
with:
show-progress: false
- uses: actions/setup-python@v4
- uses: ./.github/actions/setup-python
with:
python-version: '3.x'
- name: Install python requests
run: pip install requests
- name: Check if user is a member of Ansible org


@@ -36,13 +36,7 @@ jobs:
with:
show-progress: false
- name: Get python version from Makefile
run: echo py_version=`make PYTHON_VERSION` >> $GITHUB_ENV
- name: Install python ${{ env.py_version }}
uses: actions/setup-python@v4
with:
python-version: ${{ env.py_version }}
- uses: ./.github/actions/setup-python
- name: Install dependencies
run: |

.github/workflows/sonarcloud_pr.yml vendored Normal file

@@ -0,0 +1,85 @@
---
name: SonarQube
on:
workflow_run:
workflows:
- CI
types:
- completed
permissions: read-all
jobs:
sonarqube:
runs-on: ubuntu-latest
if: github.event.workflow_run.conclusion == 'success' && github.event.workflow_run.event == 'pull_request'
steps:
- name: Checkout Code
uses: actions/checkout@v4
with:
fetch-depth: 0
show-progress: false
- name: Download coverage report artifact
uses: actions/download-artifact@v4
with:
name: coverage.xml
path: reports/
github-token: ${{ secrets.GITHUB_TOKEN }}
run-id: ${{ github.event.workflow_run.id }}
- name: Download PR number artifact
uses: actions/download-artifact@v4
with:
name: pr-number
path: .
github-token: ${{ secrets.GITHUB_TOKEN }}
run-id: ${{ github.event.workflow_run.id }}
- name: Extract PR number
run: |
cat pr-number.txt
echo "PR_NUMBER=$(cat pr-number.txt)" >> $GITHUB_ENV
- name: Get PR info
uses: octokit/request-action@v2.x
id: pr_info
with:
route: GET /repos/{repo}/pulls/{number}
repo: ${{ github.event.repository.full_name }}
number: ${{ env.PR_NUMBER }}
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
- name: Set PR info into env
run: |
echo "PR_BASE=${{ fromJson(steps.pr_info.outputs.data).base.ref }}" >> $GITHUB_ENV
echo "PR_HEAD=${{ fromJson(steps.pr_info.outputs.data).head.ref }}" >> $GITHUB_ENV
- name: Add base branch
run: |
gh pr checkout ${{ env.PR_NUMBER }}
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
- name: Extract and export repo owner/name
run: |
REPO_SLUG="${GITHUB_REPOSITORY}"
IFS="/" read -r REPO_OWNER REPO_NAME <<< "$REPO_SLUG"
echo "REPO_OWNER=$REPO_OWNER" >> $GITHUB_ENV
echo "REPO_NAME=$REPO_NAME" >> $GITHUB_ENV
- name: SonarQube scan
uses: SonarSource/sonarqube-scan-action@v5
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
SONAR_TOKEN: ${{ secrets[format('{0}', vars.SONAR_TOKEN_SECRET_NAME)] }}
with:
args: >
-Dsonar.organization=${{ env.REPO_OWNER }}
-Dsonar.projectKey=${{ env.REPO_OWNER }}_${{ env.REPO_NAME }}
-Dsonar.pullrequest.key=${{ env.PR_NUMBER }}
-Dsonar.pullrequest.branch=${{ env.PR_HEAD }}
-Dsonar.pullrequest.base=${{ env.PR_BASE }}
-Dsonar.scm.revision=${{ github.event.workflow_run.head_sha }}


@@ -64,14 +64,9 @@ jobs:
repository: ansible/awx-logos
path: awx-logos
- name: Get python version from Makefile
working-directory: awx
run: echo py_version=`make PYTHON_VERSION` >> $GITHUB_ENV
- name: Install python ${{ env.py_version }}
uses: actions/setup-python@v4
- uses: ./awx/.github/actions/setup-python
with:
python-version: ${{ env.py_version }}
working-directory: awx
- name: Install playbook dependencies
run: |
@@ -90,9 +85,11 @@ jobs:
cp ../awx-logos/awx/ui/client/assets/* awx/ui/public/static/media/
- name: Setup node and npm for new UI build
uses: actions/setup-node@v2
uses: actions/setup-node@v4
with:
node-version: '18'
cache: 'npm'
cache-dependency-path: awx/awx/ui/**/package-lock.json
- name: Prebuild new UI for awx image (to speed up build process)
working-directory: awx


@@ -5,11 +5,13 @@ env:
LC_ALL: "C.UTF-8" # prevent ERROR: Ansible could not initialize the preferred locale: unsupported locale setting
on:
workflow_dispatch:
push:
branches:
- devel
- release_**
- feature_**
- stable-**
jobs:
push:
runs-on: ubuntu-latest
@@ -22,39 +24,26 @@ jobs:
with:
show-progress: false
- name: Get python version from Makefile
run: echo py_version=`make PYTHON_VERSION` >> $GITHUB_ENV
- name: Install python ${{ env.py_version }}
uses: actions/setup-python@v4
- name: Build awx_devel image to use for schema gen
uses: ./.github/actions/awx_devel_image
with:
python-version: ${{ env.py_version }}
- name: Log in to registry
run: |
echo "${{ secrets.GITHUB_TOKEN }}" | docker login ghcr.io -u ${{ github.actor }} --password-stdin
- name: Pre-pull image to warm build cache
run: |
docker pull -q ghcr.io/${{ github.repository_owner }}/awx_devel:${GITHUB_REF##*/} || :
- name: Build image
run: |
DEV_DOCKER_TAG_BASE=ghcr.io/${{ github.repository_owner }} COMPOSE_TAG=${GITHUB_REF##*/} make docker-compose-build
github-token: ${{ secrets.GITHUB_TOKEN }}
private-github-key: ${{ secrets.PRIVATE_GITHUB_KEY }}
- name: Generate API Schema
run: |
DEV_DOCKER_TAG_BASE=ghcr.io/${OWNER_LC} \
COMPOSE_TAG=${{ github.base_ref || github.ref_name }} \
docker run -u $(id -u) --rm -v ${{ github.workspace }}:/awx_devel/:Z \
--workdir=/awx_devel ghcr.io/${{ github.repository_owner }}/awx_devel:${GITHUB_REF##*/} /start_tests.sh genschema
--workdir=/awx_devel `make print-DEVEL_IMAGE_NAME` /start_tests.sh genschema
- name: Upload API Schema
env:
AWS_ACCESS_KEY: ${{ secrets.AWS_ACCESS_KEY }}
AWS_SECRET_KEY: ${{ secrets.AWS_SECRET_KEY }}
AWS_REGION: 'us-east-1'
run: |
ansible localhost -c local, -m command -a "{{ ansible_python_interpreter + ' -m pip install boto3'}}"
ansible localhost -c local -m aws_s3 \
-a "src=${{ github.workspace }}/schema.json bucket=awx-public-ci-files object=${GITHUB_REF##*/}/schema.json mode=put permission=public-read"
uses: keithweaver/aws-s3-github-action@4dd5a7b81d54abaa23bbac92b27e85d7f405ae53
with:
command: cp
source: ${{ github.workspace }}/schema.json
destination: s3://awx-public-ci-files/${{ github.ref_name }}/schema.json
aws_access_key_id: ${{ secrets.AWS_ACCESS_KEY }}
aws_secret_access_key: ${{ secrets.AWS_SECRET_KEY }}
aws_region: us-east-1
flags: --acl public-read --only-show-errors

.gitignore vendored

@@ -31,7 +31,6 @@ tools/docker-compose/_build
tools/docker-compose/_sources
tools/docker-compose/overrides/
tools/docker-compose-minikube/_sources
tools/docker-compose/keycloak.awx.realm.json
!tools/docker-compose/editable_dependencies
tools/docker-compose/editable_dependencies/*
@@ -151,6 +150,8 @@ use_dev_supervisor.txt
awx/ui/src
awx/ui/build
awx/ui/.ui-built
awx/ui_next
# Docs build stuff
docs/docsite/build/


@@ -2,7 +2,7 @@
Hi there! We're excited to have you as a contributor.
Have questions about this document or anything not covered here? Come chat with us at `#ansible-awx` on irc.libera.chat, or submit your question to the [mailing list](https://groups.google.com/forum/#!forum/awx-project).
Have questions about this document or anything not covered here? Create a topic using the [AWX tag on the Ansible Forum](https://forum.ansible.com/tag/awx).
## Table of contents
@@ -30,7 +30,7 @@ Have questions about this document or anything not covered here? Come chat with
- You must use `git commit --signoff` for any commit to be merged, and agree that usage of --signoff constitutes agreement with the terms of [DCO 1.1](./DCO_1_1.md).
- Take care to make sure no merge commits are in the submission, and use `git rebase` vs `git merge` for this reason.
- If collaborating with someone else on the same branch, consider using `--force-with-lease` instead of `--force`. This will prevent you from accidentally overwriting commits pushed by someone else. For more information, see [git push docs](https://git-scm.com/docs/git-push#git-push---force-with-leaseltrefnamegt).
- If submitting a large code change, it's a good idea to join the `#ansible-awx` channel on irc.libera.chat, and talk about what you would like to do or add first. This not only helps everyone know what's going on, it also helps save time and effort, if the community decides some changes are needed.
- If submitting a large code change, it's a good idea to create a [forum topic tagged with 'awx'](https://forum.ansible.com/tag/awx), and talk about what you would like to do or add first. This not only helps everyone know what's going on, it also helps save time and effort, if the community decides some changes are needed.
- We ask all of our community members and contributors to adhere to the [Ansible code of conduct](http://docs.ansible.com/ansible/latest/community/code_of_conduct.html). If you have questions, or need assistance, please reach out to our community team at [codeofconduct@ansible.com](mailto:codeofconduct@ansible.com)
## Setting up your development environment
@@ -124,7 +124,7 @@ If it has someone assigned to it then that person is the person responsible for
> Issue assignment will only be done for maintainers of the project. If you decide to work on an issue, please feel free to add a comment in the issue to let others know that you are working on it; but know that we will accept the first pull request from whomever is able to fix an issue. Once your PR is accepted we can add you as an assignee to an issue upon request.
> If you work in a part of the codebase that is going through active development, your changes may be rejected, or you may be asked to `rebase`. A good idea before starting work is to have a discussion with us in the `#ansible-awx` channel on irc.libera.chat, or on the [mailing list](https://groups.google.com/forum/#!forum/awx-project).
> If you work in a part of the codebase that is going through active development, your changes may be rejected, or you may be asked to `rebase`. A good idea before starting work is to have a discussion with us in the [Ansible Forum](https://forum.ansible.com/tag/awx).
> If you're planning to develop features or fixes for the UI, please review the [UI Developer doc](https://github.com/ansible/ansible-ui/blob/main/CONTRIBUTING.md).
@@ -149,7 +149,7 @@ Here are a few things you can do to help the visibility of your change, and incr
- Make the smallest change possible
- Write good commit messages. See [How to write a Git commit message](https://chris.beams.io/posts/git-commit/).
It's generally a good idea to discuss features with us first by engaging us in the `#ansible-awx` channel on irc.libera.chat, or on the [mailing list](https://groups.google.com/forum/#!forum/awx-project).
It's generally a good idea to discuss features with us first by engaging on the [Ansible Forum](https://forum.ansible.com/tag/awx).
We like to keep our commit history clean, and will require resubmission of pull requests that contain merge commits. Use `git pull --rebase`, rather than
`git pull`, and `git rebase`, rather than `git merge`.
@@ -164,6 +164,6 @@ We welcome your feedback, and encourage you to file an issue when you run into a
## Getting Help
If you require additional assistance, please reach out to us at `#ansible-awx` on irc.libera.chat, or submit your question to the [mailing list](https://groups.google.com/forum/#!forum/awx-project).
If you require additional assistance, please submit your question to the [Ansible Forum](https://forum.ansible.com/tag/awx).
For extra information on debugging tools, see [Debugging](./docs/debugging/).


@@ -1,11 +1,11 @@
# Issues
## Reporting
## Reporting
Use the GitHub [issue tracker](https://github.com/ansible/awx/issues) for filing bugs. In order to save time, and help us respond to issues quickly, make sure to fill out as much of the issue template
as possible. Version information, and an accurate reproducing scenario are critical to helping us identify the problem.
Please don't use the issue tracker as a way to ask how to do something. Instead, use the [mailing list](https://groups.google.com/forum/#!forum/awx-project) , and the `#ansible-awx` channel on irc.libera.chat to get help.
Please don't use the issue tracker as a way to ask how to do something. Instead, use the [Ansible Forum](https://forum.ansible.com/tag/awx).
Before opening a new issue, please use the issue search feature to see if what you're experiencing has already been reported. If you have any extra detail to provide, please comment. Otherwise, rather than posting a "me too" comment, please consider giving it a ["thumbs up"](https://github.com/blog/2119-add-reactions-to-pull-requests-issues-and-comment) to give us an indication of the severity of the problem.
@@ -14,7 +14,7 @@ Before opening a new issue, please use the issue search feature to see if what y
When reporting issues for the UI, we also appreciate having screen shots and any error messages from the web browser's console. It's not unusual for browser extensions
and plugins to cause problems. Reporting those will also help speed up analyzing and resolving UI bugs.
### API and backend issues
### API and backend issues
For the API and backend services, please capture all of the logs that you can from the time the problem occurred.

Makefile

@@ -8,6 +8,7 @@ NODE ?= node
NPM_BIN ?= npm
KIND_BIN ?= $(shell which kind)
CHROMIUM_BIN=/tmp/chrome-linux/chrome
GIT_REPO_NAME ?= $(shell basename `git rev-parse --show-toplevel`)
GIT_BRANCH ?= $(shell git rev-parse --abbrev-ref HEAD)
MANAGEMENT_COMMAND ?= awx-manage
VERSION ?= $(shell $(PYTHON) tools/scripts/scm_version.py 2> /dev/null)
@@ -18,12 +19,18 @@ COLLECTION_VERSION ?= $(shell $(PYTHON) tools/scripts/scm_version.py | cut -d .
COLLECTION_SANITY_ARGS ?= --docker
# collection unit testing directories
COLLECTION_TEST_DIRS ?= awx_collection/test/awx
# pytest added args to collect coverage
COVERAGE_ARGS ?= --cov --cov-report=xml --junitxml=reports/junit.xml
# pytest test directories
TEST_DIRS ?= awx/main/tests/unit awx/main/tests/functional awx/conf/tests
# pytest args to run tests in parallel
PARALLEL_TESTS ?= -n auto
# collection integration test directories (defaults to all)
COLLECTION_TEST_TARGET ?=
# args for collection install
COLLECTION_PACKAGE ?= awx
COLLECTION_NAMESPACE ?= awx
COLLECTION_INSTALL = ~/.ansible/collections/ansible_collections/$(COLLECTION_NAMESPACE)/$(COLLECTION_PACKAGE)
COLLECTION_INSTALL = $(HOME)/.ansible/collections/ansible_collections/$(COLLECTION_NAMESPACE)/$(COLLECTION_PACKAGE)
COLLECTION_TEMPLATE_VERSION ?= false
# NOTE: This defaults the container image version to the branch that's active
@@ -31,10 +38,6 @@ COMPOSE_TAG ?= $(GIT_BRANCH)
MAIN_NODE_TYPE ?= hybrid
# If set to true docker-compose will also start a pgbouncer instance and use it
PGBOUNCER ?= false
# If set to true docker-compose will also start a keycloak instance
KEYCLOAK ?= false
# If set to true docker-compose will also start an ldap instance
LDAP ?= false
# If set to true docker-compose will also start a splunk instance
SPLUNK ?= false
# If set to true docker-compose will also start a prometheus instance
@@ -45,8 +48,6 @@ GRAFANA ?= false
VAULT ?= false
# If set to true docker-compose will also start a hashicorp vault instance with TLS enabled
VAULT_TLS ?= false
# If set to true docker-compose will also start a tacacs+ instance
TACACS ?= false
# If set to true docker-compose will also start an OpenTelemetry Collector instance
OTEL ?= false
# If set to true docker-compose will also start a Loki instance
@@ -62,9 +63,9 @@ DEV_DOCKER_OWNER ?= ansible
# Docker will only accept lowercase, so github names like Paul need to be paul
DEV_DOCKER_OWNER_LOWER = $(shell echo $(DEV_DOCKER_OWNER) | tr A-Z a-z)
DEV_DOCKER_TAG_BASE ?= ghcr.io/$(DEV_DOCKER_OWNER_LOWER)
DEVEL_IMAGE_NAME ?= $(DEV_DOCKER_TAG_BASE)/awx_devel:$(COMPOSE_TAG)
IMAGE_KUBE_DEV=$(DEV_DOCKER_TAG_BASE)/awx_kube_devel:$(COMPOSE_TAG)
IMAGE_KUBE=$(DEV_DOCKER_TAG_BASE)/awx:$(COMPOSE_TAG)
DEVEL_IMAGE_NAME ?= $(DEV_DOCKER_TAG_BASE)/$(GIT_REPO_NAME)_devel:$(COMPOSE_TAG)
IMAGE_KUBE_DEV=$(DEV_DOCKER_TAG_BASE)/$(GIT_REPO_NAME)_kube_devel:$(COMPOSE_TAG)
IMAGE_KUBE=$(DEV_DOCKER_TAG_BASE)/$(GIT_REPO_NAME):$(COMPOSE_TAG)
# Common command to use for running ansible-playbook
ANSIBLE_PLAYBOOK ?= ansible-playbook -e ansible_python_interpreter=$(PYTHON)
@@ -76,7 +77,7 @@ RECEPTOR_IMAGE ?= quay.io/ansible/receptor:devel
SRC_ONLY_PKGS ?= cffi,pycparser,psycopg,twilio
# These should be upgraded in the AWX and Ansible venv before attempting
# to install the actual requirements
VENV_BOOTSTRAP ?= pip==21.2.4 setuptools==69.0.2 setuptools_scm[toml]==8.0.4 wheel==0.42.0 cython==0.29.37
VENV_BOOTSTRAP ?= pip==21.2.4 setuptools==80.9.0 setuptools_scm[toml]==8.0.4 wheel==0.42.0 cython==3.1.3
NAME ?= awx
@@ -228,12 +229,6 @@ migrate:
dbchange:
$(MANAGEMENT_COMMAND) makemigrations
supervisor:
@if [ "$(VENV_BASE)" ]; then \
. $(VENV_BASE)/awx/bin/activate; \
fi; \
supervisord --pidfile=/tmp/supervisor_pid -n
collectstatic:
@if [ "$(VENV_BASE)" ]; then \
. $(VENV_BASE)/awx/bin/activate; \
@@ -320,14 +315,19 @@ black: reports
@chmod +x .git/hooks/pre-commit
genschema: reports
$(MAKE) swagger PYTEST_ARGS="--genschema --create-db "
$(MAKE) swagger PYTEST_ADDOPTS="--genschema --create-db "
mv swagger.json schema.json
swagger: reports
@if [ "$(VENV_BASE)" ]; then \
. $(VENV_BASE)/awx/bin/activate; \
fi; \
(set -o pipefail && py.test $(PYTEST_ARGS) awx/conf/tests/functional awx/main/tests/functional/api awx/main/tests/docs | tee reports/$@.report)
(set -o pipefail && py.test $(COVERAGE_ARGS) $(PARALLEL_TESTS) awx/conf/tests/functional awx/main/tests/functional/api awx/main/tests/docs | tee reports/$@.report)
@if [ "${GITHUB_ACTIONS}" = "true" ]; \
then \
echo 'cov-report-files=reports/coverage.xml' >> "${GITHUB_OUTPUT}"; \
echo 'test-result-files=reports/junit.xml' >> "${GITHUB_OUTPUT}"; \
fi
check: black
@@ -340,26 +340,38 @@ api-lint:
awx-link:
[ -d "/awx_devel/awx.egg-info" ] || $(PYTHON) /awx_devel/tools/scripts/egg_info_dev
TEST_DIRS ?= awx/main/tests/unit awx/main/tests/functional awx/conf/tests awx/sso/tests
PYTEST_ARGS ?= -n auto
## Run all API unit tests.
test:
if [ "$(VENV_BASE)" ]; then \
. $(VENV_BASE)/awx/bin/activate; \
fi; \
PYTHONDONTWRITEBYTECODE=1 py.test -p no:cacheprovider $(PYTEST_ARGS) $(TEST_DIRS)
PYTHONDONTWRITEBYTECODE=1 py.test -p no:cacheprovider $(PARALLEL_TESTS) $(TEST_DIRS)
cd awxkit && $(VENV_BASE)/awx/bin/tox -re py3
awx-manage check_migrations --dry-run --check -n 'missing_migration_file'
live_test:
cd awx/main/tests/live && py.test tests/
## Run all API unit tests with coverage enabled.
test_coverage:
$(MAKE) test PYTEST_ADDOPTS="--create-db $(COVERAGE_ARGS)"
@if [ "${GITHUB_ACTIONS}" = "true" ]; \
then \
echo 'cov-report-files=awxkit/coverage.xml,reports/coverage.xml' >> "${GITHUB_OUTPUT}"; \
echo 'test-result-files=awxkit/report.xml,reports/junit.xml' >> "${GITHUB_OUTPUT}"; \
fi
test_migrations:
if [ "$(VENV_BASE)" ]; then \
. $(VENV_BASE)/awx/bin/activate; \
fi; \
PYTHONDONTWRITEBYTECODE=1 py.test -p no:cacheprovider --migrations -m migration_test $(PYTEST_ARGS) $(TEST_DIRS)
PYTHONDONTWRITEBYTECODE=1 py.test -p no:cacheprovider --migrations -m migration_test --create-db $(PARALLEL_TESTS) $(COVERAGE_ARGS) $(TEST_DIRS)
@if [ "${GITHUB_ACTIONS}" = "true" ]; \
then \
echo 'cov-report-files=reports/coverage.xml' >> "${GITHUB_OUTPUT}"; \
echo 'test-result-files=reports/junit.xml' >> "${GITHUB_OUTPUT}"; \
fi
## Runs AWX_DOCKER_CMD inside a new docker container.
docker-runner:
docker run -u $(shell id -u) --rm -v $(shell pwd):/awx_devel/:Z --workdir=/awx_devel $(DEVEL_IMAGE_NAME) $(AWX_DOCKER_CMD)
docker run -u $(shell id -u) --rm -v $(shell pwd):/awx_devel/:Z $(AWX_DOCKER_ARGS) --workdir=/awx_devel $(DEVEL_IMAGE_NAME) $(AWX_DOCKER_CMD)
test_collection:
rm -f $(shell ls -d $(VENV_BASE)/awx/lib/python* | head -n 1)/no-global-site-packages.txt
@@ -368,7 +380,12 @@ test_collection:
fi && \
if ! [ -x "$(shell command -v ansible-playbook)" ]; then pip install ansible-core; fi
ansible --version
py.test $(COLLECTION_TEST_DIRS) -v
py.test $(COLLECTION_TEST_DIRS) $(COVERAGE_ARGS) -v
@if [ "${GITHUB_ACTIONS}" = "true" ]; \
then \
echo 'cov-report-files=reports/coverage.xml' >> "${GITHUB_OUTPUT}"; \
echo 'test-result-files=reports/junit.xml' >> "${GITHUB_OUTPUT}"; \
fi
# The python path needs to be modified so that the tests can find Ansible within the container
# First we will use anything explicitly set as PYTHONPATH
# Second we will load any libraries out of the virtualenv (if it's unspecified, that should be OK because Python should not load from an empty directory)
@@ -403,23 +420,29 @@ test_collection_sanity:
if ! [ -x "$(shell command -v ansible-test)" ]; then pip install ansible-core; fi
ansible --version
COLLECTION_VERSION=1.0.0 $(MAKE) install_collection
cd $(COLLECTION_INSTALL) && ansible-test sanity $(COLLECTION_SANITY_ARGS)
cd $(COLLECTION_INSTALL) && \
ansible-test sanity $(COLLECTION_SANITY_ARGS) --coverage --junit && \
ansible-test coverage xml --requirements --group-by command --group-by version
@if [ "${GITHUB_ACTIONS}" = "true" ]; \
then \
echo cov-report-files="$$(find "$(COLLECTION_INSTALL)/tests/output/reports/" -type f -name 'coverage=sanity*.xml' -print0 | tr '\0' ',' | sed 's#,$$##')" >> "${GITHUB_OUTPUT}"; \
echo test-result-files="$$(find "$(COLLECTION_INSTALL)/tests/output/junit/" -type f -name '*.xml' -print0 | tr '\0' ',' | sed 's#,$$##')" >> "${GITHUB_OUTPUT}"; \
fi
test_collection_integration: install_collection
cd $(COLLECTION_INSTALL) && ansible-test integration -vvv $(COLLECTION_TEST_TARGET)
cd $(COLLECTION_INSTALL) && \
ansible-test integration --coverage -vvv $(COLLECTION_TEST_TARGET) && \
ansible-test coverage xml --requirements --group-by command --group-by version
@if [ "${GITHUB_ACTIONS}" = "true" ]; \
then \
echo cov-report-files="$$(find "$(COLLECTION_INSTALL)/tests/output/reports/" -type f -name 'coverage=integration*.xml' -print0 | tr '\0' ',' | sed 's#,$$##')" >> "${GITHUB_OUTPUT}"; \
fi
test_unit:
@if [ "$(VENV_BASE)" ]; then \
. $(VENV_BASE)/awx/bin/activate; \
fi; \
py.test awx/main/tests/unit awx/conf/tests/unit awx/sso/tests/unit
## Run all API unit tests with coverage enabled.
test_coverage:
@if [ "$(VENV_BASE)" ]; then \
. $(VENV_BASE)/awx/bin/activate; \
fi; \
py.test --create-db --cov=awx --cov-report=xml --junitxml=./reports/junit.xml $(TEST_DIRS)
py.test awx/main/tests/unit awx/conf/tests/unit
## Output test coverage as HTML (into htmlcov directory).
coverage_html:
@@ -473,21 +496,18 @@ docker-compose-sources: .git/hooks/pre-commit
fi;
$(ANSIBLE_PLAYBOOK) -i tools/docker-compose/inventory tools/docker-compose/ansible/sources.yml \
-e awx_image=$(DEV_DOCKER_TAG_BASE)/awx_devel \
-e awx_image=$(DEV_DOCKER_TAG_BASE)/$(GIT_REPO_NAME)_devel \
-e awx_image_tag=$(COMPOSE_TAG) \
-e receptor_image=$(RECEPTOR_IMAGE) \
-e control_plane_node_count=$(CONTROL_PLANE_NODE_COUNT) \
-e execution_node_count=$(EXECUTION_NODE_COUNT) \
-e minikube_container_group=$(MINIKUBE_CONTAINER_GROUP) \
-e enable_pgbouncer=$(PGBOUNCER) \
-e enable_keycloak=$(KEYCLOAK) \
-e enable_ldap=$(LDAP) \
-e enable_splunk=$(SPLUNK) \
-e enable_prometheus=$(PROMETHEUS) \
-e enable_grafana=$(GRAFANA) \
-e enable_vault=$(VAULT) \
-e vault_tls=$(VAULT_TLS) \
-e enable_tacacs=$(TACACS) \
-e enable_otel=$(OTEL) \
-e enable_loki=$(LOKI) \
-e install_editable_dependencies=$(EDITABLE_DEPENDENCIES) \
@@ -498,8 +518,7 @@ docker-compose: awx/projects docker-compose-sources
ansible-galaxy install --ignore-certs -r tools/docker-compose/ansible/requirements.yml;
$(ANSIBLE_PLAYBOOK) -i tools/docker-compose/inventory tools/docker-compose/ansible/initialize_containers.yml \
-e enable_vault=$(VAULT) \
-e vault_tls=$(VAULT_TLS) \
-e enable_ldap=$(LDAP); \
-e vault_tls=$(VAULT_TLS); \
$(MAKE) docker-compose-up
docker-compose-up:
@@ -547,6 +566,7 @@ Dockerfile.dev: tools/ansible/roles/dockerfile/templates/Dockerfile.j2
## Build awx_devel image for docker compose development environment
docker-compose-build: Dockerfile.dev
DOCKER_BUILDKIT=1 docker build \
--ssh default=$(SSH_AUTH_SOCK) \
-f Dockerfile.dev \
-t $(DEVEL_IMAGE_NAME) \
--build-arg BUILDKIT_INLINE_CACHE=1 \
@@ -558,6 +578,7 @@ docker-compose-buildx: Dockerfile.dev
- docker buildx create --name docker-compose-buildx
docker buildx use docker-compose-buildx
- docker buildx build \
--ssh default=$(SSH_AUTH_SOCK) \
--push \
--build-arg BUILDKIT_INLINE_CACHE=1 \
$(DOCKER_DEVEL_CACHE_FLAG) \
@@ -571,28 +592,13 @@ docker-clean:
-$(foreach image_id,$(shell docker images --filter=reference='*/*/*awx_devel*' --filter=reference='*/*awx_devel*' --filter=reference='*awx_devel*' -aq),docker rmi --force $(image_id);)
docker-clean-volumes: docker-compose-clean docker-compose-container-group-clean
docker volume rm -f tools_var_lib_awx tools_awx_db tools_awx_db_15 tools_vault_1 tools_ldap_1 tools_grafana_storage tools_prometheus_storage $(shell docker volume ls --filter name=tools_redis_socket_ -q)
docker volume rm -f tools_var_lib_awx tools_awx_db tools_awx_db_15 tools_vault_1 tools_grafana_storage tools_prometheus_storage $(shell docker volume ls --filter name=tools_redis_socket_ -q)
docker-refresh: docker-clean docker-compose
## Docker Development Environment with Elastic Stack Connected
docker-compose-elk: awx/projects docker-compose-sources
$(DOCKER_COMPOSE) -f tools/docker-compose/_sources/docker-compose.yml -f tools/elastic/docker-compose.logstash-link.yml -f tools/elastic/docker-compose.elastic-override.yml up --no-recreate
docker-compose-cluster-elk: awx/projects docker-compose-sources
$(DOCKER_COMPOSE) -f tools/docker-compose/_sources/docker-compose.yml -f tools/elastic/docker-compose.logstash-link-cluster.yml -f tools/elastic/docker-compose.elastic-override.yml up --no-recreate
docker-compose-container-group:
MINIKUBE_CONTAINER_GROUP=true $(MAKE) docker-compose
clean-elk:
docker stop tools_kibana_1
docker stop tools_logstash_1
docker stop tools_elasticsearch_1
docker rm tools_logstash_1
docker rm tools_elasticsearch_1
docker rm tools_kibana_1
VERSION:
@echo "awx: $(VERSION)"
@@ -620,6 +626,7 @@ Dockerfile: tools/ansible/roles/dockerfile/templates/Dockerfile.j2
## Build awx image for deployment on Kubernetes environment.
awx-kube-build: Dockerfile
DOCKER_BUILDKIT=1 docker build -f Dockerfile \
--ssh default=$(SSH_AUTH_SOCK) \
--build-arg VERSION=$(VERSION) \
--build-arg SETUPTOOLS_SCM_PRETEND_VERSION=$(VERSION) \
--build-arg HEADLESS=$(HEADLESS) \
@@ -631,6 +638,7 @@ awx-kube-buildx: Dockerfile
- docker buildx create --name awx-kube-buildx
docker buildx use awx-kube-buildx
- docker buildx build \
--ssh default=$(SSH_AUTH_SOCK) \
--push \
--build-arg VERSION=$(VERSION) \
--build-arg SETUPTOOLS_SCM_PRETEND_VERSION=$(VERSION) \
@@ -654,6 +662,7 @@ Dockerfile.kube-dev: tools/ansible/roles/dockerfile/templates/Dockerfile.j2
## Build awx_kube_devel image for development on local Kubernetes environment.
awx-kube-dev-build: Dockerfile.kube-dev
DOCKER_BUILDKIT=1 docker build -f Dockerfile.kube-dev \
--ssh default=$(SSH_AUTH_SOCK) \
--build-arg BUILDKIT_INLINE_CACHE=1 \
$(DOCKER_KUBE_DEV_CACHE_FLAG) \
-t $(IMAGE_KUBE_DEV) .
@@ -663,6 +672,7 @@ awx-kube-dev-buildx: Dockerfile.kube-dev
- docker buildx create --name awx-kube-dev-buildx
docker buildx use awx-kube-dev-buildx
- docker buildx build \
--ssh default=$(SSH_AUTH_SOCK) \
--push \
--build-arg BUILDKIT_INLINE_CACHE=1 \
$(DOCKER_KUBE_DEV_CACHE_FLAG) \

View File

@@ -1,8 +1,19 @@
[![CI](https://github.com/ansible/awx/actions/workflows/ci.yml/badge.svg?branch=devel)](https://github.com/ansible/awx/actions/workflows/ci.yml) [![Code of Conduct](https://img.shields.io/badge/code%20of%20conduct-Ansible-yellow.svg)](https://docs.ansible.com/ansible/latest/community/code_of_conduct.html) [![Apache v2 License](https://img.shields.io/badge/license-Apache%202.0-brightgreen.svg)](https://github.com/ansible/awx/blob/devel/LICENSE.md) [![AWX Mailing List](https://img.shields.io/badge/mailing%20list-AWX-orange.svg)](https://groups.google.com/g/awx-project)
[![CI](https://github.com/ansible/awx/actions/workflows/ci.yml/badge.svg?branch=devel)](https://github.com/ansible/awx/actions/workflows/ci.yml) [![codecov](https://codecov.io/github/ansible/awx/graph/badge.svg?token=4L4GSP9IAR)](https://codecov.io/github/ansible/awx) [![Code of Conduct](https://img.shields.io/badge/code%20of%20conduct-Ansible-yellow.svg)](https://docs.ansible.com/ansible/latest/community/code_of_conduct.html) [![Apache v2 License](https://img.shields.io/badge/license-Apache%202.0-brightgreen.svg)](https://github.com/ansible/awx/blob/devel/LICENSE.md) [![AWX on the Ansible Forum](https://img.shields.io/badge/mailing%20list-AWX-orange.svg)](https://forum.ansible.com/tag/awx)
[![Ansible Matrix](https://img.shields.io/badge/matrix-Ansible%20Community-blueviolet.svg?logo=matrix)](https://chat.ansible.im/#/welcome) [![Ansible Discourse](https://img.shields.io/badge/discourse-Ansible%20Community-yellowgreen.svg?logo=discourse)](https://forum.ansible.com)
<img src="https://raw.githubusercontent.com/ansible/awx-logos/master/awx/ui/client/assets/logo-login.svg?sanitize=true" width=200 alt="AWX" />
> [!CAUTION]
> The last release of this repository was released on Jul 2, 2024.
> **Releases of this project are now paused during a large-scale refactoring.**
> For more information, follow [the Forum](https://forum.ansible.com/) and - more specifically - see the various communications on the matter:
>
> * [Blog: Upcoming Changes to the AWX Project](https://www.ansible.com/blog/upcoming-changes-to-the-awx-project/)
> * [Streamlining AWX Releases](https://forum.ansible.com/t/streamlining-awx-releases/6894) (primary update)
> * [Refactoring AWX into a Pluggable, Service-Oriented Architecture](https://forum.ansible.com/t/refactoring-awx-into-a-pluggable-service-oriented-architecture/7404)
> * [Upcoming changes to AWX Operator installation methods](https://forum.ansible.com/t/upcoming-changes-to-awx-operator-installation-methods/7598)
> * [AWX UI and credential types transitioning to the new pluggable architecture](https://forum.ansible.com/t/awx-ui-and-credential-types-transitioning-to-the-new-pluggable-architecture/8027)
AWX provides a web-based user interface, REST API, and task engine built on top of [Ansible](https://github.com/ansible/ansible). It is one of the upstream projects for [Red Hat Ansible Automation Platform](https://www.ansible.com/products/automation-platform).
To install AWX, please view the [Install guide](./INSTALL.md).
@@ -18,9 +29,9 @@ Contributing
- Refer to the [Contributing guide](./CONTRIBUTING.md) to get started developing, testing, and building AWX.
- All code submissions are made through pull requests against the `devel` branch.
- All contributors must use git commit --signoff for any commit to be merged and agree that usage of --signoff constitutes agreement with the terms of [DCO 1.1](./DCO_1_1.md)
- All contributors must use `git commit --signoff` for any commit to be merged and agree that usage of `--signoff` constitutes agreement with the terms of [DCO 1.1](./DCO_1_1.md)
- Take care to make sure no merge commits are in the submission, and use `git rebase` vs. `git merge` for this reason.
- If submitting a large code change, it's a good idea to join the `#ansible-awx` channel on web.libera.chat and talk about what you would like to do or add first. This not only helps everyone know what's going on, but it also helps save time and effort if the community decides some changes are needed.
- If submitting a large code change, it's a good idea to join the discussion via the [Ansible Forum](https://forum.ansible.com/tag/awx). This helps everyone know what's going on, and it also helps save time and effort if the community decides some changes are needed.
Reporting Issues
----------------
@@ -30,12 +41,11 @@ If you're experiencing a problem that you feel is a bug in AWX or have ideas for
Code of Conduct
---------------
We ask all of our community members and contributors to adhere to the [Ansible code of conduct](http://docs.ansible.com/ansible/latest/community/code_of_conduct.html). If you have questions or need assistance, please reach out to our community team at [codeofconduct@ansible.com](mailto:codeofconduct@ansible.com)
We require all of our community members and contributors to adhere to the [Ansible code of conduct](http://docs.ansible.com/ansible/latest/community/code_of_conduct.html). If you have questions or need assistance, please reach out to our community team at [codeofconduct@ansible.com](mailto:codeofconduct@ansible.com)
Get Involved
------------
We welcome your feedback and ideas. Here's how to reach us with feedback and questions:
We welcome your feedback and ideas via the [Ansible Forum](https://forum.ansible.com/tag/awx).
- Join the [Ansible AWX channel on Matrix](https://matrix.to/#/#awx:ansible.com)
- Join the [Ansible Community Forum](https://forum.ansible.com)
For a full list of all the ways to talk with the Ansible Community, see the [AWX Communication guide](https://ansible.readthedocs.io/projects/awx/en/latest/contributor/communication.html).

View File

@@ -5,6 +5,7 @@ from __future__ import absolute_import, unicode_literals
import os
import sys
import warnings
from importlib.metadata import PackageNotFoundError, version as _get_version
def get_version():
@@ -34,10 +35,8 @@ def version_file():
try:
import pkg_resources
__version__ = pkg_resources.get_distribution('awx').version
except pkg_resources.DistributionNotFound:
__version__ = _get_version('awx')
except PackageNotFoundError:
__version__ = get_version()
__all__ = ['__version__']
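The hunk above replaces the deprecated pkg_resources lookup with importlib.metadata. A minimal standalone sketch of the new pattern, assuming a hypothetical distribution name 'example-dist' and a placeholder fallback (not AWX code):

from importlib.metadata import PackageNotFoundError, version as _get_version

def get_version():
    # Placeholder fallback for when the distribution is not installed,
    # e.g. when running from a source checkout.
    return '0.0.0.dev0'

try:
    __version__ = _get_version('example-dist')
except PackageNotFoundError:
    __version__ = get_version()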
@@ -61,90 +60,16 @@ else:
from django.db import connection
def find_commands(management_dir):
# Modified version of function from django/core/management/__init__.py.
command_dir = os.path.join(management_dir, 'commands')
commands = []
try:
for f in os.listdir(command_dir):
if f.startswith('_'):
continue
elif f.endswith('.py') and f[:-3] not in commands:
commands.append(f[:-3])
elif f.endswith('.pyc') and f[:-4] not in commands: # pragma: no cover
commands.append(f[:-4])
except OSError:
pass
return commands
def oauth2_getattribute(self, attr):
# Custom method to override
# oauth2_provider.settings.OAuth2ProviderSettings.__getattribute__
from django.conf import settings
from oauth2_provider.settings import DEFAULTS
val = None
if (isinstance(attr, str)) and (attr in DEFAULTS) and (not attr.startswith('_')):
# certain Django OAuth Toolkit migrations actually reference
# setting lookups for references to model classes (e.g.,
# oauth2_settings.REFRESH_TOKEN_MODEL)
# If we're doing an OAuth2 setting lookup *while running* a migration,
# don't do our usual database settings lookup
val = settings.OAUTH2_PROVIDER.get(attr)
if val is None:
val = object.__getattribute__(self, attr)
return val
def prepare_env():
# Update the default settings environment variable based on current mode.
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'awx.settings.%s' % MODE)
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'awx.settings')
os.environ.setdefault('AWX_MODE', MODE)
# Hide DeprecationWarnings when running in production. Need to first load
# settings to apply our filter after Django's own warnings filter.
from django.conf import settings
if not settings.DEBUG: # pragma: no cover
warnings.simplefilter('ignore', DeprecationWarning)
# Monkeypatch Django find_commands to also work with .pyc files.
import django.core.management
django.core.management.find_commands = find_commands
# Monkeypatch Oauth2 toolkit settings class to check for settings
# in django.conf settings each time, not just once during import
import oauth2_provider.settings
oauth2_provider.settings.OAuth2ProviderSettings.__getattribute__ = oauth2_getattribute
# Use the AWX_TEST_DATABASE_* environment variables to specify the test
# database settings to use when management command is run as an external
# program via unit tests.
for opt in ('ENGINE', 'NAME', 'USER', 'PASSWORD', 'HOST', 'PORT'): # pragma: no cover
if os.environ.get('AWX_TEST_DATABASE_%s' % opt, None):
settings.DATABASES['default'][opt] = os.environ['AWX_TEST_DATABASE_%s' % opt]
# Disable capturing all SQL queries in memory when in DEBUG mode.
if settings.DEBUG and not getattr(settings, 'SQL_DEBUG', True):
from django.db.backends.base.base import BaseDatabaseWrapper
from django.db.backends.utils import CursorWrapper
BaseDatabaseWrapper.make_debug_cursor = lambda self, cursor: CursorWrapper(cursor, self)
# Use the default devserver addr/port defined in settings for runserver.
default_addr = getattr(settings, 'DEVSERVER_DEFAULT_ADDR', '127.0.0.1')
default_port = getattr(settings, 'DEVSERVER_DEFAULT_PORT', 8000)
from django.core.management.commands import runserver as core_runserver
original_handle = core_runserver.Command.handle
def handle(self, *args, **options):
if not options.get('addrport'):
options['addrport'] = '%s:%d' % (default_addr, int(default_port))
elif options.get('addrport').isdigit():
options['addrport'] = '%s:%d' % (default_addr, int(options['addrport']))
return original_handle(self, *args, **options)
core_runserver.Command.handle = handle
def manage():

View File

@@ -11,9 +11,6 @@ from django.utils.encoding import smart_str
# Django REST Framework
from rest_framework import authentication
# Django-OAuth-Toolkit
from oauth2_provider.contrib.rest_framework import OAuth2Authentication
logger = logging.getLogger('awx.api.authentication')
@@ -36,16 +33,3 @@ class LoggedBasicAuthentication(authentication.BasicAuthentication):
class SessionAuthentication(authentication.SessionAuthentication):
def authenticate_header(self, request):
return 'Session'
class LoggedOAuth2Authentication(OAuth2Authentication):
def authenticate(self, request):
ret = super(LoggedOAuth2Authentication, self).authenticate(request)
if ret:
user, token = ret
username = user.username if user else '<none>'
logger.info(
smart_str(u"User {} performed a {} to {} through the API using OAuth 2 token {}.".format(username, request.method, request.path, token.pk))
)
setattr(user, 'oauth_scopes', [x for x in token.scope.split() if x])
return ret

View File

@@ -6,9 +6,6 @@ from rest_framework import serializers
# AWX
from awx.conf import fields, register, register_validate
from awx.api.fields import OAuth2ProviderField
from oauth2_provider.settings import oauth2_settings
from awx.sso.common import is_remote_auth_enabled
register(
@@ -35,10 +32,7 @@ register(
'DISABLE_LOCAL_AUTH',
field_class=fields.BooleanField,
label=_('Disable the built-in authentication system'),
help_text=_(
"Controls whether users are prevented from using the built-in authentication system. "
"You probably want to do this if you are using an LDAP or SAML integration."
),
help_text=_("Controls whether users are prevented from using the built-in authentication system. "),
category=_('Authentication'),
category_slug='authentication',
)
@@ -50,41 +44,6 @@ register(
category=_('Authentication'),
category_slug='authentication',
)
register(
'OAUTH2_PROVIDER',
field_class=OAuth2ProviderField,
default={
'ACCESS_TOKEN_EXPIRE_SECONDS': oauth2_settings.ACCESS_TOKEN_EXPIRE_SECONDS,
'AUTHORIZATION_CODE_EXPIRE_SECONDS': oauth2_settings.AUTHORIZATION_CODE_EXPIRE_SECONDS,
'REFRESH_TOKEN_EXPIRE_SECONDS': oauth2_settings.REFRESH_TOKEN_EXPIRE_SECONDS,
},
label=_('OAuth 2 Timeout Settings'),
help_text=_(
'Dictionary for customizing OAuth 2 timeouts, available items are '
'`ACCESS_TOKEN_EXPIRE_SECONDS`, the duration of access tokens in the number '
'of seconds, `AUTHORIZATION_CODE_EXPIRE_SECONDS`, the duration of '
'authorization codes in the number of seconds, and `REFRESH_TOKEN_EXPIRE_SECONDS`, '
'the duration of refresh tokens, after expired access tokens, '
'in the number of seconds.'
),
category=_('Authentication'),
category_slug='authentication',
unit=_('seconds'),
)
register(
'ALLOW_OAUTH2_FOR_EXTERNAL_USERS',
field_class=fields.BooleanField,
default=False,
label=_('Allow External Users to Create OAuth2 Tokens'),
help_text=_(
'For security reasons, users from external auth providers (LDAP, SAML, '
'SSO, Radius, and others) are not allowed to create OAuth2 tokens. '
'To change this behavior, enable this setting. Existing tokens will '
'not be deleted when this setting is toggled off.'
),
category=_('Authentication'),
category_slug='authentication',
)
register(
'LOGIN_REDIRECT_OVERRIDE',
field_class=fields.CharField,
@@ -109,7 +68,7 @@ register(
def authentication_validate(serializer, attrs):
if attrs.get('DISABLE_LOCAL_AUTH', False) and not is_remote_auth_enabled():
if attrs.get('DISABLE_LOCAL_AUTH', False):
raise serializers.ValidationError(_("There are no remote authentication systems configured."))
return attrs

View File

@@ -9,7 +9,6 @@ from django.core.exceptions import ObjectDoesNotExist
from rest_framework import serializers
# AWX
from awx.conf import fields
from awx.main.models import Credential
__all__ = ['BooleanNullField', 'CharNullField', 'ChoiceNullField', 'VerbatimField']
@@ -79,19 +78,6 @@ class VerbatimField(serializers.Field):
return value
class OAuth2ProviderField(fields.DictField):
default_error_messages = {'invalid_key_names': _('Invalid key names: {invalid_key_names}')}
valid_key_names = {'ACCESS_TOKEN_EXPIRE_SECONDS', 'AUTHORIZATION_CODE_EXPIRE_SECONDS', 'REFRESH_TOKEN_EXPIRE_SECONDS'}
child = fields.IntegerField(min_value=1)
def to_internal_value(self, data):
data = super(OAuth2ProviderField, self).to_internal_value(data)
invalid_flags = set(data.keys()) - self.valid_key_names
if invalid_flags:
self.fail('invalid_key_names', invalid_key_names=', '.join(list(invalid_flags)))
return data
class DeprecatedCredentialField(serializers.IntegerField):
def __init__(self, **kwargs):
kwargs['allow_null'] = True

View File

@@ -13,8 +13,8 @@ from django.contrib.contenttypes.models import ContentType
from django.core.exceptions import FieldDoesNotExist
from django.db import connection, transaction
from django.db.models.fields.related import OneToOneRel
from django.http import QueryDict
from django.shortcuts import get_object_or_404
from django.http import QueryDict, JsonResponse
from django.shortcuts import get_object_or_404, redirect
from django.template.loader import render_to_string
from django.utils.encoding import smart_str
from django.utils.safestring import mark_safe
@@ -30,10 +30,13 @@ from rest_framework.permissions import IsAuthenticated
from rest_framework.renderers import StaticHTMLRenderer
from rest_framework.negotiation import DefaultContentNegotiation
# Shared code for the AWX platform
from awx_plugins.interfaces._temporary_private_licensing_api import detect_server_product_name
# django-ansible-base
from ansible_base.rest_filters.rest_framework.field_lookup_backend import FieldLookupBackend
from ansible_base.lib.utils.models import get_all_field_names
from ansible_base.lib.utils.requests import get_remote_host
from ansible_base.lib.utils.requests import get_remote_host, is_proxied_request
from ansible_base.rbac.models import RoleEvaluation, RoleDefinition
from ansible_base.rbac.permission_registry import permission_registry
from ansible_base.jwt_consumer.common.util import validate_x_trusted_proxy_header
@@ -43,7 +46,6 @@ from awx.main.models import UnifiedJob, UnifiedJobTemplate, User, Role, Credenti
from awx.main.models.rbac import give_creator_permissions
from awx.main.access import optimize_queryset
from awx.main.utils import camelcase_to_underscore, get_search_fields, getattrd, get_object_or_400, decrypt_field, get_awx_version
from awx.main.utils.licensing import server_product_name
from awx.main.utils.proxy import is_proxy_in_headers, delete_headers_starting_with_http
from awx.main.views import ApiErrorView
from awx.api.serializers import ResourceAccessListElementSerializer, CopySerializer
@@ -79,7 +81,14 @@ analytics_logger = logging.getLogger('awx.analytics.performance')
class LoggedLoginView(auth_views.LoginView):
def get(self, request, *args, **kwargs):
if is_proxied_request():
next = request.GET.get('next', "")
if next:
next = f"?next={next}"
return redirect(f"/{next}")
# The django.auth.contrib login form doesn't perform the content
# negotiation we've come to expect from DRF; add in code to catch
# situations where Accept != text/html (or */*) and reply with
@@ -95,6 +104,15 @@ class LoggedLoginView(auth_views.LoginView):
return super(LoggedLoginView, self).get(request, *args, **kwargs)
def post(self, request, *args, **kwargs):
if is_proxied_request():
# Give a message saying to log in via AAP
return JsonResponse(
{
'detail': _('Please log in via Platform Authentication.'),
},
status=status.HTTP_401_UNAUTHORIZED,
)
ret = super(LoggedLoginView, self).post(request, *args, **kwargs)
ip = get_remote_host(request) # request.META.get('REMOTE_ADDR', None)
if request.user.is_authenticated:
@@ -113,10 +131,15 @@ class LoggedLoginView(auth_views.LoginView):
class LoggedLogoutView(auth_views.LogoutView):
success_url_allowed_hosts = set(settings.LOGOUT_ALLOWED_HOSTS.split(",")) if settings.LOGOUT_ALLOWED_HOSTS else set()
def dispatch(self, request, *args, **kwargs):
if is_proxied_request():
# 1) We intentionally don't obey ?next= here, just always redirect to platform login
# 2) Hack to prevent rewrites of Location header
qs = "?__gateway_no_rewrite__=1&next=/"
return redirect(f"/api/gateway/v1/logout/{qs}")
original_user = getattr(request, 'user', None)
ret = super(LoggedLogoutView, self).dispatch(request, *args, **kwargs)
current_user = getattr(request, 'user', None)
@@ -138,10 +161,10 @@ def get_view_description(view, html=False):
def get_default_schema():
if settings.SETTINGS_MODULE == 'awx.settings.development':
from awx.api.swagger import AutoSchema
if settings.DYNACONF.is_development_mode:
from awx.api.swagger import schema_view
return AutoSchema()
return schema_view
else:
return views.APIView.schema
@@ -244,7 +267,8 @@ class APIView(views.APIView):
if hasattr(self, '__init_request_error__'):
response = self.handle_exception(self.__init_request_error__)
if response.status_code == 401:
response.data['detail'] += _(' To establish a login session, visit') + ' /api/login/.'
if response.data and 'detail' in response.data:
response.data['detail'] += _(' To establish a login session, visit') + ' /api/login/.'
logger.info(status_msg)
else:
logger.warning(status_msg)
@@ -253,7 +277,7 @@ class APIView(views.APIView):
time_started = getattr(self, 'time_started', None)
if request.user.is_authenticated:
response['X-API-Product-Version'] = get_awx_version()
response['X-API-Product-Name'] = server_product_name()
response['X-API-Product-Name'] = detect_server_product_name()
response['X-API-Node'] = settings.CLUSTER_HOST_ID
if time_started:
@@ -350,12 +374,6 @@ class APIView(views.APIView):
kwargs.pop('version')
return super(APIView, self).dispatch(request, *args, **kwargs)
def check_permissions(self, request):
if request.method not in ('GET', 'OPTIONS', 'HEAD'):
if 'write' not in getattr(request.user, 'oauth_scopes', ['write']):
raise PermissionDenied()
return super(APIView, self).check_permissions(request)
class GenericAPIView(generics.GenericAPIView, APIView):
# Base class for all model-based views.
@@ -826,7 +844,7 @@ class ResourceAccessList(ParentMixin, ListAPIView):
if settings.ANSIBLE_BASE_ROLE_SYSTEM_ACTIVATED:
ancestors = set(RoleEvaluation.objects.filter(content_type_id=content_type.id, object_id=obj.id).values_list('role_id', flat=True))
qs = User.objects.filter(has_roles__in=ancestors) | User.objects.filter(is_superuser=True)
auditor_role = RoleDefinition.objects.filter(name="System Auditor").first()
auditor_role = RoleDefinition.objects.filter(name="Platform Auditor").first()
if auditor_role:
qs |= User.objects.filter(role_assignments__role_definition=auditor_role)
return qs.distinct()

View File

@@ -10,7 +10,7 @@ from rest_framework import permissions
# AWX
from awx.main.access import check_user_access
from awx.main.models import Inventory, UnifiedJob
from awx.main.models import Inventory, UnifiedJob, Organization
from awx.main.utils import get_object_or_400
logger = logging.getLogger('awx.api.permissions')
@@ -228,12 +228,19 @@ class InventoryInventorySourcesUpdatePermission(ModelAccessPermission):
class UserPermission(ModelAccessPermission):
def check_post_permissions(self, request, view, obj=None):
if not request.data:
return request.user.admin_of_organizations.exists()
return Organization.access_qs(request.user, 'change').exists()
elif request.user.is_superuser:
return True
raise PermissionDenied()
class IsSystemAdmin(permissions.BasePermission):
def has_permission(self, request, view):
if not (request.user and request.user.is_authenticated):
return False
return request.user.is_superuser
class IsSystemAdminOrAuditor(permissions.BasePermission):
"""
Allows write access only to system admin users.

View File

@@ -6,14 +6,12 @@ import copy
import json
import logging
import re
import yaml
import urllib.parse
from collections import Counter, OrderedDict
from datetime import timedelta
from uuid import uuid4
# OAuth2
from oauthlib import oauth2
from oauthlib.common import generate_token
# Jinja
from jinja2 import sandbox, StrictUndefined
from jinja2.exceptions import TemplateSyntaxError, UndefinedError, SecurityError
@@ -50,7 +48,7 @@ from ansible_base.rbac import permission_registry
# AWX
from awx.main.access import get_user_capabilities
from awx.main.constants import ACTIVE_STATES, CENSOR_VALUE, org_role_to_permission
from awx.main.constants import ACTIVE_STATES, org_role_to_permission
from awx.main.models import (
ActivityStream,
AdHocCommand,
@@ -79,14 +77,11 @@ from awx.main.models import (
Label,
Notification,
NotificationTemplate,
OAuth2AccessToken,
OAuth2Application,
Organization,
Project,
ProjectUpdate,
ProjectUpdateEvent,
ReceptorAddress,
RefreshToken,
Role,
Schedule,
SystemJob,
@@ -102,7 +97,6 @@ from awx.main.models import (
WorkflowJobTemplate,
WorkflowJobTemplateNode,
StdoutMaxBytesExceeded,
CLOUD_INVENTORY_SOURCES,
)
from awx.main.models.base import VERBOSITY_CHOICES, NEW_JOB_TYPE_CHOICES
from awx.main.models.rbac import role_summary_fields_generator, give_creator_permissions, get_role_codenames, to_permissions, get_role_from_object_role
@@ -119,8 +113,11 @@ from awx.main.utils import (
truncate_stdout,
get_licenser,
)
from awx.main.utils.filters import SmartFilter
from awx.main.utils.plugins import load_combined_inventory_source_options
from awx.main.utils.named_url_graph import reset_counters
from awx.main.utils.inventory_vars import update_group_variables
from awx.main.scheduler.task_manager_models import TaskManagerModels
from awx.main.redact import UriCleaner, REPLACE_STR
from awx.main.signals import update_inventory_computed_fields
@@ -134,8 +131,6 @@ from awx.api.fields import BooleanNullField, CharNullField, ChoiceNullField, Ver
# AWX Utils
from awx.api.validators import HostnameRegexValidator
from awx.sso.common import get_external_account
logger = logging.getLogger('awx.api.serializers')
# Fields that should be summarized regardless of object type.
@@ -634,15 +629,41 @@ class BaseSerializer(serializers.ModelSerializer, metaclass=BaseSerializerMetacl
return exclusions
def validate(self, attrs):
"""
Apply serializer validation. Called by DRF.
Can be extended by subclasses. Or consider overwriting
`validate_with_obj` in subclasses, which provides access to the model
object and exception handling for field validation.
:param dict attrs: The names and values of the model form fields.
:raise rest_framework.exceptions.ValidationError: If the validation
fails.
The exception must contain a dict with the names of the form fields
which failed validation as keys, and a list of error messages as
values. This ensures that the error messages are rendered near the
relevant fields.
:return: The names and values from the model form fields, possibly
modified by the validations.
:rtype: dict
"""
attrs = super(BaseSerializer, self).validate(attrs)
# Create/update a model instance and run its full_clean() method to
# do any validation implemented on the model class.
exclusions = self.get_validation_exclusions(self.instance)
# Create a new model instance or take the existing one if it exists,
# and update its attributes with the respective field values from
# attrs.
obj = self.instance or self.Meta.model()
for k, v in attrs.items():
if k not in exclusions and k != 'canonical_address_port':
setattr(obj, k, v)
try:
# Create/update a model instance and run its full_clean() method to
# do any validation implemented on the model class.
exclusions = self.get_validation_exclusions(self.instance)
obj = self.instance or self.Meta.model()
for k, v in attrs.items():
if k not in exclusions and k != 'canonical_address_port':
setattr(obj, k, v)
# Run serializer validators which need the model object for
# validation.
self.validate_with_obj(attrs, obj)
# Apply any validations implemented on the model class.
obj.full_clean(exclude=exclusions)
# full_clean may modify values on the instance; copy those changes
# back to attrs so they are saved.
@@ -671,6 +692,32 @@ class BaseSerializer(serializers.ModelSerializer, metaclass=BaseSerializerMetacl
raise ValidationError(d)
return attrs
def validate_with_obj(self, attrs, obj):
"""
Overwrite this if you need the model instance for your validation.
:param dict attrs: The names and values of the model form fields.
:param obj: An instance of the class's meta model.
If the serializer runs on a newly created object, obj contains only
the attrs from its serializer. If the serializer runs because an
object has been edited, obj is the existing model instance with all
attributes and values available.
:raise django.core.exceptions.ValidationError: Raise this if your
validation fails.
To make the error appear at the respective form field, instantiate
the Exception with a dict containing the field name as key and the
error message as value.
Example: ``ValidationError({"password": "Not good enough!"})``
If the exception contains just a string, the message cannot be
related to a field and is rendered at the top of the model form.
:return: None
"""
return
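A minimal sketch of the hook in use, assuming a hypothetical ExampleSerializer in this module (class and field names are illustrative, not AWX code):

from django.core.exceptions import ValidationError as DjangoValidationError

class ExampleSerializer(BaseSerializer):
    def validate_with_obj(self, attrs, obj):
        # obj carries existing model state on edits, so a check can combine
        # submitted attrs with fields the request left unchanged.
        name = attrs.get('name', getattr(obj, 'name', ''))
        if name and name == getattr(obj, 'description', None):
            raise DjangoValidationError({'name': 'Name must not repeat the description.'})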
def reverse(self, *args, **kwargs):
kwargs['request'] = self.context.get('request')
return reverse(*args, **kwargs)
@@ -687,7 +734,22 @@ class EmptySerializer(serializers.Serializer):
pass
class UnifiedJobTemplateSerializer(BaseSerializer):
class OpaQueryPathMixin(serializers.Serializer):
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
def validate_opa_query_path(self, value):
# Decode the URL and re-encode it
decoded_value = urllib.parse.unquote(value)
re_encoded_value = urllib.parse.quote(decoded_value, safe='/')
if value != re_encoded_value:
raise serializers.ValidationError(_("The URL must be properly encoded."))
return value
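A quick illustration of the encode round-trip check above, with hypothetical query paths:

import urllib.parse

def roundtrips(value):
    # Mirrors validate_opa_query_path: re-encode the decoded value and compare.
    decoded = urllib.parse.unquote(value)
    return value == urllib.parse.quote(decoded, safe='/')

assert roundtrips('v1/data/awx%20allow')      # properly encoded -> accepted
assert not roundtrips('v1/data/awx allow')    # raw space -> rejected
assert not roundtrips('v1/data/awx%2Fallow')  # encoded slash collapses on re-encoding -> rejected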
class UnifiedJobTemplateSerializer(BaseSerializer, OpaQueryPathMixin):
# As a base serializer, the capabilities prefetch is not used directly,
# instead they are derived from the Workflow Job Template Serializer and the Job Template Serializer, respectively.
capabilities_prefetch = []
@@ -961,8 +1023,6 @@ class UnifiedJobStdoutSerializer(UnifiedJobSerializer):
class UserSerializer(BaseSerializer):
password = serializers.CharField(required=False, default='', help_text=_('Field used to change the password.'))
ldap_dn = serializers.CharField(source='profile.ldap_dn', read_only=True)
external_account = serializers.SerializerMethodField(help_text=_('Set if the account is managed by an external service'))
is_system_auditor = serializers.BooleanField(default=False)
show_capabilities = ['edit', 'delete']
@@ -979,22 +1039,13 @@ class UserSerializer(BaseSerializer):
'is_superuser',
'is_system_auditor',
'password',
'ldap_dn',
'last_login',
'external_account',
)
extra_kwargs = {'last_login': {'read_only': True}}
def to_representation(self, obj):
ret = super(UserSerializer, self).to_representation(obj)
if self.get_external_account(obj):
# If this is an external account it shouldn't have a password field
ret.pop('password', None)
else:
# If its an internal account lets assume there is a password and return $encrypted$ to the user
ret['password'] = '$encrypted$'
if obj and type(self) is UserSerializer:
ret['auth'] = obj.social_auth.values('provider', 'uid')
ret['password'] = '$encrypted$'
return ret
def get_validation_exclusions(self, obj=None):
@@ -1003,7 +1054,6 @@ class UserSerializer(BaseSerializer):
return ret
def validate_password(self, value):
django_validate_password(value)
if not self.instance and value in (None, ''):
raise serializers.ValidationError(_('Password required for new User.'))
@@ -1026,11 +1076,52 @@ class UserSerializer(BaseSerializer):
return value
def validate_with_obj(self, attrs, obj):
"""
Validate the password with the Django password validators
To enable the Django password validators, configure
`settings.AUTH_PASSWORD_VALIDATORS` as described in the [Django
docs](https://docs.djangoproject.com/en/5.1/topics/auth/passwords/#enabling-password-validation)
:param dict attrs: The User form field names and their values as a dict.
Example::
{
'username': 'TestUsername', 'first_name': 'FirstName',
'last_name': 'LastName', 'email': 'First.Last@my.org',
'is_superuser': False, 'is_system_auditor': False,
'password': 'secret123'
}
:param obj: The User model instance.
:raises django.core.exceptions.ValidationError: Raise this if at least
one Django password validator fails.
The exception contains a dict ``{"password": <error-message>}``
which indicates that the password field has failed validation, and
the reason for failure.
:return: None.
"""
# We must do this here instead of in `validate_password` because some
# Django password validators need access to other model instance fields,
# e.g. ``username`` for the ``UserAttributeSimilarityValidator``.
password = attrs.get("password")
# Skip validation if no password has been entered. This may happen when
# an existing User is edited.
if password and password != '$encrypted$':
# Apply validators from settings.AUTH_PASSWORD_VALIDATORS. This may
# raise ValidationError.
#
# If the validation fails, re-raise the exception with adjusted
# content to make the error appear near the password field.
try:
django_validate_password(password, user=obj)
except DjangoValidationError as exc:
raise DjangoValidationError({"password": exc.messages})
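The validators applied here come from settings.AUTH_PASSWORD_VALIDATORS. A minimal settings sketch with Django's stock validators (illustrative values, not AWX's shipped configuration); the first entry is the one that motivates passing user=obj:

AUTH_PASSWORD_VALIDATORS = [
    # Compares the password against username, email, etc. on the user object.
    {'NAME': 'django.contrib.auth.password_validation.UserAttributeSimilarityValidator'},
    {'NAME': 'django.contrib.auth.password_validation.MinimumLengthValidator', 'OPTIONS': {'min_length': 12}},
    {'NAME': 'django.contrib.auth.password_validation.CommonPasswordValidator'},
]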
def _update_password(self, obj, new_password):
# For now we're not raising an error, just not saving password for
# users managed by LDAP who already have an unusable password set.
# get_external_account will return something like ldap or enterprise, or None if the user isn't external. We only want to allow a password update for the None option
if new_password and new_password != '$encrypted$' and not self.get_external_account(obj):
if new_password and new_password != '$encrypted$':
obj.set_password(new_password)
obj.save(update_fields=['password'])
@@ -1045,9 +1136,6 @@ class UserSerializer(BaseSerializer):
obj.set_unusable_password()
obj.save(update_fields=['password'])
def get_external_account(self, obj):
return get_external_account(obj)
def create(self, validated_data):
new_password = validated_data.pop('password', None)
is_system_auditor = validated_data.pop('is_system_auditor', None)
@@ -1078,44 +1166,10 @@ class UserSerializer(BaseSerializer):
roles=self.reverse('api:user_roles_list', kwargs={'pk': obj.pk}),
activity_stream=self.reverse('api:user_activity_stream_list', kwargs={'pk': obj.pk}),
access_list=self.reverse('api:user_access_list', kwargs={'pk': obj.pk}),
tokens=self.reverse('api:o_auth2_token_list', kwargs={'pk': obj.pk}),
authorized_tokens=self.reverse('api:user_authorized_token_list', kwargs={'pk': obj.pk}),
personal_tokens=self.reverse('api:user_personal_token_list', kwargs={'pk': obj.pk}),
)
)
return res
def _validate_ldap_managed_field(self, value, field_name):
if not getattr(settings, 'AUTH_LDAP_SERVER_URI', None):
return value
try:
is_ldap_user = bool(self.instance and self.instance.profile.ldap_dn)
except AttributeError:
is_ldap_user = False
if is_ldap_user:
ldap_managed_fields = ['username']
ldap_managed_fields.extend(getattr(settings, 'AUTH_LDAP_USER_ATTR_MAP', {}).keys())
ldap_managed_fields.extend(getattr(settings, 'AUTH_LDAP_USER_FLAGS_BY_GROUP', {}).keys())
if field_name in ldap_managed_fields:
if value != getattr(self.instance, field_name):
raise serializers.ValidationError(_('Unable to change %s on user managed by LDAP.') % field_name)
return value
def validate_username(self, value):
return self._validate_ldap_managed_field(value, 'username')
def validate_first_name(self, value):
return self._validate_ldap_managed_field(value, 'first_name')
def validate_last_name(self, value):
return self._validate_ldap_managed_field(value, 'last_name')
def validate_email(self, value):
return self._validate_ldap_managed_field(value, 'email')
def validate_is_superuser(self, value):
return self._validate_ldap_managed_field(value, 'is_superuser')
class UserActivityStreamSerializer(UserSerializer):
"""Changes to system auditor status are shown as separate entries,
@@ -1128,205 +1182,12 @@ class UserActivityStreamSerializer(UserSerializer):
fields = ('*', '-is_system_auditor')
class BaseOAuth2TokenSerializer(BaseSerializer):
refresh_token = serializers.SerializerMethodField()
token = serializers.SerializerMethodField()
ALLOWED_SCOPES = ['read', 'write']
class Meta:
model = OAuth2AccessToken
fields = ('*', '-name', 'description', 'user', 'token', 'refresh_token', 'application', 'expires', 'scope')
read_only_fields = ('user', 'token', 'expires', 'refresh_token')
extra_kwargs = {'scope': {'allow_null': False, 'required': False}, 'user': {'allow_null': False, 'required': True}}
def get_token(self, obj):
request = self.context.get('request', None)
try:
if request.method == 'POST':
return obj.token
else:
return CENSOR_VALUE
except ObjectDoesNotExist:
return ''
def get_refresh_token(self, obj):
request = self.context.get('request', None)
try:
if not obj.refresh_token:
return None
elif request.method == 'POST':
return getattr(obj.refresh_token, 'token', '')
else:
return CENSOR_VALUE
except ObjectDoesNotExist:
return None
def get_related(self, obj):
ret = super(BaseOAuth2TokenSerializer, self).get_related(obj)
if obj.user:
ret['user'] = self.reverse('api:user_detail', kwargs={'pk': obj.user.pk})
if obj.application:
ret['application'] = self.reverse('api:o_auth2_application_detail', kwargs={'pk': obj.application.pk})
ret['activity_stream'] = self.reverse('api:o_auth2_token_activity_stream_list', kwargs={'pk': obj.pk})
return ret
def _is_valid_scope(self, value):
if not value or (not isinstance(value, str)):
return False
words = value.split()
for word in words:
if words.count(word) > 1:
return False # do not allow duplicates
if word not in self.ALLOWED_SCOPES:
return False
return True
def validate_scope(self, value):
if not self._is_valid_scope(value):
raise serializers.ValidationError(_('Must be a simple space-separated string with allowed scopes {}.').format(self.ALLOWED_SCOPES))
return value
def create(self, validated_data):
validated_data['user'] = self.context['request'].user
try:
return super(BaseOAuth2TokenSerializer, self).create(validated_data)
except oauth2.AccessDeniedError as e:
raise PermissionDenied(str(e))
class UserAuthorizedTokenSerializer(BaseOAuth2TokenSerializer):
class Meta:
extra_kwargs = {
'scope': {'allow_null': False, 'required': False},
'user': {'allow_null': False, 'required': True},
'application': {'allow_null': False, 'required': True},
}
def create(self, validated_data):
current_user = self.context['request'].user
validated_data['token'] = generate_token()
validated_data['expires'] = now() + timedelta(seconds=settings.OAUTH2_PROVIDER['ACCESS_TOKEN_EXPIRE_SECONDS'])
obj = super(UserAuthorizedTokenSerializer, self).create(validated_data)
obj.save()
if obj.application:
RefreshToken.objects.create(user=current_user, token=generate_token(), application=obj.application, access_token=obj)
return obj
class OAuth2TokenSerializer(BaseOAuth2TokenSerializer):
def create(self, validated_data):
current_user = self.context['request'].user
validated_data['token'] = generate_token()
validated_data['expires'] = now() + timedelta(seconds=settings.OAUTH2_PROVIDER['ACCESS_TOKEN_EXPIRE_SECONDS'])
obj = super(OAuth2TokenSerializer, self).create(validated_data)
if obj.application and obj.application.user:
obj.user = obj.application.user
obj.save()
if obj.application:
RefreshToken.objects.create(user=current_user, token=generate_token(), application=obj.application, access_token=obj)
return obj
class OAuth2TokenDetailSerializer(OAuth2TokenSerializer):
class Meta:
read_only_fields = ('*', 'user', 'application')
class UserPersonalTokenSerializer(BaseOAuth2TokenSerializer):
class Meta:
read_only_fields = ('user', 'token', 'expires', 'application')
def create(self, validated_data):
validated_data['token'] = generate_token()
validated_data['expires'] = now() + timedelta(seconds=settings.OAUTH2_PROVIDER['ACCESS_TOKEN_EXPIRE_SECONDS'])
validated_data['application'] = None
obj = super(UserPersonalTokenSerializer, self).create(validated_data)
obj.save()
return obj
class OAuth2ApplicationSerializer(BaseSerializer):
show_capabilities = ['edit', 'delete']
class Meta:
model = OAuth2Application
fields = (
'*',
'description',
'-user',
'client_id',
'client_secret',
'client_type',
'redirect_uris',
'authorization_grant_type',
'skip_authorization',
'organization',
)
read_only_fields = ('client_id', 'client_secret')
read_only_on_update_fields = ('user', 'authorization_grant_type')
extra_kwargs = {
'user': {'allow_null': True, 'required': False},
'organization': {'allow_null': False},
'authorization_grant_type': {'allow_null': False, 'label': _('Authorization Grant Type')},
'client_secret': {'label': _('Client Secret')},
'client_type': {'label': _('Client Type')},
'redirect_uris': {'label': _('Redirect URIs')},
'skip_authorization': {'label': _('Skip Authorization')},
}
def to_representation(self, obj):
ret = super(OAuth2ApplicationSerializer, self).to_representation(obj)
request = self.context.get('request', None)
if request.method != 'POST' and obj.client_type == 'confidential':
ret['client_secret'] = CENSOR_VALUE
if obj.client_type == 'public':
ret.pop('client_secret', None)
return ret
def get_related(self, obj):
res = super(OAuth2ApplicationSerializer, self).get_related(obj)
res.update(
dict(
tokens=self.reverse('api:o_auth2_application_token_list', kwargs={'pk': obj.pk}),
activity_stream=self.reverse('api:o_auth2_application_activity_stream_list', kwargs={'pk': obj.pk}),
)
)
if obj.organization_id:
res.update(
dict(
organization=self.reverse('api:organization_detail', kwargs={'pk': obj.organization_id}),
)
)
return res
def get_modified(self, obj):
if obj is None:
return None
return obj.updated
def _summary_field_tokens(self, obj):
token_list = [{'id': x.pk, 'token': CENSOR_VALUE, 'scope': x.scope} for x in obj.oauth2accesstoken_set.all()[:10]]
if has_model_field_prefetched(obj, 'oauth2accesstoken_set'):
token_count = len(obj.oauth2accesstoken_set.all())
else:
if len(token_list) < 10:
token_count = len(token_list)
else:
token_count = obj.oauth2accesstoken_set.count()
return {'count': token_count, 'results': token_list}
def get_summary_fields(self, obj):
ret = super(OAuth2ApplicationSerializer, self).get_summary_fields(obj)
ret['tokens'] = self._summary_field_tokens(obj)
return ret
class OrganizationSerializer(BaseSerializer):
class OrganizationSerializer(BaseSerializer, OpaQueryPathMixin):
show_capabilities = ['edit', 'delete']
class Meta:
model = Organization
fields = ('*', 'max_hosts', 'custom_virtualenv', 'default_environment')
fields = ('*', 'max_hosts', 'custom_virtualenv', 'default_environment', 'opa_query_path')
read_only_fields = ('*', 'custom_virtualenv')
def get_related(self, obj):
@@ -1341,7 +1202,6 @@ class OrganizationSerializer(BaseSerializer):
admins=self.reverse('api:organization_admins_list', kwargs={'pk': obj.pk}),
teams=self.reverse('api:organization_teams_list', kwargs={'pk': obj.pk}),
credentials=self.reverse('api:organization_credential_list', kwargs={'pk': obj.pk}),
applications=self.reverse('api:organization_applications_list', kwargs={'pk': obj.pk}),
activity_stream=self.reverse('api:organization_activity_stream_list', kwargs={'pk': obj.pk}),
notification_templates=self.reverse('api:organization_notification_templates_list', kwargs={'pk': obj.pk}),
notification_templates_started=self.reverse('api:organization_notification_templates_started_list', kwargs={'pk': obj.pk}),
@@ -1681,7 +1541,7 @@ class LabelsListMixin(object):
return res
class InventorySerializer(LabelsListMixin, BaseSerializerWithVariables):
class InventorySerializer(LabelsListMixin, BaseSerializerWithVariables, OpaQueryPathMixin):
show_capabilities = ['edit', 'delete', 'adhoc', 'copy']
capabilities_prefetch = ['admin', 'adhoc', {'copy': 'organization.inventory_admin'}]
@@ -1702,6 +1562,7 @@ class InventorySerializer(LabelsListMixin, BaseSerializerWithVariables):
'inventory_sources_with_failures',
'pending_deletion',
'prevent_instance_group_fallback',
'opa_query_path',
)
def get_related(self, obj):
@@ -1771,8 +1632,68 @@ class InventorySerializer(LabelsListMixin, BaseSerializerWithVariables):
if kind == 'smart' and not host_filter:
raise serializers.ValidationError({'host_filter': _('Smart inventories must specify host_filter')})
return super(InventorySerializer, self).validate(attrs)
@staticmethod
def _update_variables(variables, inventory_id):
"""
Update the inventory variables of the 'all'-group.
The variables field contains vars from the inventory dialog, hence
representing the "all"-group variables.
Since this is not an update from an inventory source, we update the
variables when the inventory details form is saved.
A user edit on the inventory variables is considered a reset of the
variables update history. Particularly if the user removes a variable by
editing the inventory variables field, the variable is not supposed to
reappear with a value from a previous inventory source update.
We achieve this by forcing `reset=True` on such an update.
As a side-effect, variables which have been set by source updates and
have survived a user-edit (i.e. they have not been deleted from the
variables field) will be assumed to originate from the user edit and are
thus no longer deleted from the inventory when they are removed from
their original source!
Note that we use the inventory source id -1 for user-edit updates
because a regular inventory source cannot have an id of -1 since
PostgreSQL assigns pk's starting from 1 (if this assumption doesn't hold
true, we have to assign another special value for invsrc_id).
:param str variables: The variables as plain text in yaml or json
format.
:param int inventory_id: The primary key of the related inventory
object.
"""
variables_dict = parse_yaml_or_json(variables, silent_failure=False)
logger.debug(f"InventorySerializer._update_variables: {inventory_id=} {variables_dict=}, {variables=}")
update_group_variables(
group_id=None, # `None` denotes the 'all' group (which doesn't have a pk).
newvars=variables_dict,
dbvars=None,
invsrc_id=-1,
inventory_id=inventory_id,
reset=True,
)
def create(self, validated_data):
"""Called when a new inventory has to be created."""
logger.debug(f"InventorySerializer.create({validated_data=}) >>>>")
obj = super().create(validated_data)
self._update_variables(validated_data.get("variables") or "", obj.id)
return obj
def update(self, obj, validated_data):
"""Called when an existing inventory is updated."""
logger.debug(f"InventorySerializer.update({validated_data=}) >>>>")
obj = super().update(obj, validated_data)
self._update_variables(validated_data.get("variables") or "", obj.id)
return obj
class ConstructedFieldMixin(serializers.Field):
def get_attribute(self, instance):
@@ -1814,7 +1735,7 @@ class ConstructedInventorySerializer(InventorySerializer):
required=False,
allow_null=True,
min_value=0,
max_value=2,
max_value=5,
default=None,
help_text=_('The verbosity level for the related auto-created inventory source, special to constructed inventory'),
)
@@ -2062,10 +1983,12 @@ class GroupSerializer(BaseSerializerWithVariables):
return res
def validate(self, attrs):
# Do not allow the group name to conflict with an existing host name.
name = force_str(attrs.get('name', self.instance and self.instance.name or ''))
inventory = attrs.get('inventory', self.instance and self.instance.inventory or '')
if Host.objects.filter(name=name, inventory=inventory).exists():
raise serializers.ValidationError(_('A Host with that name already exists.'))
#
return super(GroupSerializer, self).validate(attrs)
def validate_name(self, value):
@@ -2350,6 +2273,7 @@ class GroupVariableDataSerializer(BaseVariableDataSerializer):
class InventorySourceOptionsSerializer(BaseSerializer):
credential = DeprecatedCredentialField(help_text=_('Cloud credential to use for inventory updates.'))
source = serializers.ChoiceField(choices=[])
class Meta:
fields = (
@@ -2371,6 +2295,14 @@ class InventorySourceOptionsSerializer(BaseSerializer):
)
read_only_fields = ('*', 'custom_virtualenv')
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
if 'source' in self.fields:
source_options = load_combined_inventory_source_options()
self.fields['source'].choices = [(plugin, description) for plugin, description in source_options.items()]
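For context, load_combined_inventory_source_options() is consumed here as plugin-name -> description pairs; a hypothetical shape inferred from the .items() usage above (names illustrative):

source_options = {'ec2': 'Amazon EC2', 'gce': 'Google Compute Engine'}
choices = [(plugin, description) for plugin, description in source_options.items()]
# -> [('ec2', 'Amazon EC2'), ('gce', 'Google Compute Engine')]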
def get_related(self, obj):
res = super(InventorySourceOptionsSerializer, self).get_related(obj)
if obj.credential: # TODO: remove when 'credential' field is removed
@@ -2907,7 +2839,7 @@ class ResourceAccessListElementSerializer(UserSerializer):
{
"role": {
"id": None,
"name": _("System Auditor"),
"name": _("Platform Auditor"),
"description": _("Can view all aspects of the system"),
"user_capabilities": {"unattach": False},
},
@@ -3095,11 +3027,6 @@ class CredentialSerializer(BaseSerializer):
ret.remove(field)
return ret
def validate_organization(self, org):
if self.instance and (not self.instance.managed) and self.instance.credential_type.kind == 'galaxy' and org is None:
raise serializers.ValidationError(_("Galaxy credentials must be owned by an Organization."))
return org
def validate_credential_type(self, credential_type):
if self.instance and credential_type.pk != self.instance.credential_type.pk:
for related_objects in (
@@ -3175,9 +3102,6 @@ class CredentialSerializerCreate(CredentialSerializer):
if attrs.get('team'):
attrs['organization'] = attrs['team'].organization
if 'credential_type' in attrs and attrs['credential_type'].kind == 'galaxy' and list(owner_fields) != ['organization']:
raise serializers.ValidationError({"organization": _("Galaxy credentials must be owned by an Organization.")})
return super(CredentialSerializerCreate, self).validate(attrs)
def create(self, validated_data):
@@ -3395,6 +3319,7 @@ class JobTemplateSerializer(JobTemplateMixin, UnifiedJobTemplateSerializer, JobO
'webhook_service',
'webhook_credential',
'prevent_instance_group_fallback',
'opa_query_path',
)
read_only_fields = ('*', 'custom_virtualenv')
@@ -3596,11 +3521,17 @@ class JobRelaunchSerializer(BaseSerializer):
choices=[('all', _('No change to job limit')), ('failed', _('All failed and unreachable hosts'))],
write_only=True,
)
job_type = serializers.ChoiceField(
required=False,
allow_null=True,
choices=NEW_JOB_TYPE_CHOICES,
write_only=True,
)
credential_passwords = VerbatimField(required=True, write_only=True)
class Meta:
model = Job
fields = ('passwords_needed_to_start', 'retry_counts', 'hosts', 'credential_passwords')
fields = ('passwords_needed_to_start', 'retry_counts', 'hosts', 'job_type', 'credential_passwords')
def validate_credential_passwords(self, value):
pnts = self.instance.passwords_needed_to_start
@@ -5550,7 +5481,7 @@ class ScheduleSerializer(LaunchConfigurationBaseSerializer, SchedulePreviewSeria
return summary_fields
def validate_unified_job_template(self, value):
if type(value) == InventorySource and value.source not in CLOUD_INVENTORY_SOURCES:
if type(value) == InventorySource and value.source not in load_combined_inventory_source_options():
raise serializers.ValidationError(_('Inventory Source must be a cloud resource.'))
elif type(value) == Project and value.scm_type == '':
raise serializers.ValidationError(_('Manual Project cannot have a schedule set.'))
@@ -6059,6 +5990,34 @@ class InstanceGroupSerializer(BaseSerializer):
raise serializers.ValidationError(_('Only Kubernetes credentials can be associated with an Instance Group'))
return value
def validate_pod_spec_override(self, value):
if not value:
return value
# value should be empty for non-container groups
if self.instance and not self.instance.is_container_group:
raise serializers.ValidationError(_('pod_spec_override is only valid for container groups'))
pod_spec_override_json = {}
# detect whether the value is yaml or json; if yaml, convert it to json
try:
# convert yaml to json
pod_spec_override_json = yaml.safe_load(value)
except yaml.YAMLError:
try:
pod_spec_override_json = json.loads(value)
except json.JSONDecodeError:
raise serializers.ValidationError(_('pod_spec_override must be valid yaml or json'))
# validate the spec: automountServiceAccountToken must not be enabled
spec = pod_spec_override_json.get('spec', {})
automount_service_account_token = spec.get('automountServiceAccountToken', False)
if automount_service_account_token:
raise serializers.ValidationError(_('automountServiceAccountToken is not allowed for security reasons'))
return value
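An illustrative override that this validator would reject, since automountServiceAccountToken is enabled (hypothetical payload, not AWX code):

import yaml

pod_spec_override = '''
spec:
  automountServiceAccountToken: true
  containers:
    - name: worker
      image: example/ee:latest
'''
spec = yaml.safe_load(pod_spec_override).get('spec', {})
assert spec.get('automountServiceAccountToken', False) is True  # -> ValidationError above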
def validate(self, attrs):
attrs = super(InstanceGroupSerializer, self).validate(attrs)
@@ -6124,8 +6083,6 @@ class ActivityStreamSerializer(BaseSerializer):
('workflow_job_template_node', ('id', 'unified_job_template_id')),
('label', ('id', 'name', 'organization_id')),
('notification', ('id', 'status', 'notification_type', 'notification_template_id')),
('o_auth2_access_token', ('id', 'user_id', 'description', 'application_id', 'scope')),
('o_auth2_application', ('id', 'name', 'description')),
('credential_type', ('id', 'name', 'description', 'kind', 'managed')),
('ad_hoc_command', ('id', 'name', 'status', 'limit')),
('workflow_approval', ('id', 'name', 'unified_job_id')),

View File

@@ -1,62 +1,54 @@
import warnings
from rest_framework.permissions import AllowAny
from rest_framework.schemas import SchemaGenerator, AutoSchema as DRFAuthSchema
from drf_yasg.views import get_schema_view
from drf_yasg import openapi
from drf_yasg.inspectors import SwaggerAutoSchema
from drf_yasg.views import get_schema_view
class SuperUserSchemaGenerator(SchemaGenerator):
def has_view_permissions(self, path, method, view):
#
# Generate the Swagger schema as if you were a superuser and
# permissions didn't matter; this short-circuits the schema path
# discovery to include _all_ potential paths in the API.
#
return True
class CustomSwaggerAutoSchema(SwaggerAutoSchema):
"""Custom SwaggerAutoSchema to add swagger_topic to tags."""
class AutoSchema(DRFAuthSchema):
def get_link(self, path, method, base_url):
link = super(AutoSchema, self).get_link(path, method, base_url)
def get_tags(self, operation_keys=None):
tags = []
try:
serializer = self.view.get_serializer()
if hasattr(self.view, 'get_serializer'):
serializer = self.view.get_serializer()
else:
serializer = None
except Exception:
serializer = None
warnings.warn(
'{}.get_serializer() raised an exception during '
'schema generation. Serializer fields will not be '
'generated for {} {}.'.format(self.view.__class__.__name__, method, path)
'generated for {}.'.format(self.view.__class__.__name__, operation_keys)
)
link.__dict__['deprecated'] = getattr(self.view, 'deprecated', False)
# auto-generate a topic/tag for the serializer based on its model
if hasattr(self.view, 'swagger_topic'):
link.__dict__['topic'] = str(self.view.swagger_topic).title()
tags.append(str(self.view.swagger_topic).title())
elif serializer and hasattr(serializer, 'Meta'):
link.__dict__['topic'] = str(serializer.Meta.model._meta.verbose_name_plural).title()
tags.append(str(serializer.Meta.model._meta.verbose_name_plural).title())
elif hasattr(self.view, 'model'):
link.__dict__['topic'] = str(self.view.model._meta.verbose_name_plural).title()
tags.append(str(self.view.model._meta.verbose_name_plural).title())
else:
warnings.warn('Could not determine a Swagger tag for path {}'.format(path))
return link
tags = ['api'] # Fallback to default value
def get_description(self, path, method):
setattr(self.view.request, 'swagger_method', method)
description = super(AutoSchema, self).get_description(path, method)
return description
if not tags:
warnings.warn(f'Could not determine tags for {self.view.__class__.__name__}')
return tags
def is_deprecated(self):
"""Return `True` if this operation is to be marked as deprecated."""
return getattr(self.view, 'deprecated', False)
schema_view = get_schema_view(
openapi.Info(
title="Snippets API",
default_version='v1',
description="Test description",
terms_of_service="https://www.google.com/policies/terms/",
contact=openapi.Contact(email="contact@snippets.local"),
license=openapi.License(name="BSD License"),
title='AWX API',
default_version='v2',
description='AWX API Documentation',
terms_of_service='https://www.google.com/policies/terms/',
contact=openapi.Contact(email='contact@snippets.local'),
license=openapi.License(name='Apache License'),
),
public=True,
permission_classes=[AllowAny],

View File

@@ -1,114 +0,0 @@
# Token Handling using OAuth2
This page lists the OAuth 2 utility endpoints used for authorization, token refresh, and revocation.
Note that endpoints other than `/api/o/authorize/` are not meant to be used in browsers and do not
support HTTP GET. The endpoints here strictly follow the
[RFC specs for OAuth2](https://tools.ietf.org/html/rfc6749), so please refer to those for detailed
reference. Note that the AWX network location defaults to `http://localhost:8013` in the examples below:
## Create Token for an Application using Authorization code grant type
Given an application "AuthCodeApp" with the grant type `authorization-code`,
the user makes a GET request from the client app to the Authorize endpoint with
* `response_type`
* `client_id`
* `redirect_uris`
* `scope`
AWX will respond with the authorization `code` and `state`
to the redirect_uri specified in the application. The client application will then make a POST to the
`api/o/token/` endpoint on AWX with
* `code`
* `client_id`
* `client_secret`
* `grant_type`
* `redirect_uri`
AWX will respond with the `access_token`, `token_type`, `refresh_token`, and `expires_in`. For more
information on testing this flow, refer to [django-oauth-toolkit](http://django-oauth-toolkit.readthedocs.io/en/latest/tutorial/tutorial_01.html#test-your-authorization-server).
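As a minimal sketch of the exchange step using Python's `requests` (all values below are
placeholders; the `authorization_code` grant type string is the RFC 6749 form):
```python
import requests

AWX = 'http://localhost:8013'
CLIENT_ID = '<client_id>'          # placeholder: from the AuthCodeApp application
CLIENT_SECRET = '<client_secret>'  # placeholder: from the AuthCodeApp application

# exchange the authorization code returned to the redirect_uri for tokens
response = requests.post(
    f'{AWX}/api/o/token/',
    data={
        'grant_type': 'authorization_code',
        'code': '<code>',                  # placeholder: issued by /api/o/authorize/
        'redirect_uri': '<redirect_uri>',  # placeholder: must match the application
    },
    auth=(CLIENT_ID, CLIENT_SECRET),
)
print(response.json())  # access_token, token_type, refresh_token, expires_in
```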
## Create Token for an Application using Password grant type
Logging in is not required for the `password` grant type, so a simple `curl` command can be used to acquire a personal access token
via `/api/o/token/` with
* `grant_type`: Required to be "password"
* `username`
* `password`
* `client_id`: Associated application must have grant_type "password"
* `client_secret`
For example:
```bash
curl -X POST \
-H "Content-Type: application/x-www-form-urlencoded" \
-d "grant_type=password&username=<username>&password=<password>&scope=read" \
-u "gwSPoasWSdNkMDtBN3Hu2WYQpPWCO9SwUEsKK22l:fI6ZpfocHYBGfm1tP92r0yIgCyfRdDQt0Tos9L8a4fNsJjQQMwp9569e
IaUBsaVDgt2eiwOGe0bg5m5vCSstClZmtdy359RVx2rQK5YlIWyPlrolpt2LEpVeKXWaiybo" \
http://localhost:8013/api/o/token/ -i
```
In the above POST request, the `username` and `password` parameters are the username and password of the
AWX user associated with the underlying application, and the authentication information takes the format
`<client_id>:<client_secret>`, where `client_id` and `client_secret` are the corresponding fields of the
underlying application.
Upon success, the access token, refresh token, and other information are returned in the response body as
JSON:
```json
{
"access_token": "9epHOqHhnXUcgYK8QanOmUQPSgX92g",
"token_type": "Bearer",
"expires_in": 31536000000,
"refresh_token": "jMRX6QvzOTf046KHee3TU5mT3nyXsz",
"scope": "read"
}
```
## Refresh an existing access token
The `/api/o/token/` endpoint is also used to refresh an access token:
```bash
curl -X POST \
-H "Content-Type: application/x-www-form-urlencoded" \
-d "grant_type=refresh_token&refresh_token=AL0NK9TTpv0qp54dGbC4VUZtsZ9r8z" \
-u "gwSPoasWSdNkMDtBN3Hu2WYQpPWCO9SwUEsKK22l:fI6ZpfocHYBGfm1tP92r0yIgCyfRdDQt0Tos9L8a4fNsJjQQMwp9569eIaUBsaVDgt2eiwOGe0bg5m5vCSstClZmtdy359RVx2rQK5YlIWyPlrolpt2LEpVeKXWaiybo" \
http://localhost:8013/api/o/token/ -i
```
In the above POST request, the `refresh_token` value comes from the `refresh_token` field of the access token
above. The authentication information takes the format `<client_id>:<client_secret>`, where `client_id`
and `client_secret` are the corresponding fields of the application underlying the access token.
Upon success, the new (refreshed) access token, with the same scope information as the previous one, is
returned in the response body as JSON:
```json
{
"access_token": "NDInWxGJI4iZgqpsreujjbvzCfJqgR",
"token_type": "Bearer",
"expires_in": 31536000000,
"refresh_token": "DqOrmz8bx3srlHkZNKmDpqA86bnQkT",
"scope": "read write"
}
```
Internally, the refresh operation deletes the existing token and immediately creates a new one with
scope and related application identical to the original. We can verify this by checking that the new
token is present at the `/api/v2/tokens/` endpoint, as in the sketch below.
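A minimal verification sketch with Python's `requests`, assuming the refreshed access token from the
example above and a scope sufficient to list tokens:
```python
import requests

AWX = 'http://localhost:8013'
NEW_TOKEN = 'NDInWxGJI4iZgqpsreujjbvzCfJqgR'  # the refreshed access token from above

# the refreshed token should appear in the token list; the old one should be gone
resp = requests.get(f'{AWX}/api/v2/tokens/', headers={'Authorization': f'Bearer {NEW_TOKEN}'})
print([t['id'] for t in resp.json()['results']])
```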
## Revoke an access token
Revoking an access token is the same as deleting the token resource object.
Revoking is done by POSTing to `/api/o/revoke_token/` with the token to revoke as a parameter:
```bash
curl -X POST -d "token=rQONsve372fQwuc2pn76k3IHDCYpi7" \
-H "Content-Type: application/x-www-form-urlencoded" \
-u "gwSPoasWSdNkMDtBN3Hu2WYQpPWCO9SwUEsKK22l:fI6ZpfocHYBGfm1tP92r0yIgCyfRdDQt0Tos9L8a4fNsJjQQMwp9569eIaUBsaVDgt2eiwOGe0bg5m5vCSstClZmtdy359RVx2rQK5YlIWyPlrolpt2LEpVeKXWaiybo" \
http://localhost:8013/api/o/revoke_token/ -i
```
A `200 OK` response indicates the token was successfully revoked.

View File

@@ -1,27 +0,0 @@
# Copyright (c) 2017 Ansible, Inc.
# All Rights Reserved.
from django.urls import re_path
from awx.api.views import (
OAuth2ApplicationList,
OAuth2ApplicationDetail,
ApplicationOAuth2TokenList,
OAuth2ApplicationActivityStreamList,
OAuth2TokenList,
OAuth2TokenDetail,
OAuth2TokenActivityStreamList,
)
urls = [
re_path(r'^applications/$', OAuth2ApplicationList.as_view(), name='o_auth2_application_list'),
re_path(r'^applications/(?P<pk>[0-9]+)/$', OAuth2ApplicationDetail.as_view(), name='o_auth2_application_detail'),
re_path(r'^applications/(?P<pk>[0-9]+)/tokens/$', ApplicationOAuth2TokenList.as_view(), name='o_auth2_application_token_list'),
re_path(r'^applications/(?P<pk>[0-9]+)/activity_stream/$', OAuth2ApplicationActivityStreamList.as_view(), name='o_auth2_application_activity_stream_list'),
re_path(r'^tokens/$', OAuth2TokenList.as_view(), name='o_auth2_token_list'),
re_path(r'^tokens/(?P<pk>[0-9]+)/$', OAuth2TokenDetail.as_view(), name='o_auth2_token_detail'),
re_path(r'^tokens/(?P<pk>[0-9]+)/activity_stream/$', OAuth2TokenActivityStreamList.as_view(), name='o_auth2_token_activity_stream_list'),
]
__all__ = ['urls']

View File

@@ -1,45 +0,0 @@
# Copyright (c) 2017 Ansible, Inc.
# All Rights Reserved.
from datetime import timedelta
from django.utils.timezone import now
from django.conf import settings
from django.urls import re_path
from oauthlib import oauth2
from oauth2_provider import views
from awx.main.models import RefreshToken
from awx.api.views.root import ApiOAuthAuthorizationRootView
class TokenView(views.TokenView):
def create_token_response(self, request):
# Django OAuth2 Toolkit has a bug whereby refresh tokens are *never*
# properly expired (ugh):
#
# https://github.com/jazzband/django-oauth-toolkit/issues/746
#
# This code detects and auto-expires them on refresh grant
# requests.
if request.POST.get('grant_type') == 'refresh_token' and 'refresh_token' in request.POST:
refresh_token = RefreshToken.objects.filter(token=request.POST['refresh_token']).first()
if refresh_token:
expire_seconds = settings.OAUTH2_PROVIDER.get('REFRESH_TOKEN_EXPIRE_SECONDS', 0)
if refresh_token.created + timedelta(seconds=expire_seconds) < now():
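# match the (uri, headers, body, status) tuple shape returned by create_token_response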
return request.build_absolute_uri(), {}, 'The refresh token has expired.', '403'
try:
return super(TokenView, self).create_token_response(request)
except oauth2.AccessDeniedError as e:
return request.build_absolute_uri(), {}, str(e), '403'
urls = [
re_path(r'^$', ApiOAuthAuthorizationRootView.as_view(), name='oauth_authorization_root_view'),
re_path(r"^authorize/$", views.AuthorizationView.as_view(), name="authorize"),
re_path(r"^token/$", TokenView.as_view(), name="token"),
re_path(r"^revoke_token/$", views.RevokeTokenView.as_view(), name="revoke-token"),
]
__all__ = ['urls']

View File

@@ -25,7 +25,7 @@ from awx.api.views.organization import (
OrganizationObjectRolesList,
OrganizationAccessList,
)
from awx.api.views import OrganizationCredentialList, OrganizationApplicationList
from awx.api.views import OrganizationCredentialList
urls = [
@@ -66,7 +66,6 @@ urls = [
re_path(r'^(?P<pk>[0-9]+)/galaxy_credentials/$', OrganizationGalaxyCredentialsList.as_view(), name='organization_galaxy_credentials_list'),
re_path(r'^(?P<pk>[0-9]+)/object_roles/$', OrganizationObjectRolesList.as_view(), name='organization_object_roles_list'),
re_path(r'^(?P<pk>[0-9]+)/access_list/$', OrganizationAccessList.as_view(), name='organization_access_list'),
re_path(r'^(?P<pk>[0-9]+)/applications/$', OrganizationApplicationList.as_view(), name='organization_applications_list'),
]
__all__ = ['urls']

View File

@@ -3,7 +3,7 @@
from django.urls import re_path
from awx.api.views import RoleList, RoleDetail, RoleUsersList, RoleTeamsList, RoleParentsList, RoleChildrenList
from awx.api.views import RoleList, RoleDetail, RoleUsersList, RoleTeamsList
urls = [
@@ -11,8 +11,6 @@ urls = [
re_path(r'^(?P<pk>[0-9]+)/$', RoleDetail.as_view(), name='role_detail'),
re_path(r'^(?P<pk>[0-9]+)/users/$', RoleUsersList.as_view(), name='role_users_list'),
re_path(r'^(?P<pk>[0-9]+)/teams/$', RoleTeamsList.as_view(), name='role_teams_list'),
re_path(r'^(?P<pk>[0-9]+)/parents/$', RoleParentsList.as_view(), name='role_parents_list'),
re_path(r'^(?P<pk>[0-9]+)/children/$', RoleChildrenList.as_view(), name='role_children_list'),
]
__all__ = ['urls']

View File

@@ -15,7 +15,6 @@ from awx.api.views.root import (
ApiV2AttachView,
)
from awx.api.views import (
AuthView,
UserMeList,
DashboardView,
DashboardJobsGraphView,
@@ -26,10 +25,6 @@ from awx.api.views import (
JobTemplateCredentialsList,
SchedulePreview,
ScheduleZoneInfo,
OAuth2ApplicationList,
OAuth2TokenList,
ApplicationOAuth2TokenList,
OAuth2ApplicationDetail,
HostMetricSummaryMonthlyList,
)
@@ -80,8 +75,6 @@ from .schedule import urls as schedule_urls
from .activity_stream import urls as activity_stream_urls
from .instance import urls as instance_urls
from .instance_group import urls as instance_group_urls
from .oauth2 import urls as oauth2_urls
from .oauth2_root import urls as oauth2_root_urls
from .workflow_approval_template import urls as workflow_approval_template_urls
from .workflow_approval import urls as workflow_approval_urls
from .analytics import urls as analytics_urls
@@ -96,17 +89,11 @@ v2_urls = [
re_path(r'^job_templates/(?P<pk>[0-9]+)/credentials/$', JobTemplateCredentialsList.as_view(), name='job_template_credentials_list'),
re_path(r'^schedules/preview/$', SchedulePreview.as_view(), name='schedule_rrule'),
re_path(r'^schedules/zoneinfo/$', ScheduleZoneInfo.as_view(), name='schedule_zoneinfo'),
re_path(r'^applications/$', OAuth2ApplicationList.as_view(), name='o_auth2_application_list'),
re_path(r'^applications/(?P<pk>[0-9]+)/$', OAuth2ApplicationDetail.as_view(), name='o_auth2_application_detail'),
re_path(r'^applications/(?P<pk>[0-9]+)/tokens/$', ApplicationOAuth2TokenList.as_view(), name='application_o_auth2_token_list'),
re_path(r'^tokens/$', OAuth2TokenList.as_view(), name='o_auth2_token_list'),
re_path(r'^', include(oauth2_urls)),
re_path(r'^metrics/$', MetricsView.as_view(), name='metrics_view'),
re_path(r'^ping/$', ApiV2PingView.as_view(), name='api_v2_ping_view'),
re_path(r'^config/$', ApiV2ConfigView.as_view(), name='api_v2_config_view'),
re_path(r'^config/subscriptions/$', ApiV2SubscriptionView.as_view(), name='api_v2_subscription_view'),
re_path(r'^config/attach/$', ApiV2AttachView.as_view(), name='api_v2_attach_view'),
re_path(r'^auth/$', AuthView.as_view()),
re_path(r'^me/$', UserMeList.as_view(), name='user_me_list'),
re_path(r'^dashboard/$', DashboardView.as_view(), name='dashboard_view'),
re_path(r'^dashboard/graphs/jobs/$', DashboardJobsGraphView.as_view(), name='dashboard_jobs_graph_view'),
@@ -166,7 +153,6 @@ urlpatterns = [
re_path(r'^(?P<version>(v2))/', include(v2_urls)),
re_path(r'^login/$', LoggedLoginView.as_view(template_name='rest_framework/login.html', extra_context={'inside_login_context': True}), name='login'),
re_path(r'^logout/$', LoggedLogoutView.as_view(next_page='/api/', redirect_field_name='next'), name='logout'),
re_path(r'^o/', include(oauth2_root_urls)),
]
if MODE == 'development':
# Only include these if we are in the development environment

View File

@@ -14,10 +14,6 @@ from awx.api.views import (
UserRolesList,
UserActivityStreamList,
UserAccessList,
OAuth2ApplicationList,
OAuth2UserTokenList,
UserPersonalTokenList,
UserAuthorizedTokenList,
)
urls = [
@@ -31,10 +27,6 @@ urls = [
re_path(r'^(?P<pk>[0-9]+)/roles/$', UserRolesList.as_view(), name='user_roles_list'),
re_path(r'^(?P<pk>[0-9]+)/activity_stream/$', UserActivityStreamList.as_view(), name='user_activity_stream_list'),
re_path(r'^(?P<pk>[0-9]+)/access_list/$', UserAccessList.as_view(), name='user_access_list'),
re_path(r'^(?P<pk>[0-9]+)/applications/$', OAuth2ApplicationList.as_view(), name='o_auth2_application_list'),
re_path(r'^(?P<pk>[0-9]+)/tokens/$', OAuth2UserTokenList.as_view(), name='o_auth2_token_list'),
re_path(r'^(?P<pk>[0-9]+)/authorized_tokens/$', UserAuthorizedTokenList.as_view(), name='user_authorized_token_list'),
re_path(r'^(?P<pk>[0-9]+)/personal_tokens/$', UserPersonalTokenList.as_view(), name='user_personal_token_list'),
]
__all__ = ['urls']

View File

@@ -36,7 +36,7 @@ from django.utils.translation import gettext_lazy as _
# Django REST Framework
from rest_framework.exceptions import APIException, PermissionDenied, ParseError, NotFound
from rest_framework.parsers import FormParser
from rest_framework.permissions import AllowAny, IsAuthenticated
from rest_framework.permissions import IsAuthenticated
from rest_framework.renderers import JSONRenderer, StaticHTMLRenderer
from rest_framework.response import Response
from rest_framework.settings import api_settings
@@ -50,19 +50,12 @@ from rest_framework_yaml.renderers import YAMLRenderer
# ansi2html
from ansi2html import Ansi2HTMLConverter
# Python Social Auth
from social_core.backends.utils import load_backends
# Django OAuth Toolkit
from oauth2_provider.models import get_access_token_model
import pytz
from wsgiref.util import FileWrapper
# django-ansible-base
from ansible_base.lib.utils.requests import get_remote_hosts
from ansible_base.rbac.models import RoleEvaluation, ObjectRole
from ansible_base.resource_registry.shared_types import OrganizationType, TeamType, UserType
from ansible_base.rbac.models import RoleEvaluation
# AWX
from awx.main.tasks.system import send_notifications, update_inventory_computed_fields
@@ -91,7 +84,6 @@ from awx.api.generics import (
from awx.api.views.labels import LabelSubListCreateAttachDetachView
from awx.api.versioning import reverse
from awx.main import models
from awx.main.models.rbac import get_role_definition
from awx.main.utils import (
camelcase_to_underscore,
extract_ansible_vars,
@@ -103,6 +95,7 @@ from awx.main.utils import (
)
from awx.main.utils.encryption import encrypt_value
from awx.main.utils.filters import SmartFilter
from awx.main.utils.plugins import compute_cloud_inventory_sources
from awx.main.redact import UriCleaner
from awx.api.permissions import (
JobTemplateCallbackPermission,
@@ -676,116 +669,16 @@ class ScheduleUnifiedJobsList(SubListAPIView):
name = _('Schedule Jobs List')
class AuthView(APIView):
'''List enabled single-sign-on endpoints'''
authentication_classes = []
permission_classes = (AllowAny,)
swagger_topic = 'System Configuration'
def get(self, request):
from rest_framework.reverse import reverse
data = OrderedDict()
err_backend, err_message = request.session.get('social_auth_error', (None, None))
auth_backends = list(load_backends(settings.AUTHENTICATION_BACKENDS, force_load=True).items())
# Return auth backends in consistent order: Google, GitHub, SAML.
auth_backends.sort(key=lambda x: 'g' if x[0] == 'google-oauth2' else x[0])
for name, backend in auth_backends:
login_url = reverse('social:begin', args=(name,))
complete_url = request.build_absolute_uri(reverse('social:complete', args=(name,)))
backend_data = {'login_url': login_url, 'complete_url': complete_url}
if name == 'saml':
backend_data['metadata_url'] = reverse('sso:saml_metadata')
for idp in sorted(settings.SOCIAL_AUTH_SAML_ENABLED_IDPS.keys()):
saml_backend_data = dict(backend_data.items())
saml_backend_data['login_url'] = '%s?idp=%s' % (login_url, idp)
full_backend_name = '%s:%s' % (name, idp)
if (err_backend == full_backend_name or err_backend == name) and err_message:
saml_backend_data['error'] = err_message
data[full_backend_name] = saml_backend_data
else:
if err_backend == name and err_message:
backend_data['error'] = err_message
data[name] = backend_data
return Response(data)
def immutablesharedfields(cls):
'''
Class decorator to prevent modifying shared resources when ALLOW_LOCAL_RESOURCE_MANAGEMENT setting is set to False.
Works by overriding these view methods:
- create
- delete
- perform_update
create and delete are overridden to raise a PermissionDenied exception.
perform_update is overridden to check if any shared fields are being modified,
and raise a PermissionDenied exception if so.
'''
# create instead of perform_create because some of our views
# override create instead of perform_create
if hasattr(cls, 'create'):
cls.original_create = cls.create
@functools.wraps(cls.create)
def create_wrapper(*args, **kwargs):
if settings.ALLOW_LOCAL_RESOURCE_MANAGEMENT:
return cls.original_create(*args, **kwargs)
raise PermissionDenied({'detail': _('Creation of this resource is not allowed. Create this resource via the platform ingress.')})
cls.create = create_wrapper
if hasattr(cls, 'delete'):
cls.original_delete = cls.delete
@functools.wraps(cls.delete)
def delete_wrapper(*args, **kwargs):
if settings.ALLOW_LOCAL_RESOURCE_MANAGEMENT:
return cls.original_delete(*args, **kwargs)
raise PermissionDenied({'detail': _('Deletion of this resource is not allowed. Delete this resource via the platform ingress.')})
cls.delete = delete_wrapper
if hasattr(cls, 'perform_update'):
cls.original_perform_update = cls.perform_update
@functools.wraps(cls.perform_update)
def update_wrapper(*args, **kwargs):
if not settings.ALLOW_LOCAL_RESOURCE_MANAGEMENT:
view, serializer = args
instance = view.get_object()
if instance:
if isinstance(instance, models.Organization):
shared_fields = OrganizationType._declared_fields.keys()
elif isinstance(instance, models.User):
shared_fields = UserType._declared_fields.keys()
elif isinstance(instance, models.Team):
shared_fields = TeamType._declared_fields.keys()
attrs = serializer.validated_data
for field in shared_fields:
if field in attrs and getattr(instance, field) != attrs[field]:
raise PermissionDenied({field: _(f"Cannot change shared field '{field}'. Alter this field via the platform ingress.")})
return cls.original_perform_update(*args, **kwargs)
cls.perform_update = update_wrapper
return cls
@immutablesharedfields
class TeamList(ListCreateAPIView):
model = models.Team
serializer_class = serializers.TeamSerializer
@immutablesharedfields
class TeamDetail(RetrieveUpdateDestroyAPIView):
model = models.Team
serializer_class = serializers.TeamSerializer
@immutablesharedfields
class TeamUsersList(BaseUsersList):
model = models.User
serializer_class = serializers.UserSerializer
@@ -827,9 +720,19 @@ class TeamRolesList(SubListAttachDetachAPIView):
team = get_object_or_404(models.Team, pk=self.kwargs['pk'])
credential_content_type = ContentType.objects.get_for_model(models.Credential)
if role.content_type == credential_content_type:
if not role.content_object.organization or role.content_object.organization.id != team.organization.id:
data = dict(msg=_("You cannot grant credential access to a team when the Organization field isn't set, or belongs to a different organization"))
if not role.content_object.organization:
data = dict(
msg=_("You cannot grant access to a credential that is not assigned to an organization (private credentials cannot be assigned to teams)")
)
return Response(data, status=status.HTTP_400_BAD_REQUEST)
elif role.content_object.organization.id != team.organization.id:
if not request.user.is_superuser:
data = dict(
msg=_(
"You cannot grant a team access to a credential in a different organization. Only superusers can grant cross-organization credential access to teams"
)
)
return Response(data, status=status.HTTP_400_BAD_REQUEST)
return super(TeamRolesList, self).post(request, *args, **kwargs)
@@ -856,17 +759,9 @@ class TeamProjectsList(SubListAPIView):
def get_queryset(self):
team = self.get_parent_object()
self.check_parent_access(team)
model_ct = ContentType.objects.get_for_model(self.model)
parent_ct = ContentType.objects.get_for_model(self.parent_model)
rd = get_role_definition(team.member_role)
role = ObjectRole.objects.filter(object_id=team.id, content_type=parent_ct, role_definition=rd).first()
if role is None:
# Team has no permissions, therefore team has no projects
return self.model.objects.none()
else:
project_qs = self.model.accessible_objects(self.request.user, 'read_role')
return project_qs.filter(id__in=RoleEvaluation.objects.filter(content_type_id=model_ct.id, role=role).values_list('object_id'))
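# intersect the projects readable by the requesting user with those readable by the team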
my_qs = self.model.accessible_objects(self.request.user, 'read_role')
team_qs = models.Project.accessible_objects(team, 'read_role')
return my_qs & team_qs
class TeamActivityStreamList(SubListAPIView):
@@ -981,13 +876,23 @@ class ProjectTeamsList(ListAPIView):
serializer_class = serializers.TeamSerializer
def get_queryset(self):
p = get_object_or_404(models.Project, pk=self.kwargs['pk'])
if not self.request.user.can_access(models.Project, 'read', p):
parent = get_object_or_404(models.Project, pk=self.kwargs['pk'])
if not self.request.user.can_access(models.Project, 'read', parent):
raise PermissionDenied()
project_ct = ContentType.objects.get_for_model(models.Project)
project_ct = ContentType.objects.get_for_model(parent)
team_ct = ContentType.objects.get_for_model(self.model)
all_roles = models.Role.objects.filter(Q(descendents__content_type=project_ct) & Q(descendents__object_id=p.pk), content_type=team_ct)
return self.model.accessible_objects(self.request.user, 'read_role').filter(pk__in=[t.content_object.pk for t in all_roles])
roles_on_project = models.Role.objects.filter(
content_type=project_ct,
object_id=parent.pk,
)
team_member_parent_roles = models.Role.objects.filter(children__in=roles_on_project, role_field='member_role', content_type=team_ct).distinct()
team_ids = team_member_parent_roles.values_list('object_id', flat=True)
my_qs = self.model.accessible_objects(self.request.user, 'read_role').filter(pk__in=team_ids)
return my_qs
class ProjectSchedulesList(SubListCreateAPIView):
@@ -1167,7 +1072,6 @@ class ProjectCopy(CopyAPIView):
copy_return_serializer_class = serializers.ProjectSerializer
@immutablesharedfields
class UserList(ListCreateAPIView):
model = models.User
serializer_class = serializers.UserSerializer
@@ -1185,121 +1089,6 @@ class UserMeList(ListAPIView):
return self.model.objects.filter(pk=self.request.user.pk)
class OAuth2ApplicationList(ListCreateAPIView):
name = _("OAuth 2 Applications")
model = models.OAuth2Application
serializer_class = serializers.OAuth2ApplicationSerializer
swagger_topic = 'Authentication'
class OAuth2ApplicationDetail(RetrieveUpdateDestroyAPIView):
name = _("OAuth 2 Application Detail")
model = models.OAuth2Application
serializer_class = serializers.OAuth2ApplicationSerializer
swagger_topic = 'Authentication'
def update_raw_data(self, data):
data.pop('client_secret', None)
return super(OAuth2ApplicationDetail, self).update_raw_data(data)
class ApplicationOAuth2TokenList(SubListCreateAPIView):
name = _("OAuth 2 Application Tokens")
model = models.OAuth2AccessToken
serializer_class = serializers.OAuth2TokenSerializer
parent_model = models.OAuth2Application
relationship = 'oauth2accesstoken_set'
parent_key = 'application'
swagger_topic = 'Authentication'
class OAuth2ApplicationActivityStreamList(SubListAPIView):
model = models.ActivityStream
serializer_class = serializers.ActivityStreamSerializer
parent_model = models.OAuth2Application
relationship = 'activitystream_set'
swagger_topic = 'Authentication'
search_fields = ('changes',)
class OAuth2TokenList(ListCreateAPIView):
name = _("OAuth2 Tokens")
model = models.OAuth2AccessToken
serializer_class = serializers.OAuth2TokenSerializer
swagger_topic = 'Authentication'
class OAuth2UserTokenList(SubListCreateAPIView):
name = _("OAuth2 User Tokens")
model = models.OAuth2AccessToken
serializer_class = serializers.OAuth2TokenSerializer
parent_model = models.User
relationship = 'main_oauth2accesstoken'
parent_key = 'user'
swagger_topic = 'Authentication'
class UserAuthorizedTokenList(SubListCreateAPIView):
name = _("OAuth2 User Authorized Access Tokens")
model = models.OAuth2AccessToken
serializer_class = serializers.UserAuthorizedTokenSerializer
parent_model = models.User
relationship = 'oauth2accesstoken_set'
parent_key = 'user'
swagger_topic = 'Authentication'
def get_queryset(self):
return get_access_token_model().objects.filter(application__isnull=False, user=self.request.user)
class OrganizationApplicationList(SubListCreateAPIView):
name = _("Organization OAuth2 Applications")
model = models.OAuth2Application
serializer_class = serializers.OAuth2ApplicationSerializer
parent_model = models.Organization
relationship = 'applications'
parent_key = 'organization'
swagger_topic = 'Authentication'
class UserPersonalTokenList(SubListCreateAPIView):
name = _("OAuth2 Personal Access Tokens")
model = models.OAuth2AccessToken
serializer_class = serializers.UserPersonalTokenSerializer
parent_model = models.User
relationship = 'main_oauth2accesstoken'
parent_key = 'user'
swagger_topic = 'Authentication'
def get_queryset(self):
return get_access_token_model().objects.filter(application__isnull=True, user=self.request.user)
class OAuth2TokenDetail(RetrieveUpdateDestroyAPIView):
name = _("OAuth Token Detail")
model = models.OAuth2AccessToken
serializer_class = serializers.OAuth2TokenDetailSerializer
swagger_topic = 'Authentication'
class OAuth2TokenActivityStreamList(SubListAPIView):
model = models.ActivityStream
serializer_class = serializers.ActivityStreamSerializer
parent_model = models.OAuth2AccessToken
relationship = 'activitystream_set'
swagger_topic = 'Authentication'
search_fields = ('changes',)
class UserTeamsList(SubListAPIView):
model = models.Team
serializer_class = serializers.TeamSerializer
@@ -1339,14 +1128,6 @@ class UserRolesList(SubListAttachDetachAPIView):
role = get_object_or_400(models.Role, pk=sub_id)
content_types = ContentType.objects.get_for_models(models.Organization, models.Team, models.Credential) # dict of {model: content_type}
# Prevent user to be associated with team/org when ALLOW_LOCAL_RESOURCE_MANAGEMENT is False
if not settings.ALLOW_LOCAL_RESOURCE_MANAGEMENT:
for model in [models.Organization, models.Team]:
ct = content_types[model]
if role.content_type == ct and role.role_field in ['member_role', 'admin_role']:
data = dict(msg=_(f"Cannot directly modify user membership to {ct.model}. Direct shared resource management disabled"))
return Response(data, status=status.HTTP_403_FORBIDDEN)
credential_content_type = content_types[models.Credential]
if role.content_type == credential_content_type:
if 'disassociate' not in request.data and role.content_object.organization and user not in role.content_object.organization.member_role:
@@ -1381,7 +1162,6 @@ class UserOrganizationsList(OrganizationCountsMixin, SubListAPIView):
model = models.Organization
serializer_class = serializers.OrganizationSerializer
parent_model = models.User
relationship = 'organizations'
def get_queryset(self):
parent = self.get_parent_object()
@@ -1395,7 +1175,6 @@ class UserAdminOfOrganizationsList(OrganizationCountsMixin, SubListAPIView):
model = models.Organization
serializer_class = serializers.OrganizationSerializer
parent_model = models.User
relationship = 'admin_of_organizations'
def get_queryset(self):
parent = self.get_parent_object()
@@ -1419,7 +1198,6 @@ class UserActivityStreamList(SubListAPIView):
return qs.filter(Q(actor=parent) | Q(user__in=[parent]))
@immutablesharedfields
class UserDetail(RetrieveUpdateDestroyAPIView):
model = models.User
serializer_class = serializers.UserSerializer
@@ -2234,9 +2012,9 @@ class InventorySourceNotificationTemplatesAnyList(SubListCreateAttachDetachAPIVi
def post(self, request, *args, **kwargs):
parent = self.get_parent_object()
if parent.source not in models.CLOUD_INVENTORY_SOURCES:
if parent.source not in compute_cloud_inventory_sources():
return Response(
dict(msg=_("Notification Templates can only be assigned when source is one of {}.").format(models.CLOUD_INVENTORY_SOURCES, parent.source)),
dict(msg=_("Notification Templates can only be assigned when source is one of {}.").format(compute_cloud_inventory_sources(), parent.source)),
status=status.HTTP_400_BAD_REQUEST,
)
return super(InventorySourceNotificationTemplatesAnyList, self).post(request, *args, **kwargs)
@@ -3590,6 +3368,7 @@ class JobRelaunch(RetrieveAPIView):
copy_kwargs = {}
retry_hosts = serializer.validated_data.get('hosts', None)
job_type = serializer.validated_data.get('job_type', None)
if retry_hosts and retry_hosts != 'all':
if obj.status in ACTIVE_STATES:
return Response(
@@ -3610,6 +3389,8 @@ class JobRelaunch(RetrieveAPIView):
)
copy_kwargs['limit'] = ','.join(retry_host_list)
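# optionally relaunch with a different job type (see NEW_JOB_TYPE_CHOICES on the serializer)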
if job_type:
copy_kwargs['job_type'] = job_type
new_job = obj.copy_unified_job(**copy_kwargs)
result = new_job.signal_start(**serializer.validated_data['credential_passwords'])
if not result:
@@ -4391,13 +4172,6 @@ class RoleUsersList(SubListAttachDetachAPIView):
role = self.get_parent_object()
content_types = ContentType.objects.get_for_models(models.Organization, models.Team, models.Credential) # dict of {model: content_type}
if not settings.ALLOW_LOCAL_RESOURCE_MANAGEMENT:
for model in [models.Organization, models.Team]:
ct = content_types[model]
if role.content_type == ct and role.role_field in ['member_role', 'admin_role']:
data = dict(msg=_(f"Cannot directly modify user membership to {ct.model}. Direct shared resource management disabled"))
return Response(data, status=status.HTTP_403_FORBIDDEN)
credential_content_type = content_types[models.Credential]
if role.content_type == credential_content_type:
if 'disassociate' not in request.data and role.content_object.organization and user not in role.content_object.organization.member_role:
@@ -4439,9 +4213,21 @@ class RoleTeamsList(SubListAttachDetachAPIView):
credential_content_type = ContentType.objects.get_for_model(models.Credential)
if role.content_type == credential_content_type:
if not role.content_object.organization or role.content_object.organization.id != team.organization.id:
data = dict(msg=_("You cannot grant credential access to a team when the Organization field isn't set, or belongs to a different organization"))
# Private credentials (no organization) are never allowed for teams
if not role.content_object.organization:
data = dict(
msg=_("You cannot grant access to a credential that is not assigned to an organization (private credentials cannot be assigned to teams)")
)
return Response(data, status=status.HTTP_400_BAD_REQUEST)
# Cross-organization credentials are only allowed for superusers
elif role.content_object.organization.id != team.organization.id:
if not request.user.is_superuser:
data = dict(
msg=_(
"You cannot grant a team access to a credential in a different organization. Only superusers can grant cross-organization credential access to teams"
)
)
return Response(data, status=status.HTTP_400_BAD_REQUEST)
action = 'attach'
if request.data.get('disassociate', None):
@@ -4461,34 +4247,6 @@ class RoleTeamsList(SubListAttachDetachAPIView):
return Response(status=status.HTTP_204_NO_CONTENT)
class RoleParentsList(SubListAPIView):
deprecated = True
model = models.Role
serializer_class = serializers.RoleSerializer
parent_model = models.Role
relationship = 'parents'
permission_classes = (IsAuthenticated,)
search_fields = ('role_field', 'content_type__model')
def get_queryset(self):
role = models.Role.objects.get(pk=self.kwargs['pk'])
return models.Role.filter_visible_roles(self.request.user, role.parents.all())
class RoleChildrenList(SubListAPIView):
deprecated = True
model = models.Role
serializer_class = serializers.RoleSerializer
parent_model = models.Role
relationship = 'children'
permission_classes = (IsAuthenticated,)
search_fields = ('role_field', 'content_type__model')
def get_queryset(self):
role = models.Role.objects.get(pk=self.kwargs['pk'])
return models.Role.filter_visible_roles(self.request.user, role.children.all())
# Create view functions for all of the class-based views to simplify inclusion
# in URL patterns and reverse URL lookups, converting CamelCase names to
# lowercase_with_underscore (e.g. MyView.as_view() becomes my_view).

View File

@@ -10,6 +10,7 @@ from awx.api.generics import APIView, Response
from awx.api.permissions import AnalyticsPermission
from awx.api.versioning import reverse
from awx.main.utils import get_awx_version
from awx.main.utils.analytics_proxy import OIDCClient
from rest_framework import status
from collections import OrderedDict
@@ -179,33 +180,59 @@ class AnalyticsGenericView(APIView):
return Response(response.content, status=response.status_code)
@staticmethod
def _base_auth_request(request: requests.Request, method: str, url: str, user: str, pw: str, headers: dict[str, str]) -> requests.Response:
response = requests.request(
method,
url,
auth=(user, pw),
verify=settings.INSIGHTS_CERT_PATH,
params=getattr(request, 'query_params', {}),
headers=headers,
json=getattr(request, 'data', {}),
timeout=(31, 31),
)
return response
def _send_to_analytics(self, request, method):
try:
headers = self._request_headers(request)
self._get_setting('INSIGHTS_TRACKING_STATE', False, ERROR_UPLOAD_NOT_ENABLED)
url = self._get_analytics_url(request.path)
rh_user = self._get_setting('REDHAT_USERNAME', None, ERROR_MISSING_USER)
rh_password = self._get_setting('REDHAT_PASSWORD', None, ERROR_MISSING_PASSWORD)
if method not in ["GET", "POST", "OPTIONS"]:
return self._error_response(ERROR_UNSUPPORTED_METHOD, method, remote=False, status_code=status.HTTP_500_INTERNAL_SERVER_ERROR)
else:
response = requests.request(
url = self._get_analytics_url(request.path)
using_subscriptions_credentials = False
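# prefer the REDHAT_* basic-auth credentials; otherwise fall back to the SUBSCRIPTIONS_* service account via OIDC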
try:
rh_user = getattr(settings, 'REDHAT_USERNAME', None)
rh_password = getattr(settings, 'REDHAT_PASSWORD', None)
if not (rh_user and rh_password):
rh_user = self._get_setting('SUBSCRIPTIONS_CLIENT_ID', None, ERROR_MISSING_USER)
rh_password = self._get_setting('SUBSCRIPTIONS_CLIENT_SECRET', None, ERROR_MISSING_PASSWORD)
using_subscriptions_credentials = True
client = OIDCClient(rh_user, rh_password)
response = client.make_request(
method,
url,
auth=(rh_user, rh_password),
verify=settings.INSIGHTS_CERT_PATH,
params=request.query_params,
headers=headers,
json=request.data,
verify=settings.INSIGHTS_CERT_PATH,
params=getattr(request, 'query_params', {}),
json=getattr(request, 'data', {}),
timeout=(31, 31),
)
except requests.RequestException:
# subscriptions credentials are not valid for basic auth, so just return 401
if using_subscriptions_credentials:
response = Response(status=status.HTTP_401_UNAUTHORIZED)
else:
logger.error("Automation Analytics API request failed, trying base auth method")
response = self._base_auth_request(request, method, url, rh_user, rh_password, headers)
#
# Missing or wrong user/pass
#
if response.status_code == status.HTTP_401_UNAUTHORIZED:
text = (response.text or '').rstrip("\n")
text = response.get('text', '').rstrip("\n")
return self._error_response(ERROR_UNAUTHORIZED, text, remote=True, remote_status_code=response.status_code)
#
# Not found, No entitlement or No data in Analytics

View File

@@ -12,7 +12,7 @@ import re
import asn1
from awx.api import serializers
from awx.api.generics import GenericAPIView, Response
from awx.api.permissions import IsSystemAdminOrAuditor
from awx.api.permissions import IsSystemAdmin
from awx.main import models
from cryptography import x509
from cryptography.hazmat.primitives import hashes, serialization
@@ -48,7 +48,7 @@ class InstanceInstallBundle(GenericAPIView):
name = _('Install Bundle')
model = models.Instance
serializer_class = serializers.InstanceSerializer
permission_classes = (IsSystemAdminOrAuditor,)
permission_classes = (IsSystemAdmin,)
def get(self, request, *args, **kwargs):
instance_obj = self.get_object()

View File

@@ -15,6 +15,7 @@ from rest_framework.response import Response
from rest_framework import status
from awx.main.constants import ACTIVE_STATES
from awx.main.models import Organization
from awx.main.utils import get_object_or_400
from awx.main.models.ha import Instance, InstanceGroup, schedule_policy_task
from awx.main.models.organization import Team
@@ -60,6 +61,21 @@ class UnifiedJobDeletionMixin(object):
return Response(status=status.HTTP_204_NO_CONTENT)
class OrganizationInstanceGroupMembershipMixin(object):
"""
This mixin overloads attach/detach so that it calls Organization.save(),
to ensure instance group updates are persisted
"""
def unattach(self, request, *args, **kwargs):
with transaction.atomic():
organization_queryset = Organization.objects.select_for_update()
organization = organization_queryset.get(pk=self.get_parent_object().id)
response = super(OrganizationInstanceGroupMembershipMixin, self).unattach(request, *args, **kwargs)
organization.save()
return response
class InstanceGroupMembershipMixin(object):
"""
This mixin overloads attach/detach so that it calls InstanceGroup.save(),

View File

@@ -52,19 +52,16 @@ from awx.api.serializers import (
WorkflowJobTemplateSerializer,
CredentialSerializer,
)
from awx.api.views.mixin import RelatedJobsPreventDeleteMixin, OrganizationCountsMixin
from awx.api.views import immutablesharedfields
from awx.api.views.mixin import RelatedJobsPreventDeleteMixin, OrganizationCountsMixin, OrganizationInstanceGroupMembershipMixin
logger = logging.getLogger('awx.api.views.organization')
@immutablesharedfields
class OrganizationList(OrganizationCountsMixin, ListCreateAPIView):
model = Organization
serializer_class = OrganizationSerializer
@immutablesharedfields
class OrganizationDetail(RelatedJobsPreventDeleteMixin, RetrieveUpdateDestroyAPIView):
model = Organization
serializer_class = OrganizationSerializer
@@ -107,7 +104,6 @@ class OrganizationInventoriesList(SubListAPIView):
relationship = 'inventories'
@immutablesharedfields
class OrganizationUsersList(BaseUsersList):
model = User
serializer_class = UserSerializer
@@ -116,7 +112,6 @@ class OrganizationUsersList(BaseUsersList):
ordering = ('username',)
@immutablesharedfields
class OrganizationAdminsList(BaseUsersList):
model = User
serializer_class = UserSerializer
@@ -155,7 +150,6 @@ class OrganizationWorkflowJobTemplatesList(SubListCreateAPIView):
parent_key = 'organization'
@immutablesharedfields
class OrganizationTeamsList(SubListCreateAttachDetachAPIView):
model = Team
serializer_class = TeamSerializer
@@ -202,7 +196,7 @@ class OrganizationNotificationTemplatesApprovalList(OrganizationNotificationTemp
relationship = 'notification_templates_approvals'
class OrganizationInstanceGroupsList(SubListAttachDetachAPIView):
class OrganizationInstanceGroupsList(OrganizationInstanceGroupMembershipMixin, SubListAttachDetachAPIView):
model = InstanceGroup
serializer_class = InstanceGroupSerializer
parent_model = Organization

View File

@@ -8,6 +8,8 @@ import operator
from collections import OrderedDict
from django.conf import settings
from django.core.cache import cache
from django.db import connection
from django.utils.encoding import smart_str
from django.utils.decorators import method_decorator
from django.views.decorators.csrf import ensure_csrf_cookie
@@ -26,12 +28,14 @@ from awx.api.generics import APIView
from awx.conf.registry import settings_registry
from awx.main.analytics import all_collectors
from awx.main.ha import is_ha_environment
from awx.main.tasks.system import clear_setting_cache
from awx.main.utils import get_awx_version, get_custom_venv_choices
from awx.main.utils.licensing import validate_entitlement_manifest
from awx.api.versioning import URLPathVersioning, is_optional_api_urlpattern_prefix_request, reverse, drf_reverse
from awx.api.versioning import URLPathVersioning, reverse, drf_reverse
from awx.main.constants import PRIVILEGE_ESCALATION_METHODS
from awx.main.models import Project, Organization, Instance, InstanceGroup, JobTemplate
from awx.main.utils import set_environ
from awx.main.utils.analytics_proxy import TokenError
from awx.main.utils.licensing import get_licenser
logger = logging.getLogger('awx.api.views.root')
@@ -51,8 +55,6 @@ class ApiRootView(APIView):
data['description'] = _('AWX REST API')
data['current_version'] = v2
data['available_versions'] = dict(v2=v2)
if not is_optional_api_urlpattern_prefix_request(request):
data['oauth2'] = drf_reverse('api:oauth_authorization_root_view')
data['custom_logo'] = settings.CUSTOM_LOGO
data['custom_login_info'] = settings.CUSTOM_LOGIN_INFO
data['login_redirect_override'] = settings.LOGIN_REDIRECT_OVERRIDE
@@ -61,20 +63,6 @@ class ApiRootView(APIView):
return Response(data)
class ApiOAuthAuthorizationRootView(APIView):
permission_classes = (AllowAny,)
name = _("API OAuth 2 Authorization Root")
versioning_class = None
swagger_topic = 'Authentication'
def get(self, request, format=None):
data = OrderedDict()
data['authorize'] = drf_reverse('api:authorize')
data['token'] = drf_reverse('api:token')
data['revoke_token'] = drf_reverse('api:revoke-token')
return Response(data)
class ApiVersionRootView(APIView):
permission_classes = (AllowAny,)
swagger_topic = 'Versioning'
@@ -99,8 +87,6 @@ class ApiVersionRootView(APIView):
data['credentials'] = reverse('api:credential_list', request=request)
data['credential_types'] = reverse('api:credential_type_list', request=request)
data['credential_input_sources'] = reverse('api:credential_input_source_list', request=request)
data['applications'] = reverse('api:o_auth2_application_list', request=request)
data['tokens'] = reverse('api:o_auth2_token_list', request=request)
data['metrics'] = reverse('api:metrics_view', request=request)
data['inventory'] = reverse('api:inventory_list', request=request)
data['constructed_inventory'] = reverse('api:constructed_inventory_list', request=request)
@@ -194,19 +180,52 @@ class ApiV2SubscriptionView(APIView):
def post(self, request):
data = request.data.copy()
if data.get('subscriptions_password') == '$encrypted$':
data['subscriptions_password'] = settings.SUBSCRIPTIONS_PASSWORD
try:
user, pw = data.get('subscriptions_username'), data.get('subscriptions_password')
user = None
pw = None
basic_auth = False
# determine if the credentials are for basic auth or not
if data.get('subscriptions_client_id'):
user, pw = data.get('subscriptions_client_id'), data.get('subscriptions_client_secret')
if pw == '$encrypted$':
pw = settings.SUBSCRIPTIONS_CLIENT_SECRET
elif data.get('subscriptions_username'):
user, pw = data.get('subscriptions_username'), data.get('subscriptions_password')
if pw == '$encrypted$':
pw = settings.SUBSCRIPTIONS_PASSWORD
basic_auth = True
if not user or not pw:
return Response({"error": _("Missing subscription credentials")}, status=status.HTTP_400_BAD_REQUEST)
with set_environ(**settings.AWX_TASK_ENV):
validated = get_licenser().validate_rh(user, pw)
if user:
settings.SUBSCRIPTIONS_USERNAME = data['subscriptions_username']
if pw:
settings.SUBSCRIPTIONS_PASSWORD = data['subscriptions_password']
validated = get_licenser().validate_rh(user, pw, basic_auth)
# update settings if the credentials were valid
if basic_auth:
if user:
settings.SUBSCRIPTIONS_USERNAME = user
if pw:
settings.SUBSCRIPTIONS_PASSWORD = pw
# mutual exclusion for basic auth and service account
# only one should be set at a given time so that
# config/attach/ knows which credentials to use
settings.SUBSCRIPTIONS_CLIENT_ID = ""
settings.SUBSCRIPTIONS_CLIENT_SECRET = ""
else:
if user:
settings.SUBSCRIPTIONS_CLIENT_ID = user
if pw:
settings.SUBSCRIPTIONS_CLIENT_SECRET = pw
# mutual exclusion for basic auth and service account
settings.SUBSCRIPTIONS_USERNAME = ""
settings.SUBSCRIPTIONS_PASSWORD = ""
except Exception as exc:
msg = _("Invalid Subscription")
if isinstance(exc, requests.exceptions.HTTPError) and getattr(getattr(exc, 'response', None), 'status_code', None) == 401:
if isinstance(exc, TokenError) or (
isinstance(exc, requests.exceptions.HTTPError) and getattr(getattr(exc, 'response', None), 'status_code', None) == 401
):
msg = _("The provided credentials are invalid (HTTP 401).")
elif isinstance(exc, requests.exceptions.ProxyError):
msg = _("Unable to connect to proxy server.")
@@ -233,16 +252,25 @@ class ApiV2AttachView(APIView):
def post(self, request):
data = request.data.copy()
pool_id = data.get('pool_id', None)
if not pool_id:
return Response({"error": _("No subscription pool ID provided.")}, status=status.HTTP_400_BAD_REQUEST)
user = getattr(settings, 'SUBSCRIPTIONS_USERNAME', None)
pw = getattr(settings, 'SUBSCRIPTIONS_PASSWORD', None)
if pool_id and user and pw:
subscription_id = data.get('subscription_id', None)
if not subscription_id:
return Response({"error": _("No subscription ID provided.")}, status=status.HTTP_400_BAD_REQUEST)
# Ensure we always use the latest subscription credentials
cache.delete_many(['SUBSCRIPTIONS_CLIENT_ID', 'SUBSCRIPTIONS_CLIENT_SECRET', 'SUBSCRIPTIONS_USERNAME', 'SUBSCRIPTIONS_PASSWORD'])
user = getattr(settings, 'SUBSCRIPTIONS_CLIENT_ID', None)
pw = getattr(settings, 'SUBSCRIPTIONS_CLIENT_SECRET', None)
basic_auth = False
if not (user and pw):
user = getattr(settings, 'SUBSCRIPTIONS_USERNAME', None)
pw = getattr(settings, 'SUBSCRIPTIONS_PASSWORD', None)
basic_auth = True
if not (user and pw):
return Response({"error": _("Missing subscription credentials")}, status=status.HTTP_400_BAD_REQUEST)
if subscription_id and user and pw:
data = request.data.copy()
try:
with set_environ(**settings.AWX_TASK_ENV):
validated = get_licenser().validate_rh(user, pw)
validated = get_licenser().validate_rh(user, pw, basic_auth)
except Exception as exc:
msg = _("Invalid Subscription")
if isinstance(exc, requests.exceptions.HTTPError) and getattr(getattr(exc, 'response', None), 'status_code', None) == 401:
@@ -256,10 +284,12 @@ class ApiV2AttachView(APIView):
else:
logger.exception(smart_str(u"Invalid subscription submitted."), extra=dict(actor=request.user.username))
return Response({"error": msg}, status=status.HTTP_400_BAD_REQUEST)
for sub in validated:
if sub['pool_id'] == pool_id:
if sub['subscription_id'] == subscription_id:
sub['valid_key'] = True
settings.LICENSE = sub
connection.on_commit(lambda: clear_setting_cache.delay(['LICENSE']))
return Response(sub)
return Response({"error": _("Error processing subscription metadata.")}, status=status.HTTP_400_BAD_REQUEST)
@@ -279,7 +309,6 @@ class ApiV2ConfigView(APIView):
'''Return various sitewide configuration settings'''
license_data = get_licenser().validate()
if not license_data.get('valid_key', False):
license_data = {}
@@ -295,15 +324,6 @@ class ApiV2ConfigView(APIView):
become_methods=PRIVILEGE_ESCALATION_METHODS,
)
# If LDAP is enabled, user_ldap_fields will return a list of field
# names that are managed by LDAP and should be read-only for users with
# a non-empty ldap_dn attribute.
if getattr(settings, 'AUTH_LDAP_SERVER_URI', None):
user_ldap_fields = ['username', 'password']
user_ldap_fields.extend(getattr(settings, 'AUTH_LDAP_USER_ATTR_MAP', {}).keys())
user_ldap_fields.extend(getattr(settings, 'AUTH_LDAP_USER_FLAGS_BY_GROUP', {}).keys())
data['user_ldap_fields'] = user_ldap_fields
if (
request.user.is_superuser
or request.user.is_system_auditor
@@ -352,6 +372,7 @@ class ApiV2ConfigView(APIView):
try:
license_data_validated = get_licenser().license_from_manifest(license_data)
connection.on_commit(lambda: clear_setting_cache.delay(['LICENSE']))
except Exception:
logger.warning(smart_str(u"Invalid subscription submitted."), extra=dict(actor=request.user.username))
return Response({"error": _("Invalid License")}, status=status.HTTP_400_BAD_REQUEST)
@@ -370,6 +391,7 @@ class ApiV2ConfigView(APIView):
def delete(self, request):
try:
settings.LICENSE = {}
connection.on_commit(lambda: clear_setting_cache.delay(['LICENSE']))
return Response(status=status.HTTP_204_NO_CONTENT)
except Exception:
# FIX: Log

View File

@@ -10,7 +10,7 @@ from django.core.validators import URLValidator, _lazy_re_compile
from django.utils.translation import gettext_lazy as _
# Django REST Framework
from rest_framework.fields import BooleanField, CharField, ChoiceField, DictField, DateTimeField, EmailField, IntegerField, ListField # noqa
from rest_framework.fields import BooleanField, CharField, ChoiceField, DictField, DateTimeField, EmailField, IntegerField, ListField, FloatField # noqa
from rest_framework.serializers import PrimaryKeyRelatedField # noqa
# AWX
@@ -207,7 +207,8 @@ class URLField(CharField):
if self.allow_plain_hostname:
try:
url_parts = urlparse.urlsplit(value)
if url_parts.hostname and '.' not in url_parts.hostname:
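# a bracketed netloc is an IPv6 literal; it contains no dots but is a valid host, so skip the rewrite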
looks_like_ipv6 = bool(url_parts.netloc and url_parts.netloc.startswith('[') and url_parts.netloc.endswith(']'))
if not looks_like_ipv6 and url_parts.hostname and '.' not in url_parts.hostname:
netloc = '{}.local'.format(url_parts.hostname)
if url_parts.port:
netloc = '{}:{}'.format(netloc, url_parts.port)

View File

@@ -1,13 +1,11 @@
# -*- coding: utf-8 -*-
from __future__ import unicode_literals
# AWX
from awx.conf.migrations._ldap_group_type import fill_ldap_group_type_params
from django.db import migrations
class Migration(migrations.Migration):
dependencies = [('conf', '0005_v330_rename_two_session_settings')]
operations = [migrations.RunPython(fill_ldap_group_type_params)]
# this migration does nothing; it exists only to preserve the integrity of the migration files
operations = []

View File

@@ -0,0 +1,115 @@
from django.db import migrations
LDAP_AUTH_CONF_KEYS = [
'AUTH_LDAP_SERVER_URI',
'AUTH_LDAP_BIND_DN',
'AUTH_LDAP_BIND_PASSWORD',
'AUTH_LDAP_START_TLS',
'AUTH_LDAP_CONNECTION_OPTIONS',
'AUTH_LDAP_USER_SEARCH',
'AUTH_LDAP_USER_DN_TEMPLATE',
'AUTH_LDAP_USER_ATTR_MAP',
'AUTH_LDAP_GROUP_SEARCH',
'AUTH_LDAP_GROUP_TYPE',
'AUTH_LDAP_GROUP_TYPE_PARAMS',
'AUTH_LDAP_REQUIRE_GROUP',
'AUTH_LDAP_DENY_GROUP',
'AUTH_LDAP_USER_FLAGS_BY_GROUP',
'AUTH_LDAP_ORGANIZATION_MAP',
'AUTH_LDAP_TEAM_MAP',
'AUTH_LDAP_1_SERVER_URI',
'AUTH_LDAP_1_BIND_DN',
'AUTH_LDAP_1_BIND_PASSWORD',
'AUTH_LDAP_1_START_TLS',
'AUTH_LDAP_1_CONNECTION_OPTIONS',
'AUTH_LDAP_1_USER_SEARCH',
'AUTH_LDAP_1_USER_DN_TEMPLATE',
'AUTH_LDAP_1_USER_ATTR_MAP',
'AUTH_LDAP_1_GROUP_SEARCH',
'AUTH_LDAP_1_GROUP_TYPE',
'AUTH_LDAP_1_GROUP_TYPE_PARAMS',
'AUTH_LDAP_1_REQUIRE_GROUP',
'AUTH_LDAP_1_DENY_GROUP',
'AUTH_LDAP_1_USER_FLAGS_BY_GROUP',
'AUTH_LDAP_1_ORGANIZATION_MAP',
'AUTH_LDAP_1_TEAM_MAP',
'AUTH_LDAP_2_SERVER_URI',
'AUTH_LDAP_2_BIND_DN',
'AUTH_LDAP_2_BIND_PASSWORD',
'AUTH_LDAP_2_START_TLS',
'AUTH_LDAP_2_CONNECTION_OPTIONS',
'AUTH_LDAP_2_USER_SEARCH',
'AUTH_LDAP_2_USER_DN_TEMPLATE',
'AUTH_LDAP_2_USER_ATTR_MAP',
'AUTH_LDAP_2_GROUP_SEARCH',
'AUTH_LDAP_2_GROUP_TYPE',
'AUTH_LDAP_2_GROUP_TYPE_PARAMS',
'AUTH_LDAP_2_REQUIRE_GROUP',
'AUTH_LDAP_2_DENY_GROUP',
'AUTH_LDAP_2_USER_FLAGS_BY_GROUP',
'AUTH_LDAP_2_ORGANIZATION_MAP',
'AUTH_LDAP_2_TEAM_MAP',
'AUTH_LDAP_3_SERVER_URI',
'AUTH_LDAP_3_BIND_DN',
'AUTH_LDAP_3_BIND_PASSWORD',
'AUTH_LDAP_3_START_TLS',
'AUTH_LDAP_3_CONNECTION_OPTIONS',
'AUTH_LDAP_3_USER_SEARCH',
'AUTH_LDAP_3_USER_DN_TEMPLATE',
'AUTH_LDAP_3_USER_ATTR_MAP',
'AUTH_LDAP_3_GROUP_SEARCH',
'AUTH_LDAP_3_GROUP_TYPE',
'AUTH_LDAP_3_GROUP_TYPE_PARAMS',
'AUTH_LDAP_3_REQUIRE_GROUP',
'AUTH_LDAP_3_DENY_GROUP',
'AUTH_LDAP_3_USER_FLAGS_BY_GROUP',
'AUTH_LDAP_3_ORGANIZATION_MAP',
'AUTH_LDAP_3_TEAM_MAP',
'AUTH_LDAP_4_SERVER_URI',
'AUTH_LDAP_4_BIND_DN',
'AUTH_LDAP_4_BIND_PASSWORD',
'AUTH_LDAP_4_START_TLS',
'AUTH_LDAP_4_CONNECTION_OPTIONS',
'AUTH_LDAP_4_USER_SEARCH',
'AUTH_LDAP_4_USER_DN_TEMPLATE',
'AUTH_LDAP_4_USER_ATTR_MAP',
'AUTH_LDAP_4_GROUP_SEARCH',
'AUTH_LDAP_4_GROUP_TYPE',
'AUTH_LDAP_4_GROUP_TYPE_PARAMS',
'AUTH_LDAP_4_REQUIRE_GROUP',
'AUTH_LDAP_4_DENY_GROUP',
'AUTH_LDAP_4_USER_FLAGS_BY_GROUP',
'AUTH_LDAP_4_ORGANIZATION_MAP',
'AUTH_LDAP_4_TEAM_MAP',
'AUTH_LDAP_5_SERVER_URI',
'AUTH_LDAP_5_BIND_DN',
'AUTH_LDAP_5_BIND_PASSWORD',
'AUTH_LDAP_5_START_TLS',
'AUTH_LDAP_5_CONNECTION_OPTIONS',
'AUTH_LDAP_5_USER_SEARCH',
'AUTH_LDAP_5_USER_DN_TEMPLATE',
'AUTH_LDAP_5_USER_ATTR_MAP',
'AUTH_LDAP_5_GROUP_SEARCH',
'AUTH_LDAP_5_GROUP_TYPE',
'AUTH_LDAP_5_GROUP_TYPE_PARAMS',
'AUTH_LDAP_5_REQUIRE_GROUP',
'AUTH_LDAP_5_DENY_GROUP',
'AUTH_LDAP_5_USER_FLAGS_BY_GROUP',
'AUTH_LDAP_5_ORGANIZATION_MAP',
'AUTH_LDAP_5_TEAM_MAP',
]
def remove_ldap_auth_conf(apps, scheme_editor):
setting = apps.get_model('conf', 'Setting')
setting.objects.filter(key__in=LDAP_AUTH_CONF_KEYS).delete()
class Migration(migrations.Migration):
dependencies = [
('conf', '0010_change_to_JSONField'),
]
operations = [
migrations.RunPython(remove_ldap_auth_conf),
]

View File

@@ -0,0 +1,20 @@
# Generated by Django 4.2.10 on 2024-08-27 19:31
from django.db import migrations
OIDC_AUTH_CONF_KEYS = ['SOCIAL_AUTH_OIDC_KEY', 'SOCIAL_AUTH_OIDC_SECRET', 'SOCIAL_AUTH_OIDC_OIDC_ENDPOINT', 'SOCIAL_AUTH_OIDC_VERIFY_SSL']
def remove_oidc_auth_conf(apps, scheme_editor):
setting = apps.get_model('conf', 'Setting')
setting.objects.filter(key__in=OIDC_AUTH_CONF_KEYS).delete()
class Migration(migrations.Migration):
dependencies = [
('conf', '0011_remove_ldap_auth_conf'),
]
operations = [
migrations.RunPython(remove_oidc_auth_conf),
]

View File

@@ -0,0 +1,22 @@
from django.db import migrations
RADIUS_AUTH_CONF_KEYS = [
'RADIUS_SERVER',
'RADIUS_PORT',
'RADIUS_SECRET',
]
def remove_radius_auth_conf(apps, scheme_editor):
setting = apps.get_model('conf', 'Setting')
setting.objects.filter(key__in=RADIUS_AUTH_CONF_KEYS).delete()
class Migration(migrations.Migration):
dependencies = [
('conf', '0012_remove_oidc_auth_conf'),
]
operations = [
migrations.RunPython(remove_radius_auth_conf),
]

View File

@@ -0,0 +1,39 @@
# Generated by Django 4.2.10 on 2024-08-27 14:20
from django.db import migrations
SAML_AUTH_CONF_KEYS = [
'SAML_AUTO_CREATE_OBJECTS',
'SOCIAL_AUTH_SAML_CALLBACK_URL',
'SOCIAL_AUTH_SAML_METADATA_URL',
'SOCIAL_AUTH_SAML_SP_ENTITY_ID',
'SOCIAL_AUTH_SAML_SP_PUBLIC_CERT',
'SOCIAL_AUTH_SAML_SP_PRIVATE_KEY',
'SOCIAL_AUTH_SAML_ORG_INFO',
'SOCIAL_AUTH_SAML_TECHNICAL_CONTACT',
'SOCIAL_AUTH_SAML_SUPPORT_CONTACT',
'SOCIAL_AUTH_SAML_ENABLED_IDPS',
'SOCIAL_AUTH_SAML_SECURITY_CONFIG',
'SOCIAL_AUTH_SAML_SP_EXTRA',
'SOCIAL_AUTH_SAML_EXTRA_DATA',
'SOCIAL_AUTH_SAML_ORGANIZATION_MAP',
'SOCIAL_AUTH_SAML_TEAM_MAP',
'SOCIAL_AUTH_SAML_ORGANIZATION_ATTR',
'SOCIAL_AUTH_SAML_TEAM_ATTR',
'SOCIAL_AUTH_SAML_USER_FLAGS_BY_ATTR',
]
def remove_saml_auth_conf(apps, scheme_editor):
setting = apps.get_model('conf', 'Setting')
setting.objects.filter(key__in=SAML_AUTH_CONF_KEYS).delete()
class Migration(migrations.Migration):
dependencies = [
('conf', '0013_remove_radius_auth_conf'),
]
operations = [
migrations.RunPython(remove_saml_auth_conf),
]

View File

@@ -0,0 +1,81 @@
# Generated by Django 4.2.10 on 2024-08-13 11:14
from django.db import migrations
SOCIAL_OAUTH_CONF_KEYS = [
# MICROSOFT AZURE ACTIVE DIRECTORY SETTINGS
'SOCIAL_AUTH_AZUREAD_OAUTH2_CALLBACK_URL',
'SOCIAL_AUTH_AZUREAD_OAUTH2_KEY',
'SOCIAL_AUTH_AZUREAD_OAUTH2_SECRET',
'SOCIAL_AUTH_AZUREAD_OAUTH2_ORGANIZATION_MAP',
'SOCIAL_AUTH_AZUREAD_OAUTH2_TEAM_MAP',
# GOOGLE OAUTH2 AUTHENTICATION SETTINGS
'SOCIAL_AUTH_GOOGLE_OAUTH2_CALLBACK_URL',
'SOCIAL_AUTH_GOOGLE_OAUTH2_KEY',
'SOCIAL_AUTH_GOOGLE_OAUTH2_SECRET',
'SOCIAL_AUTH_GOOGLE_OAUTH2_WHITELISTED_DOMAINS',
'SOCIAL_AUTH_GOOGLE_OAUTH2_AUTH_EXTRA_ARGUMENTS',
'SOCIAL_AUTH_GOOGLE_OAUTH2_ORGANIZATION_MAP',
'SOCIAL_AUTH_GOOGLE_OAUTH2_TEAM_MAP',
# GITHUB OAUTH2 AUTHENTICATION SETTINGS
'SOCIAL_AUTH_GITHUB_CALLBACK_URL',
'SOCIAL_AUTH_GITHUB_KEY',
'SOCIAL_AUTH_GITHUB_SECRET',
'SOCIAL_AUTH_GITHUB_ORGANIZATION_MAP',
'SOCIAL_AUTH_GITHUB_TEAM_MAP',
# GITHUB ORG OAUTH2 AUTHENTICATION SETTINGS
'SOCIAL_AUTH_GITHUB_ORG_CALLBACK_URL',
'SOCIAL_AUTH_GITHUB_ORG_KEY',
'SOCIAL_AUTH_GITHUB_ORG_SECRET',
'SOCIAL_AUTH_GITHUB_ORG_NAME',
'SOCIAL_AUTH_GITHUB_ORG_ORGANIZATION_MAP',
'SOCIAL_AUTH_GITHUB_ORG_TEAM_MAP',
# GITHUB TEAM OAUTH2 AUTHENTICATION SETTINGS
'SOCIAL_AUTH_GITHUB_TEAM_CALLBACK_URL',
'SOCIAL_AUTH_GITHUB_TEAM_KEY',
'SOCIAL_AUTH_GITHUB_TEAM_SECRET',
'SOCIAL_AUTH_GITHUB_TEAM_ID',
'SOCIAL_AUTH_GITHUB_TEAM_ORGANIZATION_MAP',
'SOCIAL_AUTH_GITHUB_TEAM_TEAM_MAP',
# GITHUB ENTERPRISE OAUTH2 AUTHENTICATION SETTINGS
'SOCIAL_AUTH_GITHUB_ENTERPRISE_CALLBACK_URL',
'SOCIAL_AUTH_GITHUB_ENTERPRISE_URL',
'SOCIAL_AUTH_GITHUB_ENTERPRISE_API_URL',
'SOCIAL_AUTH_GITHUB_ENTERPRISE_KEY',
'SOCIAL_AUTH_GITHUB_ENTERPRISE_SECRET',
'SOCIAL_AUTH_GITHUB_ENTERPRISE_ORGANIZATION_MAP',
'SOCIAL_AUTH_GITHUB_ENTERPRISE_TEAM_MAP',
# GITHUB ENTERPRISE ORG OAUTH2 AUTHENTICATION SETTINGS
'SOCIAL_AUTH_GITHUB_ENTERPRISE_ORG_CALLBACK_URL',
'SOCIAL_AUTH_GITHUB_ENTERPRISE_ORG_URL',
'SOCIAL_AUTH_GITHUB_ENTERPRISE_ORG_API_URL',
'SOCIAL_AUTH_GITHUB_ENTERPRISE_ORG_KEY',
'SOCIAL_AUTH_GITHUB_ENTERPRISE_ORG_SECRET',
'SOCIAL_AUTH_GITHUB_ENTERPRISE_ORG_NAME',
'SOCIAL_AUTH_GITHUB_ENTERPRISE_ORG_ORGANIZATION_MAP',
'SOCIAL_AUTH_GITHUB_ENTERPRISE_ORG_TEAM_MAP',
# GITHUB ENTERPRISE TEAM OAUTH2 AUTHENTICATION SETTINGS
'SOCIAL_AUTH_GITHUB_ENTERPRISE_TEAM_CALLBACK_URL',
'SOCIAL_AUTH_GITHUB_ENTERPRISE_TEAM_URL',
'SOCIAL_AUTH_GITHUB_ENTERPRISE_TEAM_API_URL',
'SOCIAL_AUTH_GITHUB_ENTERPRISE_TEAM_KEY',
'SOCIAL_AUTH_GITHUB_ENTERPRISE_TEAM_SECRET',
'SOCIAL_AUTH_GITHUB_ENTERPRISE_TEAM_ID',
'SOCIAL_AUTH_GITHUB_ENTERPRISE_TEAM_ORGANIZATION_MAP',
'SOCIAL_AUTH_GITHUB_ENTERPRISE_TEAM_TEAM_MAP',
]
def remove_social_oauth_conf(apps, scheme_editor):
setting = apps.get_model('conf', 'Setting')
setting.objects.filter(key__in=SOCIAL_OAUTH_CONF_KEYS).delete()
class Migration(migrations.Migration):
dependencies = [
('conf', '0014_remove_saml_auth_conf'),
]
operations = [
migrations.RunPython(remove_social_oauth_conf),
]

View File

@@ -0,0 +1,25 @@
from django.db import migrations
TACACS_PLUS_AUTH_CONF_KEYS = [
'TACACSPLUS_HOST',
'TACACSPLUS_PORT',
'TACACSPLUS_SECRET',
'TACACSPLUS_SESSION_TIMEOUT',
'TACACSPLUS_AUTH_PROTOCOL',
'TACACSPLUS_REM_ADDR',
]
def remove_tacacs_plus_auth_conf(apps, scheme_editor):
setting = apps.get_model('conf', 'Setting')
setting.objects.filter(key__in=TACACS_PLUS_AUTH_CONF_KEYS).delete()
class Migration(migrations.Migration):
dependencies = [
('conf', '0015_remove_social_oauth_conf'),
]
operations = [
migrations.RunPython(remove_tacacs_plus_auth_conf),
]

View File

@@ -1,31 +0,0 @@
import inspect
from django.conf import settings
import logging
logger = logging.getLogger('awx.conf.migrations')
def fill_ldap_group_type_params(apps, schema_editor):
group_type = getattr(settings, 'AUTH_LDAP_GROUP_TYPE', None)
Setting = apps.get_model('conf', 'Setting')
group_type_params = {'name_attr': 'cn', 'member_attr': 'member'}
qs = Setting.objects.filter(key='AUTH_LDAP_GROUP_TYPE_PARAMS')
entry = None
if qs.exists():
entry = qs[0]
group_type_params = entry.value
else:
return # for new installs we prefer to use the default value
init_attrs = set(inspect.getfullargspec(group_type.__init__).args[1:])
for k in list(group_type_params.keys()):
if k not in init_attrs:
del group_type_params[k]
entry.value = group_type_params
logger.warning(f'Migration updating AUTH_LDAP_GROUP_TYPE_PARAMS with value {entry.value}')
entry.save()

View File

@@ -27,5 +27,5 @@ def _migrate_setting(apps, old_key, new_key, encrypted=False):
def prefill_rh_credentials(apps, schema_editor):
-    _migrate_setting(apps, 'REDHAT_USERNAME', 'SUBSCRIPTIONS_USERNAME', encrypted=False)
-    _migrate_setting(apps, 'REDHAT_PASSWORD', 'SUBSCRIPTIONS_PASSWORD', encrypted=True)
+    _migrate_setting(apps, 'REDHAT_USERNAME', 'SUBSCRIPTIONS_CLIENT_ID', encrypted=False)
+    _migrate_setting(apps, 'REDHAT_PASSWORD', 'SUBSCRIPTIONS_CLIENT_SECRET', encrypted=True)

View File

@@ -38,6 +38,7 @@ class SettingsRegistry(object):
if setting in self._registry:
raise ImproperlyConfigured('Setting "{}" is already registered.'.format(setting))
category = kwargs.setdefault('category', None)
kwargs.setdefault('required', False) # No setting is ordinarily required
category_slug = kwargs.setdefault('category_slug', slugify(category or '') or None)
if category_slug in {'all', 'changed', 'user-defaults'}:
raise ImproperlyConfigured('"{}" is a reserved category slug.'.format(category_slug))

View File

@@ -97,10 +97,13 @@ def _ctit_db_wrapper(trans_safe=False):
except DatabaseError as e:
if trans_safe:
cause = e.__cause__
-            if cause and hasattr(cause, 'sqlstate'):
-                sqlstate = cause.sqlstate
+            sqlstate = getattr(cause, 'sqlstate', None)
+            if cause and sqlstate:
                sqlstate_str = psycopg.errors.lookup(sqlstate)
                logger.error('SQL Error state: {} - {}'.format(sqlstate, sqlstate_str))
+            else:
+                logger.error(f'Error reading something related to database settings: {str(e)}.')
else:
logger.exception('Error modifying something related to database settings.')
finally:
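For context on the new branch: psycopg.errors.lookup() resolves a five-character SQLSTATE code to the matching exception class, which the log line above then formats. A quick illustration, assuming psycopg 3 is installed ('23505' is the well-known unique-violation SQLSTATE):

    import psycopg

    # lookup() maps a SQLSTATE code to the matching psycopg exception class.
    exc_class = psycopg.errors.lookup('23505')
    print(exc_class)  # <class 'psycopg.errors.UniqueViolation'>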

View File

@@ -61,18 +61,3 @@ def on_post_delete_setting(sender, **kwargs):
key = getattr(instance, '_saved_key_', None)
if key:
handle_setting_change(key, True)
@receiver(setting_changed)
def disable_local_auth(**kwargs):
if (kwargs['setting'], kwargs['value']) == ('DISABLE_LOCAL_AUTH', True):
from django.contrib.auth.models import User
from oauth2_provider.models import RefreshToken
from awx.main.models.oauth import OAuth2AccessToken
from awx.main.management.commands.revoke_oauth2_tokens import revoke_tokens
logger.warning("Triggering token invalidation for local users.")
qs = User.objects.filter(profile__ldap_dn='', enterprise_auth__isnull=True, social_auth__isnull=True)
revoke_tokens(RefreshToken.objects.filter(revoked=None, user__in=qs))
revoke_tokens(OAuth2AccessToken.objects.filter(user__in=qs))

View File

@@ -8,7 +8,6 @@ from awx.main.utils.encryption import decrypt_field
from awx.conf import fields
from awx.conf.registry import settings_registry
from awx.conf.models import Setting
from awx.sso import fields as sso_fields
@pytest.fixture
@@ -103,24 +102,6 @@ def test_setting_singleton_update(api_request, dummy_setting):
assert response.data['FOO_BAR'] == 4
@pytest.mark.django_db
def test_setting_singleton_update_hybriddictfield_with_forbidden(api_request, dummy_setting):
# Some HybridDictField subclasses have a child of _Forbidden,
# indicating that only the defined fields can be filled in. Make
# sure that the _Forbidden validator doesn't get used for the
# fields. See also https://github.com/ansible/awx/issues/4099.
with dummy_setting('FOO_BAR', field_class=sso_fields.SAMLOrgAttrField, category='FooBar', category_slug='foobar'), mock.patch(
'awx.conf.views.clear_setting_cache'
):
api_request(
'patch',
reverse('api:setting_singleton_detail', kwargs={'category_slug': 'foobar'}),
data={'FOO_BAR': {'saml_admin_attr': 'Admins', 'saml_attr': 'Orgs'}},
)
response = api_request('get', reverse('api:setting_singleton_detail', kwargs={'category_slug': 'foobar'}))
assert response.data['FOO_BAR'] == {'saml_admin_attr': 'Admins', 'saml_attr': 'Orgs'}
@pytest.mark.django_db
def test_setting_singleton_update_dont_change_readonly_fields(api_request, dummy_setting):
with dummy_setting('FOO_BAR', field_class=fields.IntegerField, read_only=True, default=4, category='FooBar', category_slug='foobar'), mock.patch(

View File

@@ -1,25 +0,0 @@
import pytest
from awx.conf.migrations._ldap_group_type import fill_ldap_group_type_params
from awx.conf.models import Setting
from django.apps import apps
@pytest.mark.django_db
def test_fill_group_type_params_no_op():
fill_ldap_group_type_params(apps, 'dont-use-me')
assert Setting.objects.count() == 0
@pytest.mark.django_db
def test_keep_old_setting_with_default_value():
Setting.objects.create(key='AUTH_LDAP_GROUP_TYPE', value={'name_attr': 'cn', 'member_attr': 'member'})
fill_ldap_group_type_params(apps, 'dont-use-me')
assert Setting.objects.count() == 1
s = Setting.objects.first()
assert s.value == {'name_attr': 'cn', 'member_attr': 'member'}
# NOTE: would be good to test the removal of attributes by migration
# but this requires fighting with the validator and is not done here

View File

@@ -111,7 +111,6 @@ class TestURLField:
@pytest.mark.parametrize(
"url,schemes,regex, allow_numbers_in_top_level_domain, expect_no_error",
[
("ldap://www.example.org42", "ldap", None, True, True),
("https://www.example.org42", "https", None, False, False),
("https://www.example.org", None, regex, None, True),
("https://www.example3.org", None, regex, None, False),
@@ -129,3 +128,41 @@ class TestURLField:
else:
with pytest.raises(ValidationError):
field.run_validators(url)
@pytest.mark.parametrize(
"url, expect_error",
[
("https://[1:2:3]", True),
("http://[1:2:3]", True),
("https://[2001:db8:3333:4444:5555:6666:7777:8888", True),
("https://2001:db8:3333:4444:5555:6666:7777:8888", True),
("https://[2001:db8:3333:4444:5555:6666:7777:8888]", False),
("https://[::1]", False),
("https://[::]", False),
("https://[2001:db8::1]", False),
("https://[2001:db8:0:0:0:0:1:1]", False),
("https://[fe80::2%eth0]", True), # ipv6 scope identifier
("https://[fe80:0:0:0:200:f8ff:fe21:67cf]", False),
("https://[::ffff:192.168.1.10]", False),
("https://[0:0:0:0:0:ffff:c000:0201]", False),
("https://[2001:0db8:000a:0001:0000:0000:0000:0000]", False),
("https://[2001:db8:a:1::]", False),
("https://[ff02::1]", False),
("https://[ff02:0:0:0:0:0:0:1]", False),
("https://[fc00::1]", False),
("https://[fd12:3456:789a:1::1]", False),
("https://[2001:db8::abcd:ef12:3456:7890]", False),
("https://[2001:db8:0000:abcd:0000:ef12:0000:3456]", False),
("https://[::ffff:10.0.0.1]", False),
("https://[2001:db8:cafe::]", False),
("https://[2001:db8:cafe:0:0:0:0:0]", False),
("https://[fe80::210:f3ff:fedf:4567%3]", True), # ipv6 scope identifier, numerical interface
],
)
def test_ipv6_urls(self, url, expect_error):
field = URLField()
if expect_error:
with pytest.raises(ValidationError, match="Enter a valid URL"):
field.run_validators(url)
else:
field.run_validators(url)

View File

@@ -17,9 +17,6 @@ from django.core.exceptions import ObjectDoesNotExist, FieldDoesNotExist
# Django REST Framework
from rest_framework.exceptions import ParseError, PermissionDenied
# Django OAuth Toolkit
from awx.main.models.oauth import OAuth2Application, OAuth2AccessToken
# django-ansible-base
from ansible_base.lib.utils.validation import to_python_boolean
from ansible_base.rbac.models import RoleEvaluation
@@ -441,10 +438,7 @@ class BaseAccess(object):
# Actions not possible for reason unrelated to RBAC
# Cannot copy with validation errors, or update a manual group/project
-            if 'write' not in getattr(self.user, 'oauth_scopes', ['write']):
-                user_capabilities[display_method] = False  # Read tokens cannot take any actions
-                continue
-            elif display_method in ['copy', 'start', 'schedule'] and isinstance(obj, JobTemplate):
+            if display_method in ['copy', 'start', 'schedule'] and isinstance(obj, JobTemplate):
if obj.validation_errors:
user_capabilities[display_method] = False
continue
@@ -642,13 +636,12 @@ class UserAccess(BaseAccess):
"""
model = User
-    prefetch_related = (
-        'profile',
-        'resource',
-    )
+    prefetch_related = ('resource',)
def filtered_queryset(self):
-        if settings.ORG_ADMINS_CAN_SEE_ALL_USERS and (self.user.admin_of_organizations.exists() or self.user.auditor_of_organizations.exists()):
+        if settings.ORG_ADMINS_CAN_SEE_ALL_USERS and (
+            Organization.access_qs(self.user, 'change').exists() or Organization.access_qs(self.user, 'audit').exists()
+        ):
qs = User.objects.all()
else:
qs = (
@@ -756,82 +749,6 @@ class UserAccess(BaseAccess):
return False
class OAuth2ApplicationAccess(BaseAccess):
"""
I can read, change or delete OAuth 2 applications when:
- I am a superuser.
- I am the admin of the organization of the user of the application.
- I am a user in the organization of the application.
I can create OAuth 2 applications when:
- I am a superuser.
- I am the admin of the organization of the application.
"""
model = OAuth2Application
select_related = ('user',)
prefetch_related = ('organization', 'oauth2accesstoken_set')
def filtered_queryset(self):
org_access_qs = Organization.access_qs(self.user, 'member')
return self.model.objects.filter(organization__in=org_access_qs)
def can_change(self, obj, data):
return self.user.is_superuser or self.check_related('organization', Organization, data, obj=obj, role_field='admin_role', mandatory=True)
def can_delete(self, obj):
return self.user.is_superuser or obj.organization in self.user.admin_of_organizations
def can_add(self, data):
if self.user.is_superuser:
return True
if not data:
return Organization.access_qs(self.user, 'change').exists()
return self.check_related('organization', Organization, data, role_field='admin_role', mandatory=True)
class OAuth2TokenAccess(BaseAccess):
"""
I can read, change or delete an app token when:
- I am a superuser.
- I am the admin of the organization of the application of the token.
- I am the user of the token.
I can create an OAuth2 app token when:
- I have the read permission of the related application.
I can read, change or delete a personal token when:
- I am the user of the token
- I am the superuser
I can create an OAuth2 Personal Access Token when:
- I am a user. But I can only create a PAT for myself.
"""
model = OAuth2AccessToken
select_related = ('user', 'application')
prefetch_related = ('refresh_token',)
def filtered_queryset(self):
org_access_qs = Organization.objects.filter(Q(admin_role__members=self.user) | Q(auditor_role__members=self.user))
return self.model.objects.filter(application__organization__in=org_access_qs) | self.model.objects.filter(user__id=self.user.pk)
def can_delete(self, obj):
if (self.user.is_superuser) | (obj.user == self.user):
return True
elif not obj.application:
return False
return self.user in obj.application.organization.admin_role
def can_change(self, obj, data):
return self.can_delete(obj)
def can_add(self, data):
if 'application' in data:
app = get_object_from_data('application', OAuth2Application, data)
if app is None:
return True
return OAuth2ApplicationAccess(self.user).can_read(app)
return True
class OrganizationAccess(NotificationAttachMixin, BaseAccess):
"""
I can see organizations when:
@@ -1309,7 +1226,9 @@ class TeamAccess(BaseAccess):
)
def filtered_queryset(self):
-        if settings.ORG_ADMINS_CAN_SEE_ALL_USERS and (self.user.admin_of_organizations.exists() or self.user.auditor_of_organizations.exists()):
+        if settings.ORG_ADMINS_CAN_SEE_ALL_USERS and (
+            Organization.access_qs(self.user, 'change').exists() or Organization.access_qs(self.user, 'audit').exists()
+        ):
return self.model.objects.all()
return self.model.objects.filter(
Q(organization__in=Organization.accessible_pk_qs(self.user, 'member_role')) | Q(pk__in=self.model.accessible_pk_qs(self.user, 'read_role'))
@@ -1861,6 +1780,11 @@ class SystemJobAccess(BaseAccess):
model = SystemJob
def filtered_queryset(self):
if self.user.is_superuser or self.user.is_system_auditor:
return self.model.objects.all()
return self.model.objects.none()
def can_start(self, obj, validate_license=True):
return False # no relaunching of system jobs
@@ -2178,7 +2102,7 @@ class WorkflowJobAccess(BaseAccess):
def filtered_queryset(self):
return WorkflowJob.objects.filter(
Q(unified_job_template__in=UnifiedJobTemplate.accessible_pk_qs(self.user, 'read_role'))
-            | Q(organization__in=Organization.objects.filter(Q(admin_role__members=self.user)), is_bulk_job=True)
+            | Q(organization__in=Organization.accessible_pk_qs(self.user, 'auditor_role'))
)
def can_read(self, obj):
@@ -2576,12 +2500,11 @@ class UnifiedJobAccess(BaseAccess):
def filtered_queryset(self):
inv_pk_qs = Inventory._accessible_pk_qs(Inventory, self.user, 'read_role')
-        org_auditor_qs = Organization.objects.filter(Q(admin_role__members=self.user) | Q(auditor_role__members=self.user))
        qs = self.model.objects.filter(
            Q(unified_job_template_id__in=UnifiedJobTemplate.accessible_pk_qs(self.user, 'read_role'))
            | Q(inventoryupdate__inventory_source__inventory__id__in=inv_pk_qs)
            | Q(adhoccommand__inventory__id__in=inv_pk_qs)
-            | Q(organization__in=org_auditor_qs)
+            | Q(organization__in=Organization.accessible_pk_qs(self.user, 'auditor_role'))
)
return qs
@@ -2645,7 +2568,7 @@ class NotificationTemplateAccess(BaseAccess):
if settings.ANSIBLE_BASE_ROLE_SYSTEM_ACTIVATED:
return self.model.access_qs(self.user, 'view')
return self.model.objects.filter(
-            Q(organization__in=Organization.access_qs(self.user, 'add_notificationtemplate')) | Q(organization__in=self.user.auditor_of_organizations)
+            Q(organization__in=Organization.access_qs(self.user, 'add_notificationtemplate')) | Q(organization__in=Organization.access_qs(self.user, 'audit'))
).distinct()
@check_superuser
@@ -2680,7 +2603,7 @@ class NotificationAccess(BaseAccess):
def filtered_queryset(self):
return self.model.objects.filter(
Q(notification_template__organization__in=Organization.access_qs(self.user, 'add_notificationtemplate'))
-            | Q(notification_template__organization__in=self.user.auditor_of_organizations)
+            | Q(notification_template__organization__in=Organization.access_qs(self.user, 'audit'))
).distinct()
def can_delete(self, obj):
@@ -2739,8 +2662,6 @@ class ActivityStreamAccess(BaseAccess):
'credential_type',
'team',
'ad_hoc_command',
'o_auth2_application',
'o_auth2_access_token',
'notification_template',
'notification',
'label',
@@ -2826,14 +2747,6 @@ class ActivityStreamAccess(BaseAccess):
if team_set:
q |= Q(team__in=team_set)
app_set = OAuth2ApplicationAccess(self.user).filtered_queryset()
if app_set:
q |= Q(o_auth2_application__in=app_set)
token_set = OAuth2TokenAccess(self.user).filtered_queryset()
if token_set:
q |= Q(o_auth2_access_token__in=token_set)
return qs.filter(q).distinct()
def can_add(self, data):

View File

@@ -3,13 +3,13 @@ import logging
# AWX
from awx.main.analytics.subsystem_metrics import DispatcherMetrics, CallbackReceiverMetrics
-from awx.main.dispatch.publish import task
+from awx.main.dispatch.publish import task as task_awx
from awx.main.dispatch import get_task_queuename
logger = logging.getLogger('awx.main.scheduler')
-@task(queue=get_task_queuename)
+@task_awx(queue=get_task_queuename)
def send_subsystem_metrics():
DispatcherMetrics().send_metrics()
CallbackReceiverMetrics().send_metrics()

View File

@@ -142,7 +142,7 @@ def config(since, **kwargs):
return {
'platform': {
'system': platform.system(),
-        'dist': distro.linux_distribution(),
+        'dist': (distro.name(), distro.version(), distro.codename()),
'release': platform.release(),
'type': install_type,
},
@@ -444,11 +444,6 @@ def _events_table(since, full_path, until, tbl, where_column, project_job_create
return _copy_table(table='events', query=query(fr"replace({tbl}.event_data, '\u', '\u005cu')::jsonb"), path=full_path)
@register('events_table', '1.5', format='csv', description=_('Automation task records'), expensive=four_hour_slicing)
def events_table_unpartitioned(since, full_path, until, **kwargs):
return _events_table(since, full_path, until, '_unpartitioned_main_jobevent', 'created', **kwargs)
@register('events_table', '1.5', format='csv', description=_('Automation task records'), expensive=four_hour_slicing)
def events_table_partitioned_modified(since, full_path, until, **kwargs):
return _events_table(since, full_path, until, 'main_jobevent', 'modified', project_job_created=True, **kwargs)

View File

@@ -16,10 +16,13 @@ from rest_framework.exceptions import PermissionDenied
import requests
from awx.conf.license import get_license
from ansible_base.lib.utils.db import advisory_lock
from awx.main.models import Job
from awx.main.access import access_registry
from awx.main.utils import get_awx_http_client_headers, set_environ, datetime_hook
from awx.main.utils.pglock import advisory_lock
from awx.main.utils.analytics_proxy import OIDCClient
__all__ = ['register', 'gather', 'ship']
@@ -181,7 +184,10 @@ def gather(dest=None, module=None, subset=None, since=None, until=None, collecti
logger.log(log_level, "Automation Analytics not enabled. Use --dry-run to gather locally without sending.")
return None
-    if not (settings.AUTOMATION_ANALYTICS_URL and settings.REDHAT_USERNAME and settings.REDHAT_PASSWORD):
+    if not (
+        settings.AUTOMATION_ANALYTICS_URL
+        and ((settings.REDHAT_USERNAME and settings.REDHAT_PASSWORD) or (settings.SUBSCRIPTIONS_CLIENT_ID and settings.SUBSCRIPTIONS_CLIENT_SECRET))
+    ):
logger.log(log_level, "Not gathering analytics, configuration is invalid. Use --dry-run to gather locally without sending.")
return None
@@ -318,10 +324,10 @@ def gather(dest=None, module=None, subset=None, since=None, until=None, collecti
settings.AUTOMATION_ANALYTICS_LAST_ENTRIES = json.dumps(last_entries, cls=DjangoJSONEncoder)
if collection_type != 'dry-run':
if succeeded:
-                for fpath in tarfiles:
-                    if os.path.exists(fpath):
-                        os.remove(fpath)
+            for fpath in tarfiles:
+                if os.path.exists(fpath):
+                    os.remove(fpath)
with disable_activity_stream():
if not settings.AUTOMATION_ANALYTICS_LAST_GATHER or until > settings.AUTOMATION_ANALYTICS_LAST_GATHER:
# `AUTOMATION_ANALYTICS_LAST_GATHER` is set whether collection succeeds or fails;
@@ -361,21 +367,35 @@ def ship(path):
if not url:
logger.error('AUTOMATION_ANALYTICS_URL is not set')
return False
-    rh_user = getattr(settings, 'REDHAT_USERNAME', None)
-    rh_password = getattr(settings, 'REDHAT_PASSWORD', None)
-    if not rh_user:
-        logger.error('REDHAT_USERNAME is not set')
+    rh_id = getattr(settings, 'REDHAT_USERNAME', None)
+    rh_secret = getattr(settings, 'REDHAT_PASSWORD', None)
+    if not (rh_id and rh_secret):
+        rh_id = getattr(settings, 'SUBSCRIPTIONS_CLIENT_ID', None)
+        rh_secret = getattr(settings, 'SUBSCRIPTIONS_CLIENT_SECRET', None)
+    if not rh_id:
+        logger.error('Neither REDHAT_USERNAME nor SUBSCRIPTIONS_CLIENT_ID are set')
        return False
-    if not rh_password:
-        logger.error('REDHAT_PASSWORD is not set')
+    if not rh_secret:
+        logger.error('Neither REDHAT_PASSWORD nor SUBSCRIPTIONS_CLIENT_SECRET are set')
return False
with open(path, 'rb') as f:
files = {'file': (os.path.basename(path), f, settings.INSIGHTS_AGENT_MIME)}
s = requests.Session()
s.headers = get_awx_http_client_headers()
s.headers.pop('Content-Type')
with set_environ(**settings.AWX_TASK_ENV):
-        response = s.post(url, files=files, verify=settings.INSIGHTS_CERT_PATH, auth=(rh_user, rh_password), headers=s.headers, timeout=(31, 31))
+        try:
+            client = OIDCClient(rh_id, rh_secret)
+            response = client.make_request("POST", url, headers=s.headers, files=files, verify=settings.INSIGHTS_CERT_PATH, timeout=(31, 31))
+        except requests.RequestException:
+            logger.error("Automation Analytics API request failed, trying base auth method")
+            response = s.post(url, files=files, verify=settings.INSIGHTS_CERT_PATH, auth=(rh_id, rh_secret), headers=s.headers, timeout=(31, 31))
# Accept 2XX status_codes
if response.status_code >= 300:
logger.error('Upload failed with status {}, {}'.format(response.status_code, response.text))

View File

@@ -128,6 +128,7 @@ def metrics():
registry=REGISTRY,
)
LICENSE_EXPIRY = Gauge('awx_license_expiry', 'Time before license expires', registry=REGISTRY)
LICENSE_INSTANCE_TOTAL = Gauge('awx_license_instance_total', 'Total number of managed hosts provided by your license', registry=REGISTRY)
LICENSE_INSTANCE_FREE = Gauge('awx_license_instance_free', 'Number of remaining managed hosts provided by your license', registry=REGISTRY)
@@ -148,6 +149,7 @@ def metrics():
}
)
LICENSE_EXPIRY.set(str(license_info.get('time_remaining', 0)))
LICENSE_INSTANCE_TOTAL.set(str(license_info.get('instance_count', 0)))
LICENSE_INSTANCE_FREE.set(str(license_info.get('free_instances', 0)))

View File

@@ -9,6 +9,7 @@ from prometheus_client.core import GaugeMetricFamily, HistogramMetricFamily
from prometheus_client.registry import CollectorRegistry
from django.conf import settings
from django.http import HttpRequest
import redis.exceptions
from rest_framework.request import Request
from awx.main.consumers import emit_channel_notification
@@ -43,11 +44,12 @@ class MetricsServer(MetricsServerSettings):
class BaseM:
-    def __init__(self, field, help_text):
+    def __init__(self, field, help_text, labels=None):
        self.field = field
        self.help_text = help_text
        self.current_value = 0
        self.metric_has_changed = False
+        self.labels = labels or {}
def reset_value(self, conn):
conn.hset(root_key, self.field, 0)
@@ -68,12 +70,16 @@ class BaseM:
value = conn.hget(root_key, self.field)
return self.decode_value(value)
-    def to_prometheus(self, instance_data):
+    def to_prometheus(self, instance_data, namespace=None):
        output_text = f"# HELP {self.field} {self.help_text}\n# TYPE {self.field} gauge\n"
        for instance in instance_data:
            if self.field in instance_data[instance]:
+                # Build label string
+                labels = f'node="{instance}"'
+                if namespace:
+                    labels += f',subsystem="{namespace}"'
                # on upgrade, if there are stale instances, we can end up with issues where new metrics are not present
-                output_text += f'{self.field}{{node="{instance}"}} {instance_data[instance][self.field]}\n'
+                output_text += f'{self.field}{{{labels}}} {instance_data[instance][self.field]}\n'
return output_text
@@ -166,14 +172,17 @@ class HistogramM(BaseM):
self.sum.store_value(conn)
self.inf.store_value(conn)
-    def to_prometheus(self, instance_data):
+    def to_prometheus(self, instance_data, namespace=None):
        output_text = f"# HELP {self.field} {self.help_text}\n# TYPE {self.field} histogram\n"
        for instance in instance_data:
+            # Build label string
+            node_label = f'node="{instance}"'
+            subsystem_label = f',subsystem="{namespace}"' if namespace else ''
            for i, b in enumerate(self.buckets):
-                output_text += f'{self.field}_bucket{{le="{b}",node="{instance}"}} {sum(instance_data[instance][self.field]["counts"][0:i+1])}\n'
-            output_text += f'{self.field}_bucket{{le="+Inf",node="{instance}"}} {instance_data[instance][self.field]["inf"]}\n'
-            output_text += f'{self.field}_count{{node="{instance}"}} {instance_data[instance][self.field]["inf"]}\n'
-            output_text += f'{self.field}_sum{{node="{instance}"}} {instance_data[instance][self.field]["sum"]}\n'
+                output_text += f'{self.field}_bucket{{le="{b}",{node_label}{subsystem_label}}} {sum(instance_data[instance][self.field]["counts"][0:i+1])}\n'
+            output_text += f'{self.field}_bucket{{le="+Inf",{node_label}{subsystem_label}}} {instance_data[instance][self.field]["inf"]}\n'
+            output_text += f'{self.field}_count{{{node_label}{subsystem_label}}} {instance_data[instance][self.field]["inf"]}\n'
+            output_text += f'{self.field}_sum{{{node_label}{subsystem_label}}} {instance_data[instance][self.field]["sum"]}\n'
return output_text
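With both to_prometheus() variants above, the exposition text gains an optional subsystem label next to the node label. A sketch of the resulting output for a gauge, where the metric name comes from this diff but the node name, namespace value, help text, and sample value are illustrative:

    # HELP subsystem_metrics_pipe_execute_calls (help text)
    # TYPE subsystem_metrics_pipe_execute_calls gauge
    subsystem_metrics_pipe_execute_calls{node="awx-task-1",subsystem="callback_receiver"} 17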
@@ -272,26 +281,32 @@ class Metrics(MetricsNamespace):
def pipe_execute(self):
if self.metrics_have_changed is True:
-            duration_to_save = time.perf_counter()
+            duration_pipe_exec = time.perf_counter()
            for m in self.METRICS:
                self.METRICS[m].store_value(self.pipe)
            self.pipe.execute()
            self.last_pipe_execute = time.time()
            self.metrics_have_changed = False
-            duration_to_save = time.perf_counter() - duration_to_save
-            self.METRICS['subsystem_metrics_pipe_execute_seconds'].inc(duration_to_save)
-            self.METRICS['subsystem_metrics_pipe_execute_calls'].inc(1)
+            duration_pipe_exec = time.perf_counter() - duration_pipe_exec
-            duration_to_save = time.perf_counter()
+            duration_send_metrics = time.perf_counter()
            self.send_metrics()
-            duration_to_save = time.perf_counter() - duration_to_save
-            self.METRICS['subsystem_metrics_send_metrics_seconds'].inc(duration_to_save)
+            duration_send_metrics = time.perf_counter() - duration_send_metrics
+            # Increment operational metrics
+            self.METRICS['subsystem_metrics_pipe_execute_seconds'].inc(duration_pipe_exec)
+            self.METRICS['subsystem_metrics_pipe_execute_calls'].inc(1)
+            self.METRICS['subsystem_metrics_send_metrics_seconds'].inc(duration_send_metrics)
def send_metrics(self):
# more than one thread could be calling this at the same time, so should
# acquire redis lock before sending metrics
-        lock = self.conn.lock(root_key + '-' + self._namespace + '_lock')
-        if not lock.acquire(blocking=False):
+        try:
+            lock = self.conn.lock(root_key + '-' + self._namespace + '_lock')
+            if not lock.acquire(blocking=False):
+                return
+        except redis.exceptions.ConnectionError as exc:
+            logger.warning(f'Connection error in send_metrics: {exc}')
            return
try:
current_time = time.time()
@@ -347,7 +362,13 @@ class Metrics(MetricsNamespace):
if instance_data:
for field in self.METRICS:
if len(metrics_filter) == 0 or field in metrics_filter:
-                    output_text += self.METRICS[field].to_prometheus(instance_data)
+                    # Add subsystem label only for operational metrics
+                    namespace = (
+                        self._namespace
+                        if field in ['subsystem_metrics_pipe_execute_seconds', 'subsystem_metrics_pipe_execute_calls', 'subsystem_metrics_send_metrics_seconds']
+                        else None
+                    )
+                    output_text += self.METRICS[field].to_prometheus(instance_data, namespace)
return output_text
@@ -435,7 +456,10 @@ class CustomToPrometheusMetricsCollector(prometheus_client.registry.Collector):
logger.debug(f"No metric data not found in redis for metric namespace '{self._metrics._namespace}'")
return None
host_metrics = instance_data.get(my_hostname)
if not (host_metrics := instance_data.get(my_hostname)):
logger.debug(f"Metric data for this node '{my_hostname}' not found in redis for metric namespace '{self._metrics._namespace}'")
return None
for _, metric in self._metrics.METRICS.items():
entry = host_metrics.get(metric.field)
if not entry:
@@ -452,14 +476,14 @@ class CustomToPrometheusMetricsCollector(prometheus_client.registry.Collector):
class CallbackReceiverMetricsServer(MetricsServer):
def __init__(self):
registry = CollectorRegistry(auto_describe=True)
-        registry.register(CustomToPrometheusMetricsCollector(DispatcherMetrics(metrics_have_changed=False)))
+        registry.register(CustomToPrometheusMetricsCollector(CallbackReceiverMetrics(metrics_have_changed=False)))
super().__init__(settings.METRICS_SERVICE_CALLBACK_RECEIVER, registry)
class DispatcherMetricsServer(MetricsServer):
def __init__(self):
registry = CollectorRegistry(auto_describe=True)
-        registry.register(CustomToPrometheusMetricsCollector(CallbackReceiverMetrics(metrics_have_changed=False)))
+        registry.register(CustomToPrometheusMetricsCollector(DispatcherMetrics(metrics_have_changed=False)))
super().__init__(settings.METRICS_SERVICE_DISPATCHER, registry)
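The two replacements above are the actual fix for the duplicated metrics: each server now registers the collector for its own subsystem rather than the other one's. For reference, a minimal collector of the shape prometheus_client expects here, using only its public API (names and values are made up):

    from prometheus_client.core import GaugeMetricFamily
    from prometheus_client.registry import CollectorRegistry

    class StaticCollector:
        """Toy collector: produces one labeled gauge per scrape."""

        def collect(self):
            g = GaugeMetricFamily('example_events_total', 'Example gauge', labels=['node'])
            g.add_metric(['awx-task-1'], 42)
            yield g

    registry = CollectorRegistry(auto_describe=True)
    registry.register(StaticCollector())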

View File

@@ -1,8 +1,17 @@
import os
from dispatcherd.config import setup as dispatcher_setup
from django.apps import AppConfig
from django.db import connection
from django.utils.translation import gettext_lazy as _
from awx.main.utils.common import bypass_in_test, load_all_entry_points_for
from awx.main.utils.migration import is_database_synchronized
from awx.main.utils.named_url_graph import _customize_graph, generate_graph
from awx.conf import register, fields
from awx_plugins.interfaces._temporary_private_licensing_api import detect_server_product_name
class MainConfig(AppConfig):
name = 'awx.main'
@@ -34,7 +43,70 @@ class MainConfig(AppConfig):
category_slug='named-url',
)
def _load_credential_types_feature(self):
"""
Create CredentialType records for any discovered credentials.
Note that the Django docs advise _against_ interacting with the database
through ORM models in the ready() path, particularly during testing.
However, we explicitly use the @bypass_in_test decorator to avoid calling
this method during testing.
Django also advises against this pattern because ready() runs for every
management command. We use an advisory lock to ensure correctness, and we
will deal with performance if it becomes an issue.
"""
from awx.main.models.credential import CredentialType
if is_database_synchronized():
CredentialType.setup_tower_managed_defaults(app_config=self)
@bypass_in_test
def load_credential_types_feature(self):
from awx.main.models.credential import load_credentials
load_credentials()
return self._load_credential_types_feature()
def load_inventory_plugins(self):
from awx.main.models.inventory import InventorySourceOptions
is_awx = detect_server_product_name() == 'AWX'
extra_entry_point_groups = () if is_awx else ('inventory.supported',)
entry_points = load_all_entry_points_for(['inventory', *extra_entry_point_groups])
for entry_point_name, entry_point in entry_points.items():
cls = entry_point.load()
InventorySourceOptions.injectors[entry_point_name] = cls
def configure_dispatcherd(self):
"""This implements the default configuration for dispatcherd
If running the tasking service like awx-manage run_dispatcher,
some additional config will be applied on top of this.
This configuration provides the minimum needed for code to submit
tasks via pg_notify.
"""
from awx.main.dispatch.config import get_dispatcherd_config
if connection.vendor != 'postgresql':
config_dict = get_dispatcherd_config(mock_publish=True)
else:
config_dict = get_dispatcherd_config()
dispatcher_setup(config_dict)
def ready(self):
super().ready()
self.configure_dispatcherd()
"""
Credential loading triggers database operations, and there are cases where we want to call
awx-manage collectstatic without a database. All management commands invoke the ready() code
path, and even reading settings.AWX_SKIP_CREDENTIAL_TYPES_DISCOVER _could_ invoke a database
operation, which is why the environment variable is checked instead.
"""
if not os.environ.get('AWX_SKIP_CREDENTIAL_TYPES_DISCOVER', None):
self.load_credential_types_feature()
self.load_named_url_feature()
self.load_inventory_plugins()
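The advisory lock mentioned in the docstring above is not shown in this hunk. A sketch of the usual pattern, assuming ansible_base's advisory_lock context manager takes a lock name and a wait flag and yields whether the lock was acquired (the lock name here is hypothetical):

    from ansible_base.lib.utils.db import advisory_lock

    def setup_defaults_once():
        # With wait=False, concurrent processes skip the setup instead of
        # blocking; only the lock holder proceeds.
        with advisory_lock('credential_types_setup', wait=False) as acquired:
            if not acquired:
                return
            # ... perform the idempotent CredentialType setup here ...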

View File

@@ -12,6 +12,7 @@ from rest_framework import serializers
from awx.conf import fields, register, register_validate
from awx.main.models import ExecutionEnvironment
from awx.main.constants import SUBSCRIPTION_USAGE_MODEL_UNIQUE_HOSTS
from awx.main.tasks.policy import OPA_AUTH_TYPES
logger = logging.getLogger('awx.main.conf')
@@ -46,10 +47,7 @@ register(
'MANAGE_ORGANIZATION_AUTH',
field_class=fields.BooleanField,
label=_('Organization Admins Can Manage Users and Teams'),
-    help_text=_(
-        'Controls whether any Organization Admin has the privileges to create and manage users and teams. '
-        'You may want to disable this ability if you are using an LDAP or SAML integration.'
-    ),
+    help_text=_('Controls whether any Organization Admin has the privileges to create and manage users and teams.'),
category=_('System'),
category_slug='system',
)
@@ -93,7 +91,6 @@ register(
),
category=_('System'),
category_slug='system',
required=False,
)
register(
@@ -108,6 +105,7 @@ register(
),
category=_('System'),
category_slug='system',
hidden=True,
)
register(
@@ -127,8 +125,8 @@ register(
allow_blank=True,
encrypted=False,
read_only=False,
-    label=_('Red Hat customer username'),
-    help_text=_('This username is used to send data to Automation Analytics'),
+    label=_('Red Hat Client ID for Analytics'),
+    help_text=_('Client ID used to send data to Automation Analytics'),
category=_('System'),
category_slug='system',
)
@@ -140,8 +138,8 @@ register(
allow_blank=True,
encrypted=True,
read_only=False,
-    label=_('Red Hat customer password'),
-    help_text=_('This password is used to send data to Automation Analytics'),
+    label=_('Red Hat Client Secret for Analytics'),
+    help_text=_('Client secret used to send data to Automation Analytics'),
category=_('System'),
category_slug='system',
)
@@ -153,10 +151,11 @@ register(
allow_blank=True,
encrypted=False,
read_only=False,
-    label=_('Red Hat or Satellite username'),
-    help_text=_('This username is used to retrieve subscription and content information'),  # noqa
+    label=_('Red Hat Username for Subscriptions'),
+    help_text=_('Username used to retrieve subscription and content information'),  # noqa
    category=_('System'),
    category_slug='system',
+    hidden=True,
)
register(
@@ -166,10 +165,40 @@ register(
allow_blank=True,
encrypted=True,
read_only=False,
-    label=_('Red Hat or Satellite password'),
-    help_text=_('This password is used to retrieve subscription and content information'),  # noqa
+    label=_('Red Hat Password for Subscriptions'),
+    help_text=_('Password used to retrieve subscription and content information'),  # noqa
    category=_('System'),
    category_slug='system',
+    hidden=True,
)
register(
'SUBSCRIPTIONS_CLIENT_ID',
field_class=fields.CharField,
default='',
allow_blank=True,
encrypted=False,
read_only=False,
label=_('Red Hat Client ID for Subscriptions'),
help_text=_('Client ID used to retrieve subscription and content information'), # noqa
category=_('System'),
category_slug='system',
hidden=True,
)
register(
'SUBSCRIPTIONS_CLIENT_SECRET',
field_class=fields.CharField,
default='',
allow_blank=True,
encrypted=True,
read_only=False,
label=_('Red Hat Client Secret for Subscriptions'),
help_text=_('Client secret used to retrieve subscription and content information'), # noqa
category=_('System'),
category_slug='system',
hidden=True,
)
register(
@@ -240,7 +269,6 @@ register(
help_text=_('List of modules allowed to be used by ad-hoc jobs.'),
category=_('Jobs'),
category_slug='jobs',
required=False,
)
register(
@@ -251,7 +279,6 @@ register(
('never', _('Never')),
('template', _('Only On Job Template Definitions')),
],
required=True,
label=_('When can extra variables contain Jinja templates?'),
help_text=_(
'Ansible allows variable substitution via the Jinja2 templating '
@@ -276,7 +303,6 @@ register(
register(
'AWX_ISOLATION_SHOW_PATHS',
field_class=fields.StringListIsolatedPathField,
required=False,
label=_('Paths to expose to isolated jobs'),
help_text=_(
'List of paths that would otherwise be hidden to expose to isolated jobs. Enter one path per line. '
@@ -442,7 +468,6 @@ register(
register(
'AWX_ANSIBLE_CALLBACK_PLUGINS',
field_class=fields.StringListField,
required=False,
label=_('Ansible Callback Plugins'),
help_text=_('List of paths to search for extra callback plugins to be used when running jobs. Enter one path per line.'),
category=_('Jobs'),
@@ -556,7 +581,6 @@ register(
help_text=_('Port on Logging Aggregator to send logs to (if required and not provided in Logging Aggregator).'),
category=_('Logging'),
category_slug='logging',
required=False,
)
register(
'LOG_AGGREGATOR_TYPE',
@@ -578,7 +602,6 @@ register(
help_text=_('Username for external log aggregator (if required; HTTP/s only).'),
category=_('Logging'),
category_slug='logging',
required=False,
)
register(
'LOG_AGGREGATOR_PASSWORD',
@@ -590,12 +613,11 @@ register(
help_text=_('Password or authentication token for external log aggregator (if required; HTTP/s only).'),
category=_('Logging'),
category_slug='logging',
required=False,
)
register(
'LOG_AGGREGATOR_LOGGERS',
field_class=fields.StringListField,
-    default=['awx', 'activity_stream', 'job_events', 'system_tracking', 'broadcast_websocket'],
+    default=['awx', 'activity_stream', 'job_events', 'system_tracking', 'broadcast_websocket', 'job_lifecycle'],
label=_('Loggers Sending Data to Log Aggregator Form'),
help_text=_(
'List of loggers that will send HTTP logs to the collector, these can '
@@ -605,6 +627,7 @@ register(
'job_events - callback data from Ansible job events\n'
'system_tracking - facts gathered from scan jobs\n'
'broadcast_websocket - errors pertaining to websockets broadcast metrics\n'
'job_lifecycle - logs related to processing of a job\n'
),
category=_('Logging'),
category_slug='logging',
@@ -776,7 +799,7 @@ register(
allow_null=True,
category=_('System'),
category_slug='system',
-    required=False,
+    hidden=True,
)
register(
'AUTOMATION_ANALYTICS_LAST_ENTRIES',
@@ -868,6 +891,7 @@ register(
allow_null=True,
category=_('System'),
category_slug='system',
hidden=True,
)
register(
@@ -877,6 +901,7 @@ register(
allow_null=True,
category=_('System'),
category_slug='system',
hidden=True,
)
register(
@@ -979,3 +1004,134 @@ def csrf_trusted_origins_validate(serializer, attrs):
register_validate('system', csrf_trusted_origins_validate)
register(
'OPA_HOST',
field_class=fields.CharField,
label=_('OPA server hostname'),
default='',
help_text=_('The hostname used to connect to the OPA server. If empty, policy enforcement will be disabled.'),
category=('PolicyAsCode'),
category_slug='policyascode',
allow_blank=True,
)
register(
'OPA_PORT',
field_class=fields.IntegerField,
label=_('OPA server port'),
default=8181,
help_text=_('The port used to connect to the OPA server. Defaults to 8181.'),
category=('PolicyAsCode'),
category_slug='policyascode',
)
register(
'OPA_SSL',
field_class=fields.BooleanField,
label=_('Use SSL for OPA connection'),
default=False,
help_text=_('Enable or disable the use of SSL to connect to the OPA server. Defaults to false.'),
category=('PolicyAsCode'),
category_slug='policyascode',
)
register(
'OPA_AUTH_TYPE',
field_class=fields.ChoiceField,
label=_('OPA authentication type'),
choices=[OPA_AUTH_TYPES.NONE, OPA_AUTH_TYPES.TOKEN, OPA_AUTH_TYPES.CERTIFICATE],
default=OPA_AUTH_TYPES.NONE,
help_text=_('The authentication type that will be used to connect to the OPA server: "None", "Token", or "Certificate".'),
category=('PolicyAsCode'),
category_slug='policyascode',
)
register(
'OPA_AUTH_TOKEN',
field_class=fields.CharField,
label=_('OPA authentication token'),
default='',
help_text=_(
'The token for authentication to the OPA server. Required when OPA_AUTH_TYPE is "Token". If an authorization header is defined in OPA_AUTH_CUSTOM_HEADERS, it will be overridden by OPA_AUTH_TOKEN.'
),
category=('PolicyAsCode'),
category_slug='policyascode',
allow_blank=True,
encrypted=True,
)
register(
'OPA_AUTH_CLIENT_CERT',
field_class=fields.CharField,
label=_('OPA client certificate content'),
default='',
help_text=_('The content of the client certificate file for mTLS authentication to the OPA server. Required when OPA_AUTH_TYPE is "Certificate".'),
category=('PolicyAsCode'),
category_slug='policyascode',
allow_blank=True,
)
register(
'OPA_AUTH_CLIENT_KEY',
field_class=fields.CharField,
label=_('OPA client key content'),
default='',
help_text=_('The content of the client key for mTLS authentication to the OPA server. Required when OPA_AUTH_TYPE is "Certificate".'),
category=('PolicyAsCode'),
category_slug='policyascode',
allow_blank=True,
encrypted=True,
)
register(
'OPA_AUTH_CA_CERT',
field_class=fields.CharField,
label=_('OPA CA certificate content'),
default='',
help_text=_('The content of the CA certificate for mTLS authentication to the OPA server. Required when OPA_AUTH_TYPE is "Certificate".'),
category=('PolicyAsCode'),
category_slug='policyascode',
allow_blank=True,
)
register(
'OPA_AUTH_CUSTOM_HEADERS',
field_class=fields.DictField,
label=_('OPA custom authentication headers'),
default={},
help_text=_('Optional custom headers included in requests to the OPA server. Defaults to empty dictionary ({}).'),
category=('PolicyAsCode'),
category_slug='policyascode',
)
register(
'OPA_REQUEST_TIMEOUT',
field_class=fields.FloatField,
label=_('OPA request timeout'),
default=1.5,
help_text=_('The number of seconds after which the connection to the OPA server will time out. Defaults to 1.5 seconds.'),
category=('PolicyAsCode'),
category_slug='policyascode',
)
register(
'OPA_REQUEST_RETRIES',
field_class=fields.IntegerField,
label=_('OPA request retry count'),
default=2,
help_text=_('The number of retry attempts for connecting to the OPA server. Default is 2.'),
category=('PolicyAsCode'),
category_slug='policyascode',
)
def policy_as_code_validate(serializer, attrs):
opa_host = attrs.get('OPA_HOST', '')
if opa_host and (opa_host.startswith('http://') or opa_host.startswith('https://')):
raise serializers.ValidationError({'OPA_HOST': _("OPA_HOST should not include 'http://' or 'https://' prefixes. Please enter only the hostname.")})
return attrs
register_validate('policyascode', policy_as_code_validate)
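Because the validator above rejects any scheme prefix in OPA_HOST, the enforcement code has to derive the scheme from OPA_SSL when composing the server URL. A hedged sketch of that composition; the /v1/data path is OPA's documented Data API, not something shown in this diff:

    def opa_base_url(host: str, port: int, use_ssl: bool) -> str:
        # OPA_HOST is validated to be a bare hostname, so the scheme comes
        # from OPA_SSL rather than being embedded in the setting.
        scheme = 'https' if use_ssl else 'http'
        return f'{scheme}://{host}:{port}'

    # opa_base_url('opa.example.com', 8181, False) -> 'http://opa.example.com:8181'
    # A policy query would then POST to f'{base}/v1/data/<package>/<rule>'.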

View File

@@ -6,7 +6,6 @@ import re
from django.utils.translation import gettext_lazy as _
__all__ = [
'CLOUD_PROVIDERS',
'PRIVILEGE_ESCALATION_METHODS',
'ANSI_SGR_PATTERN',
'CAN_CANCEL',
@@ -14,7 +13,6 @@ __all__ = [
'STANDARD_INVENTORY_UPDATE_ENV',
]
CLOUD_PROVIDERS = ('azure_rm', 'ec2', 'gce', 'vmware', 'openstack', 'rhv', 'satellite6', 'controller', 'insights', 'terraform', 'openshift_virtualization')
PRIVILEGE_ESCALATION_METHODS = [
('sudo', _('Sudo')),
('su', _('Su')),
@@ -79,6 +77,8 @@ LOGGER_BLOCKLIST = (
'awx.main.utils.log',
# loggers that may be called getting logging settings
'awx.conf',
# dispatcherd should only use 1 database connection
'dispatcherd',
)
# Reported version for node seen in receptor mesh but for which capacity check

View File

@@ -1,126 +0,0 @@
from .plugin import CredentialPlugin, CertFiles, raise_for_status
from urllib.parse import quote, urlencode, urljoin
from django.utils.translation import gettext_lazy as _
import requests
aim_inputs = {
'fields': [
{
'id': 'url',
'label': _('CyberArk CCP URL'),
'type': 'string',
'format': 'url',
},
{
'id': 'webservice_id',
'label': _('Web Service ID'),
'type': 'string',
'help_text': _('The CCP Web Service ID. Leave blank to default to AIMWebService.'),
},
{
'id': 'app_id',
'label': _('Application ID'),
'type': 'string',
'secret': True,
},
{
'id': 'client_key',
'label': _('Client Key'),
'type': 'string',
'secret': True,
'multiline': True,
},
{
'id': 'client_cert',
'label': _('Client Certificate'),
'type': 'string',
'secret': True,
'multiline': True,
},
{
'id': 'verify',
'label': _('Verify SSL Certificates'),
'type': 'boolean',
'default': True,
},
],
'metadata': [
{
'id': 'object_query',
'label': _('Object Query'),
'type': 'string',
'help_text': _('Lookup query for the object. Ex: Safe=TestSafe;Object=testAccountName123'),
},
{'id': 'object_query_format', 'label': _('Object Query Format'), 'type': 'string', 'default': 'Exact', 'choices': ['Exact', 'Regexp']},
{
'id': 'object_property',
'label': _('Object Property'),
'type': 'string',
'help_text': _('The property of the object to return. Available properties: Username, Password and Address.'),
},
{
'id': 'reason',
'label': _('Reason'),
'type': 'string',
'help_text': _('Object request reason. This is only needed if it is required by the object\'s policy.'),
},
],
'required': ['url', 'app_id', 'object_query'],
}
def aim_backend(**kwargs):
url = kwargs['url']
client_cert = kwargs.get('client_cert', None)
client_key = kwargs.get('client_key', None)
verify = kwargs['verify']
webservice_id = kwargs.get('webservice_id', '')
app_id = kwargs['app_id']
object_query = kwargs['object_query']
object_query_format = kwargs['object_query_format']
object_property = kwargs.get('object_property', '')
reason = kwargs.get('reason', None)
if webservice_id == '':
webservice_id = 'AIMWebService'
query_params = {
'AppId': app_id,
'Query': object_query,
'QueryFormat': object_query_format,
}
if reason:
query_params['reason'] = reason
request_qs = '?' + urlencode(query_params, quote_via=quote)
request_url = urljoin(url, '/'.join([webservice_id, 'api', 'Accounts']))
with CertFiles(client_cert, client_key) as cert:
res = requests.get(
request_url + request_qs,
timeout=30,
cert=cert,
verify=verify,
allow_redirects=False,
)
raise_for_status(res)
# CCP returns the property name capitalized, username is camel case
# so we need to handle that case
if object_property == '':
object_property = 'Content'
elif object_property.lower() == 'username':
object_property = 'UserName'
elif object_property.lower() == 'password':
object_property = 'Content'
elif object_property.lower() == 'address':
object_property = 'Address'
elif object_property not in res:
raise KeyError('Property {} not found in object, available properties: Username, Password and Address'.format(object_property))
else:
object_property = object_property.capitalize()
return res.json()[object_property]
aim_plugin = CredentialPlugin('CyberArk Central Credential Provider Lookup', inputs=aim_inputs, backend=aim_backend)

View File

@@ -1,65 +0,0 @@
import boto3
from botocore.exceptions import ClientError
from .plugin import CredentialPlugin
from django.utils.translation import gettext_lazy as _
secrets_manager_inputs = {
'fields': [
{
'id': 'aws_access_key',
'label': _('AWS Access Key'),
'type': 'string',
},
{
'id': 'aws_secret_key',
'label': _('AWS Secret Key'),
'type': 'string',
'secret': True,
},
],
'metadata': [
{
'id': 'region_name',
'label': _('AWS Secrets Manager Region'),
'type': 'string',
'help_text': _('Region which the secrets manager is located'),
},
{
'id': 'secret_name',
'label': _('AWS Secret Name'),
'type': 'string',
},
],
'required': ['aws_access_key', 'aws_secret_key', 'region_name', 'secret_name'],
}
def aws_secretsmanager_backend(**kwargs):
secret_name = kwargs['secret_name']
region_name = kwargs['region_name']
aws_secret_access_key = kwargs['aws_secret_key']
aws_access_key_id = kwargs['aws_access_key']
session = boto3.session.Session()
client = session.client(
service_name='secretsmanager', region_name=region_name, aws_secret_access_key=aws_secret_access_key, aws_access_key_id=aws_access_key_id
)
try:
get_secret_value_response = client.get_secret_value(SecretId=secret_name)
except ClientError as e:
raise e
# Secrets Manager decrypts the secret value using the associated KMS CMK
# Depending on whether the secret was a string or binary, only one of these fields will be populated
if 'SecretString' in get_secret_value_response:
secret = get_secret_value_response['SecretString']
else:
secret = get_secret_value_response['SecretBinary']
return secret
aws_secretmanager_plugin = CredentialPlugin('AWS Secrets Manager lookup', inputs=secrets_manager_inputs, backend=aws_secretsmanager_backend)

View File

@@ -1,63 +0,0 @@
from azure.keyvault.secrets import SecretClient
from azure.identity import ClientSecretCredential
from msrestazure import azure_cloud
from .plugin import CredentialPlugin
from django.utils.translation import gettext_lazy as _
# https://github.com/Azure/msrestazure-for-python/blob/master/msrestazure/azure_cloud.py
clouds = [vars(azure_cloud)[n] for n in dir(azure_cloud) if n.startswith("AZURE_") and n.endswith("_CLOUD")]
default_cloud = vars(azure_cloud)["AZURE_PUBLIC_CLOUD"]
azure_keyvault_inputs = {
'fields': [
{
'id': 'url',
'label': _('Vault URL (DNS Name)'),
'type': 'string',
'format': 'url',
},
{'id': 'client', 'label': _('Client ID'), 'type': 'string'},
{
'id': 'secret',
'label': _('Client Secret'),
'type': 'string',
'secret': True,
},
{'id': 'tenant', 'label': _('Tenant ID'), 'type': 'string'},
{
'id': 'cloud_name',
'label': _('Cloud Environment'),
'help_text': _('Specify which azure cloud environment to use.'),
'choices': list(set([default_cloud.name] + [c.name for c in clouds])),
'default': default_cloud.name,
},
],
'metadata': [
{
'id': 'secret_field',
'label': _('Secret Name'),
'type': 'string',
'help_text': _('The name of the secret to look up.'),
},
{
'id': 'secret_version',
'label': _('Secret Version'),
'type': 'string',
'help_text': _('Used to specify a specific secret version (if left empty, the latest version will be used).'),
},
],
'required': ['url', 'client', 'secret', 'tenant', 'secret_field'],
}
def azure_keyvault_backend(**kwargs):
csc = ClientSecretCredential(tenant_id=kwargs['tenant'], client_id=kwargs['client'], client_secret=kwargs['secret'])
kv = SecretClient(credential=csc, vault_url=kwargs['url'])
return kv.get_secret(name=kwargs['secret_field'], version=kwargs.get('secret_version', '')).value
azure_keyvault_plugin = CredentialPlugin('Microsoft Azure Key Vault', inputs=azure_keyvault_inputs, backend=azure_keyvault_backend)

View File

@@ -1,115 +0,0 @@
from .plugin import CredentialPlugin, raise_for_status
from django.utils.translation import gettext_lazy as _
from urllib.parse import urljoin
import requests
pas_inputs = {
'fields': [
{
'id': 'url',
'label': _('Centrify Tenant URL'),
'type': 'string',
'help_text': _('Centrify Tenant URL'),
'format': 'url',
},
{
'id': 'client_id',
'label': _('Centrify API User'),
'type': 'string',
'help_text': _('Centrify API User, having necessary permissions as mentioned in support doc'),
},
{
'id': 'client_password',
'label': _('Centrify API Password'),
'type': 'string',
'help_text': _('Password of Centrify API User with necessary permissions'),
'secret': True,
},
{
'id': 'oauth_application_id',
'label': _('OAuth2 Application ID'),
'type': 'string',
'help_text': _('Application ID of the configured OAuth2 Client (defaults to \'awx\')'),
'default': 'awx',
},
{
'id': 'oauth_scope',
'label': _('OAuth2 Scope'),
'type': 'string',
'help_text': _('Scope of the configured OAuth2 Client (defaults to \'awx\')'),
'default': 'awx',
},
],
'metadata': [
{
'id': 'account-name',
'label': _('Account Name'),
'type': 'string',
'help_text': _('Local system account or Domain account name enrolled in Centrify Vault. eg. (root or DOMAIN/Administrator)'),
},
{
'id': 'system-name',
'label': _('System Name'),
'type': 'string',
'help_text': _('Machine Name enrolled with in Centrify Portal'),
},
],
'required': ['url', 'account-name', 'system-name', 'client_id', 'client_password'],
}
# generate bearer token to authenticate with PAS portal, Input : Client ID, Client Secret
def handle_auth(**kwargs):
post_data = {"grant_type": "client_credentials", "scope": kwargs['oauth_scope']}
response = requests.post(kwargs['endpoint'], data=post_data, auth=(kwargs['client_id'], kwargs['client_password']), verify=True, timeout=(5, 30))
raise_for_status(response)
try:
return response.json()['access_token']
except KeyError:
raise RuntimeError('OAuth request to tenant was unsuccessful')
# fetch the ID of system with RedRock query, Input : System Name, Account Name
def get_ID(**kwargs):
endpoint = urljoin(kwargs['url'], '/Redrock/query')
name = " Name='{0}' and User='{1}'".format(kwargs['system_name'], kwargs['acc_name'])
query = 'Select ID from VaultAccount where {0}'.format(name)
post_headers = {"Authorization": "Bearer " + kwargs['access_token'], "X-CENTRIFY-NATIVE-CLIENT": "true"}
response = requests.post(endpoint, json={'Script': query}, headers=post_headers, verify=True, timeout=(5, 30))
raise_for_status(response)
try:
result_str = response.json()["Result"]["Results"]
return result_str[0]["Row"]["ID"]
except (IndexError, KeyError):
raise RuntimeError("Error Detected!! Check the Inputs")
# CheckOut Password from Centrify Vault, Input : ID
def get_passwd(**kwargs):
endpoint = urljoin(kwargs['url'], '/ServerManage/CheckoutPassword')
post_headers = {"Authorization": "Bearer " + kwargs['access_token'], "X-CENTRIFY-NATIVE-CLIENT": "true"}
response = requests.post(endpoint, json={'ID': kwargs['acc_id']}, headers=post_headers, verify=True, timeout=(5, 30))
raise_for_status(response)
try:
return response.json()["Result"]["Password"]
except KeyError:
raise RuntimeError("Password Not Found")
def centrify_backend(**kwargs):
url = kwargs.get('url')
acc_name = kwargs.get('account-name')
system_name = kwargs.get('system-name')
client_id = kwargs.get('client_id')
client_password = kwargs.get('client_password')
app_id = kwargs.get('oauth_application_id', 'awx')
endpoint = urljoin(url, f'/oauth2/token/{app_id}')
endpoint = {'endpoint': endpoint, 'client_id': client_id, 'client_password': client_password, 'oauth_scope': kwargs.get('oauth_scope', 'awx')}
token = handle_auth(**endpoint)
get_id_args = {'system_name': system_name, 'acc_name': acc_name, 'url': url, 'access_token': token}
acc_id = get_ID(**get_id_args)
get_pwd_args = {'url': url, 'acc_id': acc_id, 'access_token': token}
return get_passwd(**get_pwd_args)
centrify_plugin = CredentialPlugin('Centrify Vault Credential Provider Lookup', inputs=pas_inputs, backend=centrify_backend)
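For reference, a minimal sketch of how the removed backend chains its three calls (token, account ID, password checkout). The tenant URL and credential values are placeholders, the module path is assumed from the old tree layout, and the hyphenated metadata keys must be passed via dict unpacking; this only completes against a reachable tenant.

from awx.main.credential_plugins.centrify_vault import centrify_backend  # assumed old path

password = centrify_backend(
    url='https://tenant.my.centrify.net',  # placeholder tenant
    client_id='api-user@tenant',           # placeholder API user
    client_password='s3cret',              # placeholder password
    **{'account-name': 'root', 'system-name': 'web01'},
)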


@@ -1,112 +0,0 @@
from .plugin import CredentialPlugin, CertFiles, raise_for_status
from urllib.parse import urljoin, quote
from django.utils.translation import gettext_lazy as _
import requests
import base64
import binascii
conjur_inputs = {
'fields': [
{
'id': 'url',
'label': _('Conjur URL'),
'type': 'string',
'format': 'url',
},
{
'id': 'api_key',
'label': _('API Key'),
'type': 'string',
'secret': True,
},
{
'id': 'account',
'label': _('Account'),
'type': 'string',
},
{
'id': 'username',
'label': _('Username'),
'type': 'string',
},
{'id': 'cacert', 'label': _('Public Key Certificate'), 'type': 'string', 'multiline': True},
],
'metadata': [
{
'id': 'secret_path',
'label': _('Secret Identifier'),
'type': 'string',
'help_text': _('The identifier for the secret, e.g. /some/identifier'),
},
{
'id': 'secret_version',
'label': _('Secret Version'),
'type': 'string',
'help_text': _('Used to specify a specific secret version (if left empty, the latest version will be used).'),
},
],
'required': ['url', 'api_key', 'account', 'username'],
}
def _is_base64(s: str) -> bool:
try:
return base64.b64encode(base64.b64decode(s.encode("utf-8"))) == s.encode("utf-8")
except binascii.Error:
return False
def conjur_backend(**kwargs):
url = kwargs['url']
api_key = kwargs['api_key']
account = quote(kwargs['account'], safe='')
username = quote(kwargs['username'], safe='')
secret_path = quote(kwargs['secret_path'], safe='')
version = kwargs.get('secret_version')
cacert = kwargs.get('cacert', None)
auth_kwargs = {
'headers': {'Content-Type': 'text/plain', 'Accept-Encoding': 'base64'},
'data': api_key,
'allow_redirects': False,
}
with CertFiles(cacert) as cert:
# https://www.conjur.org/api.html#authentication-authenticate-post
auth_kwargs['verify'] = cert
try:
resp = requests.post(urljoin(url, '/'.join(['authn', account, username, 'authenticate'])), **auth_kwargs)
resp.raise_for_status()
except requests.exceptions.HTTPError:
resp = requests.post(urljoin(url, '/'.join(['api', 'authn', account, username, 'authenticate'])), **auth_kwargs)
raise_for_status(resp)
token = resp.content.decode('utf-8')
lookup_kwargs = {
'headers': {'Authorization': 'Token token="{}"'.format(token if _is_base64(token) else base64.b64encode(token.encode('utf-8')).decode('utf-8'))},
'allow_redirects': False,
}
# https://www.conjur.org/api.html#secrets-retrieve-a-secret-get
path = urljoin(url, '/'.join(['secrets', account, 'variable', secret_path]))
path_conjurcloud = urljoin(url, '/'.join(['api', 'secrets', account, 'variable', secret_path]))
if version:
ver = "version={}".format(version)
path = '?'.join([path, ver])
path_conjurcloud = '?'.join([path_conjurcloud, ver])
with CertFiles(cacert) as cert:
lookup_kwargs['verify'] = cert
try:
resp = requests.get(path, timeout=30, **lookup_kwargs)
resp.raise_for_status()
except requests.exceptions.HTTPError:
resp = requests.get(path_conjurcloud, timeout=30, **lookup_kwargs)
raise_for_status(resp)
return resp.text
conjur_plugin = CredentialPlugin('CyberArk Conjur Secrets Manager Lookup', inputs=conjur_inputs, backend=conjur_backend)
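A standalone sketch of the token normalization above: Conjur can return the API token either raw or already base64-encoded, and the Authorization header always wants the base64 form. Pure Python, runnable as-is; the helper name is mine, not from the diff.

import base64

def as_b64(token: str) -> str:
    # mirrors _is_base64: keep the token if it round-trips, else encode it
    try:
        if base64.b64encode(base64.b64decode(token.encode())) == token.encode():
            return token
    except Exception:
        pass
    return base64.b64encode(token.encode()).decode()

assert as_b64('raw-token') == base64.b64encode(b'raw-token').decode()
assert as_b64('cmF3LXRva2Vu') == 'cmF3LXRva2Vu'  # already base64 for b'raw-token'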


@@ -1,94 +0,0 @@
from .plugin import CredentialPlugin
from django.conf import settings
from django.utils.translation import gettext_lazy as _
from delinea.secrets.vault import PasswordGrantAuthorizer, SecretsVault
from base64 import b64decode
dsv_inputs = {
'fields': [
{
'id': 'tenant',
'label': _('Tenant'),
'help_text': _('The tenant e.g. "ex" when the URL is https://ex.secretsvaultcloud.com'),
'type': 'string',
},
{
'id': 'tld',
'label': _('Top-level Domain (TLD)'),
'help_text': _('The TLD of the tenant e.g. "com" when the URL is https://ex.secretsvaultcloud.com'),
'choices': ['ca', 'com', 'com.au', 'eu'],
'default': 'com',
},
{
'id': 'client_id',
'label': _('Client ID'),
'type': 'string',
},
{
'id': 'client_secret',
'label': _('Client Secret'),
'type': 'string',
'secret': True,
},
],
'metadata': [
{
'id': 'path',
'label': _('Secret Path'),
'type': 'string',
'help_text': _('The secret path e.g. /test/secret1'),
},
{
'id': 'secret_field',
'label': _('Secret Field'),
'help_text': _('The field to extract from the secret'),
'type': 'string',
},
{
'id': 'secret_decoding',
'label': _('Should the secret be base64 decoded?'),
'help_text': _('Specify whether the secret should be base64 decoded, typically used for storing files, such as SSH keys'),
'choices': ['No Decoding', 'Decode Base64'],
'type': 'string',
'default': 'No Decoding',
},
],
'required': ['tenant', 'client_id', 'client_secret', 'path', 'secret_field', 'secret_decoding'],
}
if settings.DEBUG:
dsv_inputs['fields'].append(
{
'id': 'url_template',
'label': _('URL template'),
'type': 'string',
'default': 'https://{}.secretsvaultcloud.{}',
}
)
def dsv_backend(**kwargs):
tenant_name = kwargs['tenant']
tenant_tld = kwargs.get('tld', 'com')
tenant_url_template = kwargs.get('url_template', 'https://{}.secretsvaultcloud.{}')
client_id = kwargs['client_id']
client_secret = kwargs['client_secret']
secret_path = kwargs['path']
secret_field = kwargs['secret_field']
# providing a default value to remain backward compatible for secrets that have not specified this option
secret_decoding = kwargs.get('secret_decoding', 'No Decoding')
tenant_url = tenant_url_template.format(tenant_name, tenant_tld.strip("."))
authorizer = PasswordGrantAuthorizer(tenant_url, client_id, client_secret)
dsv_secret = SecretsVault(tenant_url, authorizer).get_secret(secret_path)
# files can be uploaded base64-encoded to DSV, so decode only when asked to
if secret_decoding == 'Decode Base64':
return b64decode(dsv_secret['data'][secret_field]).decode()
return dsv_secret['data'][secret_field]
dsv_plugin = CredentialPlugin(name='Thycotic DevOps Secrets Vault', inputs=dsv_inputs, backend=dsv_backend)
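A runnable mini-sketch of the 'Decode Base64' branch above, with a placeholder payload standing in for a stored SSH key:

from base64 import b64decode, b64encode

# a secret whose field was stored base64-encoded (placeholder content)
dsv_secret = {'data': {'private_key': b64encode(b'-----BEGIN KEY-----').decode()}}

assert b64decode(dsv_secret['data']['private_key']).decode() == '-----BEGIN KEY-----'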


@@ -1,384 +0,0 @@
import copy
import os
import pathlib
import time
from urllib.parse import urljoin
from .plugin import CredentialPlugin, CertFiles, raise_for_status
import requests
from django.utils.translation import gettext_lazy as _
base_inputs = {
'fields': [
{
'id': 'url',
'label': _('Server URL'),
'type': 'string',
'format': 'url',
'help_text': _('The URL to the HashiCorp Vault'),
},
{
'id': 'token',
'label': _('Token'),
'type': 'string',
'secret': True,
'help_text': _('The access token used to authenticate to the Vault server'),
},
{
'id': 'cacert',
'label': _('CA Certificate'),
'type': 'string',
'multiline': True,
'help_text': _('The CA certificate used to verify the SSL certificate of the Vault server'),
},
{'id': 'role_id', 'label': _('AppRole role_id'), 'type': 'string', 'multiline': False, 'help_text': _('The Role ID for AppRole Authentication')},
{
'id': 'secret_id',
'label': _('AppRole secret_id'),
'type': 'string',
'multiline': False,
'secret': True,
'help_text': _('The Secret ID for AppRole Authentication'),
},
{
'id': 'client_cert_public',
'label': _('Client Certificate'),
'type': 'string',
'multiline': True,
'help_text': _(
'The PEM-encoded client certificate used for TLS client authentication.'
' This should include the certificate and any intermediate certificates.'
),
},
{
'id': 'client_cert_private',
'label': _('Client Certificate Key'),
'type': 'string',
'multiline': True,
'secret': True,
'help_text': _('The certificate private key used for TLS client authentication.'),
},
{
'id': 'client_cert_role',
'label': _('TLS Authentication Role'),
'type': 'string',
'multiline': False,
'help_text': _(
'The role configured in Hashicorp Vault for TLS client authentication.'
' If not provided, Hashicorp Vault may assign roles based on the certificate used.'
),
},
{
'id': 'namespace',
'label': _('Namespace name (Vault Enterprise only)'),
'type': 'string',
'multiline': False,
'help_text': _('Name of the namespace to use when authenticate and retrieve secrets'),
},
{
'id': 'kubernetes_role',
'label': _('Kubernetes role'),
'type': 'string',
'multiline': False,
'help_text': _(
'The Role for Kubernetes Authentication.'
' This is the named role, configured in Vault server, for AWX pod auth policies.'
' see https://www.vaultproject.io/docs/auth/kubernetes#configuration'
),
},
{
'id': 'username',
'label': _('Username'),
'type': 'string',
'secret': False,
'help_text': _('Username for user authentication.'),
},
{
'id': 'password',
'label': _('Password'),
'type': 'string',
'secret': True,
'help_text': _('Password for user authentication.'),
},
{
'id': 'default_auth_path',
'label': _('Path to Auth'),
'type': 'string',
'multiline': False,
'default': 'approle',
'help_text': _('The Authentication path to use if one isn\'t provided in the metadata when linking to an input field. Defaults to \'approle\''),
},
],
'metadata': [
{
'id': 'secret_path',
'label': _('Path to Secret'),
'type': 'string',
'help_text': _(
(
'The path to the secret stored in the secret backend, e.g. /some/secret/. It is recommended'
' that you use the secret backend field to identify the storage backend and to use this field'
' for locating a specific secret within that store. However, if you prefer to fully identify'
' both the secret backend and one of its secrets using only this field, join their locations'
' into a single path without any additional separators, e.g. /location/of/backend/some/secret.'
)
),
},
{
'id': 'auth_path',
'label': _('Path to Auth'),
'type': 'string',
'multiline': False,
'help_text': _('The path where the Authentication method is mounted, e.g. approle'),
},
],
'required': ['url', 'secret_path'],
}
hashi_kv_inputs = copy.deepcopy(base_inputs)
hashi_kv_inputs['fields'].append(
{
'id': 'api_version',
'label': _('API Version'),
'choices': ['v1', 'v2'],
'help_text': _('API v1 is for static key/value lookups. API v2 is for versioned key/value lookups.'),
'default': 'v1',
}
)
hashi_kv_inputs['metadata'] = (
[
{
'id': 'secret_backend',
'label': _('Name of Secret Backend'),
'type': 'string',
'help_text': _('The name of the kv secret backend (if left empty, the first segment of the secret path will be used).'),
}
]
+ hashi_kv_inputs['metadata']
+ [
{
'id': 'secret_key',
'label': _('Key Name'),
'type': 'string',
'help_text': _('The name of the key to look up in the secret.'),
},
{
'id': 'secret_version',
'label': _('Secret Version (v2 only)'),
'type': 'string',
'help_text': _('Used to specify a specific secret version (if left empty, the latest version will be used).'),
},
]
)
hashi_kv_inputs['required'].extend(['api_version', 'secret_key'])
hashi_ssh_inputs = copy.deepcopy(base_inputs)
hashi_ssh_inputs['metadata'] = (
[
{
'id': 'public_key',
'label': _('Unsigned Public Key'),
'type': 'string',
'multiline': True,
}
]
+ hashi_ssh_inputs['metadata']
+ [
{'id': 'role', 'label': _('Role Name'), 'type': 'string', 'help_text': _('The name of the role used to sign.')},
{
'id': 'valid_principals',
'label': _('Valid Principals'),
'type': 'string',
'help_text': _('Valid principals (either usernames or hostnames) that the certificate should be signed for.'),
},
]
)
hashi_ssh_inputs['required'].extend(['public_key', 'role'])
def handle_auth(**kwargs):
token = None
if kwargs.get('token'):
token = kwargs['token']
elif kwargs.get('username') and kwargs.get('password'):
token = method_auth(**kwargs, auth_param=userpass_auth(**kwargs))
elif kwargs.get('role_id') and kwargs.get('secret_id'):
token = method_auth(**kwargs, auth_param=approle_auth(**kwargs))
elif kwargs.get('kubernetes_role'):
token = method_auth(**kwargs, auth_param=kubernetes_auth(**kwargs))
elif kwargs.get('client_cert_public') and kwargs.get('client_cert_private'):
token = method_auth(**kwargs, auth_param=client_cert_auth(**kwargs))
else:
raise Exception('Token, Username/Password, AppRole, Kubernetes, or TLS authentication parameters must be set')
return token
def userpass_auth(**kwargs):
return {'username': kwargs['username'], 'password': kwargs['password']}
def approle_auth(**kwargs):
return {'role_id': kwargs['role_id'], 'secret_id': kwargs['secret_id']}
def kubernetes_auth(**kwargs):
jwt_file = pathlib.Path('/var/run/secrets/kubernetes.io/serviceaccount/token')
with jwt_file.open('r') as jwt_fo:
jwt = jwt_fo.read().rstrip()
return {'role': kwargs['kubernetes_role'], 'jwt': jwt}
def client_cert_auth(**kwargs):
return {'name': kwargs.get('client_cert_role')}
def method_auth(**kwargs):
# get auth method specific params
request_kwargs = {'json': kwargs['auth_param'], 'timeout': 30}
# we first try to use the 'auth_path' from the metadata
# if not found we try to fetch the 'default_auth_path' from inputs
auth_path = kwargs.get('auth_path') or kwargs['default_auth_path']
url = urljoin(kwargs['url'], 'v1')
cacert = kwargs.get('cacert', None)
sess = requests.Session()
sess.mount(url, requests.adapters.HTTPAdapter(max_retries=5))
# Namespace support
if kwargs.get('namespace'):
sess.headers['X-Vault-Namespace'] = kwargs['namespace']
request_url = '/'.join([url, 'auth', auth_path, 'login']).rstrip('/')
if kwargs['auth_param'].get('username'):
request_url = request_url + '/' + (kwargs['username'])
with CertFiles(cacert) as cert:
request_kwargs['verify'] = cert
# TLS client certificate support
if kwargs.get('client_cert_public') and kwargs.get('client_cert_private'):
# Add client cert to requests Session before making call
with CertFiles(kwargs['client_cert_public'], key=kwargs['client_cert_private']) as client_cert:
sess.cert = client_cert
resp = sess.post(request_url, **request_kwargs)
else:
# Make call without client certificate
resp = sess.post(request_url, **request_kwargs)
resp.raise_for_status()
token = resp.json()['auth']['client_token']
return token
def kv_backend(**kwargs):
token = handle_auth(**kwargs)
url = kwargs['url']
secret_path = kwargs['secret_path']
secret_backend = kwargs.get('secret_backend', None)
secret_key = kwargs.get('secret_key', None)
cacert = kwargs.get('cacert', None)
api_version = kwargs['api_version']
request_kwargs = {
'timeout': 30,
'allow_redirects': False,
}
sess = requests.Session()
sess.mount(url, requests.adapters.HTTPAdapter(max_retries=5))
sess.headers['Authorization'] = 'Bearer {}'.format(token)
# Compatibility header for older installs of Hashicorp Vault
sess.headers['X-Vault-Token'] = token
if kwargs.get('namespace'):
sess.headers['X-Vault-Namespace'] = kwargs['namespace']
if api_version == 'v2':
if kwargs.get('secret_version'):
request_kwargs['params'] = {'version': kwargs['secret_version']}
if secret_backend:
path_segments = [secret_backend, 'data', secret_path]
else:
try:
mount_point, *path = pathlib.Path(secret_path.lstrip(os.sep)).parts
except Exception:
mount_point, path = secret_path, []
# https://www.vaultproject.io/api/secret/kv/kv-v2.html#read-secret-version
path_segments = [mount_point, 'data'] + path
else:
if secret_backend:
path_segments = [secret_backend, secret_path]
else:
path_segments = [secret_path]
request_url = urljoin(url, '/'.join(['v1'] + path_segments)).rstrip('/')
with CertFiles(cacert) as cert:
request_kwargs['verify'] = cert
request_retries = 0
while request_retries < 5:
response = sess.get(request_url, **request_kwargs)
# https://developer.hashicorp.com/vault/docs/enterprise/consistency
if response.status_code == 412:
request_retries += 1
time.sleep(1)
else:
break
raise_for_status(response)
json = response.json()
if api_version == 'v2':
json = json['data']
if secret_key:
try:
if (secret_key != 'data') and (secret_key not in json['data']) and ('data' in json['data']):
return json['data']['data'][secret_key]
return json['data'][secret_key]
except KeyError:
raise RuntimeError('{} is not present at {}'.format(secret_key, secret_path))
return json['data']
def ssh_backend(**kwargs):
token = handle_auth(**kwargs)
url = urljoin(kwargs['url'], 'v1')
secret_path = kwargs['secret_path']
role = kwargs['role']
cacert = kwargs.get('cacert', None)
request_kwargs = {
'timeout': 30,
'allow_redirects': False,
}
request_kwargs['json'] = {'public_key': kwargs['public_key']}
if kwargs.get('valid_principals'):
request_kwargs['json']['valid_principals'] = kwargs['valid_principals']
sess = requests.Session()
sess.mount(url, requests.adapters.HTTPAdapter(max_retries=5))
sess.headers['Authorization'] = 'Bearer {}'.format(token)
if kwargs.get('namespace'):
sess.headers['X-Vault-Namespace'] = kwargs['namespace']
# Compatibility header for older installs of Hashicorp Vault
sess.headers['X-Vault-Token'] = token
# https://www.vaultproject.io/api/secret/ssh/index.html#sign-ssh-key
request_url = '/'.join([url, secret_path, 'sign', role]).rstrip('/')
with CertFiles(cacert) as cert:
request_kwargs['verify'] = cert
request_retries = 0
while request_retries < 5:
resp = sess.post(request_url, **request_kwargs)
# https://developer.hashicorp.com/vault/docs/enterprise/consistency
if resp.status_code == 412:
request_retries += 1
time.sleep(1)
else:
break
raise_for_status(resp)
return resp.json()['data']['signed_key']
hashivault_kv_plugin = CredentialPlugin('HashiCorp Vault Secret Lookup', inputs=hashi_kv_inputs, backend=kv_backend)
hashivault_ssh_plugin = CredentialPlugin('HashiCorp Vault Signed SSH', inputs=hashi_ssh_inputs, backend=ssh_backend)
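A standalone sketch of the kv v2 path logic in kv_backend above: with no explicit secret_backend, the first segment of secret_path becomes the mount point and 'data' is spliced in after it. Pure Python and runnable; the helper name is mine.

import os
import pathlib

def v2_path_segments(secret_path, secret_backend=None):
    if secret_backend:
        return [secret_backend, 'data', secret_path]
    mount_point, *path = pathlib.Path(secret_path.lstrip(os.sep)).parts
    return [mount_point, 'data'] + path

assert v2_path_segments('/secret/team/app') == ['secret', 'data', 'team', 'app']
assert v2_path_segments('team/app', secret_backend='kv') == ['kv', 'data', 'team/app']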


@@ -1,55 +0,0 @@
import os
import tempfile
from collections import namedtuple
from requests.exceptions import HTTPError
CredentialPlugin = namedtuple('CredentialPlugin', ['name', 'inputs', 'backend'])
def raise_for_status(resp):
resp.raise_for_status()
if resp.status_code >= 300:
exc = HTTPError()
setattr(exc, 'response', resp)
raise exc
class CertFiles:
"""
A context manager used for writing a certificate and (optional) key
to $TMPDIR, and cleaning up afterwards.
This is particularly useful as a shared resource for credential plugins
that want to pull cert/key data out of the database and persist it
temporarily to the file system so that it can be loaded into the OpenSSL
certificate chain (generally, for HTTPS requests that plugins make via the
Python requests library)
with CertFiles(cert_data, key_data) as cert:
# cert is string representing a path to the cert or pemfile
# temporarily written to disk
requests.post(..., cert=cert)
"""
certfile = None
def __init__(self, cert, key=None):
self.cert = cert
self.key = key
def __enter__(self):
if not self.cert:
return None
self.certfile = tempfile.NamedTemporaryFile('wb', delete=False)
self.certfile.write(self.cert.encode())
if self.key:
self.certfile.write(b'\n')
self.certfile.write(self.key.encode())
self.certfile.flush()
return str(self.certfile.name)
def __exit__(self, *args):
if self.certfile and os.path.exists(self.certfile.name):
os.remove(self.certfile.name)
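A runnable sketch of the context manager above, assuming CertFiles is in scope (e.g. imported from this module before its removal); the PEM text is a placeholder.

import os

cert_pem = '-----BEGIN CERTIFICATE-----\nMIIB...\n-----END CERTIFICATE-----'
with CertFiles(cert_pem) as cert_path:
    assert os.path.exists(cert_path)   # temp file holding the cert
assert not os.path.exists(cert_path)   # removed again on exit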


@@ -1,76 +0,0 @@
from .plugin import CredentialPlugin
from django.utils.translation import gettext_lazy as _
try:
from delinea.secrets.server import DomainPasswordGrantAuthorizer, PasswordGrantAuthorizer, SecretServer, ServerSecret
except ImportError:
from thycotic.secrets.server import DomainPasswordGrantAuthorizer, PasswordGrantAuthorizer, SecretServer, ServerSecret
tss_inputs = {
'fields': [
{
'id': 'server_url',
'label': _('Secret Server URL'),
'help_text': _('The Base URL of Secret Server e.g. https://myserver/SecretServer or https://mytenant.secretservercloud.com'),
'type': 'string',
},
{
'id': 'username',
'label': _('Username'),
'help_text': _('The (Application) user username'),
'type': 'string',
},
{
'id': 'domain',
'label': _('Domain'),
'help_text': _('The (Application) user domain'),
'type': 'string',
},
{
'id': 'password',
'label': _('Password'),
'help_text': _('The corresponding password'),
'type': 'string',
'secret': True,
},
],
'metadata': [
{
'id': 'secret_id',
'label': _('Secret ID'),
'help_text': _('The integer ID of the secret'),
'type': 'string',
},
{
'id': 'secret_field',
'label': _('Secret Field'),
'help_text': _('The field to extract from the secret'),
'type': 'string',
},
],
'required': ['server_url', 'username', 'password', 'secret_id', 'secret_field'],
}
def tss_backend(**kwargs):
if kwargs.get("domain"):
authorizer = DomainPasswordGrantAuthorizer(
base_url=kwargs['server_url'], username=kwargs['username'], domain=kwargs['domain'], password=kwargs['password']
)
else:
authorizer = PasswordGrantAuthorizer(kwargs['server_url'], kwargs['username'], kwargs['password'])
secret_server = SecretServer(kwargs['server_url'], authorizer)
secret_dict = secret_server.get_secret(kwargs['secret_id'])
secret = ServerSecret(**secret_dict)
value = secret.fields[kwargs['secret_field']].value
if not isinstance(value, str):
return value.text
return value
tss_plugin = CredentialPlugin(
'Thycotic Secret Server',
tss_inputs,
tss_backend,
)


@@ -1,9 +1,9 @@
import os
import pkg_resources
import sqlite3
import sys
import traceback
import uuid
from importlib.metadata import version as _get_version
from django.core.cache import cache
from django.core.cache.backends.locmem import LocMemCache
@@ -70,7 +70,7 @@ class RecordedQueryLog(object):
else:
progname = os.path.basename(sys.argv[0])
filepath = os.path.join(self.dest, '{}.sqlite'.format(progname))
version = pkg_resources.get_distribution('awx').version
version = _get_version('awx')
log = sqlite3.connect(filepath, timeout=3)
log.execute(
'CREATE TABLE IF NOT EXISTS queries ('


@@ -72,8 +72,8 @@ class PubSub(object):
ns = conn.wait(psycopg.generators.notifies(conn.pgconn))
except psycopg.errors._NO_TRACEBACK as ex:
raise ex.with_traceback(None)
enc = psycopg._encodings.pgconn_encoding(conn.pgconn)
for pgn in ns:
enc = conn.pgconn._encoding
n = psycopg.connection.Notify(pgn.relname.decode(enc), pgn.extra.decode(enc), pgn.be_pid)
yield n


@@ -0,0 +1,53 @@
from django.conf import settings
from ansible_base.lib.utils.db import get_pg_notify_params
from awx.main.dispatch import get_task_queuename
from awx.main.dispatch.pool import get_auto_max_workers
def get_dispatcherd_config(for_service: bool = False, mock_publish: bool = False) -> dict:
"""Return a dictionary config for dispatcherd
Parameters:
for_service: if True, include dynamic options needed for running the dispatcher service;
this requires database access, so delay evaluation until after app setup
mock_publish: if True, route publishes to a no-op broker instead of pg_notify
"""
config = {
"version": 2,
"service": {
"pool_kwargs": {
"min_workers": settings.JOB_EVENT_WORKERS,
"max_workers": get_auto_max_workers(),
},
"main_kwargs": {"node_id": settings.CLUSTER_HOST_ID},
"process_manager_cls": "ForkServerManager",
"process_manager_kwargs": {"preload_modules": ['awx.main.dispatch.hazmat']},
},
"brokers": {
"socket": {"socket_path": settings.DISPATCHERD_DEBUGGING_SOCKFILE},
},
"publish": {"default_control_broker": "socket"},
"worker": {"worker_cls": "awx.main.dispatch.worker.dispatcherd.AWXTaskWorker"},
}
if mock_publish:
config["brokers"]["noop"] = {}
config["publish"]["default_broker"] = "noop"
else:
config["brokers"]["pg_notify"] = {
"config": get_pg_notify_params(),
"sync_connection_factory": "ansible_base.lib.utils.db.psycopg_connection_from_django",
"default_publish_channel": settings.CLUSTER_HOST_ID, # used for debugging commands
}
config["publish"]["default_broker"] = "pg_notify"
if for_service:
config["producers"] = {
"ScheduledProducer": {"task_schedule": settings.DISPATCHER_SCHEDULE},
"OnStartProducer": {"task_list": {"awx.main.tasks.system.dispatch_startup": {}}},
"ControlProducer": {},
}
config["brokers"]["pg_notify"]["channels"] = ['tower_broadcast_all', 'tower_settings_change', get_task_queuename()]
return config
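A hedged usage sketch: with mock_publish=True the publish side is routed to the no-op broker, which the asserts below check against the logic above. The import path is an assumption, and Django settings must already be configured, so this only runs inside an AWX environment.

from awx.main.dispatch.config import get_dispatcherd_config  # assumed module path

config = get_dispatcherd_config(mock_publish=True)
assert config['publish']['default_broker'] == 'noop'
assert 'pg_notify' not in config['brokers']
assert 'producers' not in config  # only added when for_service=True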


@@ -0,0 +1,36 @@
import django
# dispatcherd publisher logic is likely to be used, but needs manual preload
from dispatcherd.brokers import pg_notify # noqa
# Cache may not be initialized until we are in the worker, so preload here
from channels_redis import core # noqa
from awx import prepare_env
from dispatcherd.utils import resolve_callable
prepare_env()
django.setup() # noqa
from django.conf import settings
# Preload all periodic tasks so their imports will be in shared memory
for name, options in settings.CELERYBEAT_SCHEDULE.items():
resolve_callable(options['task'])
# Preload in-line import from tasks
from awx.main.scheduler.kubernetes import PodManager # noqa
from django.core.cache import cache as django_cache
from django.db import connection
connection.close()
django_cache.close()


@@ -88,8 +88,10 @@ class Scheduler:
# internally times are all referenced relative to startup time, add grace period
self.global_start = time.time() + 2.0
def get_and_mark_pending(self):
relative_time = time.time() - self.global_start
def get_and_mark_pending(self, reftime=None):
if reftime is None:
reftime = time.time() # mostly for tests
relative_time = reftime - self.global_start
to_run = []
for job in self.jobs:
if job.due_to_run(relative_time):
@@ -98,8 +100,10 @@ class Scheduler:
job.mark_run(relative_time)
return to_run
def time_until_next_run(self):
relative_time = time.time() - self.global_start
def time_until_next_run(self, reftime=None):
if reftime is None:
reftime = time.time() # mostly for tests
relative_time = reftime - self.global_start
next_job = min(self.jobs, key=lambda j: j.next_run)
delta = next_job.next_run - relative_time
if delta <= 0.1:
@@ -115,10 +119,11 @@ class Scheduler:
def debug(self, *args, **kwargs):
data = dict()
data['title'] = 'Scheduler status'
reftime = time.time()
now = datetime.fromtimestamp(time.time()).strftime('%Y-%m-%d %H:%M:%S UTC')
now = datetime.fromtimestamp(reftime).strftime('%Y-%m-%d %H:%M:%S UTC')
start_time = datetime.fromtimestamp(self.global_start).strftime('%Y-%m-%d %H:%M:%S UTC')
relative_time = time.time() - self.global_start
relative_time = reftime - self.global_start
data['started_time'] = start_time
data['current_time'] = now
data['current_time_relative'] = round(relative_time, 3)
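A minimal standalone illustration of the reftime parameter added above: the caller can pin one reference time and evaluate several scheduler questions against the same instant, so time spent inside the loop does not skew the answers. MiniScheduler is a toy stand-in, not the class from this diff.

import time

class MiniScheduler:
    def __init__(self, period):
        self.period = period
        self.global_start = time.time()
        self.next_run = period  # relative seconds until first run

    def get_and_mark_pending(self, reftime=None):
        if reftime is None:
            reftime = time.time()
        relative_time = reftime - self.global_start
        if relative_time >= self.next_run:
            self.next_run += self.period
            return ['job']
        return []

    def time_until_next_run(self, reftime=None):
        if reftime is None:
            reftime = time.time()
        return self.next_run - (reftime - self.global_start)

s = MiniScheduler(period=20)
reftime = time.time()  # one timestamp for both questions
assert s.get_and_mark_pending(reftime=reftime) == []
assert 0 < s.time_until_next_run(reftime=reftime) <= 20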


@@ -7,6 +7,7 @@ import time
import traceback
from datetime import datetime
from uuid import uuid4
import json
import collections
from multiprocessing import Process
@@ -21,9 +22,14 @@ from django_guid import set_guid
from jinja2 import Template
import psutil
from ansible_base.lib.logging.runtime import log_excess_runtime
from awx.main.models import UnifiedJob
from awx.main.dispatch import reaper
from awx.main.utils.common import convert_mem_str_to_bytes, get_mem_effective_capacity, log_excess_runtime
from awx.main.utils.common import get_mem_effective_capacity, get_corrected_memory, get_corrected_cpu, get_cpu_effective_capacity
# ansible-runner
from ansible_runner.utils.capacity import get_mem_in_bytes, get_cpu_count
if 'run_callback_receiver' in sys.argv:
logger = logging.getLogger('awx.main.commands.run_callback_receiver')
@@ -31,6 +37,9 @@ else:
logger = logging.getLogger('awx.main.dispatch')
RETIRED_SENTINEL_TASK = "[retired]"
class NoOpResultQueue(object):
def put(self, item):
pass
@@ -75,11 +84,17 @@ class PoolWorker(object):
self.queue = MPQueue(queue_size)
self.process = Process(target=target, args=(self.queue, self.finished) + args)
self.process.daemon = True
self.creation_time = time.monotonic()
self.retiring = False
def start(self):
self.process.start()
def put(self, body):
if self.retiring:
uuid = body.get('uuid', 'N/A') if isinstance(body, dict) else 'N/A'
logger.info(f"Worker pid:{self.pid} is retiring. Refusing new task {uuid}.")
raise QueueFull("Worker is retiring and not accepting new tasks") # AutoscalePool.write handles QueueFull
uuid = '?'
if isinstance(body, dict):
if not body.get('uuid'):
@@ -98,6 +113,11 @@ class PoolWorker(object):
"""
self.queue.put('QUIT')
@property
def age(self):
"""Returns the current age of the worker in seconds."""
return time.monotonic() - self.creation_time
@property
def pid(self):
return self.process.pid
@@ -144,6 +164,8 @@ class PoolWorker(object):
# the purpose of self.managed_tasks is to just track internal
# state of which events are *currently* being processed.
logger.warning('Event UUID {} appears to have been duplicated.'.format(uuid))
if self.retiring:
self.managed_tasks[RETIRED_SENTINEL_TASK] = {'task': RETIRED_SENTINEL_TASK}
@property
def current_task(self):
@@ -259,6 +281,8 @@ class WorkerPool(object):
'{% for w in workers %}'
'. worker[pid:{{ w.pid }}]{% if not w.alive %} GONE exit={{ w.exitcode }}{% endif %}'
' sent={{ w.messages_sent }}'
' age={{ "%.0f"|format(w.age) }}s'
' retiring={{ w.retiring }}'
'{% if w.messages_finished %} finished={{ w.messages_finished }}{% endif %}'
' qsize={{ w.managed_tasks|length }}'
' rss={{ w.mb }}MB'
@@ -305,6 +329,41 @@ class WorkerPool(object):
logger.exception('could not kill {}'.format(worker.pid))
def get_auto_max_workers():
"""Method we normally rely on to get max_workers
Uses almost the same logic as Instance.local_health_check
The important thing is to be MORE than Instance.capacity
so that the task-manager does not over-schedule this node
Ideally we would just use the capacity from the database plus reserve workers,
but this poses some bootstrap problems where OCP task containers
register themselves after startup
"""
# Get memory (in bytes) from ansible-runner
total_memory_bytes = get_mem_in_bytes()
# This may replace the memory calculation with a user override
corrected_memory = get_corrected_memory(total_memory_bytes)
# Get same number as max forks based on memory, this function takes memory as bytes
mem_capacity = get_mem_effective_capacity(corrected_memory, is_control_node=True)
# Follow same process for CPU capacity constraint
cpu_count = get_cpu_count()
corrected_cpu = get_corrected_cpu(cpu_count)
cpu_capacity = get_cpu_effective_capacity(corrected_cpu, is_control_node=True)
# Here is what is different from health checks: take the max of the two capacities
auto_max = max(mem_capacity, cpu_capacity)
# add magic number of extra workers to ensure
# we have a few extra workers to run the heartbeat
auto_max += 7
return auto_max
class AutoscalePool(WorkerPool):
"""
An extended pool implementation that automatically scales workers up and
@@ -315,22 +374,13 @@ class AutoscalePool(WorkerPool):
def __init__(self, *args, **kwargs):
self.max_workers = kwargs.pop('max_workers', None)
self.max_worker_lifetime_seconds = kwargs.pop(
'max_worker_lifetime_seconds', getattr(settings, 'WORKER_MAX_LIFETIME_SECONDS', 14400)
) # Default to 4 hours
super(AutoscalePool, self).__init__(*args, **kwargs)
if self.max_workers is None:
settings_absmem = getattr(settings, 'SYSTEM_TASK_ABS_MEM', None)
if settings_absmem is not None:
# There are 1073741824 bytes in a gigabyte. Convert bytes to gigabytes by dividing by 2**30
total_memory_gb = convert_mem_str_to_bytes(settings_absmem) // 2**30
else:
total_memory_gb = (psutil.virtual_memory().total >> 30) + 1 # noqa: round up
# Get same number as max forks based on memory, this function takes memory as bytes
self.max_workers = get_mem_effective_capacity(total_memory_gb * 2**30)
# add magic prime number of extra workers to ensure
# we have a few extra workers to run the heartbeat
self.max_workers += 7
self.max_workers = get_auto_max_workers()
# max workers can't be less than min_workers
self.max_workers = max(self.min_workers, self.max_workers)
@@ -344,6 +394,9 @@ class AutoscalePool(WorkerPool):
self.scale_up_ct = 0
self.worker_count_max = 0
# last time we wrote current tasks, to avoid too much log spam
self.last_task_list_log = time.monotonic()
def produce_subsystem_metrics(self, metrics_object):
metrics_object.set('dispatcher_pool_scale_up_events', self.scale_up_ct)
metrics_object.set('dispatcher_pool_active_task_count', sum(len(w.managed_tasks) for w in self.workers))
@@ -366,7 +419,7 @@ class AutoscalePool(WorkerPool):
def debug_meta(self):
return 'min={} max={}'.format(self.min_workers, self.max_workers)
@log_excess_runtime(logger)
@log_excess_runtime(logger, debug_cutoff=0.05, cutoff=0.2)
def cleanup(self):
"""
Perform some internal account and cleanup. This is run on
@@ -383,6 +436,7 @@ class AutoscalePool(WorkerPool):
"""
orphaned = []
for w in self.workers[::]:
is_retirement_age = self.max_worker_lifetime_seconds is not None and w.age > self.max_worker_lifetime_seconds
if not w.alive:
# the worker process has exited
# 1. take the task it was running and enqueue the error
@@ -391,6 +445,10 @@ class AutoscalePool(WorkerPool):
# send them to another worker
logger.error('worker pid:{} is gone (exit={})'.format(w.pid, w.exitcode))
if w.current_task:
if w.current_task == {'task': RETIRED_SENTINEL_TASK}:
logger.debug('scaling down worker pid:{} due to worker age: {}'.format(w.pid, w.age))
self.workers.remove(w)
continue
if w.current_task != 'QUIT':
try:
for j in UnifiedJob.objects.filter(celery_task_id=w.current_task['uuid']):
@@ -401,6 +459,7 @@ class AutoscalePool(WorkerPool):
logger.warning(f'Worker was told to quit but has not, pid={w.pid}')
orphaned.extend(w.orphaned_tasks)
self.workers.remove(w)
elif w.idle and len(self.workers) > self.min_workers:
# the process has an empty queue (it's idle) and we have
# more processes in the pool than we need (> min)
@@ -409,6 +468,22 @@ class AutoscalePool(WorkerPool):
logger.debug('scaling down worker pid:{}'.format(w.pid))
w.quit()
self.workers.remove(w)
elif w.idle and is_retirement_age:
logger.debug('scaling down worker pid:{} due to worker age: {}'.format(w.pid, w.age))
w.quit()
self.workers.remove(w)
elif is_retirement_age and not w.retiring and not w.idle:
logger.info(
f"Worker pid:{w.pid} (age: {w.age:.0f}s) exceeded max lifetime ({self.max_worker_lifetime_seconds:.0f}s). "
"Signaling for graceful retirement."
)
# Send QUIT signal; worker will finish current task then exit.
w.quit()
# mark as retiring to reject any future tasks that might be assigned in meantime
w.retiring = True
if w.alive:
# if we discover a task manager invocation that's been running
# too long, reap it (because otherwise it'll just hold the postgres
@@ -461,6 +536,14 @@ class AutoscalePool(WorkerPool):
self.worker_count_max = new_worker_ct
return ret
@staticmethod
def fast_task_serialization(current_task):
try:
return str(current_task.get('task')) + ' - ' + str(sorted(current_task.get('args', []))) + ' - ' + str(sorted(current_task.get('kwargs', {})))
except Exception:
# just make sure this does not make things worse
return str(current_task)
def write(self, preferred_queue, body):
if 'guid' in body:
set_guid(body['guid'])
@@ -482,6 +565,15 @@ class AutoscalePool(WorkerPool):
if isinstance(body, dict):
task_name = body.get('task')
logger.warning(f'Workers maxed, queuing {task_name}, load: {sum(len(w.managed_tasks) for w in self.workers)} / {len(self.workers)}')
# Once every 10 seconds write out task list for debugging
if time.monotonic() - self.last_task_list_log >= 10.0:
task_counts = {}
for worker in self.workers:
task_slug = self.fast_task_serialization(worker.current_task)
task_counts.setdefault(task_slug, 0)
task_counts[task_slug] += 1
logger.info(f'Running tasks by count:\n{json.dumps(task_counts, indent=2)}')
self.last_task_list_log = time.monotonic()
return super(AutoscalePool, self).write(preferred_queue, body)
except Exception:
for conn in connections.all():
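A standalone sketch of the "Running tasks by count" aggregation above, using plain dicts in place of live workers (the task names are placeholders):

import json

current_tasks = [
    {'task': 'awx.main.tasks.system.cluster_node_heartbeat', 'args': [], 'kwargs': {}},
    {'task': 'awx.main.tasks.system.cluster_node_heartbeat', 'args': [], 'kwargs': {}},
    {'task': 'awx.main.tasks.jobs.RunJob', 'args': [42], 'kwargs': {}},
]

task_counts = {}
for current_task in current_tasks:
    slug = str(current_task.get('task')) + ' - ' + str(sorted(current_task.get('args', []))) + ' - ' + str(sorted(current_task.get('kwargs', {})))
    task_counts.setdefault(slug, 0)
    task_counts[slug] += 1

print(json.dumps(task_counts, indent=2))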


@@ -4,10 +4,13 @@ import json
import time
from uuid import uuid4
from dispatcherd.publish import submit_task
from dispatcherd.utils import resolve_callable
from django_guid import get_guid
from django.conf import settings
from . import pg_bus_conn
from awx.main.utils import is_testing
logger = logging.getLogger('awx.main.dispatch')
@@ -93,6 +96,19 @@ class task:
@classmethod
def apply_async(cls, args=None, kwargs=None, queue=None, uuid=None, **kw):
try:
from flags.state import flag_enabled
if flag_enabled('FEATURE_DISPATCHERD_ENABLED'):
# At this point we have the import string, and submit_task wants the method, so back to that
actual_task = resolve_callable(cls.name)
return submit_task(actual_task, args=args, kwargs=kwargs, queue=queue, uuid=uuid, **kw)
except Exception:
logger.exception(f"[DISPATCHER] Failed to check for alternative dispatcherd implementation for {cls.name}")
# Continue with original implementation if anything fails
pass
# Original implementation follows
queue = queue or getattr(cls.queue, 'im_func', cls.queue)
if not queue:
msg = f'{cls.name}: Queue value required and may not be None'
@@ -101,7 +117,7 @@ class task:
obj = cls.get_async_body(args=args, kwargs=kwargs, uuid=uuid, **kw)
if callable(queue):
queue = queue()
if not is_testing():
if not settings.DISPATCHER_MOCK_PUBLISH:
with pg_bus_conn() as conn:
conn.notify(queue, json.dumps(obj))
return (obj, queue)
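A toy sketch of the fallback pattern in apply_async above: the dispatcherd path is attempted only behind the feature flag, and any failure in the check or submission falls back to the original pg_notify path. All names below are stand-ins, not the real AWX call signatures.

def apply_async_gated(flag_enabled, submit_dispatcherd, publish_pg_notify):
    try:
        if flag_enabled('FEATURE_DISPATCHERD_ENABLED'):
            return submit_dispatcherd()
    except Exception:
        pass  # anything failing here falls back to the original path
    return publish_pg_notify()

def broken_flag_check(name):
    raise RuntimeError('flag backend unavailable')

assert apply_async_gated(lambda name: True, lambda: 'dispatcherd', lambda: 'pg_notify') == 'dispatcherd'
assert apply_async_gated(broken_flag_check, lambda: 'dispatcherd', lambda: 'pg_notify') == 'pg_notify'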


@@ -15,11 +15,13 @@ from datetime import timedelta
from django import db
from django.conf import settings
import redis.exceptions
from ansible_base.lib.logging.runtime import log_excess_runtime
from awx.main.dispatch.pool import WorkerPool
from awx.main.dispatch.periodic import Scheduler
from awx.main.dispatch import pg_bus_conn
from awx.main.utils.common import log_excess_runtime
from awx.main.utils.db import set_connection_name
import awx.main.analytics.subsystem_metrics as s_metrics
@@ -126,13 +128,16 @@ class AWXConsumerBase(object):
return
self.dispatch_task(body)
@log_excess_runtime(logger)
@log_excess_runtime(logger, debug_cutoff=0.05, cutoff=0.2)
def record_statistics(self):
if time.time() - self.last_stats > 1: # buffer stat recording to once per second
save_data = self.pool.debug()
try:
self.redis.set(f'awx_{self.name}_statistics', self.pool.debug())
self.redis.set(f'awx_{self.name}_statistics', save_data)
except redis.exceptions.ConnectionError as exc:
logger.warning(f'Redis connection error saving {self.name} status data:\n{exc}\nmissed data:\n{save_data}')
except Exception:
logger.exception(f"encountered an error communicating with redis to store {self.name} statistics")
logger.exception(f"Unknown redis error saving {self.name} status data:\nmissed data:\n{save_data}")
self.last_stats = time.time()
def run(self, *args, **kwargs):
@@ -183,11 +188,15 @@ class AWXConsumerPG(AWXConsumerBase):
schedule['metrics_gather'] = {'control': self.record_metrics, 'schedule': timedelta(seconds=20)}
self.scheduler = Scheduler(schedule)
@log_excess_runtime(logger, debug_cutoff=0.05, cutoff=0.2)
def record_metrics(self):
current_time = time.time()
self.pool.produce_subsystem_metrics(self.subsystem_metrics)
self.subsystem_metrics.set('dispatcher_availability', self.listen_cumulative_time / (current_time - self.last_metrics_gather))
self.subsystem_metrics.pipe_execute()
try:
self.subsystem_metrics.pipe_execute()
except redis.exceptions.ConnectionError as exc:
logger.warning(f'Redis connection error saving dispatcher metrics, error:\n{exc}')
self.listen_cumulative_time = 0.0
self.last_metrics_gather = current_time
@@ -203,7 +212,11 @@ class AWXConsumerPG(AWXConsumerBase):
except Exception as exc:
logger.warning(f'Failed to save dispatcher statistics {exc}')
for job in self.scheduler.get_and_mark_pending():
# Everything benchmarks to the same original time, so that skews due to
# the runtime of the actions themselves do not mess up scheduling expectations
reftime = time.time()
for job in self.scheduler.get_and_mark_pending(reftime=reftime):
if 'control' in job.data:
try:
job.data['control']()
@@ -220,12 +233,12 @@ class AWXConsumerPG(AWXConsumerBase):
self.listen_start = time.time()
return self.scheduler.time_until_next_run()
return self.scheduler.time_until_next_run(reftime=reftime)
def run(self, *args, **kwargs):
super(AWXConsumerPG, self).run(*args, **kwargs)
logger.info(f"Running worker {self.name} listening to queues {self.queues}")
logger.info(f"Running {self.name}, workers min={self.pool.min_workers} max={self.pool.max_workers}, listening to queues {self.queues}")
init = False
while True:


@@ -20,6 +20,7 @@ from awx.main.models import JobEvent, AdHocCommandEvent, ProjectUpdateEvent, Inv
from awx.main.constants import ACTIVE_STATES
from awx.main.models.events import emit_event_detail
from awx.main.utils.profiling import AWXProfiler
from awx.main.tasks.system import events_processed_hook
import awx.main.analytics.subsystem_metrics as s_metrics
from .base import BaseWorker
@@ -46,7 +47,7 @@ def job_stats_wrapup(job_identifier, event=None):
# If the status was a finished state before this update was made, send notifications
# If not, we will send notifications when the status changes
if uj.status not in ACTIVE_STATES:
uj.send_notification_templates('succeeded' if uj.status == 'successful' else 'failed')
events_processed_hook(uj)
except Exception:
logger.exception('Worker failed to save stats or emit notifications: Job {}'.format(job_identifier))
@@ -85,6 +86,7 @@ class CallbackBrokerWorker(BaseWorker):
return os.getpid()
def read(self, queue):
has_redis_error = False
try:
res = self.redis.blpop(self.queue_name, timeout=1)
if res is None:
@@ -94,14 +96,21 @@ class CallbackBrokerWorker(BaseWorker):
self.subsystem_metrics.inc('callback_receiver_events_popped_redis', 1)
self.subsystem_metrics.inc('callback_receiver_events_in_memory', 1)
return json.loads(res[1])
except redis.exceptions.ConnectionError as exc:
# Low-noise log, because this is very common and many workers will write it
logger.error(f"redis connection error: {exc}")
has_redis_error = True
time.sleep(5)
except redis.exceptions.RedisError:
logger.exception("encountered an error communicating with redis")
has_redis_error = True
time.sleep(1)
except (json.JSONDecodeError, KeyError):
logger.exception("failed to decode JSON message from redis")
finally:
self.record_statistics()
self.record_read_metrics()
if not has_redis_error:
self.record_statistics()
self.record_read_metrics()
return {'event': 'FLUSH'}


@@ -0,0 +1,14 @@
from dispatcherd.worker.task import TaskWorker
from django.db import connection
class AWXTaskWorker(TaskWorker):
def on_start(self) -> None:
"""Get worker connected so that first task it gets will be worked quickly"""
connection.ensure_connection()
def pre_task(self, message) -> None:
"""This should remedy bad connections that can not fix themselves"""
connection.close_if_unusable_or_obsolete()


@@ -38,5 +38,12 @@ class PostRunError(Exception):
super(PostRunError, self).__init__(msg)
class PolicyEvaluationError(Exception):
def __init__(self, msg, status='failed', tb=''):
self.status = status
self.tb = tb
super(PolicyEvaluationError, self).__init__(msg)
class ReceptorNodeNotFound(RuntimeError):
pass
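A hypothetical usage sketch (the import path is assumed): callers can carry a status and traceback along with the message.

from awx.main.exceptions import PolicyEvaluationError  # assumed module path

try:
    raise PolicyEvaluationError('policy denied job start', status='error', tb='...')
except PolicyEvaluationError as exc:
    assert exc.status == 'error' and exc.tb == '...'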


@@ -14,21 +14,14 @@ from jinja2.exceptions import UndefinedError, TemplateSyntaxError, SecurityError
# Django
from django.core import exceptions as django_exceptions
from django.core.serializers.json import DjangoJSONEncoder
from django.db.models.signals import (
post_save,
post_delete,
)
from django.db.models.signals import m2m_changed
from django.db.models.signals import m2m_changed, post_save
from django.db import models
from django.db.models.fields.related import lazy_related_operation
from django.db.models.fields.related_descriptors import (
ReverseOneToOneDescriptor,
ForwardManyToOneDescriptor,
ManyToManyDescriptor,
ReverseManyToOneDescriptor,
create_forward_many_to_many_manager,
)
from django.utils.encoding import smart_str
from django.db.models import JSONField
from django.utils.functional import cached_property
from django.utils.translation import gettext_lazy as _
@@ -54,7 +47,6 @@ __all__ = [
'ImplicitRoleField',
'SmartFilterField',
'OrderedManyToManyField',
'update_role_parentage_for_instance',
'is_implicit_parent',
]
@@ -146,34 +138,6 @@ class AutoOneToOneField(models.OneToOneField):
setattr(cls, related.get_accessor_name(), AutoSingleRelatedObjectDescriptor(related))
def resolve_role_field(obj, field):
ret = []
field_components = field.split('.', 1)
if hasattr(obj, field_components[0]):
obj = getattr(obj, field_components[0])
else:
return []
if obj is None:
return []
if len(field_components) == 1:
# use extremely generous duck typing to accommodate all possible forms
# of the model that may be used during various migrations
if obj._meta.model_name != 'role' or obj._meta.app_label != 'main':
raise Exception(smart_str('{} refers to a {}, not a Role'.format(field, type(obj))))
ret.append(obj.id)
else:
if type(obj) is ManyToManyDescriptor:
for o in obj.all():
ret += resolve_role_field(o, field_components[1])
else:
ret += resolve_role_field(obj, field_components[1])
return ret
def is_implicit_parent(parent_role, child_role):
"""
Determine if the parent_role is an implicit parent as defined by
@@ -210,34 +174,6 @@ def is_implicit_parent(parent_role, child_role):
return False
def update_role_parentage_for_instance(instance):
"""update_role_parentage_for_instance
updates the parents listing for all the roles
of a given instance if they have changed
"""
parents_removed = set()
parents_added = set()
for implicit_role_field in getattr(instance.__class__, '__implicit_role_fields'):
cur_role = getattr(instance, implicit_role_field.name)
original_parents = set(json.loads(cur_role.implicit_parents))
new_parents = implicit_role_field._resolve_parent_roles(instance)
removals = original_parents - new_parents
if removals:
cur_role.parents.remove(*list(removals))
parents_removed.add(cur_role.pk)
additions = new_parents - original_parents
if additions:
cur_role.parents.add(*list(additions))
parents_added.add(cur_role.pk)
new_parents_list = list(new_parents)
new_parents_list.sort()
new_parents_json = json.dumps(new_parents_list)
if cur_role.implicit_parents != new_parents_json:
cur_role.implicit_parents = new_parents_json
cur_role.save(update_fields=['implicit_parents'])
return (parents_added, parents_removed)
class ImplicitRoleDescriptor(ForwardManyToOneDescriptor):
pass
@@ -269,65 +205,6 @@ class ImplicitRoleField(models.ForeignKey):
getattr(cls, '__implicit_role_fields').append(self)
post_save.connect(self._post_save, cls, True, dispatch_uid='implicit-role-post-save')
post_delete.connect(self._post_delete, cls, True, dispatch_uid='implicit-role-post-delete')
function = lambda local, related, field: self.bind_m2m_changed(field, related, local)
lazy_related_operation(function, cls, "self", field=self)
def bind_m2m_changed(self, _self, _role_class, cls):
if not self.parent_role:
return
field_names = self.parent_role
if type(field_names) is not list:
field_names = [field_names]
for field_name in field_names:
if field_name.startswith('singleton:'):
continue
field_name, sep, field_attr = field_name.partition('.')
# Non-existent fields can occur if a parent model is ever moved
# inside a migration; this is needed for the job_template_organization_field
# migration in particular.
# Consistency is assured by unit tests in awx.main.tests.functional
field = getattr(cls, field_name, None)
if field and type(field) is ReverseManyToOneDescriptor or type(field) is ManyToManyDescriptor:
if '.' in field_attr:
raise Exception('Referencing deep roles through ManyToMany fields is unsupported.')
if type(field) is ReverseManyToOneDescriptor:
sender = field.through
else:
sender = field.related.through
reverse = type(field) is ManyToManyDescriptor
m2m_changed.connect(self.m2m_update(field_attr, reverse), sender, weak=False)
def m2m_update(self, field_attr, _reverse):
def _m2m_update(instance, action, model, pk_set, reverse, **kwargs):
if action == 'post_add' or action == 'pre_remove':
if _reverse:
reverse = not reverse
if reverse:
for pk in pk_set:
obj = model.objects.get(pk=pk)
if action == 'post_add':
getattr(instance, field_attr).children.add(getattr(obj, self.name))
if action == 'pre_remove':
getattr(instance, field_attr).children.remove(getattr(obj, self.name))
else:
for pk in pk_set:
obj = model.objects.get(pk=pk)
if action == 'post_add':
getattr(instance, self.name).parents.add(getattr(obj, field_attr))
if action == 'pre_remove':
getattr(instance, self.name).parents.remove(getattr(obj, field_attr))
return _m2m_update
def _post_save(self, instance, created, *args, **kwargs):
Role_ = utils.get_current_apps().get_model('main', 'Role')
@@ -337,68 +214,24 @@ class ImplicitRoleField(models.ForeignKey):
Model = utils.get_current_apps().get_model('main', instance.__class__.__name__)
latest_instance = Model.objects.get(pk=instance.pk)
# Avoid circular import
from awx.main.models.rbac import batch_role_ancestor_rebuilding, Role
# Create any missing role objects
missing_roles = []
for implicit_role_field in getattr(latest_instance.__class__, '__implicit_role_fields'):
cur_role = getattr(latest_instance, implicit_role_field.name, None)
if cur_role is None:
missing_roles.append(Role_(role_field=implicit_role_field.name, content_type_id=ct_id, object_id=latest_instance.id))
with batch_role_ancestor_rebuilding():
# Create any missing role objects
missing_roles = []
for implicit_role_field in getattr(latest_instance.__class__, '__implicit_role_fields'):
cur_role = getattr(latest_instance, implicit_role_field.name, None)
if cur_role is None:
missing_roles.append(Role_(role_field=implicit_role_field.name, content_type_id=ct_id, object_id=latest_instance.id))
if len(missing_roles) > 0:
Role_.objects.bulk_create(missing_roles)
updates = {}
role_ids = []
for role in Role_.objects.filter(content_type_id=ct_id, object_id=latest_instance.id):
setattr(latest_instance, role.role_field, role)
updates[role.role_field] = role.id
role_ids.append(role.id)
type(latest_instance).objects.filter(pk=latest_instance.pk).update(**updates)
if len(missing_roles) > 0:
Role_.objects.bulk_create(missing_roles)
updates = {}
role_ids = []
for role in Role_.objects.filter(content_type_id=ct_id, object_id=latest_instance.id):
setattr(latest_instance, role.role_field, role)
updates[role.role_field] = role.id
role_ids.append(role.id)
type(latest_instance).objects.filter(pk=latest_instance.pk).update(**updates)
Role.rebuild_role_ancestor_list(role_ids, [])
update_role_parentage_for_instance(latest_instance)
instance.refresh_from_db()
def _resolve_parent_roles(self, instance):
if not self.parent_role:
return set()
paths = self.parent_role if type(self.parent_role) is list else [self.parent_role]
parent_roles = set()
for path in paths:
if path.startswith("singleton:"):
singleton_name = path[10:]
Role_ = utils.get_current_apps().get_model('main', 'Role')
qs = Role_.objects.filter(singleton_name=singleton_name)
if qs.count() >= 1:
role = qs[0]
else:
role = Role_.objects.create(singleton_name=singleton_name, role_field=singleton_name)
parents = [role.id]
else:
parents = resolve_role_field(instance, path)
for parent in parents:
parent_roles.add(parent)
return parent_roles
def _post_delete(self, instance, *args, **kwargs):
role_ids = []
for implicit_role_field in getattr(instance.__class__, '__implicit_role_fields'):
role_ids.append(getattr(instance, implicit_role_field.name + '_id'))
Role_ = utils.get_current_apps().get_model('main', 'Role')
child_ids = [x for x in Role_.parents.through.objects.filter(to_role_id__in=role_ids).distinct().values_list('from_role_id', flat=True)]
Role_.objects.filter(id__in=role_ids).delete()
# Avoid circular import
from awx.main.models.rbac import Role
Role.rebuild_role_ancestor_list([], child_ids)
instance.refresh_from_db()
class SmartFilterField(models.TextField):
@@ -832,7 +665,7 @@ class CredentialTypeInjectorField(JSONSchemaField):
'type': 'string',
# The environment variable _value_ can be any ascii,
# but pexpect will choke on any unicode
'pattern': '^[\x00-\x7F]*$',
'pattern': '^[\x00-\x7f]*$',
},
},
'additionalProperties': False,
@@ -1039,7 +872,7 @@ class OrderedManyToManyField(models.ManyToManyField):
descriptor = getattr(instance, self.name)
order_with_respect_to = descriptor.source_field_name
for i, ig in enumerate(sender.objects.filter(**{order_with_respect_to: instance.pk})):
for i, ig in enumerate(sender.objects.filter(**{order_with_respect_to: instance.pk}).order_by('id')):
if ig.position != i:
ig.position = i
ig.save()
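A standalone sketch of why the order_by('id') added above matters: renumbering positions from an unordered collection is not deterministic, while a fixed ordering always yields 0..n-1 in the same sequence. Plain dicts stand in for the queryset rows.

rows = [{'id': 3, 'position': 0}, {'id': 1, 'position': 2}, {'id': 2, 'position': 1}]

# deterministic pass, analogous to .order_by('id')
for i, ig in enumerate(sorted(rows, key=lambda r: r['id'])):
    if ig['position'] != i:
        ig['position'] = i

assert [r['position'] for r in sorted(rows, key=lambda r: r['id'])] == [0, 1, 2]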


@@ -1,26 +0,0 @@
import logging
from django.core import management
from django.core.management.base import BaseCommand
from awx.main.models import OAuth2AccessToken
from oauth2_provider.models import RefreshToken
class Command(BaseCommand):
def init_logging(self):
log_levels = dict(enumerate([logging.ERROR, logging.INFO, logging.DEBUG, 0]))
self.logger = logging.getLogger('awx.main.commands.cleanup_tokens')
self.logger.setLevel(log_levels.get(self.verbosity, 0))
handler = logging.StreamHandler()
handler.setFormatter(logging.Formatter('%(message)s'))
self.logger.addHandler(handler)
self.logger.propagate = False
def execute(self, *args, **options):
self.verbosity = int(options.get('verbosity', 1))
self.init_logging()
total_accesstokens = OAuth2AccessToken.objects.all().count()
total_refreshtokens = RefreshToken.objects.all().count()
management.call_command('cleartokens')
self.logger.info("Expired OAuth 2 Access Tokens deleted: {}".format(total_accesstokens - OAuth2AccessToken.objects.all().count()))
self.logger.info("Expired OAuth 2 Refresh Tokens deleted: {}".format(total_refreshtokens - RefreshToken.objects.all().count()))


@@ -1,34 +0,0 @@
# Django
from django.core.management.base import BaseCommand, CommandError
from django.contrib.auth.models import User
from django.core.exceptions import ObjectDoesNotExist
# AWX
from awx.api.serializers import OAuth2TokenSerializer
class Command(BaseCommand):
"""Command that creates an OAuth2 token for a certain user. Returns the value of created token."""
help = 'Creates an OAuth2 token for a user.'
def add_arguments(self, parser):
parser.add_argument('--user', dest='user', type=str)
def handle(self, *args, **options):
if not options['user']:
raise CommandError('Username not supplied. Usage: awx-manage create_oauth2_token --user=username.')
try:
user = User.objects.get(username=options['user'])
except ObjectDoesNotExist:
raise CommandError('The user does not exist.')
config = {'user': user, 'scope': 'write'}
serializer_obj = OAuth2TokenSerializer()
class FakeRequest(object):
def __init__(self):
self.user = user
serializer_obj.context['request'] = FakeRequest()
token_record = serializer_obj.create(config)
self.stdout.write(token_record.token)


@@ -4,6 +4,7 @@
from django.core.management.base import BaseCommand
from django.db import transaction
from crum import impersonate
from ansible_base.resource_registry.signals.handlers import no_reverse_sync
from awx.main.models import User, Organization, Project, Inventory, CredentialType, Credential, Host, JobTemplate
from awx.main.signals import disable_computed_fields
@@ -16,8 +17,9 @@ class Command(BaseCommand):
def handle(self, *args, **kwargs):
# Wrap the operation in an atomic block, so we do not accidentally
# create the organization but not the project, etc.
with transaction.atomic():
self._handle()
with no_reverse_sync():
with transaction.atomic():
self._handle()
def _handle(self):
changed = False


@@ -4,8 +4,9 @@
from django.db import transaction
from django.core.management.base import BaseCommand, CommandError
from ansible_base.lib.utils.db import advisory_lock
from awx.main.models import Instance
from awx.main.utils.pglock import advisory_lock
class Command(BaseCommand):


@@ -63,7 +63,7 @@ class AWXInstance:
def instance_pretty(self):
instance = (
self.instance.hostname,
urljoin(settings.TOWER_URL_BASE, f"/#/instances/{self.instance.pk}/details"),
urljoin(settings.TOWER_URL_BASE, f"{settings.OPTIONAL_UI_URL_PREFIX}/infrastructure/instances/{self.instance.pk}/details"),
)
return f"[\"{instance[0]}\"]({instance[1]})"


@@ -1,195 +0,0 @@
import json
import os
import sys
import re

from typing import Any

from django.core.management.base import BaseCommand
from django.conf import settings

from awx.conf import settings_registry


class Command(BaseCommand):
    help = 'Dump the current auth configuration in django_ansible_base.authenticator format, currently supports LDAP and SAML'

    DAB_SAML_AUTHENTICATOR_KEYS = {
        "SP_ENTITY_ID": True,
        "SP_PUBLIC_CERT": True,
        "SP_PRIVATE_KEY": True,
        "ORG_INFO": True,
        "TECHNICAL_CONTACT": True,
        "SUPPORT_CONTACT": True,
        "SP_EXTRA": False,
        "SECURITY_CONFIG": False,
        "EXTRA_DATA": False,
        "ENABLED_IDPS": True,
        "CALLBACK_URL": False,
    }

    DAB_LDAP_AUTHENTICATOR_KEYS = {
        "SERVER_URI": True,
        "BIND_DN": False,
        "BIND_PASSWORD": False,
        "CONNECTION_OPTIONS": False,
        "GROUP_TYPE": True,
        "GROUP_TYPE_PARAMS": True,
        "GROUP_SEARCH": False,
        "START_TLS": False,
        "USER_DN_TEMPLATE": True,
        "USER_ATTR_MAP": True,
        "USER_SEARCH": False,
    }

    def is_enabled(self, settings, keys):
        missing_fields = []
        for key, required in keys.items():
            if required and not settings.get(key):
                missing_fields.append(key)
        if missing_fields:
            return False, missing_fields
        return True, None

    def get_awx_ldap_settings(self) -> dict[str, dict[str, Any]]:
        awx_ldap_settings = {}

        for awx_ldap_setting in settings_registry.get_registered_settings(category_slug='ldap'):
            key = awx_ldap_setting.removeprefix("AUTH_LDAP_")
            value = getattr(settings, awx_ldap_setting, None)
            awx_ldap_settings[key] = value

        grouped_settings = {}

        for key, value in awx_ldap_settings.items():
            match = re.search(r'(\d+)', key)
            index = int(match.group()) if match else 0
            new_key = re.sub(r'\d+_', '', key)

            if index not in grouped_settings:
                grouped_settings[index] = {}

            grouped_settings[index][new_key] = value
            if new_key == "GROUP_TYPE" and value:
                grouped_settings[index][new_key] = type(value).__name__
            if new_key == "SERVER_URI" and value:
                value = value.split(", ")
                grouped_settings[index][new_key] = value
            if type(value).__name__ == "LDAPSearch":
                data = []
                data.append(value.base_dn)
                data.append("SCOPE_SUBTREE")
                data.append(value.filterstr)
                grouped_settings[index][new_key] = data

        return grouped_settings

    def get_awx_saml_settings(self) -> dict[str, Any]:
        awx_saml_settings = {}
        for awx_saml_setting in settings_registry.get_registered_settings(category_slug='saml'):
            awx_saml_settings[awx_saml_setting.removeprefix("SOCIAL_AUTH_SAML_")] = getattr(settings, awx_saml_setting, None)

        return awx_saml_settings

    def format_config_data(self, enabled, awx_settings, type, keys, name):
        config = {
            "type": f"ansible_base.authentication.authenticator_plugins.{type}",
            "name": name,
            "enabled": enabled,
            "create_objects": True,
            "users_unique": False,
            "remove_users": True,
            "configuration": {},
        }
        for k in keys:
            v = awx_settings.get(k)
            config["configuration"].update({k: v})

        if type == "saml":
            idp_to_key_mapping = {
                "url": "IDP_URL",
                "x509cert": "IDP_X509_CERT",
                "entity_id": "IDP_ENTITY_ID",
                "attr_email": "IDP_ATTR_EMAIL",
                "attr_groups": "IDP_GROUPS",
                "attr_username": "IDP_ATTR_USERNAME",
                "attr_last_name": "IDP_ATTR_LAST_NAME",
                "attr_first_name": "IDP_ATTR_FIRST_NAME",
                "attr_user_permanent_id": "IDP_ATTR_USER_PERMANENT_ID",
            }
            for idp_name in awx_settings.get("ENABLED_IDPS", {}):
                for key in idp_to_key_mapping:
                    value = awx_settings["ENABLED_IDPS"][idp_name].get(key)
                    if value is not None:
                        config["name"] = idp_name
                        config["configuration"].update({idp_to_key_mapping[key]: value})

        return config

    def add_arguments(self, parser):
        parser.add_argument(
            "output_file",
            nargs="?",
            type=str,
            default=None,
            help="Output JSON file path",
        )

    def handle(self, *args, **options):
        try:
            data = []

            # dump SAML settings
            awx_saml_settings = self.get_awx_saml_settings()
            awx_saml_enabled, saml_missing_fields = self.is_enabled(awx_saml_settings, self.DAB_SAML_AUTHENTICATOR_KEYS)
            if awx_saml_enabled:
                awx_saml_name = awx_saml_settings["ENABLED_IDPS"]
                data.append(
                    self.format_config_data(
                        awx_saml_enabled,
                        awx_saml_settings,
                        "saml",
                        self.DAB_SAML_AUTHENTICATOR_KEYS,
                        awx_saml_name,
                    )
                )
            else:
                data.append({"SAML_missing_fields": saml_missing_fields})

            # dump LDAP settings
            awx_ldap_group_settings = self.get_awx_ldap_settings()
            for awx_ldap_name, awx_ldap_settings in awx_ldap_group_settings.items():
                awx_ldap_enabled, ldap_missing_fields = self.is_enabled(awx_ldap_settings, self.DAB_LDAP_AUTHENTICATOR_KEYS)
                if awx_ldap_enabled:
                    data.append(
                        self.format_config_data(
                            awx_ldap_enabled,
                            awx_ldap_settings,
                            "ldap",
                            self.DAB_LDAP_AUTHENTICATOR_KEYS,
                            f"LDAP_{awx_ldap_name}",
                        )
                    )
                else:
                    data.append({f"LDAP_{awx_ldap_name}_missing_fields": ldap_missing_fields})

            # write to file if requested
            if options["output_file"]:
                # Define the path for the output JSON file
                output_file = options["output_file"]

                # Ensure the directory exists
                os.makedirs(os.path.dirname(output_file), exist_ok=True)

                # Write data to the JSON file
                with open(output_file, "w") as f:
                    json.dump(data, f, indent=4)

                self.stdout.write(self.style.SUCCESS(f"Auth config data dumped to {output_file}"))
            else:
                self.stdout.write(json.dumps(data, indent=4))

        except Exception as e:
            self.stdout.write(self.style.ERROR(f"An error occurred: {str(e)}"))
            sys.exit(1)
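
The index-grouping logic in get_awx_ldap_settings above deserves a standalone illustration: numbered settings such as AUTH_LDAP_1_SERVER_URI collapse into one dict per configuration index, with unnumbered keys landing at index 0. A self-contained demo of just that regex logic (example values are made up):

    # Standalone demo of the grouping loop; input values are illustrative.
    import re

    raw_settings = {
        "SERVER_URI": "ldap://one.example.com",
        "1_SERVER_URI": "ldap://two.example.com",
        "1_BIND_DN": "cn=admin,dc=example,dc=com",
    }
    grouped = {}
    for key, value in raw_settings.items():
        match = re.search(r'(\d+)', key)
        index = int(match.group()) if match else 0   # unnumbered -> index 0
        new_key = re.sub(r'\d+_', '', key)           # strip the "1_" prefix
        grouped.setdefault(index, {})[new_key] = value
    print(grouped)
    # {0: {'SERVER_URI': 'ldap://one.example.com'},
    #  1: {'SERVER_URI': 'ldap://two.example.com', 'BIND_DN': 'cn=admin,dc=example,dc=com'}}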

View File

@@ -21,6 +21,9 @@ from django.utils.encoding import smart_str
 # DRF error class to distinguish license exceptions
 from rest_framework.exceptions import PermissionDenied

+# django-ansible-base
+from ansible_base.lib.utils.db import advisory_lock
+
 # AWX inventory imports
 from awx.main.models.inventory import Inventory, InventorySource, InventoryUpdate, Host
 from awx.main.utils.mem_inventory import MemInventory, dict_to_mem_data
@@ -30,9 +33,9 @@ from awx.main.utils.safe_yaml import sanitize_jinja
 from awx.main.models.rbac import batch_role_ancestor_rebuilding
 from awx.main.utils import ignore_inventory_computed_fields, get_licenser
 from awx.main.utils.execution_environments import get_default_execution_environment
+from awx.main.utils.inventory_vars import update_group_variables
 from awx.main.signals import disable_activity_stream
 from awx.main.constants import STANDARD_INVENTORY_UPDATE_ENV
-from awx.main.utils.pglock import advisory_lock


 logger = logging.getLogger('awx.main.commands.inventory_import')
@@ -455,19 +458,19 @@ class Command(BaseCommand):
         """
         Update inventory variables from "all" group.
         """
         # TODO: We disable variable overwrite here in case user-defined inventory variables get
         # mangled. But we still need to figure out a better way of processing multiple inventory
         # update variables mixing with each other.
         # issue for this: https://github.com/ansible/awx/issues/11623
-        if self.inventory.kind == 'constructed' and self.inventory_source.overwrite_vars:
-            # NOTE: we had to add a exception case to not merge variables
-            # to make constructed inventory coherent
-            db_variables = self.all_group.variables
-        else:
-            db_variables = self.inventory.variables_dict
-            db_variables.update(self.all_group.variables)
+        db_variables = update_group_variables(
+            group_id=None,  # `None` denotes the 'all' group (which doesn't have a pk).
+            newvars=self.all_group.variables,
+            dbvars=self.inventory.variables_dict,
+            invsrc_id=self.inventory_source.id,
+            inventory_id=self.inventory.id,
+            overwrite_vars=self.overwrite_vars,
+        )

         if db_variables != self.inventory.variables_dict:
             self.inventory.variables = json.dumps(db_variables)
             self.inventory.save(update_fields=['variables'])
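
In miniature, these are the two code paths the removed block implemented, and which update_group_variables now centralizes (example variable values are illustrative):

    # Default path: incoming "all"-group vars are merged over the stored
    # inventory vars, so incoming values win on key conflicts.
    db_vars = {'region': 'us-east', 'tier': 'prod'}   # inventory.variables_dict
    incoming = {'tier': 'dev', 'owner': 'team-a'}     # all_group.variables

    merged = dict(db_vars)
    merged.update(incoming)
    print(merged)    # {'region': 'us-east', 'tier': 'dev', 'owner': 'team-a'}

    # Constructed inventory with overwrite_vars: the stored vars are
    # replaced wholesale rather than merged.
    overwrite = dict(incoming)
    print(overwrite)  # {'tier': 'dev', 'owner': 'team-a'}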

View File

@@ -10,7 +10,7 @@ from django.db.models.signals import post_save
 from awx.conf import settings_registry
 from awx.conf.models import Setting
 from awx.conf.signals import on_post_save_setting

-from awx.main.models import UnifiedJob, Credential, NotificationTemplate, Job, JobTemplate, WorkflowJob, WorkflowJobTemplate, OAuth2Application
+from awx.main.models import UnifiedJob, Credential, NotificationTemplate, Job, JobTemplate, WorkflowJob, WorkflowJobTemplate
 from awx.main.utils.encryption import encrypt_field, decrypt_field, encrypt_value, decrypt_value, get_encryption_key

@@ -45,7 +45,6 @@ class Command(BaseCommand):
         self._notification_templates()
         self._credentials()
         self._unified_jobs()
-        self._oauth2_app_secrets()
         self._settings()
         self._survey_passwords()
         return self.new_key
@@ -74,13 +73,6 @@ class Command(BaseCommand):
             uj.start_args = encrypt_field(uj, 'start_args', secret_key=self.new_key)
             uj.save()

-    def _oauth2_app_secrets(self):
-        for app in OAuth2Application.objects.iterator():
-            raw = app.client_secret
-            app.client_secret = raw
-            encrypted = encrypt_value(raw, secret_key=self.new_key)
-            OAuth2Application.objects.filter(pk=app.pk).update(client_secret=encrypted)
-
     def _settings(self):
         # don't update the cache, the *actual* value isn't changing
         post_save.disconnect(on_post_save_setting, sender=Setting)
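
For context, the rotation idiom this command applies per model, sketched from the UnifiedJob lines visible above. The decrypt step is an assumption (it is not shown in this hunk); it relies on the old SECRET_KEY still being active while the command runs:

    # Sketch of the per-model rotation loop; the decrypt_field call is an
    # assumed step, grounded only in the import shown in the first hunk.
    def _rotate_start_args(self):
        for uj in UnifiedJob.objects.iterator():
            if uj.start_args:
                uj.start_args = decrypt_field(uj, 'start_args')  # old key still active
                uj.start_args = encrypt_field(uj, 'start_args', secret_key=self.new_key)
                uj.save()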

Some files were not shown because too many files have changed in this diff.